How to detect corners of a square with Python OpenCV?
In the image below, I am using the OpenCV Harris corner detector to detect only the corners of the squares (and the smaller squares within the outer squares). However, corners are also detected on the numbers at the side of the image. How do I restrict the corner detection to the squares so the numbers are ignored? The code, input image and output image are below:
import cv2 as cv
import numpy as np

img = cv.imread(filename)
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
gray = np.float32(gray)
dst = cv.cornerHarris(gray, 2, 3, 0.04)
dst = cv.dilate(dst, None)
# Threshold for an optimal value, it may vary depending on the image.
img[dst > 0.01*dst.max()] = [0, 0, 255]
cv.imshow('dst', img)
cv.waitKey()
Input image
Output from Harris corner detector
Solution 1:[1]
Here's a potential approach using traditional image processing:
Obtain binary image. We load the image, convert to grayscale, Gaussian blur, then adaptive threshold to obtain a black/white binary image. We then remove small noise using contour area filtering. At this stage we also create two blank masks.
Detect horizontal and vertical lines. Now we isolate horizontal lines by creating a horizontal shaped kernel and perform morphological operations. To detect vertical lines, we do the same but with a vertical shaped kernel. We draw the detected lines onto separate masks.
Find intersection points. The idea is that if we combine the horizontal and vertical masks, the intersection points will be the corners. We can perform a bitwise-and operation on the two masks. Finally we find the centroid of each intersection point and highlight corners by drawing a circle.
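To make steps 2 and 3 concrete, here is a condensed sketch of the line-isolation and intersection idea (the kernel length is illustrative and assumes roughly axis-aligned grid lines; the full code below uses asymmetric kernels, extra close/open passes and contour filtering):

import cv2

def grid_intersections(thresh, length=40):
    # `thresh` is the binary image from step 1 (white grid lines on black)
    # Opening with a wide, flat kernel keeps only horizontal strokes
    horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (length, 1))
    horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel)
    # Opening with a tall, narrow kernel keeps only vertical strokes
    vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, length))
    vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel)
    # Pixels that are white in both masks are the line intersections (the corners)
    return cv2.bitwise_and(horizontal, vertical)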
Here's a visualization of the pipeline:
Input image -> binary image
Detected horizontal lines -> horizontal mask
Detected vertical lines -> vertical mask
Bitwise-and both masks -> detected intersection points -> corners -> cleaned up corners
The results aren't perfect, but they're pretty close. The problem comes from the noise on the vertical mask due to the slanted image. If the image were centered without an angle, the results would be ideal. You can probably fine-tune the kernel sizes or iterations to get better results.
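One thing you could also try, as a rough sketch and not part of the pipeline below, is deskewing the image first so the grid lines become axis-aligned. This assumes the outer square is the largest contour in the thresholded image, and the angle handling may need adjusting for your OpenCV version:

import cv2

def deskew(image, thresh):
    # Assume the largest contour in the binary image is the outer square
    cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    largest = max(cnts, key=cv2.contourArea)
    # minAreaRect reports the rotation of the tightest bounding box;
    # map it to a small correction angle (range conventions differ by version)
    angle = cv2.minAreaRect(largest)[-1]
    if angle > 45:
        angle -= 90
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_REPLICATE)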
Code
import cv2
import numpy as np

# Load image, create horizontal/vertical masks, Gaussian blur, adaptive threshold
image = cv2.imread('1.png')
original = image.copy()
horizontal_mask = np.zeros(image.shape, dtype=np.uint8)
vertical_mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 23, 7)

# Remove small noise on thresholded image
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]  # handle OpenCV 3 vs 4 return values
for c in cnts:
    area = cv2.contourArea(c)
    if area < 150:
        cv2.drawContours(thresh, [c], -1, 0, -1)

# Detect horizontal lines
dilate_horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (10,1))
dilate_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, dilate_horizontal_kernel, iterations=1)
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40,1))
detected_lines = cv2.morphologyEx(dilate_horizontal, cv2.MORPH_OPEN, horizontal_kernel, iterations=1)
cnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(image, [c], -1, (36,255,12), 2)
    cv2.drawContours(horizontal_mask, [c], -1, (255,255,255), 2)

# Remove extra horizontal lines using contour area filtering
horizontal_mask = cv2.cvtColor(horizontal_mask, cv2.COLOR_BGR2GRAY)
cnts = cv2.findContours(horizontal_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    if area > 1000 or area < 100:
        cv2.drawContours(horizontal_mask, [c], -1, 0, -1)

# Detect vertical lines
dilate_vertical_kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (1,7))
dilate_vertical = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, dilate_vertical_kernel, iterations=1)
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (1,2))
detected_lines = cv2.morphologyEx(dilate_vertical, cv2.MORPH_OPEN, vertical_kernel, iterations=4)
cnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(image, [c], -1, (36,255,12), 2)
    cv2.drawContours(vertical_mask, [c], -1, (255,255,255), 2)

# Find intersection points
vertical_mask = cv2.cvtColor(vertical_mask, cv2.COLOR_BGR2GRAY)
combined = cv2.bitwise_and(horizontal_mask, vertical_mask)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2,2))
combined = cv2.morphologyEx(combined, cv2.MORPH_OPEN, kernel, iterations=1)

# Highlight corners
cnts = cv2.findContours(combined, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    # Find centroid and draw center point
    try:
        M = cv2.moments(c)
        cx = int(M['m10']/M['m00'])
        cy = int(M['m01']/M['m00'])
        cv2.circle(original, (cx, cy), 3, (36,255,12), -1)
    except ZeroDivisionError:
        pass

cv2.imshow('thresh', thresh)
cv2.imshow('horizontal_mask', horizontal_mask)
cv2.imshow('vertical_mask', vertical_mask)
cv2.imshow('combined', combined)
cv2.imshow('original', original)
cv2.imshow('image', image)
cv2.waitKey()
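If you need the corner coordinates themselves rather than just the drawn circles, you could collect the centroids into a list in that last loop. A small assumed extension, reusing the `cnts` obtained from the combined mask above:

corners = []
for c in cnts:
    M = cv2.moments(c)
    if M['m00'] != 0:  # skip degenerate contours instead of using try/except
        corners.append((int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])))
print(corners)  # list of (x, y) intersection points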
Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

| Solution | Source |
| --- | --- |
| Solution 1 | |