In my previous post, I described how to set up Python and OpenCV on your computer. Now I will show you how to use OpenCV’s computer vision capabilities to detect an object.
OpenCV’s SimpleBlobDetector class will do most of the work for us. With SimpleBlobDetector, you can filter the blobs in your image based on different parameters such as color, size, and shape.
As an OpenCV novice, I searched Google to help me get started with the Python OpenCV code. You will find that OpenCV is very powerful and extensive, but unfortunately it is not well documented. Some classes and functions are described well, but others just list a method’s parameters with a terse description. I suppose we can’t have everything. On the bright side, there are many tutorials and examples to help you out.
Here are a few tutorials that we found helpful:
- Blob Detection using OpenCV – a nice brief introduction to SimpleBlobDetector.
- Ball Tracking with OpenCV – this example is more extensive, and he has a nice animated gif at the top of his page showing the ball tracking in action. We use cv2.inRange() like he does, but we then use SimpleBlobDetector instead of findContours().
- OpenCV’s Python Tutorials Page – I don’t have the patience to go through tutorials when I just need a quick solution, but I did look through a few of the tutorials on this page when the need arose. We based some of our color threshold code on the example in the Image Processing in OpenCV section, under the Changing Colorspaces tutorial.
When we were testing out our detection, we took a bunch of jpg photos with our webcam under different conditions and we put them in the ./images directory. In this code example, we loop through the image files and we try to detect the purple circles on our drone for each image.
The full code, detectDrone.py, is up on GitHub with the rest of the project. Here is the beginning of the code. We set up our import statements, and then we need to undistort the image. For our webcam, the image is distorted around the edges – like a fishbowl effect.
import numpy as np
import cv2
import glob

# Load webcam calibration values for undistort()
# calibration values calculated using cv2.calibrateCamera() previously
# for our webcam

blobsNotFound = []

images = glob.glob('images/*.jpg')

for fname in images:
    orig_img = cv2.imread(fname)

    # undistort and crop
    dst = cv2.undistort(orig_img, mtx, dist, None, newcameramtx)
    x, y, w, h = roi
    crop_frame = dst[y:y+h, x:x+w]
Now to the heart of our code. We run cv2.GaussianBlur() to blur the image, which helps remove noise. The webcam image is in the BGR (Blue Green Red) color space and we need it in HSV (Hue Saturation Value), so the next call is cv2.cvtColor(image, cv2.COLOR_BGR2HSV).
We need to separate the purple circles from the rest of the image. We do this by using cv2.inRange() and passing in the range HSV values that we want separated out from the image. We had to do some experimentation to get the correct values for our purple circles. We used this range-detector script in the imutils library to help us determine which values to use. Unfortunately, the range of HSV values varies widely under different lighting conditions. For example, if our flying area has a mixture of bright sunlight and dark shadows, then our detection does not work well. We control this by shining bright LED lights over the flying area.
    # Blur image to remove noise
    frame = cv2.GaussianBlur(crop_frame, (3, 3), 0)

    # Switch image from BGR colorspace to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # define range of purple color in HSV
    purpleMin = (115, 50, 10)
    purpleMax = (160, 255, 255)

    # Sets pixels to white if in purple range, else will be set to black
    mask = cv2.inRange(hsv, purpleMin, purpleMax)

    # Bitwise-AND the mask with the original frame - only used for display
    res = cv2.bitwise_and(frame, frame, mask=mask)

    # erode call commented out - detection was more accurate without it
    # mask = cv2.erode(mask, None, iterations=1)

    # dilate makes the in-range areas larger
    mask = cv2.dilate(mask, None, iterations=1)
Now we use SimpleBlobDetector to find our blobs, which are saved in keypoints. Note that SimpleBlobDetector looks for dark blobs on a light background by default, so we invert the mask before detecting.
    # Set up the SimpleBlobDetector with default parameters
    params = cv2.SimpleBlobDetector_Params()

    # Change thresholds
    params.minThreshold = 0
    params.maxThreshold = 256

    # Filter by Area
    params.filterByArea = True
    params.minArea = 30

    # Filter by Circularity
    params.filterByCircularity = True
    params.minCircularity = 0.1

    # Filter by Convexity
    params.filterByConvexity = True
    params.minConvexity = 0.5

    # Filter by Inertia
    params.filterByInertia = True
    params.minInertiaRatio = 0.5

    detector = cv2.SimpleBlobDetector_create(params)

    # Invert mask - SimpleBlobDetector looks for dark blobs by default
    reversemask = 255 - mask

    # Detect blobs
    keypoints = detector.detect(reversemask)
If we find more than four blobs, we keep the four largest. We draw green circles around the blobs we found, and we display these four images:
- The original image after undistort and Gaussian blur (frame)
- The image with the purple circles separated out and shown in white (mask)
- The image with the purple circles separated out and shown in their original color (res)
- The original image with green circles drawn around the purple circles (im_with_keypoints)
If there are multiple images in the directory, then we go through this whole process for the next image. Now our code can see where our drone is!