At MakeHardware.com, we love hosting elaborate birthday parties for our kids! I was all for having my daughter’s ninth birthday party at our local paint-your-own-pottery studio, but then my daughter suggested a Little House on the Prairie theme and I couldn’t resist! It’s the perfect birthday party theme for makers!
According to tech blogger Dave Zatz, the next Tivo OTA DVR might have an architecture much more similar to the Tablo series of OTA DVRs. This means that the Tivo DVR named “Mantis” would no longer connect directly to the TV; instead, it would “transcode” video and stream it to a device such as a Roku, Apple TV, or Amazon Fire, or to a tablet or phone. The benefit of this approach is that one box can stream to multiple TVs or devices, which can be significantly cheaper for a household with multiple TVs. Previously, multiple-TV households wanting DVR features would need a Tivo Mini for each TV.
Broadcast TV signals can provide high-quality HD images, and in many ways they still provide a better user experience: easier to use, with less lag.
Interestingly, the number of households streaming over the Internet also grew, from 4% to 6%. In percentage terms that is certainly much faster growth, but it’s interesting to consider that the number of OTA-only households is almost three times as large.
It turns out that a large number of toy drones use the same nRF24L01+ compatible RF chips. The word “compatible” is necessary because most of them seem not to use the Nordic Semiconductor chipset, but rather something like the XN297 from Panchip.
It’s hard not to wonder whether Microsoft’s DVR strategy has been influenced by the growth of Sony’s PS Vue service and its “Cloud DVR.” From a revenue perspective, the attractiveness of the monthly subscription model for streaming must have turned some heads at Microsoft. I’m guessing that Microsoft will attempt to come out with a streaming service and cloud DVR to compete head-on with PS Vue, rather than a DVR that runs locally.
This announcement doesn’t change the fact that you can still use your Xbox to watch OTA TV if you buy an antenna and a tuner; you just won’t be able to record it.
See these links below for more info and discussion:
For our drone flying project, we have been using the ELP 2 Megapixel USB Camera. The auto exposure on this camera works in most situations, but we found that it does not always adjust to bright sunlight. In preparation for demonstrating our computer-controlled drone at the Maker Faire, I wanted to have a plan in case we were outdoors. It was a good thing too, since we were assigned an outdoor booth next to the Drone Combat arena.
We detect the location of our drone by using blob detection on four paper circles that we have taped to the top of the drone. Originally, we were using a medium green color, but we found that under some lighting conditions, our code would confuse the light blue color on the body of the drone with the green circles. I thought about making our blob detection code more robust, but the Maker Faire was quickly approaching! Instead we decided to make our flying area more controlled. We used white poster board as the background for our flying area and I tested some different colors for the paper circles. Red circles were good, except that our code got confused if one of our hands was reaching into the flying area. Black was not good in dim light. In the end, we decided on a dark purple with a blue undertone.
OpenCV provides a way to set a webcam’s manual exposure, but there are two problems. The first is that OpenCV is not well-documented. I could find the documentation stating that I should be able to set the exposure value, but it was not at all clear what values to pass! The second problem is that your particular webcam may not support programmatic setting of the exposure. Thus, when your code doesn’t work, it can be difficult to determine if your code is wrong or if your webcam just won’t allow it!
OpenCV’s VideoCapture.set() is the method to use. If you look at the documentation, you will see that there is a property named CV_CAP_PROP_EXPOSURE. It took me some time to discover that depending on the version of OpenCV you are using, the property’s name might actually be CAP_PROP_EXPOSURE.
There is no hint as to what the exposure value should be set to, but luckily for me, I found a mapping for the ELP 2 MP webcam on this page by Joan Charmant. He shows that the exposure values range between -1 and -13. You can programmatically set the exposure in this manner:
Unfortunately, I could not figure out a programmatic way to set the exposure back to auto exposure. If you know how, please add a comment! Be aware that for some webcams, such as this one, the manual exposure setting is stored in on-board memory, which means that even after you close your program or power off the webcam, the manual exposure will still be set!
As a workaround, I found a way to bring up the DirectShow property pages so that I could use the DirectShow GUI to set the manual exposure or to turn auto exposure back on.
Here’s the code to launch the DirectShow property page:
During the Maker Faire, our demonstration area was shaded by a tent for most of the day, but around 2 PM our flying area was part sun and part shade. We delayed the inevitable by moving our table back with the shade, but eventually we had to move our table back to the front of the booth and into the sun. On Saturday, the afternoon was mostly overcast, and the camera’s auto exposure worked most of the time. I was surprised that our blob detection code even worked when people walked in front of our booth and made our flying area partly shaded by their shadows.
Sunday was mostly sunny, and the webcam’s auto exposure did not work when it was very bright. At these times, I opened up the DirectShow property pages and set the exposure manually so that our demo would still work. Maker disaster averted!
In my previous post, I described how to set up Python and OpenCV on your computer. Now I will show you how to use OpenCV’s computer vision capabilities to detect an object.
OpenCV’s SimpleBlobDetector will be the primary function that we will be using. With the SimpleBlobDetector, you can distinguish blobs in your image based on different parameters such as color, size, and shape.
As an OpenCV novice, I searched Google to help me get started with the Python OpenCV code. You will find that OpenCV is very powerful and extensive, but unfortunately it is not well documented. Some classes and functions are described well, but some just list a method’s parameters with a terse description. I suppose we can’t have everything. On the bright side, there are many tutorials and examples to help you out.
Ball Tracking with OpenCV – this example is more extensive, and he has a nice animated gif at the top of his page showing the ball tracking in action. We use cv2.inRange() like he does, but we then use SimpleBlobDetector instead of findContours().
OpenCV’s Python Tutorials Page – I don’t have the patience to go through tutorials when I just need a quick solution, but I did look through a few of the tutorials on this page when the need arose. We based some of our color threshold code on the example shown if you go into the Image Processing in OpenCV section and then to the Changing Colorspaces tutorial.
When we were testing out our detection, we took a bunch of jpg photos with our webcam under different conditions and we put them in the ./images directory. In this code example, we loop through the image files and we try to detect the purple circles on our drone for each image.
The full code is detectDrone.py and is up on Github with the rest of the project. Here is the beginning of the code. We set up our import statements, and then we need to undistort the image. For our webcam, the image is distorted around the edges – like a fishbowl effect.
Blob Detection with OpenCV
# Load webcam calibration values for undistort()
# calibration values calculated using cv2.calibrateCamera() previously
Now to the heart of our code. We run cv2.GaussianBlur() to blur the image, which helps remove noise. The webcam image is in the BGR (Blue Green Red) color space and we need it in HSV (Hue Saturation Value), so the next call is cv2.cvtColor(image, cv2.COLOR_BGR2HSV).
We need to separate the purple circles from the rest of the image. We do this by using cv2.inRange() and passing in the range of HSV values that we want separated out from the image. We had to do some experimentation to get the correct values for our purple circles. We used this range-detector script in the imutils library to help us determine which values to use. Unfortunately, the range of HSV values varies widely under different lighting conditions. For example, if our flying area has a mixture of bright sunlight and dark shadows, then our detection does not work well. We control for this by shining bright LED lights over the flying area.
# Blur image to remove noise
# Switch image from BGR colorspace to HSV
# define range of purple color in HSV
# Sets pixels to white if in purple range, else will be set to black
# Bitwise-AND of mask and purple only image - only used for display
# mask = cv2.erode(mask, None, iterations=1)
# commented out erode call, detection more accurate without it
# dilate makes the in range areas larger
Now we use SimpleBlobDetector to find our blobs, which are saved in keypoints.
# Set up the SimpleBlobDetector with default parameters.
# Change thresholds
# Filter by Area.
# Filter by Circularity
# Filter by Convexity
# Filter by Inertia
# Detect blobs.
If we find more than four blobs, we keep the four largest. We draw green circles around the blobs we found and display these four images:
The original image after undistort and Gaussian blur (frame)
The image with the purple circles separated out and shown in white (mask)
The image with the purple circles separated out and shown in their original color (res)
The original image with green circles drawn around the purple circles (im_with_keypoints)
If there are multiple images in the directory, then we go through this whole process for the next image. Now our code can see where our drone is!
There’s a great tool on TVFool.com to help you determine which channels you can receive over the air (OTA) at your house. Yes, your house. You type in your address and it will give a list of channels that you will be able to receive for free with an antenna! It will even show you where the signals are coming from so that you can optimize your signal strength by pointing your antenna in that direction.
We use a Terk Indoor HD antenna sitting on top of our media cabinet about 8 feet off the ground. We live in a flat suburban area and we are able to get all of the main network channels in HD for free! I love that we can watch the Super Bowl and the Oscars in HD. We get lots of kids channels and even re-runs of The Brady Bunch. My kids have watched almost every episode of this good, wholesome show.
Once you have your HD antenna, take your setup to the next level by adding a DVR. With a DVR, you can record your OTA shows and watch them at your leisure. Our DVR comparison guide is here to help you choose the DVR that is right for you!
For our drone flying project, we needed a way for our computer to detect the location of our mini-drone through the use of a webcam mounted above the flying area. We are not at all familiar with computer vision algorithms, but we do know how to call functions from a Python library! We made use of OpenCV (Open Source Computer Vision), which is available for Python and C++.
For our Python environment, we chose Python(x,y). Python(x,y) is a version of Python developed specifically for scientific calculations and visualizations. If you are a fan of Matlab, then you will feel right at home with Python(x,y).
This is what you need to do to set up a Python(x,y) development environment with OpenCV.
Install the latest revision of the python(x,y) package. This includes Spyder (Scientific PYthon Development EnviRonment). Download Python(x,y) here.
For the Python(x,y) install, choose Custom install and select the Python ➞ PySerial 2.7-1 component. PySerial is needed to communicate with an Arduino.
Optional: We also like to add the Other ➞ WinMerge component when installing Python(x,y), but it is not required.
Unzip the opencv2 package and copy opencv\build\python\2.7\x86\cv2.pyd to <python dir>\Lib\site-packages\. The default Windows location for <python dir> is C:\Python27.
Note: If your computer supports it, copy opencv\build\python\2.7\x64\cv2.pyd instead of the x86 version. I tried the x64 copy first, but it did not work for me when run, so I copied the x86 version instead. See below for how to check whether OpenCV is loading properly.
Now it’s time to check if your development environment is working. Start Python(x,y) and you will see this window:
Click on the small blue and red icon that looks like a spider web to start the Spyder IDE. Here is what the Spyder IDE looks like:
The bottom right portion of the IDE shows the IPython console. You can run scripts or call Python commands directly in the IPython console.
In the IPython console, type import cv2 and hit enter.
If there is a problem, you will receive an error, most likely “No module named cv2”. If that happens, check that you copied the OpenCV files to the correct location as described in Step 3 above.
If everything is working, then the console will accept your command and show a prompt for your next command like this:
Hooray, you have successfully set up Python(x,y) and OpenCV! Nothing to it, right? Now let’s see what we can do with OpenCV. Take a look at our post on blob detection with OpenCV.
A few months ago, I watched this TED talk where they set up an indoor arena and did some amazing things with drones. It got me thinking, and it inspired me to build something like that for myself – but on a much smaller and cheaper scale.
In the video, they use an expensive real-time infrared motion-tracking system (I am guessing something like these Optitrack systems) to measure the positions of the drones, and then use a computer to calculate and send control signals to coordinate them. At a high level, my setup works in a similar way, as shown in this diagram:
Total cost for these items was around $85. In addition to the above, you might also need a folding table and stack of books to hold up the webcam as I did, but you can probably think up something more refined!
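That webcam-to-computer-to-drone loop can be sketched in a few lines; every helper name here is hypothetical, since the real project splits the work across its detection and control scripts:

```python
# High-level control loop (all helper names are hypothetical)
def fly(frames, detect_position, compute_command, send_command,
        target=(320, 240)):
    for frame in frames:
        pos = detect_position(frame)        # blob detection on the webcam frame
        if pos is None:
            continue                        # lost the drone this frame
        cmd = compute_command(pos, target)  # e.g. PID on the position error
        send_command(cmd)                   # e.g. serial link to an RF transmitter
```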
Here is a video of it working:
Here are some links to further information on how this all works: