Stop Motion Animation for the Girl Scout Entertainment Technology Badge

I covered Part 2 of the Entertainment Technology Badge for Junior Girl Scouts at our troop meeting. At a previous meeting, we learned about badge requirements 2 (video game development), 3 (amusement park science), and 4 (special effects). For Part 2, we investigated badge requirements 1 (animation) and 5 (sound) for the Entertainment Technology badge.

Since my troop has also been gearing up for Girl Scout Cookie Season, I combined our cookie sale role play practice with the creation of stop motion animation videos. The girls worked in groups of two to animate a cookie sale scenario using little toys like Playmobil figures or LEGO minifigures. It was fun for the girls to combine learning about animation with cookie sale practice!

Stop motion animation involves taking lots of pictures, moving the things in the scene just a little bit between each shot. It is similar to traditional drawn animation, except that photos are displayed instead of a series of drawings. Animation works because when the images are shown quickly enough, your brain blends them into continuous motion.

We used the Stop Motion Studio app. The basic functionality of the app is free and works great. It does have in-app purchases for more options, but we did not need any of those.

To create the scene, my kids and I looked through their toy bins and found the following:

  • Playmobil figures for the characters
    • I took little strips of green felt and sewed a couple of stitches in one end to make little Girl Scout sashes. I had gold star stickers handy and cut off a few points to stick on as badges.
  • LEGO 1×2 bricks with a 1×2 flat tile on top
    • I printed out tiny photos of the Girl Scout cookie boxes, cut them out, and used double-sided tape to stick them to the front of the “boxes”.
  • Roominate walls to create the set and for the cookie booth table
  • Various accessories from Playmobil and LEGO like money and a cell phone.
  • LEGO bricks to make a device holder
LEGO Device Holder – one child asked me how I knew how to make it. I just made it up!

During our troop meeting, we had the girls rotate through cookie sale-related stations and this was one of the stations. I had 4 girls working on stop motion at a time, so I had two devices and two scenes prepared.

For stop motion animation, keeping the device steady and at a fixed distance/angle is important. Otherwise your animation will look like things are jumping around. I used washi tape to tape down our homemade device holder and mark where it was supposed to be located. I did the same for the Roominate walls and floor.

The Stop Motion Studio app is easy to learn. You create a new movie and there are only a few options you need to know about.

  1. Settings: Since we had limited time, I had the girls change the Frames Per Second (FPS) to 2 FPS. Most of the girls’ videos were around 20-30 seconds long, which works out to 40-60 photos. I wanted to keep the activity fun and not make it painstaking, and 2 FPS worked well.
  2. Camera: This is where you take all those photos! If you take a bad photo (like your hand is showing), keep going and you can delete it later.
  3. Microphone: This is where you can record audio. It will play your video while you are recording so that you can keep your dialogue and the action in sync. This is where the girls practiced the Sound part of the Entertainment Technology badge since it took several tries to get their dialogue and action in sync. They sometimes had to edit their script or add/remove photos to get everything to line up.

From the main edit view, you can delete photos, copy and paste photos, and even select multiple photos to copy and paste. There is also a reverse option if you would like the photos you selected to be shown in reverse.

Stop Motion Studio Main Edit View

To perform an action on a photo, scroll over to it so that it is highlighted in the purple box at the bottom of the screen. Then tap on the purple box and you will see the following menu:

Crop, Erase, Draw, and Merge all require in-app purchases, but the rest are available in the free version. Copying photos is useful if you need to make part of your movie longer and that part of the scene does not have much movement, such as when the characters are just talking to each other. After you tap Copy, scroll the purple box to where you want to paste the photo, tap the purple box, and then tap Paste. The photo will be inserted right before your current spot in the movie.

Each pair of girls was given a cookie sale scenario to animate:

  1. Customer does not have any cash
  2. Customer is on a diet
  3. Customer is gluten-free
  4. $5 is so expensive
  5. Which is your favorite?
  6. Customer has already purchased Girl Scout cookies
  7. Customer is vegan

Here are a few more ideas:

  1. Customer does not eat cookies
  2. Customer is diabetic
  3. Customer is in a hurry
  4. Customer has someone from whom they purchase cookies
  5. Which flavor would you recommend?
  6. What are you going to do with the money?

I gave the girls a few minutes to write out their scripts. I advised them to keep their dialogue short so that they would not need to take as many pictures. I reviewed their scripts, and for any that seemed longer than 20-30 seconds, I timed them and gave advice about how to make them shorter. The girls chose their characters. I had printed out a clip art cookie frame onto card stock (4 frames to a page), and they used this card to write out their movie title and byline.

They set up their scene and took their photos. We used double-stick tape when a character needed to hold a cookie box. Then they recorded their dialogue. We were a bit pressed for time since we had quite a bit of other cookie business to take care of at the same meeting. They each had about 30 minutes to create their movie. It would have been better if they had 45 minutes to give them time to refine their movie. But 30 minutes was long enough to give them the experience, even if the end product was not their best work.

Here is a compilation of the videos our troop created!

Here is the practice video I made while I was trying out the app. I made this video using 5 FPS and over 130 pictures! I had to lie down afterwards. 🙂

Maker Faire Bay Area 2017

The Maker Faire Bay Area is this weekend, May 19-21, 2017, at the San Mateo Event Center. MakeHardware had a booth at the Maker Faire last year, but we have been too busy to run a booth for 2017. We do plan to attend for a day to check out what other folks have been busy making!

If you decide to check out the Maker Faire this weekend, make sure you plan for enough time to get there and back. The Maker Faire is huge and even sets up exhibits in the parking lots at the San Mateo Event Center, so parking onsite is not available. There are shuttle buses and public transit, but be ready for a wait during busy times. The Maker Faire is totally worth the trouble, just be prepared!

Here are a couple of pics from our booth last year.

The MakeHardware booth at Maker Faire Bay Area 2016. Do you see our little drone flying inside the enclosure?
Our booth was in the back corner, but we still had plenty of people come check out our PC-drone flying project!

My favorite areas of the Maker Faire include the cooking (last year I bought some great fermentation tools), the crafts and the kid sections. There are lots of electronics, light sculptures, drones, huge metal sculptures, fire art, and tons of crazy creativity!


Little House on the Prairie Birthday Party

Little House on the Prairie Party Activities: Milking the "cow," spinning wool, churning butter, and shopping at the General Store.

At MakeHardware, we love hosting elaborate birthday parties for our kids! I was all for having my daughter’s ninth birthday party at our local paint-your-own-pottery studio, but then my daughter suggested a Little House on the Prairie theme and I couldn’t resist! It’s the perfect birthday party theme for makers!

My daughter loves American history and she . . .

Manual Exposure vs Auto Exposure for ELP 2 MP USB Camera

For our drone flying project, we have been using the ELP 2 Megapixel USB Camera. The auto exposure on this camera works in most situations, but we found that it does not always adjust to bright sunlight. In preparation for demonstrating our computer-controlled drone at the Maker Faire, I wanted to have a plan in case we were outdoors. It was a good thing too, since we were assigned an outdoor booth next to the Drone Combat arena.

We detect the location of our drone by using blob detection on four paper circles that we have taped to the top of the drone. Originally, we were using a medium green color, but we found that under some lighting conditions, our code would confuse the light blue color on the body of the drone with the green circles. I thought about making our blob detection code more robust, but the Maker Faire was quickly approaching! Instead we decided to make our flying area more controlled. We used white poster board as the background for our flying area and I tested some different colors for the paper circles. Red circles were good, except that our code got confused if one of our hands was reaching into the flying area. Black was not good in dim light. In the end, we decided on a dark purple with a blue undertone.

Testing different circle colors
The winning color: dark purple

OpenCV provides a way to set a webcam’s manual exposure, but there are two problems. The first is that OpenCV is not well-documented. I could find the documentation stating that I should be able to set the exposure value, but it was not at all clear what values to pass! The second problem is that your particular webcam may not support programmatic setting of the exposure. Thus, when your code doesn’t work, it can be difficult to determine if your code is wrong or if your webcam just won’t allow it!

OpenCV’s VideoCapture.set() is the method to use. If you look at the documentation, you will see that there is a property named CV_CAP_PROP_EXPOSURE. It took me some time to discover that depending on the version of OpenCV you are using, the property’s name might actually be CAP_PROP_EXPOSURE.

There is no hint as to what the exposure value should be set to, but luckily for me, I found a mapping for the ELP 2 MP webcam on this page by Joan Charmant. He shows that the exposure values range between -1 and -13. You can programmatically set the exposure in this manner:

vc = cv2.VideoCapture(1)
vc.set(cv2.CAP_PROP_EXPOSURE, -6)  # any value from -1 (brightest) to -13 (darkest); older OpenCV versions use cv2.cv.CV_CAP_PROP_EXPOSURE

Unfortunately, I could not figure out a programmatic way to set the exposure back to auto exposure. If you know how, please add a comment! Be aware that for some webcams, such as this one, the manual exposure setting is stored in on-board memory, which means that even after you close your program or power off the webcam itself, the manual exposure will still be set!

As a workaround, I found a way to bring up the DirectShow property pages so that I could use the DirectShow GUI to set the manual exposure or to turn auto exposure back on.


Here’s the code to launch the DirectShow property page:
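A minimal sketch, assuming a recent OpenCV build with the DirectShow backend (the CAP_PROP_SETTINGS property asks DirectShow to pop up its settings dialog; the constants and the two-argument VideoCapture call are available in OpenCV 3.4 and later):

import cv2

# Open the webcam explicitly through the DirectShow backend (Windows only).
vc = cv2.VideoCapture(1, cv2.CAP_DSHOW)

# Pop up the DirectShow property page. Exposure can be set manually, or
# auto exposure turned back on, right in the dialog.
vc.set(cv2.CAP_PROP_SETTINGS, 1)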


During the Maker Faire, our demonstration area was shaded by a tent for most of the day, but around 2 PM our flying area was part sun and part shade. We delayed the inevitable by moving our table back with the shade, but eventually we had to move our table back to the front of the booth and into the sun. On Saturday, the afternoon was mostly overcast, and the camera’s auto exposure worked most of the time. I was surprised that our blob detection code even worked when people walked in front of our booth and made our flying area partly shaded by their shadows.

Sunday was mostly sunny, and the webcam’s auto exposure did not work when it was very bright. At these times, I opened up the DirectShow property pages and set the exposure manually so that our demo would still work. Maker disaster averted!

Blob Detection With Python and OpenCV

In my previous post, I described how to set up Python and OpenCV on your computer. Now I will show you how to use OpenCV’s computer vision capabilities to detect an object.

OpenCV’s SimpleBlobDetector will be the primary tool that we will be using. With SimpleBlobDetector, you can distinguish blobs in your image based on different parameters such as color, size, and shape.

As an OpenCV novice, I searched Google to help me get started with the Python OpenCV code. You will find that OpenCV is very powerful and extensive, but unfortunately it is not well documented. Some classes and functions are described well, but some just list a method’s parameters with a terse description. I suppose we can’t have everything. On the bright side, there are many tutorials and examples to help you out.

Here are a few tutorials that we found helpful:

  • Blob Detection using OpenCV – a nice brief introduction to SimpleBlobDetector.
  • Ball Tracking with OpenCV – this example is more extensive, and he has a nice animated gif at the top of his page showing the ball tracking in action. We use cv2.inRange() like he does, but we then use a SimpleBlobDetector (configured via SimpleBlobDetector_Params()) instead of findContours().
  • OpenCV’s Python Tutorials Page – I don’t have the patience to go through tutorials when I just need a quick solution, but I did look through a few of the tutorials on this page when the need arose. We based some of our color threshold code on the example shown if you go into the Image Processing in OpenCV section and then to the Changing Colorspaces tutorial.

For our drone flying project, we put four colored paper circles on top of our Cheerson CX-10 mini-drone to make detection simpler.

Drone image taken by webcam

When we were testing out our detection, we took a bunch of jpg photos with our webcam under different conditions and we put them in the ./images directory. In this code example, we loop through the image files and we try to detect the purple circles on our drone for each image.

The full code is up on Github with the rest of the project. Here is the beginning of the code. We set up our import statements, and then we need to undistort the image. For our webcam, the image is distorted around the edges – like a fishbowl effect.
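Here is a minimal sketch of that beginning. The camera matrix and distortion coefficients below are hypothetical placeholders; the real values come from calibrating your own webcam (for example, with cv2.calibrateCamera()):

import glob

import cv2
import numpy as np

# Hypothetical calibration values, for illustration only.
camera_matrix = np.array([[600.0,   0.0, 320.0],
                          [  0.0, 600.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])

# Loop through the test photos in the ./images directory.
for filename in glob.glob('./images/*.jpg'):
    image = cv2.imread(filename)
    # Correct the fishbowl distortion around the edges.
    undistorted = cv2.undistort(image, camera_matrix, dist_coeffs)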

Now to the heart of our code. We run cv2.GaussianBlur() to blur the image, which helps remove noise. The webcam image is in the BGR (Blue Green Red) color space and we need it in HSV (Hue Saturation Value), so the next call is cv2.cvtColor(image, cv2.COLOR_BGR2HSV).
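Continuing the sketch inside the per-image loop (the 11×11 kernel size is an assumption; any modest odd size works):

# Blur to suppress pixel noise; this blurred image is the "frame" we display later.
frame = cv2.GaussianBlur(undistorted, (11, 11), 0)

# Convert from BGR to HSV for color thresholding.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)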

We need to separate the purple circles from the rest of the image. We do this by using cv2.inRange() and passing in the range of HSV values that we want separated out from the image. We had to do some experimentation to get the correct values for our purple circles. We used this range-detector script in the imutils library to help us determine which values to use. Unfortunately, the range of HSV values varies widely under different lighting conditions. For example, if our flying area has a mixture of bright sunlight and dark shadows, then our detection does not work well. We control this by shining bright LED lights over the flying area.
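Continuing the sketch, with hypothetical HSV bounds for our dark purple (your values will differ; find them with a range-detector tool under your own lighting):

# Illustrative HSV bounds for the purple circles.
lower_purple = np.array([115, 50, 40])
upper_purple = np.array([150, 255, 255])

# White where a pixel falls inside the purple range, black elsewhere.
mask = cv2.inRange(hsv, lower_purple, upper_purple)

# Keep only the purple pixels from the blurred frame.
res = cv2.bitwise_and(frame, frame, mask=mask)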

Result of running cv2.inRange() to separate out only the purple pixels

Now we use SimpleBlobDetector to find our blobs and they are saved in keypoints.
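A sketch of the detector setup (the area threshold is an assumption; tune it to the size your circles appear in the image):

# SimpleBlobDetector looks for dark blobs by default, but our mask has
# white circles on a black background, so filter for light blobs instead.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255
params.filterByArea = True
params.minArea = 50

# On OpenCV 2.4, use cv2.SimpleBlobDetector(params) instead.
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(mask)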

If we found more than 4 blobs, then we keep the four largest. We draw green circles around the blobs we found, and we display these four images (sketched in code after the list):

  1. The original image after undistort and Gaussian blur (frame)
  2. The image with the purple circles separated out and shown in white (mask)
  3. The image with the purple circles separated out and shown in their original color (res)
  4. The original image with green circles drawn around the purple circles (im_with_keypoints)
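Continuing the sketch for those last steps:

# Keep the four largest blobs, one per paper circle.
keypoints = sorted(keypoints, key=lambda k: k.size, reverse=True)[:4]

# Draw green circles around the detected blobs; the rich-keypoints flag
# scales each drawn circle to the blob's size.
im_with_keypoints = cv2.drawKeypoints(
    frame, keypoints, np.array([]), (0, 255, 0),
    cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

cv2.imshow('frame', frame)
cv2.imshow('mask', mask)
cv2.imshow('res', res)
cv2.imshow('im_with_keypoints', im_with_keypoints)
cv2.waitKey(0)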
Image after blob detection (im_with_keypoints)

If there are multiple images in the directory, then we go through this whole process for the next image. Now our code can see where our drone is!

Find Out Which Channels You Can Get For Free With an Antenna

There’s a great tool on TV Fool to help you determine which channels you can receive over the air (OTA) at your house. Yes, your house. You type in your address and it will give you a list of channels that you will be able to receive for free with an antenna! It will even show you where the signals are coming from so that you can optimize your signal strength by pointing your antenna in that direction.

Check out TV Fool’s TV Signal Locator

We use a Terk Indoor HD antenna sitting on top of our media cabinet about 8 feet off the ground. We live in a flat suburban area and we are able to get all of the main network channels in HD for free! I love that we can watch the Super Bowl and the Oscars in HD. We get lots of kids channels and even re-runs of The Brady Bunch. My kids have watched almost every episode of this good, wholesome show.

Once you have your HD antenna, take your setup to the next level by adding a DVR. With a DVR, you can record your OTA shows and watch them at your leisure. Our DVR comparison guide is here to help you choose the DVR that is right for you!


How to Set Up Your Python OpenCV Development Environment

For our drone flying project, we needed a way for our computer to detect the location of our mini-drone through the use of a webcam mounted above the flying area. We are not at all familiar with computer vision algorithms, but we do know how to call functions from a Python library! We made use of OpenCV (Open Source Computer Vision), which is available for Python and C++.

For our Python environment, we chose Python(x,y). Python(x,y) is a version of Python developed specifically for scientific calculations and visualizations. If you are a fan of Matlab, then you will feel right at home with Python(x,y).

This is what you need to do to set up a Python(x,y) development environment with OpenCV.

    1. Install the latest revision of the Python(x,y) package. This includes Spyder (Scientific PYthon Development EnviRonment). Download Python(x,y) here.
    2. For the Python(x,y) install, choose Custom install and select the PySerial 2.7-1 component (found under Python). PySerial is needed to communicate with an Arduino.
    3. Optional: We also like to add the WinMerge component (found under Other) when installing Python(x,y), but it is not required.
    4. You will also need to install the opencv2 package. Download opencv2 here.
    5. Unzip the opencv2 package and copy
      opencv\build\python\2.7\x86\cv2.pyd to <python dir>\Lib\site-packages\ where the default Windows location for <python dir> is C:\Python27

Note: If your computer supports it, copy opencv\build\python\2.7\x64\cv2.pyd instead of the x86 version. I first tried the x64 copy, but it did not work for me, so I used the x86 version instead. See below for how to check if OpenCV is loading properly.

Now it’s time to check if your development environment is working. Start Python(x,y) and you will see this window:


Click on the small blue and red Spyder button (the icon looks like a spider web) to start the Spyder IDE. Here is what the Spyder IDE looks like:

Spyder IDE

The bottom right portion of the IDE shows the IPython console. You can run scripts or call Python commands directly in the IPython console.

In the IPython console, type import cv2 and hit enter.

If there is a problem, then you will receive an error, likely an error about “No module named cv2”. If that happens, then check that you copied the OpenCV files to the correct location as described in Step 5 above.

If everything is working, then the console will accept your command and show a prompt for your next command like this:

import cv2
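You can also print the OpenCV version as a quick sanity check that the cv2.pyd you copied is the one being loaded:

import cv2
print(cv2.__version__)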

Hooray, you have successfully set up Python(x,y) and OpenCV! Nothing to it, right? Now let’s see what we can do with OpenCV. Take a look at our post on blob detection with OpenCV.