Stop Motion Animation for the Girl Scout Entertainment Technology Badge

I covered Part 2 of the Entertainment Technology Badge for Junior Girl Scouts at our troop meeting. At a previous meeting, we learned about badge requirements 2 (video game development), 3 (amusement park science), and 4 (special effects). For Part 2, we investigated badge requirements 1 (animation) and 5 (sound) for the Entertainment Technology badge.

Since my troop has also been gearing up for Girl Scout Cookie Season, I combined our cookie sale role play practice with the creation of stop motion animation videos. The girls worked in groups of two to animate a cookie sale scenario using little toys like Playmobil figures or LEGO minifigures. It was fun for the girls to combine learning about animation with cookie sale practice!

Stop motion animation involves taking lots of pictures, moving the things in the scene just a little bit between each photo. It is similar to traditional drawn animation, except that photos are displayed instead of a series of drawings. Animation works because when the images are shown fast enough, your brain blends them together into continuous motion.

We used the Stop Motion Studio app. The basic functionality of the app is free and works great. It does have in-app purchases for more options, but we did not need any of those.

To create the scene, my kids and I looked through their toy bins and found the following:

  • Playmobil figures for the characters
    • I took little strips of green felt and sewed a couple of stitches in one end to make little Girl Scout sashes. I had gold star stickers handy and cut off a few points to stick on as badges.
  • LEGO 1×2 bricks with a 1×2 flat tile on top
    • I printed out tiny photos of the Girl Scout cookie boxes, cut them out, and used double-sided tape to stick them to the front of the “boxes”
  • Roominate walls to create the set and for the cookie booth table
  • Various accessories from Playmobil and LEGO like money and a cell phone.
  • LEGO bricks to make a device holder
LEGO Device Holder – one child asked me how I knew how to make it. I just made it up!

During our troop meeting, we had the girls rotate through cookie sale-related stations and this was one of the stations. I had 4 girls working on stop motion at a time, so I had two devices and two scenes prepared.

For stop motion animation, keeping the device steady and at a fixed distance/angle is important. Otherwise your animation will look like things are jumping around. I used washi tape to tape down our homemade device holder and mark where it was supposed to be located. I did the same for the Roominate walls and floor.

The Stop Motion Studio app is easy to learn. You create a new movie and there are only a few options you need to know about.

  1. Settings: Since we had limited time, I had the girls change the Frames Per Second (FPS) to 2 FPS. Most of the girls’ videos were around 20-30 seconds long which would be 40-60 photos. I wanted to keep the activity fun and not make it painstaking, and 2 FPS worked well.
  2. Camera: This is where you take all those photos! If you take a bad photo (like your hand is showing), keep going and you can delete it later.
  3. Microphone: This is where you can record audio. It will play your video while you are recording so that you can keep your dialogue and the action in sync. This is where the girls practiced the Sound part of the Entertainment Technology badge since it took several tries to get their dialogue and action in sync. They sometimes had to edit their script or add/remove photos to get everything to line up.

From the main edit view, you can delete photos, copy and paste photos, and even select multiple photos to copy and paste. There is also a reverse option if you would like the photos you selected to be shown in reverse.

Stop Motion Studio Main Edit View

To perform an action on a photo, you scroll over to the photo so that it is highlighted in the purple box at the bottom of the screen. Then you tap on the purple box and you will see the following menu:

Crop, Erase, Draw, and Merge all require in-app purchases, but the rest are available in the free version. Copying photos is useful if you need to make part of your movie longer and that part of the scene does not have much movement, such as when the characters are just talking to each other. After you tap Copy, scroll your purple box to where you want to paste the photo. Tap the purple box and then tap Paste. The photo will be pasted right before your current spot in your movie.

Each pair of girls was given a cookie sale scenario to animate:

  1. Customer does not have any cash
  2. Customer is on a diet
  3. Customer is gluten-free
  4. $5 is so expensive
  5. Which is your favorite?
  6. Customer has already purchased Girl Scout cookies
  7. Customer is vegan

Here are a few more ideas:

  1. Customer does not eat cookies
  2. Customer is diabetic
  3. Customer is in a hurry
  4. Customer has someone from whom they purchase cookies
  5. Which flavor would you recommend?
  6. What are you going to do with the money?

I gave the girls a few minutes to write out their scripts and advised them to keep their dialogue short so that they would not need to take as many pictures. I reviewed their scripts, and for any that seemed longer than 20-30 seconds, I timed them and gave advice on how to make them shorter. The girls chose their characters. I had printed a clip-art cookie frame onto card stock (4 frames to a page), and they used this card to write out their movie title and by-line.

They set up their scene and took their photos. We used double-stick tape when a character needed to hold a cookie box. Then they recorded their dialogue. We were a bit pressed for time since we had quite a bit of other cookie business to take care of at the same meeting. They each had about 30 minutes to create their movie. It would have been better if they had 45 minutes to give them time to refine their movie. But 30 minutes was long enough to give them the experience, even if the end product was not their best work.

Here is a compilation of the videos our troop created!

Two of my scouts missed the meeting and here’s the video they made at a later date.

This is the practice video I made while I was trying out the app. I made this video using 5 FPS and over 130 pictures! I had to lay down afterwards. 🙂

Maker Faire Bay Area 2017

The Maker Faire Bay Area is this weekend, May 19-21, 2017, at the San Mateo Event Center. MakeHardware had a booth at the Maker Faire last year, but we have been too busy to run a booth for 2017. We do plan to attend for a day to check out what other folks have been busy making!

If you decide to check out the Maker Faire this weekend, make sure you plan for enough time to get there and back. The Maker Faire is huge and even sets up exhibits in the parking lots at the San Mateo Event Center, so parking onsite is not available. There are shuttle buses and public transit, but be ready for a wait during busy times. The Maker Faire is totally worth the trouble, just be prepared!

Here are a couple of pics from our booth last year.

The MakeHardware booth at Maker Faire Bay Area 2016. Do you see our little drone flying inside the enclosure?
Our booth was in the back corner, but we still had plenty of people come check out our PC-drone flying project!

My favorite areas of the Maker Faire include the cooking (last year I bought some great fermentation tools), the crafts and the kid sections. There are lots of electronics, light sculptures, drones, huge metal sculptures, fire art, and tons of crazy creativity!


Teambuilding with robotic insects!

I manage a team of electrical engineers, and at the end of every year I like to run a team building event.  Last year I bought everyone a mini-drone, and we had a lot of fun flying them around a conference room.  This year, I bought a bunch of remote control insect toys and the activity was to assemble them and then have teams compete in races and other games.

These are the robotic insect competitors. From left to right – a Kamigami Goki robot, the Hexbug Battle Spider, and a Hexbug Fire Ant.

I have a team of nine people, so we divided into 3 teams of 3.  Each team received one Kamigami robot, one Battle Spider, and one Fire Ant.  The first activity was to assemble the robots.

Kamigami Robot
These robots are available in 4 models and can be controlled via Bluetooth from iOS and Android devices. They have light and IR sensors, as well as IR transmitters. The apps for these robots are very well written and let you play various games such as “freeze tag,” or make up your own games.
Hexbug Battle Spider

These little guys move slowly, but they are quite fun to drive around.  The turret on top rotates and lets you “shoot” IR beams at other spiders so you can play a laser tag game.

Hexbug Fire Ant

These little guys are simple and fun to drive.  They move very fast, but are a bit difficult to control, especially on carpets.

The Kamigami is manufactured out of a flat flexible plastic laminate which you have to fold like origami to create the robot’s legs and body. Assembling it took each team about 45 minutes. I wouldn’t say it was “simple” (there is no chance my parents would ever do it), but the online directions were good, and the pieces were precisely cut and fit together very well. For most engineers this will be fun.

The other robots didn’t require any assembly. The one tricky thing is that if you want to run multiple Battle Spiders, you have to make sure that each one is synchronized to a different remote controller channel. There are four possible channels, so you can run up to four spiders at a time. The Fire Ant has only two possible channels, so you can only run two Fire Ants at a time.

For our competitions, we set up a simple U-shaped race course and put out some small cardboard boxes as obstacles.  None of the robots can really climb over any significant obstacle, but just steering them around obstacles or through a narrow space is challenging enough.

Probably the most fun event we had was a Sumo competition where we put all of the robots on a table and then had them try to push each other off, with the winner being the robot that stayed on the longest.  A large number of contestants ran off the edge of the table on their own. The Battle Spiders had a significant edge in this event: they were heavier, gripped the table better, and moved slowly enough that they were less likely to be driven off by mistake.

At the end of the event, everyone got to take home one of the robots.  The winning teams got to choose which robots to take first. A good time was had by all.  Or, at least that’s what they said to me, but I’m the manager, so who knows what they really thought.

I enjoyed the Kamigami robots so much that I bought two for my daughters.  They’re definitely more fun to play with when you have two.


Little House on the Prairie Birthday Party

Little House on the Prairie Party Activities: Milking the "cow," spinning wool, churning butter, and shopping at the General Store.

At MakeHardware, we love hosting elaborate birthday parties for our kids! I was all for having my daughter’s ninth birthday party at our local paint-your-own-pottery studio, but then my daughter suggested a Little House on the Prairie theme and I couldn’t resist! It’s the perfect birthday party theme for makers!

My daughter loves American history and she . . . Continue Reading

The Next Tivo DVR Might Look a Lot Like a Tablo

According to tech blogger Dave Zatz, the next Tivo OTA DVR might have an architecture that is a lot more similar to the Tablo series of OTA DVRs. What this means is that the Tivo DVR named “Mantis” would no longer connect directly to the TV; instead it would “transcode” video to a streaming device such as a Roku, Apple TV, or Amazon Fire, or to a tablet or phone.  The benefit of this approach is that one box can stream to multiple TVs or devices, which can be significantly cheaper in a household with multiple TVs. Previously, multiple-TV households wanting DVR features would need a Tivo Mini for each TV.

This type of device definitely seems like it might appeal to the growing number of households that already have a full array of streaming devices everywhere. As others have noted, streaming services are not perfect – they can be laggy, and are particularly prone to crashing during major live sporting events.  Purchasing something like the Mantis would give them the benefit of lag-free OTA broadcasts and commercial-skipping capability throughout their house.  Hopefully the acquisition of Tivo by Rovi doesn’t delay or interfere with this product launch!

OTA-only households growing

A recent survey has found that the number of OTA-only households in the US has grown from 15% in 2015 to 17% in 2016.  I think that this is a reflection of several factors:

  1. Many households find cable/satellite too expensive for what it offers
  2. Streaming services still cost money, and as this NYTimes article pointed out, there are some drawbacks.
  3. Broadcast TV signals can provide high-quality HD images and in many ways still offer a user experience that is easier to use and has less lag.

Interestingly, the number of households that were Internet streaming also grew from 4% to 6%.  In percentage terms, that is definitely much faster growth, but it’s interesting to consider that the number of OTA-only households is almost three times as large.

For what it’s worth, my household has been OTA-only since 2012, and I’m a big fan of having an OTA DVR and of not paying any subscription fees.  

How to Control Your Drone From a Computer

After reading my post about how I used my computer to fly a Cheerson CX-10 drone, several people have asked me if it is possible to control other drones in a similar way.  It is in fact pretty straightforward, and in some cases you can re-use exactly the same hardware that I used to control the Cheerson CX-10 – the Arduino UNO and the Addicore nRF24L01+ Wireless Kit.

Here are the components:

  • Arduino microcontroller board: Arduino UNO R3 Board Module With DIP ATmega328P (Blue)
  • Nordic Semiconductor 2.4 GHz wireless card: Addicore nRF24L01+ Wireless AddiKit with Socket Adapter Boards and Jumper Wires

It turns out that a large number of toy drones use the same nRF24L01+ compatible RF chips.  The word compatible is necessary because most of them seem to not use the Nordic Semiconductor chipset, but rather something like the XN297 from Panchip. . . . Continue Reading

Microsoft Putting Xbox DVR features on Hold

Back in August 2015, Microsoft announced they would be adding DVR features to the Xbox One.  This was an exciting announcement for many of us, because it meant that DVR, serious gaming, and streaming could be combined into a single piece of hardware. Well, those hopes were ended by the recent announcement that the DVR features are being put “on hold.”

It’s hard not to wonder if Microsoft’s DVR strategy has been influenced by the growth of Sony’s PS Vue service and its “Cloud DVR.”  From a revenue perspective, the attractiveness of the monthly subscription model for streaming must have turned some heads at Microsoft.  I’m guessing that Microsoft will attempt to come out with a streaming service and Cloud DVR to compete head-on with PS Vue rather than a DVR that runs locally.

This announcement doesn’t change the fact that you can still use your Xbox to watch OTA TV if you just buy an antenna and tuner, but you won’t be able to record it.



Manual Exposure vs Auto Exposure for ELP 2 MP USB Camera

For our drone flying project, we have been using the ELP 2 Megapixel USB Camera. The auto exposure on this camera works in most situations, but we found that it does not always adjust to bright sunlight. In preparation for demonstrating our computer-controlled drone at the Maker Faire, I wanted to have a plan in case we were outdoors. It was a good thing too, since we were assigned an outdoor booth next to the Drone Combat arena.

We detect the location of our drone by using blob detection on four paper circles that we have taped to the top of the drone. Originally, we were using a medium green color, but we found that under some lighting conditions, our code would confuse the light blue color on the body of the drone with the green circles. I thought about making our blob detection code more robust, but the Maker Faire was quickly approaching! Instead we decided to make our flying area more controlled. We used white poster board as the background for our flying area and I tested some different colors for the paper circles. Red circles were good, except that our code got confused if one of our hands was reaching into the flying area. Black was not good in dim light. In the end, we decided on a dark purple with a blue undertone.

Testing different circle colors
The winning color: dark purple

OpenCV provides a way to set a webcam’s manual exposure, but there are two problems. The first is that OpenCV is not well-documented. I could find the documentation stating that I should be able to set the exposure value, but it was not at all clear what values to pass! The second problem is that your particular webcam may not support programmatic setting of the exposure. Thus, when your code doesn’t work, it can be difficult to determine if your code is wrong or if your webcam just won’t allow it!

OpenCV’s VideoCapture.set() is the method to use. If you look at the documentation, you will see that there is a property named CV_CAP_PROP_EXPOSURE. It took me some time to discover that depending on the version of OpenCV you are using, the property’s name might actually be CAP_PROP_EXPOSURE.

There is no hint as to what the exposure value should be set to, but luckily for me, I found a mapping for the ELP 2 MP webcam on this page by Joan Charmant. He shows that the exposure values range between -1 and -13. You can programmatically set the exposure in this manner:

vc = cv2.VideoCapture(1)
vc.set(cv2.CAP_PROP_EXPOSURE, -6)  # ELP 2 MP values range from -1 (brightest) to -13 (darkest)

Unfortunately, I could not figure out a programmatic way to set the exposure back to auto exposure. If you know how, please add a comment! Be aware that for some webcams, such as this one, the manual exposure setting is stored in on-board memory, which means that even after you quit your program or power off the webcam, the manual exposure will still be set!

As a workaround, I found a way to bring up the DirectShow property pages so that I could use the DirectShow GUI to set the manual exposure or to turn auto exposure back on.


Here’s the code to launch the DirectShow property page:


During the Maker Faire, our demonstration area was shaded by a tent for most of the day, but around 2 PM our flying area was part sun and part shade. We delayed the inevitable by moving our table back with the shade, but eventually we had to return it to the front of the booth and into the sun. On Saturday, the afternoon was mostly overcast, and the camera’s auto exposure worked most of the time. I was surprised that our blob detection code even worked when people walked in front of our booth and their shadows left our flying area partly shaded.

Sunday was mostly sunny, and the webcam’s auto exposure did not work when it was very bright. At these times, I opened up the DirectShow property pages and set the exposure manually so that our demo would still work. Maker disaster averted!

Blob Detection With Python and OpenCV

In my previous post, I described how to set up Python and OpenCV on your computer. Now I will show you how to use OpenCV’s computer vision capabilities to detect an object.

OpenCV’s SimpleBlobDetector will be the primary function that we will be using. With the SimpleBlobDetector, you can distinguish blobs in your image based on different parameters such as color, size, and shape.

As an OpenCV novice, I searched Google to help me get started with the Python OpenCV code. You will find that OpenCV is very powerful and extensive, but unfortunately it is not well documented. Some classes and functions are described well, but some just list a method’s parameters with a terse description. I suppose we can’t have everything. On the bright side, there are many tutorials and examples to help you out.

Here are a few tutorials that we found helpful:

  • Blob Detection using OpenCV – a nice brief introduction to SimpleBlobDetector.
  • Ball Tracking with OpenCV – this example is more extensive, and he has a nice animated gif at the top of his page showing the ball tracking in action. We use cv2.inRange() like he does, but we then use SimpleBlobDetector_Params() instead of findContours().
  • OpenCV’s Python Tutorials Page – I don’t have the patience to go through tutorials when I just need a quick solution, but I did look through a few of the tutorials on this page when the need arose. We based some of our color threshold code on the example shown if you go into the Image Processing in OpenCV section and then to the Changing Colorspaces tutorial.

For our drone flying project, we put four colored paper circles on top of our Cheerson CX-10 mini-drone to make detection simpler.

Drone image taken by webcam

When we were testing out our detection, we took a bunch of jpg photos with our webcam under different conditions and we put them in the ./images directory. In this code example, we loop through the image files and we try to detect the purple circles on our drone for each image.

The full code is up on Github with the rest of the project.  Here is the beginning of the code. We set up our import statements, and then we need to undistort the image. For our webcam, the image is distorted around the edges – like a fishbowl effect.

Now to the heart of our code. We run cv2.GaussianBlur() to blur the image, which helps remove noise. The webcam image is in the BGR (Blue, Green, Red) color space and we need it in HSV (Hue, Saturation, Value), so the next call is cv2.cvtColor(image, cv2.COLOR_BGR2HSV).

We need to separate the purple circles from the rest of the image. We do this by using cv2.inRange() and passing in the range HSV values that we want separated out from the image. We had to do some experimentation to get the correct values for our purple circles. We used this range-detector script in the imutils library to help us determine which values to use. Unfortunately, the range of HSV values varies widely under different lighting conditions. For example, if our flying area has a mixture of bright sunlight and dark shadows, then our detection does not work well. We control this by shining bright LED lights over the flying area.

Result of running cv2.inRange() to separate out only the purple pixels

Now we use SimpleBlobDetector to find our blobs and they are saved in keypoints.

If we found more than 4 blobs, then we keep the four largest. We draw green circles around the blobs we found and we display these four images:

  1. The original image after undistort and Gaussian blur (frame)
  2. The image with the purple circles separated out and shown in white (mask)
  3. The image with the purple circles separated out and shown in their original color (res)
  4. The original image with green circles drawn around the purple circles (im_with_keypoints)
Image after blob detection (im_with_keypoints)

If there are multiple images in the directory, then we go through this whole process for the next image. Now our code can see where our drone is!