Making A Birthday Party Magical With Smart Wands

Visitors to the Wizarding World of Harry Potter at Universal Studios are able to cast “spells” by waving special interactive wands in the air. Hackers like us understand that there must be some unknown machinations happening behind the scenes to detect how the wands are moving, but for the kids wielding them, it might as well be real magic. So when his son asked to have a Harry Potter themed birthday party, [Adam Thole] decided to try recreating the system used at Universal Studios in his own home.

Components used in the IR streaming camera

The basic idea is that each wand has a reflector in the tip, which coupled with strong IR illumination makes them glow on camera. This allows for easy gesture recognition using computer vision techniques, all without any active components in the wand itself.

[Adam] notes that you can actually buy the official interactive wands from the Universal Studios online store, and they’d even work with his system, but at $50 USD each they were too expensive to distribute to the guests at the birthday party. His solution was to simply 3D print the wands and put a bit of white prismatic reflective tape on the ends.

With the wands out of the way, he turned his attention to the IR imaging side of the system. His final design is a very impressive 3D printed unit which includes four IR illuminators and a Raspberry Pi Zero with the NoIR camera module. [Adam] notes that his software specifically locks the camera at 41 FPS, since requesting anything above 40 FPS triggers the camera to use a reduced field of view by essentially “zooming in” on the image. At 40 FPS or below, the camera delivers a wider image, which offered no advantage in this particular project.
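
For reference, here’s a minimal sketch of how that mode selection might look with the picamera library; the resolution and exposure settings are illustrative assumptions rather than [Adam]’s exact configuration:

```python
# Requesting more than 40 FPS from the Pi camera forces the cropped,
# "zoomed in" sensor mode described above. Settings here are assumptions.
import picamera

camera = picamera.PiCamera()
camera.resolution = (640, 480)
camera.framerate = 41             # anything above 40 selects the narrow-FOV mode
camera.exposure_mode = 'off'      # fixed exposure keeps the IR reflections stable
camera.start_recording('wands.h264')
camera.wait_recording(10)         # capture ten seconds of wand waving
camera.stop_recording()
```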

The last part of the project was taking the video stream from his IR camera and processing it to detect the bright glow of a wand’s tip. For each frame of the video, the background is first removed, and then any remaining pixel that doesn’t exceed a set brightness level is ignored. The end result is an isolated point of light representing the tip of the wand, which can be fed into OpenCV’s optical flow function to show [Adam] what shape the user was trying to make. From there, his software just needs to match the shape with one of the stock “spells” and execute the appropriate function (such as changing the color of the lights in the room) with Home Assistant.
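
As a rough sketch of that pipeline, with illustrative thresholds rather than [Adam]’s exact values, the OpenCV side might look something like this:

```python
# Background subtraction, brightness thresholding, and Lucas-Kanade optical
# flow to follow the wand tip. All parameter values are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()
prev_gray, prev_tip = None, None
path = []  # accumulated tip positions trace out the gesture shape

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    moving = cv2.bitwise_and(gray, gray, mask=subtractor.apply(frame))
    _, bright = cv2.threshold(moving, 230, 255, cv2.THRESH_BINARY)  # drop dim pixels
    points = cv2.findNonZero(bright)
    if points is not None:
        tip = points.mean(axis=0).astype(np.float32).reshape(-1, 1, 2)
        if prev_gray is not None and prev_tip is not None:
            tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_tip, None)
            if status[0][0] == 1:
                path.append(tracked[0][0])  # one more point along the spell
        prev_tip = tip
    prev_gray = gray
```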

Overall, it’s an exceptionally well designed system considering the goal was simply to entertain a group of children for a few hours. We almost feel bad for the other parents in the neighborhood; it’s going to take more than a piñata to impress these kids after [Adam] had them conjuring the Dark Arts at his son’s party.

It turns out there’s considerable overlap between hacker types and those who would like to have magic powers (go figure). [Jennifer Wang] presented her IMU-based magic wand research at the 2018 Hackaday Superconference, and in the past we’ve even seen other wand controlled light systems. If you go all the way back to 2009, we even saw some Disney-funded research into interactive wand attractions for their parks, which seems particularly prescient today.


Sudo Find Me a Parking Space; Machine Learning Ends Circling the Block

If you live in a bustling city and have anyone over who drives, it can be difficult for them to find parking. Maybe you have an assigned space, but they’re resigned to circling the block with an eagle eye. With those friends in mind, [Adam Geitgey] wrote a Python script that takes the video feed from a webcam and analyzes it frame by frame to figure out when a street parking space opens up. When the glorious moment arrives, he gets a text message via Twilio with a picture of the void.
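
The notification end of things is straightforward with Twilio’s Python helper library; a minimal sketch, with placeholder credentials, numbers, and image URL, looks like this:

```python
# Send an MMS with a snapshot of the open space. All values are placeholders.
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")
client.messages.create(
    to="+15551234567",                                # your friend's phone
    from_="+15557654321",                             # your Twilio number
    body="A parking space just opened up!",
    media_url=["https://example.com/open-space.jpg"]  # picture of the void
)
```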

It sounds complicated, but much of the work has already been done. Cars are a popular target for machine learning, so large data sets with cars already exist. [Adam] didn’t have to train a neural network, either; he found a pre-trained Mask R-CNN model with data for 80 common objects like people, animals, and cars.
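
Pre-trained COCO models are easy to come by these days. As one example, here’s a sketch using torchvision’s Mask R-CNN, which isn’t the exact library [Adam] used but is trained on the same 80 COCO classes; the image path and score cutoff are illustrative:

```python
# Detect cars in a frame with a COCO-pretrained Mask R-CNN.
# In the COCO label map, "car" is class id 3.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = maskrcnn_resnet50_fpn(pretrained=True).eval()
frame = to_tensor(Image.open("street.jpg"))  # hypothetical webcam frame

with torch.no_grad():
    result = model([frame])[0]

car_boxes = [box.tolist()
             for box, label, score in zip(result["boxes"], result["labels"], result["scores"])
             if label == 3 and score > 0.7]
print(car_boxes)  # one [x1, y1, x2, y2] bounding box per detected car
```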

The model gives a lot of useful info, including a bounding box for each car with pixel coordinates. Since the boxes overlap, there needs to be a way to determine whether there’s really a car in the space, or just the bumpers of other cars. [Adam] used intersection over union to do this, which is conveniently available as a function of the Mask R-CNN model’s library. The function returns a score, so it was just a matter of ignoring low-scoring bounding boxes.
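
For the curious, intersection over union is simple enough to write by hand; a minimal version for two [x1, y1, x2, y2] boxes looks like this:

```python
def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes, from 0 to 1."""
    # corners of the overlapping rectangle (if any)
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    intersection = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return intersection / float(area_a + area_b - intersection)

# A parking space whose IoU with every detected car is near zero is probably
# empty; a high score means a car is sitting squarely in the spot.
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 0.142...
```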

[Adam] purposely made the script adaptable. A few changes here and there, and you could be picking up tennis balls with a robotic collector or analyzing human migration patterns on your block in no time. Or change it up and detect all the cars that run the stop sign by your house.

Thanks for the tip, [foamyguy].

Bot Makes Etch A Sketch Art In One Continuous Line

Introduced in 1960 for the princely sum of $2.99 ($25.00 today), Etch A Sketch was to become a standard issue item for the Baby Boomers’ toy box. As enchanting as the toy seems, it’s hard to see why it had staying power: it was hard for young fingers to twirl the knobs, diagonal lines and smooth curves required a concert pianist’s fine motor control, and whatever drawings we managed to make were erased at the slightest jostle of the tablet.

Intent on righting these wrongs, [Sunny Balasubramanian] not only motorized an Etch A Sketch, but he’s also given it a mind of its own in a way. For those unfamiliar with the toy, it’s basically a manual X-Y plotter that drags a stylus across the underside of a glass screen, scraping off a silver powder clinging to the glass to make dark lines. Replacing the knobs with steppers is straightforward, of course, but driving them is the trick. [Sunny] hooked his up to a Raspberry Pi and wrote some Python code to drive them. The Pi also accepts input image files and processes them for rendering through the plotter, first doing Canny edge detection in OpenCV, then plotting a single path through the largest collection of connected pixels in the image. From there it’s just a matter of spinning the motors to create surprisingly detailed images. Check out the short video below to see it in action.
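
A rough sketch of that image-processing front end, with illustrative Canny thresholds and assuming OpenCV 4’s findContours signature, could be as short as this:

```python
# Edge-detect the input image, then take the longest connected contour as
# the single continuous path for the plotter to trace.
import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)  # thresholds are illustrative

contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
path = max(contours, key=len)  # largest collection of connected pixels

for (x, y) in path.reshape(-1, 2):
    pass  # here each point would become step counts for the two knob motors
```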

It’s hardly the first automatic Etch A Sketch we’ve seen – here’s one that automates everything including the shake to erase the drawing. That one cheats a little though, in that it rasters across the screen like a CRT. We really like how this one just does a single path. Pretty clever.


Reinforce Happy Faces With Marshmallows And Computer Vision

Bing Crosby famously sang “Just let a smile be your umbrella.” George Carlin, though, said, “Let a smile be your umbrella, and you’ll end up with a face full of rain.” [BebBrabyn] probably agrees more with the former and used a Raspberry Pi with OpenCV to detect a smile, a feature some digital cameras have had for a long time. This project, however, doesn’t take a snapshot. It launches a marshmallow using a motor-driven catapult. We wondered if he originally tried lemon drops until too many people failed to catch them properly.
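
OpenCV ships stock Haar cascades for exactly this job, so the detection half can be sketched in a few lines; fire_catapult() below is a hypothetical stand-in for the motor-driver code:

```python
# Detect a face, then look for a smile inside it; on a hit, launch.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]  # only search for smiles within a face
        if len(smile_cascade.detectMultiScale(roi, 1.7, 22)) > 0:
            print("Smile detected, marshmallow away!")
            # fire_catapult()  # hypothetical: pulse the catapult's motor
```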

This wouldn’t be a bad project for a young person — as seen in the video below — although you might have to work a bit to duplicate it. The catapult was upcycled from a broken kid’s toy. You might have to run to the toy store or rig something up yourself. Perhaps you could 3D print it or replace it with a trebuchet or compressed air.


Object Detection, With TensorFlow

Getting computers to recognize objects has been a historically difficult problem in computer science, but with the rise of machine learning it is becoming easier to solve. One of the tools that can be put to work in object recognition is an open source library called TensorFlow, which [Evan] aka [Edje Electronics] has put to work for exactly this purpose.

His object recognition software runs on a Raspberry Pi equipped with a webcam, and also makes use of OpenCV. [Evan] notes that this opens up a lot of creative low-cost detection applications for the Pi, such as setting up a camera that detects when a pet is waiting at the door to be let inside or outside, counting the number of bees entering and exiting a beehive, or monitoring parking spaces at an office.
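
In the spirit of the tutorial, a condensed sketch of running a frozen TF1-era detection graph looks something like this; the tensor names follow the standard TensorFlow Object Detection API conventions, and the model path is a placeholder:

```python
# Run one webcam frame through a frozen object detection graph.
import cv2
import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:  # placeholder path
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

ok, frame = cv2.VideoCapture(0).read()
with tf.Session(graph=graph) as sess:
    boxes, scores, classes = sess.run(
        [graph.get_tensor_by_name(name) for name in
         ("detection_boxes:0", "detection_scores:0", "detection_classes:0")],
        feed_dict={graph.get_tensor_by_name("image_tensor:0"):
                   np.expand_dims(frame, axis=0)})
# boxes, scores, and classes describe everything the model saw in the frame
```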

This project uses a number of other toolkits as well, including Protobuf. It also makes extensive use of Python scripts, but if you’re comfortable with that and you have an application for computer vision, [Evan]’s tutorial will get you started.


Digital Logging Of Analog Instruments

Useful data is data you can log digitally, but a surprising number of gauges and meters are still analog. The correct solution to digitizing various pressure gauges, electric meters, and any other analog gauge is obviously to replace the offending dial with a digital sensor and display. This isn’t always possible, so for [Egar] and [ivodopiviz]’s Hackaday Prize entry, they’re coming up with a way to convert these old analog gauges to digital using a Raspberry Pi and a bit of computer vision.

The idea behind this instrument digitizer isn’t to replace the mechanics and electronics, as we are so often wont to do. Instead, this team is using a 3D printed bracket that mounts a Raspberry Pi and camera directly in front of an analog gauge. Combine this contraption with OpenCV, and you have a device that’s just smart enough to look at a needle on a dial, convert that to a number, and save it to a file or send it out over WiFi.
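
One plausible way to read a needle with OpenCV is to edge-detect the gauge face, pull out the longest line with a Hough transform, and map its angle onto the dial’s scale; the calibration below is a made-up example, not the team’s actual numbers:

```python
# Find the needle as the longest Hough line, then convert its angle to a value.
import cv2
import numpy as np

img = cv2.imread("gauge.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=5)

# assume the longest detected segment is the needle
x1, y1, x2, y2 = max(lines[:, 0], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
angle = np.degrees(np.arctan2(y1 - y2, x2 - x1))  # image y runs downward

# example calibration: a 270-degree sweep mapping -45..225 degrees to 0..100 PSI
reading = (angle + 45) / 270 * 100
print(f"Gauge reads about {reading:.1f} PSI")
```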

It’s an extremely simple device for what [Egar] and [ivodopiviz] admit is a relatively niche application. However, if you only need digital measurements of an analog meter for a month or so, or you don’t want to mess up your steampunk decor, it’s an ingenious build.


Creepy delta bot follows your every move


The creation you see above is the work of art student [Daniel Bertner] who is wrapping up his Bachelor of Fine Arts degree at the School of the Art Institute of Chicago. He calls the incredibly intriguing, yet somewhat disturbing device “TIM”, which is short for Tracking Interactive Mechanism.

A culmination of different projects he has tinkered with over the last year or so, TIM is an interactive delta bot with an attitude. Mounted on the wall of the Art Institute’s Sullivan Galleries, TIM is as interested in you as you are in it. While passersby investigate the curious device, it watches them back, following their every movement.

The robot’s motors are controlled using an Arduino, and its ability to track people standing nearby is provided via a video stream processed with OpenCV.
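
The glue between those two halves can be quite thin; a sketch of the vision side pushing target coordinates to the Arduino over serial, with a made-up message format since [Daniel]’s actual protocol isn’t documented, might look like this:

```python
# Track the nearest face and stream its center point to the Arduino.
import cv2
import serial

arduino = serial.Serial("/dev/ttyUSB0", 9600)  # port and baud are assumptions
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        # send "x,y" of the target's center for the delta bot to chase
        arduino.write(f"{x + w // 2},{y + h // 2}\n".encode())
```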

It really is a cool project, and we think it would make for an awesome prop in some sci-fi horror flick. Check out the video below to see TIM’s personality in action – he doesn’t like it when people stand too close!
