Cheat At Cornhole With A Bazillion-Dollar Robot

While the days of outdoor cookouts may be a few months away for most of us, that leaves plenty of time to prepare for the moment. While some may spend that time perfecting recipes or tackling home improvement projects during their remaining isolation, others are practicing their skills at the various games played at these events. Specifically, the group at [Dave’s Armory] has trained a robot to help play the perfect game of cornhole. (Video, embedded below.)

While the robot in question is an industrial-grade KUKA KR-20 with a hefty price tag of $32,000 USD, the software and control system that the group built are fairly accessible for most people. The computer vision is handled by an Nvidia Jetson board, a single-board computer with extra parallel computing abilities, running OpenCV. With this setup, a custom hand for holding the corn bags, and a decent amount of training, the software is easily able to identify the cornhole board and instruct the robot to play a perfect game.
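The group's published code is the definitive reference, but as a rough illustration of how OpenCV can pick a board out of a camera frame, a minimal color-threshold-and-contour sketch might look something like this (the HSV range, the camera index, and the assumption that the board is the largest matching blob are all placeholders):

```python
import cv2
import numpy as np

# Assumed HSV range for a red-painted board; tune for your board and lighting.
LOWER = np.array([0, 120, 70])
UPPER = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)  # placeholder camera index

while True:
    ok, frame = cap.read()
    if not ok:
        break

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        board = max(contours, key=cv2.contourArea)  # assume the largest blob is the board
        x, y, w, h = cv2.boundingRect(board)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("board", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

From the board's position in the frame, the rest is working out the throw parameters and handing them off to the robot controller.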

While we don’t all have expensive industrial robots sitting around in our junk drawer, the use of OpenCV and an accessible computer might make this project a useful introduction for anyone interested in computer vision, and the group made the code public on their GitHub page. OpenCV can be used for a lot of other things besides robotics as well, such as identifying weeds in a field or using a Raspberry Pi for facial recognition.

Continue reading “Cheat At Cornhole With A Bazillion-Dollar Robot”

TMD-2: A Bigger, Better, More Collaborative Turing Machine

One of the things we love best about the articles we publish on Hackaday is the dynamic that can develop between the hacker and the readers. At its best, the comment section of an article can be a model of collaborative effort, with readers’ ideas and suggestions making their way into version 2.0 of a build.

This collegial dynamic is very much on display with TMD-2, [Michael Gardi]’s latest iteration of his Turing machine demonstrator. We covered the original TMD-1 back in late summer, the idea of which was to serve as a physical embodiment of the Turing machine concept. Briefly, the TMD-1 represented the key “tape and head” concepts of the Turing machine with a console of servo-controlled flip tiles, the state of which was controlled by a three-state, three-symbol finite state machine.

TMD-1

TMD-1 was capable of simple programs that really demonstrated the principles of Turing machines, and it really seemed to catch on with readers. Based on the comments of one reader, [Newspaperman5], [Mike] started thinking bigger and better for TMD-2. He expanded the finite state machine to six states and six symbols, which meant coming up with something more scalable than the Hall-effect sensors and magnetic tiles of TMD-1.

TMD-2 has a camera for computer vision of the state machine tiles

[Mike] opted for optical character recognition using a Raspberry Pi cam along with OpenCV and the Tesseract OCR engine. The original servo-driven tape didn’t scale well either, so that was replaced by a virtual tape displayed on a 7″ LCD display. The best part of the original, the tile-based FSM, was expanded while keeping that tactile programming experience.
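[Mike]'s own code is the place to look for the details, but as a rough sketch of the approach, reading a symbol off a cropped tile image with OpenCV and Tesseract (here via the pytesseract bindings, an assumption about the exact toolchain, with a placeholder symbol set) can be as simple as:

```python
import cv2
import pytesseract

# Load a photo of one state-machine tile (placeholder filename).
img = cv2.imread("tile_crop.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Threshold so the printed symbol stands out cleanly from the tile background.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Ask Tesseract for a single character, restricted to the symbols actually used.
config = "--psm 10 -c tessedit_char_whitelist=012345ABCDEF"  # placeholder whitelist
symbol = pytesseract.image_to_string(binary, config=config).strip()
print("Recognized tile symbol:", symbol)
```

Restricting the character whitelist and running Tesseract in single-character mode goes a long way toward making OCR reliable on small, uniform tiles like these.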

Hats off to [Mike] for tackling a project with so many technologies that were previously new to him, and for pulling off another great build. And kudos to [Newspaperman5] for the great suggestions that spurred him on.

TOBOT Is Your Tic Tac Toe Opponent With A Bad Attitude

[3dprintedlife] is apparently a little bored. Instead of whiling away the time playing tic tac toe, he built an impressive tic tac toe robot named TOBOT. The robot uses a Raspberry Pi Zero and a Feather to control a two-axis robot arm that can draw the board and make moves using a pen. It also uses a simple computer vision system to look at the board to understand your move, and it has a voice too.
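The video doesn't spell out the exact game logic, but a tic tac toe opponent that plays to win is usually built on minimax, which plays a perfect game on a 3x3 board. A minimal sketch, assuming a simple list-of-nine board representation rather than TOBOT's actual code, looks like this:

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`, with 'X' as the robot and 'O' as the human."""
    w = winner(board)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    if " " not in board:
        return 0, None

    best = None
    for i, cell in enumerate(board):
        if cell != " ":
            continue
        board[i] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[i] = " "
        if (best is None
                or (player == "X" and score > best[0])
                or (player == "O" and score < best[0])):
            best = (score, i)
    return best

# Example: the robot ('X') picks its move on an empty board.
print(minimax([" "] * 9, "X"))
```

Paired with the camera reading your move off the paper, that's all it takes for a robot with an attitude to never lose.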

The other thing TOBOT has is a bad attitude. The robot wants to win. Badly. Check out the video below and you’ll see what we mean.

Continue reading “TOBOT Is Your Tic Tac Toe Opponent With A Bad Attitude”

Making A Birthday Party Magical With Smart Wands

Visitors to the Wizarding World of Harry Potter at Universal Studios are able to cast “spells” by waving special interactive wands in the air. Hackers like us understand that there must be some unknown machinations happening behind the scenes to detect how the wands are moving, but for the kids wielding them, it might as well be real magic. So when his son asked to have a Harry Potter themed birthday party, [Adam Thole] decided to try recreating the system used at Universal Studios in his own home.

Components used in the IR streaming camera

The basic idea is that each wand has a reflector in the tip, which, coupled with strong IR illumination, makes the tip glow brightly on camera. This allows for easy gesture recognition using computer vision techniques, all without any active components in the wand itself.

[Adam] notes that you can actually buy the official interactive wands from the Universal Studios online store, and they’d even work with his system, but at $50 USD each they were too expensive to distribute to the guests at the birthday party. His solution was to simply 3D print the wands and put a bit of white prismatic reflective tape on the ends.

With the wands out of the way, he turned his attention to the IR imaging side of the system. His final design is a very impressive 3D printed unit which includes four IR illuminators and a Raspberry Pi Zero with the NoIR camera module. [Adam] notes that his software setup specifically locks the camera at 41 FPS, as that triggers it to use a reduced field of view by essentially “zooming in” on the image. If you don’t request an FPS higher than 40, the camera delivers a wider image, which offered no advantage in this particular project.
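On the Pi side, that framerate trick amounts to a one-line choice when the camera is configured. A minimal sketch using the legacy picamera library (the resolution here is an assumption, not [Adam]'s exact settings) would be:

```python
from picamera import PiCamera
from picamera.array import PiRGBArray

# Requesting more than 40 FPS pushes the camera module into a partial
# field-of-view sensor mode, effectively "zooming in" on the wand tips.
camera = PiCamera(resolution=(640, 480), framerate=41)
raw = PiRGBArray(camera, size=(640, 480))

for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    image = frame.array  # one BGR frame, ready for OpenCV processing
    raw.truncate(0)      # clear the buffer before grabbing the next frame
```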

The last part of the project was taking the video stream from his IR camera and processing it to detect the bright glow of a wand’s tip. For each frame of the video, the background is first removed, and then any remaining pixel that doesn’t exceed a set brightness level is ignored. The end result is an isolated point of light representing the tip of the wand, which can be fed into OpenCV’s optical flow function to show [Adam] what shape the user was trying to make. From there, his software just needs to match the shape with one of the stock “spells” and execute the appropriate function (such as changing the color of the lights in the room) with Home Assistant.
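[Adam]'s code is the authoritative version, but the core of that per-frame pipeline might be sketched in OpenCV like this. Note that where his software hands the isolated points to OpenCV's optical flow routines, this sketch simply collects the tip's centroid each frame, and the threshold values are placeholder assumptions:

```python
import cv2
import numpy as np

backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25,
                                              detectShadows=False)
path = []  # centroid of the wand tip in each frame

cap = cv2.VideoCapture(0)  # placeholder: the IR camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    moving = backsub.apply(gray)                                   # drop the static background
    bright = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)[1]   # keep only the IR glow
    tip = cv2.bitwise_and(moving, bright)

    ys, xs = np.nonzero(tip)
    if len(xs):
        path.append((int(xs.mean()), int(ys.mean())))  # track the tip's centroid

# `path` now traces the gesture and can be matched against the stock spells.
```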

Overall, it’s an exceptionally well designed system considering the goal was simply to entertain a group of children for a few hours. We almost feel bad for the other parents in the neighborhood; it’s going to take more than a piñata to impress these kids after [Adam] had them conjuring the Dark Arts at his son’s party.

It turns out there’s considerable overlap between hacker types and those who would like to have magic powers (go figure). [Jennifer Wang] presented her IMU-based magic wand research at the 2018 Hackaday Superconference, and in the past we’ve even seen other wand controlled light systems. If you go all the way back to 2009, we even saw some Disney-funded research into interactive wand attractions for their parks, which seems particularly prescient today.

Continue reading “Making A Birthday Party Magical With Smart Wands”

Sudo Find Me A Parking Space; Machine Learning Ends Circling The Block

If you live in a bustling city and have anyone over who drives, it can be difficult for them to find parking. Maybe you have an assigned space, but they’re resigned to circling the block with an eagle eye. With those friends in mind, [Adam Geitgey] wrote a Python script that takes the video feed from a web cam and analyzes it frame by frame to figure out when a street parking space opens up. When the glorious moment arrives, he gets a text message via Twilio with a picture of the void.
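The notification side is the easy part. With Twilio's Python helper library, sending a picture message when a space opens up takes only a few lines; the credentials, phone numbers, and image URL below are all placeholders:

```python
from twilio.rest import Client

# Placeholder credentials and numbers from your Twilio console.
client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

client.messages.create(
    to="+15551234567",
    from_="+15557654321",
    body="A parking space just opened up!",
    media_url=["https://example.com/open-space.jpg"],  # snapshot of the empty spot
)
```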

It sounds complicated, but much of the work has already been done. Cars are a popular target for machine learning, so large data sets with cars already exist. [Adam] didn’t have to train a neural network, either; he found a pre-trained Mask R-CNN model with data for 80 common objects like people, animals, and cars.

The model gives a lot of useful info, including a bounding box for each car with pixel coordinates. Since the boxes overlap, there needs to be a way to determine whether there’s really a car in the space, or just the bumpers of other cars. [Adam] used intersection over union to do this, which is conveniently available as a function of the Mask R-CNN model’s library. The function returns a score, so it was just a matter of ignoring low-scoring bounding boxes.
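Intersection over Union is simple enough to write out by hand if your toolkit doesn't provide it. A minimal sketch for two boxes given as (x1, y1, x2, y2) pixel coordinates:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the overlapping rectangle, if any.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detected car that barely overlaps the parking space scores low and is ignored.
print(iou((0, 0, 100, 100), (90, 90, 200, 200)))  # ~0.005
```

Comparing each detected car's box against the known parking-space box and watching for the score to drop is all it takes to declare the spot open.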

[Adam] purposely made the script adaptable. A few changes here and there, and you could be picking up tennis balls with a robotic collector or analyzing human migration patterns on your block in no time. Or change it up and detect all the cars that run the stop sign by your house.

Thanks for the tip, [foamyguy].

Bot Makes Etch A Sketch Art In One Continuous Line

Introduced in 1960 for the princely sum of $2.99 ($25.00 today), Etch A Sketch was to become a standard issue item for the Baby Boomers’ toy box. As enchanting as the toy seems, it’s hard to see why it had staying power: it was hard for young fingers to twirl the knobs, diagonal lines and smooth curves required a concert pianist’s fine motor control, and whatever drawings we managed to make were erased at the slightest jostle of the tablet.

Intent on righting these wrongs, [Sunny Balasubramanian] not only motorized an Etch A Sketch, but he’s also given it a mind of its own in a way. For those unfamiliar with the toy, it’s basically a manual X-Y plotter that drags a stylus across the underside of a glass screen, scraping off a silver powder clinging to the glass to make dark lines.

Replacing the knobs with steppers is straightforward, of course, but driving them is the trick. [Sunny] hooked his up to a Raspberry Pi and wrote some Python code to drive them. The Pi also accepts input image files and processes them for rendering through the plotter, first doing Canny edge detection in OpenCV, then plotting a single path through the largest collection of connected pixels in the image. From there it’s just a matter of spinning the motors to create surprisingly detailed images. Check out the short video below to see it in action.
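The full pipeline is in [Sunny]'s Python code; the image-processing half of it, detecting edges and pulling out the single longest connected outline to trace, can be sketched with OpenCV roughly like so (the filename, thresholds, and motor helper are placeholder assumptions):

```python
import cv2

img = cv2.imread("portrait.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder input image
img = cv2.GaussianBlur(img, (5, 5), 0)

# Canny edge detection; the two thresholds need tuning per image.
edges = cv2.Canny(img, 50, 150)

# Treat each connected run of edge pixels as a candidate path and keep the longest.
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
path = max(contours, key=len).reshape(-1, 2)  # (x, y) points in drawing order

# Each consecutive pair of points becomes a small X/Y move for the steppers.
for (x0, y0), (x1, y1) in zip(path, path[1:]):
    dx, dy = int(x1) - int(x0), int(y1) - int(y0)
    # step_motors(dx, dy)  # hypothetical motor-driving helper
```

Because the stylus can never lift off the glass, restricting the drawing to one continuous path is what makes the results look so clean.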

It’s hardly the first automatic Etch A Sketch we’ve seen – here’s one that automates everything including the shake to erase the drawing. That one cheats a little though, in that it rasters across the screen like a CRT. We really like how this one just does a single path. Pretty clever.

Continue reading “Bot Makes Etch A Sketch Art In One Continuous Line”

Reinforce Happy Faces With Marshmallows And Computer Vision

Bing Crosby famously sang “Just let a smile be your umbrella.” George Carlin, though, said, “Let a smile be your umbrella, and you’ll end up with a face full of rain.” [BebBrabyn] probably agrees more with the former and used a Raspberry Pi with OpenCV to detect a smile, a feature some digital cameras have had for a long time. This project, however, doesn’t take a snapshot. It launches a marshmallow using a motor-driven catapult. We wondered if he originally tried lemon drops until too many people failed to catch them properly.
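OpenCV ships with pre-trained Haar cascades for faces and smiles, so the detection half of a build like this is only a handful of lines. A minimal sketch (marshmallow launcher not included, and the detector parameters are assumptions you'd want to tune) might look like this:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]
        # Stricter parameters on the smile cascade cut down on false positives.
        smiles = smile_cascade.detectMultiScale(face, 1.7, 20)
        if len(smiles):
            print("Smile detected -- fire the catapult!")
            # fire_catapult()  # hypothetical motor-control helper

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```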

This wouldn’t be a bad project for a young person — as seen in the video below — although you might have to work a bit to duplicate it. The catapult was upcycled from a broken kid’s toy. You might have to run to the toy store or rig something up yourself. Perhaps you could 3D print it or replace it with a trebuchet or compressed air.

Continue reading “Reinforce Happy Faces With Marshmallows And Computer Vision”