Computer Vision Extracts Lightning From Footage

Lightning is one of the more mysterious and fascinating phenomena on the planet. It's extremely powerful, yet each strike on average only carries enough energy to power an incandescent bulb for an hour. The exact mechanism that starts a lightning strike is still not well understood, yet it happens 45 times per second somewhere on the planet. While we may not gain a deeper scientific appreciation of lightning anytime soon, we can at least capture it in photographs and video thanks to this project, which leverages computer vision machine learning to pull out the best frames of lightning.

The project’s creator, [Liam], built this as a tool for storm chasers and photographers so they can record hours of footage without having to comb through it manually to pull out the frames with lightning strikes. The project borrows from a similar project, but this one adds Python 3 support and runs on a tiny netbook for easier field deployment. It uses OpenCV for object recognition, takes video files as its source data, and features different modes to recognize different types of lightning.
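[Liam]'s code isn't reproduced here, but the core idea of flagging lightning frames can be sketched in a few lines of OpenCV. This is a minimal, assumed example rather than the project's actual approach: it simply watches for a sudden jump in average frame brightness, and the file name and threshold are placeholders.

```python
# Minimal sketch (not the project's actual code): flag frames whose average
# brightness jumps sharply compared to the previous frame.
import cv2

VIDEO_PATH = "storm.mp4"    # hypothetical input file
SPIKE_THRESHOLD = 15.0      # brightness jump that counts as a flash; tune per footage

cap = cv2.VideoCapture(VIDEO_PATH)
prev_mean = None
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mean_brightness = gray.mean()
    # A lightning strike shows up as a sudden spike in overall brightness
    if prev_mean is not None and mean_brightness - prev_mean > SPIKE_THRESHOLD:
        cv2.imwrite(f"strike_{frame_idx:06d}.png", frame)
    prev_mean = mean_brightness
    frame_idx += 1

cap.release()
```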

The software is free and open source, and releases are available for both Windows and Linux. So far, [Liam] has been able to capture all kinds of atmospheric electrical phenomena with it, including lightning, red sprites, and elves. We don’t see too many projects involving lightning around here, partly because humans can only generate a fraction of the voltage potential needed for the average lightning strike.

Wordle bot

Solving Wordle By Adding Machine Vision To A 3D Printer

Truth be told, we haven’t jumped on the Wordle bandwagon yet, mainly because we don’t need to be provided with yet another diversion — we’re more than capable of finding our own rabbit holes to fall down, thank you very much. But the word puzzle does look intriguing, and since the rules and the interface are pretty simple, it’s no wonder we’ve seen a few efforts like this automated Wordle solver crop up lately.

The goal of Wordle is to find a specific five-letter, more-or-less-common English word in as few guesses as possible. Clues are given at each turn by color-coding the letters of the guess to indicate whether they appear in the word and whether they’re in the right position. [iamflimflam1]’s approach was to mount a Raspberry Pi camera over the bed of a 3D printer and attach a phone stylus in place of the print head. A phone running Wordle is placed on the printer bed, and OpenCV is used to find both the screen of the phone and the phone’s position on the bed. From there, the robot uses the stylus to enter an opening word, analyzes the colors of the boxes, and homes in on a solution.
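The solving half of that loop doesn't need any computer vision at all. As a rough illustration (this is not [iamflimflam1]'s code, and it glosses over Wordle's duplicate-letter edge cases), each round's color feedback can be used to filter a word list down to the remaining candidates:

```python
# Toy illustration of the "homing in" step: keep only the words that are
# consistent with the colored feedback from a guess.
# Feedback string: 'g' = green (right letter, right spot),
#                  'y' = yellow (in the word, wrong spot),
#                  'x' = gray (not in the word).

def consistent(candidate: str, guess: str, feedback: str) -> bool:
    for i, (g, f) in enumerate(zip(guess, feedback)):
        if f == "g" and candidate[i] != g:
            return False
        if f == "y" and (g not in candidate or candidate[i] == g):
            return False
        if f == "x" and g in candidate:
            return False
    return True

words = ["crane", "slate", "pride", "prize"]
# Suppose the bot guessed "crane" and the phone showed gray/green/gray/gray/green
words = [w for w in words if consistent(w, "crane", "xgxxg")]
print(words)  # ['pride', 'prize']
```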

The video below shows the bot in use, and source code is available if you want to try it yourself. If you need a deeper dive into Wordle solving algorithms, and indeed other variant puzzles in the *dle space, check out this recent article on reverse engineering the popular game.

Continue reading “Solving Wordle By Adding Machine Vision To A 3D Printer”

OpenCV Knows Where Your Hand Is

We have to say, [Murtaza]’s example game in his latest video isn’t very exciting. However, the OpenCV technique he uses to track a hand and determine its distance from a single camera is pretty interesting. The demo puts a random button on the screen, and you have to use your hand to press it; the button then moves so you can try again. The hand measurement seems accurate to within a few centimeters, which is good enough for many applications.

The Python code is actually quite straightforward. Essentially, the software tracks your hand and estimates its relative size to determine how far away it is. Of course, your hand might also rotate, and [Murtaza] works through all the cases step-by-step. If we wanted to know a distance, we’d probably turn to ultrasonics or a time-of-flight sensor. The problem is, those sensors can’t tell your hand from anything else that happens to be in front of them. The use of a single camera to track and locate is pretty impressive.
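The distance half of the trick boils down to a calibration curve. As a rough sketch (assumed, not [Murtaza]'s exact code, and the calibration numbers below are made up), you measure the hand's apparent width in pixels at a few known distances, fit a polynomial, and then evaluate it for any new measurement:

```python
# Hedged sketch: map apparent hand width in pixels to distance in cm
# using a calibration curve fitted from hypothetical measurements.
import numpy as np

pixel_widths = np.array([300, 245, 200, 170, 145, 130, 112, 103, 93, 87])
distances_cm = np.array([20, 25, 30, 35, 40, 45, 50, 55, 60, 65])

# A second-degree polynomial captures the perspective falloff well enough
coeffs = np.polyfit(pixel_widths, distances_cm, 2)

def estimate_distance(pixel_width: float) -> float:
    a, b, c = coeffs
    return a * pixel_width ** 2 + b * pixel_width + c

print(round(estimate_distance(150), 1), "cm")  # estimate for a 150 px wide hand
```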

If you haven’t used OpenCV before, the channel has a lot of tutorials and they are all worth watching. Computer vision is a great technique and can replace a lot of things in some applications. GPS, for example. Or, try this creepier tracking application next Halloween.

Continue reading “OpenCV Knows Where Your Hand Is”

Cheat At Cornhole With A Bazillion-Dollar Robot

While the days of outdoor cookouts may be a few months away for most of us, that certainly leaves plenty of time to prepare for the moment. Some may spend their remaining isolation time perfecting recipes or tackling various home improvement projects, while others are practicing their skills at the games played at these events. Specifically, this group from [Dave’s Armory] has trained a robot that helps play the perfect game of cornhole. (Video, embedded below.)

While the robot in question is an industrial-grade KUKA KR-20 robot with a hefty price tag of $32,000 USD, the software and control system that the group built are fairly accessible for most people. The computer vision is handled by an Nvidia Jetson board, a single-board computer with extra parallel computing abilities, which runs OpenCV. With this setup and a custom hand for holding the corn bags, as well as a decent amount of training, the software is easily able to identify the cornhole board and instruct the robot to play a perfect game.
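The group's own detection relies on a trained model, but for a flavor of what board-spotting with OpenCV can look like, here's a much simpler classical-CV sketch (entirely our own assumption, not their code) that hunts for the board's hole as a dark circle of plausible size:

```python
# Simplified, assumed example: find a circular hole in a photo of the board
# with a Hough circle transform, then mark it on the image.
import cv2
import numpy as np

frame = cv2.imread("board.jpg")              # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)               # smooth out wood grain and noise

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=200,
    param1=100, param2=40, minRadius=20, maxRadius=120,
)

if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    cv2.circle(frame, (x, y), r, (0, 255, 0), 3)
    print(f"Hole found at ({x}, {y}), radius {r} px")
    cv2.imwrite("board_annotated.jpg", frame)
```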

While we don’t all have expensive industrial robots sitting around in our junk drawer, the use of OpenCV and an accessible computer might make this project a useful introduction to anyone interested in computer vision, and the group made the code public on their GitHub page. OpenCV can be used for a lot of other things besides robotics as well, such as identifying weeds in a field or using a Raspberry Pi for facial recognition.

Continue reading “Cheat At Cornhole With A Bazillion-Dollar Robot”

TMD-2: A Bigger, Better, More Collaborative Turing Machine

One of the things we love best about the articles we publish on Hackaday is the dynamic that can develop between the hacker and the readers. At its best, the comment section of an article can be a model of collaborative effort, with readers’ ideas and suggestions making their way into version 2.0 of a build.

This collegial dynamic is very much on display with TMD-2, [Michael Gardi]’s latest iteration of his Turing machine demonstrator. We covered the original TMD-1 back in late summer, the idea of which was to serve as a physical embodiment of the Turing machine concept. Briefly, the TMD-1 represented the key “tape and head” concepts of the Turing machine with a console of servo-controlled flip tiles, the state of which was controlled by a three-state, three-symbol finite state machine.

TMD-1

TMD-1 was capable of running simple programs that nicely demonstrated the principles of Turing machines, and it really seemed to catch on with readers. Based on the comments of one reader, [Newspaperman5], [Mike] started thinking bigger and better for TMD-2. He expanded the finite state machine to six states and six symbols, which meant coming up with something more scalable than the Hall-effect sensors and magnetic tiles of TMD-1.

TMD-2 has a camera for computer vision of the state machine tiles

[Mike] opted for optical character recognition using a Raspberry Pi camera along with OpenCV and the Tesseract OCR engine. The original servo-driven tape didn’t scale well either, so it was replaced by a virtual tape shown on a 7″ LCD. The best part of the original, the tile-based FSM, was expanded while keeping that tactile programming experience.
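The OpenCV-plus-Tesseract recipe for reading a tile is short enough to sketch. This is a generic, assumed example rather than [Mike]'s actual code; the file name and character whitelist are placeholders:

```python
# Minimal sketch: clean up a cropped tile image with OpenCV, then let
# Tesseract (via pytesseract) read the single character printed on it.
import cv2
import pytesseract

tile = cv2.imread("tile.png")                       # hypothetical tile crop
gray = cv2.cvtColor(tile, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# --psm 10 tells Tesseract to expect a single character; the whitelist is a
# placeholder set of plausible tile symbols
symbol = pytesseract.image_to_string(
    thresh, config="--psm 10 -c tessedit_char_whitelist=0123456789ABCDEF"
).strip()
print("Tile reads:", symbol)
```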

Hats off to [Mike] for tackling a project with so many technologies that were previously new to him, and for pulling off another great build. And kudos to [Newspaperman5] for the great suggestions that spurred him on.

TOBOT Is Your Tic Tac Toe Opponent With A Bad Attitude

[3dprintedlife] is apparently a little bored. Instead of whiling away the time playing tic tac toe, he built an impressive tic tac toe robot named TOBOT. The robot uses a Raspberry Pi Zero and a Feather to control a two-axis robot arm that can draw the board and make moves using a pen. It also uses a simple computer vision system to look at the board and understand your move, and it has a voice too.
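The vision side of a game like this can be surprisingly simple. As a rough, assumed sketch (not [3dprintedlife]'s code), you can split a top-down photo of the board into a 3×3 grid and call a cell occupied when it contains enough fresh pen strokes; telling X from O is left as an exercise:

```python
# Hedged sketch: detect which cells of a tic tac toe board contain a mark
# by thresholding for "ink" and counting dark pixels in each cell.
import cv2
import numpy as np

board = cv2.imread("board.jpg")                     # hypothetical top-down photo
gray = cv2.cvtColor(board, cv2.COLOR_BGR2GRAY)
_, ink = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

h, w = ink.shape
occupied = []
for row in range(3):
    for col in range(3):
        cell = ink[row * h // 3:(row + 1) * h // 3, col * w // 3:(col + 1) * w // 3]
        # Trim the edges so the grid lines don't count as marks
        my, mx = cell.shape[0] // 6, cell.shape[1] // 6
        inner = cell[my:-my, mx:-mx]
        occupied.append(np.count_nonzero(inner) > 0.05 * inner.size)

print(np.array(occupied).reshape(3, 3))             # True where a mark was drawn
```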

The other thing TOBOT has is a bad attitude. The robot wants to win. Badly. Check out the video below and you’ll see what we mean.

Continue reading “TOBOT Is Your Tic Tac Toe Opponent With A Bad Attitude”

Making A Birthday Party Magical With Smart Wands

Visitors to the Wizarding World of Harry Potter at Universal Studios are able to cast “spells” by waving special interactive wands in the air. Hackers like us understand that there must be some unknown machinations happening behind the scenes to detect how the wands are moving, but for the kids wielding them, it might as well be real magic. So when his son asked to have a Harry Potter themed birthday party, [Adam Thole] decided to try recreating the system used at Universal Studios in his own home.

Components used in the IR streaming camera

The basic idea is that each wand has a reflector in the tip which, coupled with strong IR illumination, makes it glow brightly on camera. This allows for easy gesture recognition using computer vision techniques, all without any active components in the wand itself.

[Adam] notes that you can actually buy the official interactive wands from the Universal Studios online store, and they’d even work with his system, but at $50 USD each they were too expensive to distribute to the guests at the birthday party. His solution was to simply 3D print the wands and put a bit of white prismatic reflective tape on the ends.

With the wands out of the way, he turned his attention to the IR imaging side of the system. His final design is a very impressive 3D printed unit which includes four IR illuminators and a Raspberry Pi Zero with the NoIR camera module. [Adam] notes that his software setup specifically locks the camera at 41 FPS, as that triggers it to use a reduced field of view by essentially “zooming in” on the image. If you don’t request a frame rate higher than 40 FPS, the camera delivers a wider image, which didn’t have any advantage in this particular project.
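In picamera terms, that trick is just a matter of the requested frame rate. The snippet below is an assumed illustration of the idea rather than [Adam]'s configuration:

```python
# Assumed example: asking the Pi camera for more than 40 FPS pushes the
# sensor into a cropped, "zoomed-in" readout mode, per the write-up above.
from picamera import PiCamera

camera = PiCamera(resolution=(640, 480), framerate=41)
camera.start_recording("wands.h264")
camera.wait_recording(10)    # grab ten seconds of IR footage
camera.stop_recording()
```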

The last part of the project was taking the video stream from the IR camera and processing it to detect the bright glow of a wand’s tip. For each frame of the video, the background is first removed, and then any remaining pixel that doesn’t exceed a set brightness level is ignored. The end result is an isolated point of light representing the tip of the wand, which can be fed into OpenCV’s optical flow function to show [Adam] what shape the user was trying to make. From there, his software just needs to match the shape with one of the stock “spells” and execute the appropriate function (such as changing the color of the lights in the room) with Home Assistant.
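A stripped-down version of that per-frame pipeline might look something like the following. This is our own simplified sketch under assumptions (it just collects the brightest foreground point each frame instead of running optical flow), not [Adam]'s code:

```python
# Hedged sketch: remove the static background, threshold for the glowing
# wand tip, and record the tip's position frame by frame to build a path.
import cv2

cap = cv2.VideoCapture("ir_stream.h264")       # hypothetical IR footage
backsub = cv2.createBackgroundSubtractorMOG2()
path = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg_mask = backsub.apply(gray)              # drop everything that isn't moving
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    tip_mask = cv2.bitwise_and(bright, fg_mask)
    if cv2.countNonZero(tip_mask) > 0:
        _, _, _, max_loc = cv2.minMaxLoc(gray, mask=tip_mask)
        path.append(max_loc)                   # (x, y) of the glowing tip

cap.release()
print(f"Tracked {len(path)} tip positions")    # hand this path off to shape matching
```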

Overall, it’s an exceptionally well designed system considering the goal was simply to entertain a group of children for a few hours. We almost feel bad for the other parents in the neighborhood; it’s going to take more than a piñata to impress these kids after [Adam] had them conjuring the Dark Arts at his son’s party.

It turns out there’s considerable overlap between hacker types and those who would like to have magic powers (go figure). [Jennifer Wang] presented her IMU-based magic wand research at the 2018 Hackaday Superconference, and in the past we’ve even seen other wand controlled light systems. If you go all the way back to 2009, we even saw some Disney-funded research into interactive wand attractions for their parks, which seems particularly prescient today.

Continue reading “Making A Birthday Party Magical With Smart Wands”