CV Based Barking Dog Keeps Home Secure, Doesn’t Need Walking

Meet [Tanner]. [Tanner] is a hacker who also appreciates the security of their home while they’re out of town. After doing some research about home security, they found that it doesn’t take much to keep a house from being broken into. Truly determined burglars are admittedly harder to deter. But for the opportunistic types who don’t like having their appendages treated like a chew toy or their face on the local news, the stakes are lower. All it might take is a security camera or two, or a big barking dog, to send them on their way. Rather than running to the local animal shelter, [Tanner] used parts that were already sitting around to create a solution to the problem: a computer vision triggered virtual dog.

Continue reading “CV Based Barking Dog Keeps Home Secure, Doesn’t Need Walking”

Computer Vision Extracts Lightning From Footage

Lightning is one of the more mysterious and fascinating phenomena on the planet. It is extremely powerful, yet on average each strike only carries enough energy to power an incandescent bulb for an hour. The exact mechanism that starts a lightning strike is still not well understood, and yet it happens 45 times per second somewhere on the planet. While we may not gain a deeper scientific understanding of lightning anytime soon, we can capture it in photographs thanks to this project, which leverages computer vision machine learning to pull out the best frames of lightning.

The project’s creator, [Liam], built this as a tool for storm chasers and photographers so they can record long stretches of footage and not have to scrub through it manually to pull out the frames with lightning strikes. The project borrows from a similar one, but adds Python 3 support and runs on a tiny netbook for easier field deployment. It uses OpenCV for object recognition, takes video files as the source data, and features different modes to recognize different types of lightning.
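[Liam]’s actual detection modes are more involved than this, but the core idea, flagging frames whose brightness jumps sharply against a running background, fits in a few lines of Python with OpenCV. This is our own minimal sketch, not code from the project, and the file name and threshold are placeholders:

```python
# Minimal sketch of lightning-frame extraction with OpenCV.
# Assumption: a strike shows up as a sudden jump in overall frame
# brightness versus a slowly updated running background.
import cv2

def extract_bright_frames(video_path, out_prefix="strike", threshold=25.0):
    cap = cv2.VideoCapture(video_path)
    background = None
    saved = 0
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype("float32")
        if background is None:
            background = gray.copy()
        # Mean absolute brightness change against the running background
        delta = cv2.absdiff(gray, background).mean()
        if delta > threshold:
            cv2.imwrite(f"{out_prefix}_{frame_idx:06d}.png", frame)
            saved += 1
        # Exponential moving average so slow lighting changes are ignored
        cv2.accumulateWeighted(gray, background, 0.05)
        frame_idx += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_bright_frames("storm.mp4"), "candidate frames saved")
```

A field-ready version has to cope with headlights, camera shake, and dawn-to-dusk lighting drift, which is presumably where the project’s different recognition modes earn their keep.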

The software is free and open source, and releases are supported for both Windows and Linux. So far, [Liam] has been able to capture all kinds of atmospheric electrical phenomena with it, including lightning, red sprites, and elves. We don’t see too many projects involving lightning around here, partly because humans can only generate a fraction of the voltage potential needed for the average lightning strike.

Performing Magic With A Little High-Tech Help

Card magic takes a lot of precise dexterity, along with knowing exactly which card is where. For plenty of tricks, that means tracking and controlling a single card or a small number of cards. But knowing the exact position of every single card in the deck could certainly be helpful, so the Nettle Magic Project was created to allow magicians to easily identify the location of cards in the deck.

The system works through the use of computer vision to identify a series of marks on the short edge of a stack of cards. The marks can be printed in IR- or UV-sensitive ink to make them virtually invisible, but for demonstration these use regular black ink. Each card has landmarks printed on either side of a set of bit markers which identify the cards. A computer is able to quickly read the marks and identify each card in order while the deck is still stacked, aiding the magician in whichever trick they need to perform.
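The Nettle Magic Project’s encoding is its own, but the general scheme, landmarks bracketing a row of bit cells that read out as a binary ID, is easy to illustrate. Here is a toy decoder of our own devising; the cell layout, widths, and threshold are all assumptions, not the project’s actual format:

```python
# Toy illustration of reading edge-printed bit markers (not the Nettle
# Magic Project's real encoding). Assumes a deskewed grayscale strip of
# one card edge: dark landmark blocks at both ends, bit cells in between.
import numpy as np

def decode_card_edge(strip, n_bits=6, dark_threshold=128):
    """strip: 1-D array of pixel intensities across the card edge."""
    dark = strip < dark_threshold
    dark_positions = np.flatnonzero(dark)
    if dark_positions.size == 0:
        return None  # no ink found at all
    # Landmarks are the first and last dark runs; bits live between them.
    start, end = dark_positions[0], dark_positions[-1]
    inner = dark[start:end + 1]
    # Split the span into equal-width cells and call a cell "1" if most
    # of its pixels are inked; drop the two landmark cells at the ends.
    cells = np.array_split(inner, n_bits + 2)[1:-1]
    bits = [int(cell.mean() > 0.5) for cell in cells]
    return int("".join(map(str, bits)), 2)  # card ID, 0..2**n_bits - 1

if __name__ == "__main__":
    cell = lambda bit: [0] * 4 if bit else [255] * 4  # 4-pixel-wide cells
    marks = sum((cell(b) for b in [1, 0, 1, 0, 1, 0]), [])
    edge = np.array(cell(1) + marks + cell(1))  # landmark, bits, landmark
    print(decode_card_edge(edge))  # -> 42, i.e. 0b101010
```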

The software only runs on various Apple devices right now, including iPhones and iPads, but it is readily available for experimentation if you are a magician looking to try something like this out. Honestly, we don’t see too many builds focusing on magic, sleight-of-hand or otherwise, and we had to go back over a decade to find a couple of custom magical builds from a magician named [Mario].

Thanks to [Tim] for the tip!

Another Rubik’s Cube Robot Is Simple But Slow

[AndreaFavero] says that the CuboTino emphasizes simplicity and cost savings over speed. However, solving the puzzle in about 90 seconds is still better than we can do. The plucky solver uses a Pi and a camera to understand what the cube looks like, then runs the scanned state through a solver to determine which moves to make.
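The write-up doesn’t say which solving algorithm CuboTino uses, so treat this as a generic sketch: the common pattern is to scan the facelets with the camera, hand a 54-character state string to a solver such as the third-party kociemba Python package, and translate the returned move sequence into robot motions:

```python
# Sketch of the "solve" step only; the camera-scanning half is omitted.
# Uses the kociemba package (pip install kociemba). Whether CuboTino uses
# this particular solver is our assumption, not a detail from the write-up.
import kociemba

# 54-character facelet string in URFDLB face order, normally produced by
# the vision step. This one is a solved cube with a single U turn applied.
state = ("UUUUUUUUU" "BBBRRRRRR" "RRRFFFFFF"
         "DDDDDDDDD" "FFFLLLLLL" "LLLBBBBBB")

print(kociemba.solve(state))  # -> "U'"
```

Each token in the result then has to be reduced to the machine’s two primitives: tipping the cube and twisting a layer.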

Watching the video below, we were impressed with the mechanics. The tilted surface solves a few problems and makes manipulation easier. The way the mechanics are arranged, it only takes a pair of servos to flip the cube around as you like. Continue reading “Another Rubik’s Cube Robot Is Simple But Slow”

The Unique Challenges Of Aerial Robotics

When we think of robotics, the first thing that usually comes to mind for many of us is some sort of industrial arm that’s bolted to the floor, or perhaps a semi-autonomous rover trudging its way across the dusty Martian landscape. While these two environments are about as different as can be, the basic “rules” are pretty much the same. Being on firm ground gives the robot a clear understanding of its position and orientation, which greatly simplifies tasks such as avoiding collisions or interacting with nearby objects.

But what happens when that reference point goes away? How does a robot navigate when it’s flying through open space or hovering in mid-air? That’s just one of the problems that fascinate Nick Rehm, who stopped by to host this week’s Aerial Robotics Hack Chat to talk about his passion for flying robots. He’s currently an aerospace engineer at the Johns Hopkins Applied Physics Laboratory, where he works on the unique challenges faced by autonomous flying vehicles, such as the detection and avoidance of mid-air collisions and the development of vertical take-off and landing (VTOL) systems. But before he had his Master’s in Aerospace Engineering and Rotorcraft, he got started the same way many of us did: by playing around with DIY projects.

In fact, regular Hackaday readers will likely recall seeing some of his impressive builds. His autonomous ekranoplan designed to follow a target using computer vision graced the front page in April. Back in 2020, we took a look at his recreation of SpaceX’s Starship prototype, which used a realistic arrangement of control surfaces and vectored thrust to perform the spacecraft’s signature “Belly Flop” maneuver — albeit with RC motors and propellers instead of rocket engines. But even before that, Nick recalls asking his mother for permission to pull apart a Wii controller so he could use its inertial measurement unit (IMU) in a wooden-framed tricopter he was working on.

Discussing some of these hobby builds leads the Chat towards Nick’s dRehmFlight project, a GPLv3 licensed flight control package that can run on relatively low-cost hardware, namely a Teensy 4.0 microcontroller paired with the GY-521 MPU6050 IMU. The project is designed to let hobbyists easily experiment with VTOL craft, specifically those that transition between vertical and horizontal flight profiles, and has powered the bulk of Nick’s own flying craft.
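dRehmFlight itself is written for the Teensy in C++, but the heart of any such controller is turning raw IMU readings into an attitude estimate. As a language-agnostic illustration (this is our sketch, not the project’s code), a complementary filter fuses the MPU6050’s gyro and accelerometer like so:

```python
# Minimal complementary-filter attitude estimator, the kind of thing a
# hobby flight controller runs every loop. Generic illustration in Python,
# not dRehmFlight's actual (Teensy C++) implementation.
import math

def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
    """Fuse gyro rates (rad/s) with the accelerometer's gravity direction.

    gyro:  (gx, gy, gz) angular rates from the IMU
    accel: (ax, ay, az) specific force, roughly gravity when not accelerating
    """
    gx, gy, _ = gyro
    ax, ay, az = accel
    # Integrating gyro rates is smooth and fast, but drifts over time.
    roll_gyro = roll + gx * dt
    pitch_gyro = pitch + gy * dt
    # Accelerometer tilt is noisy, but drift-free on average.
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.hypot(ay, az))
    # Blend: trust the gyro short-term, the accelerometer long-term.
    roll = alpha * roll_gyro + (1 - alpha) * roll_acc
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
    return roll, pitch

# Toy usage: a level, stationary IMU sampled at 500 Hz for one second.
r = p = 0.0
for _ in range(500):
    r, p = complementary_filter(r, p, (0.01, 0.0, 0.0), (0.0, 0.0, 9.81), 0.002)
print(math.degrees(r), math.degrees(p))  # small residual roll from gyro bias
```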

Moving on to more technical questions, Nick says one of the most difficult aspects of designing an autonomous flying vehicle is getting your constraints nailed down. What he means by that is having a clear goal of what the craft needs to do, and critically, how long it needs to do it. How far does the craft need to be able to fly? How fast? Does it need to loiter at the target location, and if so, for how long? The answers to these questions will largely dictate the form of the final vehicle, and are key to determining if it’s worth implementing the complexity of transitioning from VTOL to fixed-wing horizontal flight.

But according to Nick, the biggest challenge in aerial robotics is onboard state estimation: the craft’s ability to know its position and orientation relative to the ground. While high-performance computers have gotten lighter and sensors have improved, he says there’s still no substitute for a ground-based tracking system. He mentions that those fancy demonstrations you’ve seen of drones flying in formation and working collaboratively on a task almost certainly have an array of motion capture cameras tucked off to the side. This makes for an impressive show, but greatly limits the practical application of these drone swarms.

Nick’s custom Raspberry Pi 4-powered quadcopter lets him test autonomous flight techniques.

So what does the future of aerial robotics look like? Nick says open source projects like ArduPilot and PX4 are still great choices for hobbyists, but sees promise in newer platforms which pair the traditional autopilot with more onboard computing power, such as Auterion’s Skynode. More powerful flight controllers can enable techniques such as simultaneous localization and mapping (SLAM), which uses 3D scans of the environment to help the robot orient itself. He’s also very interested in technologies that enable autonomous flight in GPS-denied environments, which is critical for robotic craft that need to operate indoors or in situations where satellite navigation is unavailable or unreliable. In light of the incredible success of NASA’s Ingenuity helicopter, we imagine these techniques will also play an invaluable role in the future airborne exploration of Mars.

We want to thank Nick for hosting this week’s Aerial Robotics Hack Chat, which turned out to be one of the fastest hours in recent memory. His experience as both an avid hobbyist and a professional in the field provided exactly the sort of insight the Hackaday community looks for, and his gracious offer to keep in touch with several of those who attended the Chat to further discuss their projects speaks to how passionate he is about this topic. We expect to see great things from Nick going forward, and would love to have him join us again in the future to see what he’s been up to.


The Hack Chat is a weekly online chat session hosted by leading experts from all corners of the hardware hacking universe. It’s a great way for hackers to connect in a fun and informal way, but if you can’t make it live, these overview posts as well as the transcripts posted to Hackaday.io make sure you don’t miss out.

Automatic Water Turret Keeps Grass Watered

Summer is rapidly approaching (at least for those of us living in the Northern Hemisphere), and if you have a lawn to maintain at your home, now is the time to be thinking about irrigation. Plenty of people have built-in sprinkler systems to care for their turf, but these are little (if any) fun for children who might like to play in them. This sprinkler solves that problem, functioning as an automatic water gun turret for anyone passing by.

This project was less a purpose-built sprinkler and more a way to reuse some Khadas VIM3 single-board computers that the project’s creator, [Neil], wanted to use for something other than mining crypto. The boards have a neural processing unit (NPU) in them, which makes them ideal for computer vision projects like this. The camera input is fed into the NPU, which then directs the turret to the correct position using yaw and pitch drivers. It’s built out of mostly aluminum extrusion and 3D printed parts, and the project’s page goes into great detail about all of the parts needed if you are interested in replicating the build.
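[Neil]’s page has the real details; as a rough sketch of the tracking step, the detector’s bounding box gets turned into proportional yaw and pitch corrections. The field-of-view figures and gain here are placeholder assumptions of ours, not values from the build:

```python
# Sketch of mapping a detected bounding box to turret yaw/pitch commands.
# Field-of-view angles and gain are placeholder assumptions; the real
# project drives its axes through dedicated yaw and pitch drivers.
FRAME_W, FRAME_H = 1280, 720
HFOV_DEG, VFOV_DEG = 70.0, 43.0   # assumed camera field of view
GAIN = 0.5                         # proportional gain, tune to taste

def track(bbox, yaw_deg, pitch_deg):
    """bbox: (x, y, w, h) from the detector; returns new turret angles."""
    x, y, w, h = bbox
    # Target offset from frame center, as a fraction of half the frame
    err_x = (x + w / 2 - FRAME_W / 2) / (FRAME_W / 2)
    err_y = (y + h / 2 - FRAME_H / 2) / (FRAME_H / 2)
    # Convert to degrees and nudge the turret proportionally
    yaw_deg += GAIN * err_x * (HFOV_DEG / 2)
    pitch_deg -= GAIN * err_y * (VFOV_DEG / 2)  # image y grows downward
    return yaw_deg, pitch_deg

# e.g. a person detected left of center pulls the turret left:
print(track((200, 300, 100, 200), yaw_deg=0.0, pitch_deg=0.0))
```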

[Neil] is also actively working on improving the project, especially around the turret’s ability to identify and track objects using OpenCV. We certainly look forward to more versions of this build in the future, and in the meantime be sure to check out some other automated sprinkler builds we’ve seen which solve different problems.

Continue reading “Automatic Water Turret Keeps Grass Watered”

OpenCV Running On A Tiny Microcontroller

At first blush, it might seem like projects that make extensive use of computer vision or machine learning would need to be based on powerful computing platforms with plenty of clock cycles and memory to handle this type of application. While there is some truth to this, as the field progresses it becomes possible to experiment with these tools on low-power devices as well. Take this OpenCV project which is built entirely on an ESP32 for example.

With that being said, there are some modifications that need to be made to the ESP32 in order to use OpenCV in any meaningful way. The most important of these is the use of the ESP32-DOWDQ6 module, which increases the available memory of the ESP32 to allow it to make better use of camera functions. Even then, the ESP32 can’t run the entire OpenCV application, so a shrunken version of OpenCV is required before the device can run it natively. Once those two obstacles are out of the way, though, doing things like edge detection, as this project demonstrates, is well within the realm of possibility.
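The ESP32 port itself is C++, but the edge-detection demo maps directly onto stock OpenCV calls. For reference, the equivalent pipeline in desktop Python (camera index and Canny thresholds are our assumptions) is only a dozen lines:

```python
# Desktop-Python equivalent of an edge-detection demo; the ESP32 build
# does the same thing in C++ against a trimmed-down OpenCV.
import cv2

cap = cv2.VideoCapture(0)          # first attached camera (assumed index)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)          # hysteresis thresholds
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```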

If running OpenCV on something as small as an ESP32 is possible, it is even easier to run on something orders of magnitude more powerful and yet still inexpensive, such as the Raspberry Pi. While the project’s code is available on its GitHub page for those interested, there are plenty of other OpenCV projects that we have featured on more powerful platforms as well, like this clock which falls off of the wall whenever someone looks at it.

Continue reading “OpenCV Running On A Tiny Microcontroller”