Automatic Water Turret Keeps Grass Watered

Summer is rapidly approaching (at least for those of us living in the Northern Hemisphere), and if you have a lawn to maintain at your home, now is the time to be thinking about irrigation. Plenty of people have built-in sprinkler systems to care for their turf, but these offer little (if any) fun for any children that might like to play in the spray. This sprinkler solves that problem, functioning as an automatic water gun turret aimed at anyone passing by.

This project was less a specific sprinkler build and more of a way to reuse some Khadas VIM3 single-board computers that the project’s creator, [Neil], wanted to use for something other than mining crypto. The boards have a neural processing unit (NPU) in them, which makes them ideal for computer vision projects like this. The camera input is fed into the NPU, which then directs the turret to the correct position using yaw and pitch drivers. It’s built out of mostly aluminum extrusion and 3D printed parts, and the project’s page goes into great detail about all of the parts needed if you are interested in replicating the build.
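For the curious, the core of that camera-to-turret step is just mapping a detection’s position in the frame to a pair of angles. Here’s a minimal Python sketch of the idea, where detect_person, set_yaw, and set_pitch are hypothetical stand-ins for the project’s NPU detector and motor drivers, and the field-of-view numbers are assumptions:

```python
import cv2

# Hypothetical stand-ins for the project's detector and turret drivers
from my_turret import detect_person, set_yaw, set_pitch  # not the project's real API

H_FOV_DEG = 70.0   # assumed horizontal field of view of the camera
V_FOV_DEG = 50.0   # assumed vertical field of view

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    box = detect_person(frame)          # returns (x, y, box_w, box_h) or None
    if box is None:
        continue
    x, y, bw, bh = box
    # Offset of the target's center from the frame center, normalized to [-0.5, 0.5]
    dx = (x + bw / 2 - w / 2) / w
    dy = (y + bh / 2 - h / 2) / h
    # Convert the normalized offsets into turret angle corrections
    set_yaw(dx * H_FOV_DEG)
    set_pitch(-dy * V_FOV_DEG)
```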

[Neil] is also actively working on improving the project, especially around the turret’s ability to identify and track objects using OpenCV. We certainly look forward to more versions of this build in the future, and in the meantime be sure to check out some other automated sprinkler builds we’ve seen which solve different problems.

Continue reading “Automatic Water Turret Keeps Grass Watered”

OpenCV Running On A Tiny Microcontroller

At first blush, it might seem like projects that make extensive use of computer vision or machine learning would need to be based on powerful computing platforms with plenty of clock cycles and memory to handle this type of application. While there is some truth to this, as the field progresses it becomes possible to experiment with these tools on low-power devices as well. Take, for example, this OpenCV project built entirely on an ESP32.

With that being said, there are some modifications that need to be made to the ESP32 in order to use OpenCV in any meaningful way. The most important of these is the use of the ESP32-D0WDQ6 module, which increases the memory available to the ESP32 so it can make better use of camera functions. Even then, the ESP32 can’t run the full OpenCV application, so a shrunken version of OpenCV is required before the device can run it natively. Once those two obstacles are out of the way, though, doing things like edge detection, as this project demonstrates, is well within the realm of possibility.
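The edge detection itself is the standard OpenCV recipe; the build runs a trimmed-down OpenCV in C++ on the ESP32, but the same couple of calls are easier to show in desktop Python. A minimal sketch (file names are placeholders):

```python
import cv2

# Read a frame (on the ESP32 this would come from the camera driver instead)
img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)

# Blur first to suppress sensor noise, then run the Canny edge detector
blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

cv2.imwrite("edges.jpg", edges)
```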

If running OpenCV on something as small as an ESP32 is possible, it is even easier to run it on something orders of magnitude more powerful yet still inexpensive, such as a Raspberry Pi. The project’s code is available on its GitHub page for those interested, and there are plenty of other OpenCV projects we have featured on more powerful platforms as well, like this clock which falls off the wall whenever someone looks at it.

Continue reading “OpenCV Running On A Tiny Microcontroller”

Need A Snack From Across Town? Send Spot!

[Dave Niewinski] clearly knows a thing or two about robots, judging from his YouTube channel. Usually the projects involve robot arms mounted on some sort of wheeled platform, but this time it’s to the tune of some pretty famous yellow robot legs, in the shape of Spot from Boston Dynamics. The premise is simple: tell the robot what snacks you want, entirely by voice command, and off he goes to fetch them. But we’re not talking about navigating to the fridge in the same room. We’re talking about trotting out the front door, down the street, and across roads to visit a favorite restaurant. Spot will order the snacks and bring them back, fully autonomously.

Spot’s depth cameras provide localized navigation and object avoidance information
Local AI vision system handles avoiding those pesky moving objects

There are multiple things going on here, all of which are pretty big computational tasks. Firstly, there is no cloud-based voice control à la Google Assistant or Alexa. The robot works on the premise of full autonomy, which means no internet connectivity for any aspect. All voice recognition, voice-to-text, and speech synthesis are performed locally using the NVIDIA Riva GPU-based AI speech SDK, running on the NVIDIA Jetson AGX Orin carried on Spot’s back. A front-facing webcam supplies the audio feed. The voice recognition application listens for the wake phrase, then turns the snack order into text for later replay once it gets to the destination.

Navigation is taken care of with a MicroStrain RTK GNSS module, which has all the needed robustness, such as dual antennas and inertial fallback for regions with a spotty signal. GNSS on its own is no use out in the real world, though, which is where Spot’s depth-sensing cameras come in. These enable local obstacle avoidance, as per the usual Spot behavior we’ve all seen before. But what about crossing the road without getting tens of thousands of dollars of someone else’s hardware crushed by a passing truck? Spot’s onboard streaming cameras are fed into the NVIDIA DashCamNet AI model, which enables real-time recognition of moving obstacles such as cars, humans, and anything else that might be wandering around and get in the way.

All in all, a cool project showing the future potential of AI in robotics for important tasks, like fetching me a beer when I most need it, even if it comes from the local corner shop.
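To make the architecture a bit more concrete, here’s a rough Python sketch of the overall fetch-a-snack loop as described. Every imported function here is a hypothetical wrapper, not the real Riva, MicroStrain, or Boston Dynamics API, and the waypoints are placeholders:

```python
# A rough sketch of the fetch-a-snack control loop described above. The
# speech, navigation, and robot-control calls are hypothetical wrappers,
# not the real Riva, MicroStrain, or Boston Dynamics APIs.
from my_spot_stack import (
    listen_for_wake_phrase,   # blocks until the wake phrase is heard
    transcribe_order,         # local speech-to-text (Riva in the real build)
    navigate_to,              # RTK GNSS waypoint following with obstacle avoidance
    speak,                    # local text-to-speech at the counter
)

RESTAURANT_WAYPOINT = (51.5007, -0.1246)   # placeholder coordinates
HOME_WAYPOINT = (51.5010, -0.1250)

while True:
    listen_for_wake_phrase()
    order_text = transcribe_order()        # e.g. "two tacos and a lemonade"
    navigate_to(RESTAURANT_WAYPOINT)       # cross streets, dodge cars and people
    speak(order_text)                      # replay the stored order at the counter
    navigate_to(HOME_WAYPOINT)             # bring the snacks back
```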

We love robots around here. Robots can mow your lawn, navigate inside your house with a little help from invisible QR codes, and even help out with growing your food. The long-promised robot-assisted future may now be looking more like the present.

Continue reading “Need A Snack From Across Town? Send Spot!”

Learn Sign Language Using Machine Vision

Learning a new language is a great way to exercise the mind and learn about different cultures, and it’s great to have a native speaker around to improve the learning experience. Without one, it’s still possible to learn via videos, books, and software. The task gets much more complicated, though, when trying to learn a language that isn’t spoken, like American Sign Language. This project allows users to learn the ASL alphabet with the help of computer vision and some machine learning algorithms.

The build uses a computer vision model based on MobileNetV2, trained to recognize each sign in the ASL alphabet. A sign is shown to the user on a screen, and the user needs to demonstrate that sign to the computer in order to progress. To do this, OpenCV running on a Raspberry Pi with a PiCamera analyzes frames of the user in real time, and the user is rewarded when the correct sign is made.
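As a rough idea of what that looks like in code, here’s a minimal sketch assuming a Keras MobileNetV2 classifier fine-tuned on the ASL alphabet and a plain OpenCV capture loop; the model file name, label set, and confidence threshold are assumptions, not the project’s actual code:

```python
import cv2
import numpy as np
import tensorflow as tf

LETTERS = "ABCDEFGHIKLMNOPQRSTUVWXY"  # static ASL alphabet signs (J and Z need motion)

# Assumed: a MobileNetV2-based classifier fine-tuned on ASL alphabet images
model = tf.keras.models.load_model("asl_mobilenetv2.h5")

cap = cv2.VideoCapture(0)
target = "A"                                              # letter currently being taught
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (224, 224))                     # MobileNetV2 input size
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        x.astype(np.float32))[np.newaxis]
    probs = model.predict(x, verbose=0)[0]
    guess = LETTERS[int(np.argmax(probs))]
    if guess == target and probs.max() > 0.8:             # confidence threshold
        print(f"Correct! That was '{target}'.")
        break
```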

While this only works for alphabet signs in ASL currently, the team at the University of Glasgow that built this project is planning on expanding it to include other signs as well. We have seen other machines built to teach ASL in the past, like this one which relies on a specialized glove rather than computer vision.

Continue reading “Learn Sign Language Using Machine Vision”

Aimbot Does It In Hardware

Anyone who has played an online shooter in the past two or three decades has almost certainly come across a person or machine that cheats at the game by auto-aiming. For newer games with anti-cheat, this is less of a problem, but older games like Team Fortress 2 have been effectively ruined by these aimbots. These cheats are usually done in software, though, and [Kamal] wondered if he would be able to build an aimbot that works directly in hardware instead.

First, we’ll remind everyone frustrated with the state of games like TF2 that this is a proof-of-concept robot, unlikely to make aimbots any worse or more common in real games. This is mostly because [Kamal] is training his machine to work in Aim Lab, a first-person shooter training simulation, not a real multiplayer video game. The robot works by taking a screenshot of the game in Python and passing it through a computer vision algorithm that recognizes high-contrast targets. From there, a PID controller tells a set of omniwheels attached to the mouse where to point, and when the cursor lands in the hitbox a mouse click is triggered.
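Here’s a minimal Python sketch of that detect-and-correct loop, using mss for screenshots and a simple brightness threshold to stand in for the target detector; the PID gains are made up and drive_omniwheels is a hypothetical motor interface, not [Kamal]’s code:

```python
import time
import cv2
import numpy as np
from mss import mss                    # fast cross-platform screenshots
from mouse_rig import drive_omniwheels  # hypothetical driver for the omniwheel rig

KP, KI, KD = 0.4, 0.01, 0.05           # PID gains (would need real-world tuning)
err_sum = np.zeros(2)
last_err = np.zeros(2)

def find_target(gray):
    """Return the center of the largest high-contrast blob, or None."""
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return np.array([x + w / 2, y + h / 2])

with mss() as sct:
    monitor = sct.monitors[1]
    center = np.array([monitor["width"] / 2, monitor["height"] / 2])
    while True:
        frame = np.array(sct.grab(monitor))
        gray = cv2.cvtColor(frame, cv2.COLOR_BGRA2GRAY)
        target = find_target(gray)
        if target is None:
            continue
        err = target - center                      # pixels between crosshair and target
        err_sum += err
        output = KP * err + KI * err_sum + KD * (err - last_err)
        last_err = err
        drive_omniwheels(output)                   # nudge the physical mouse
        time.sleep(0.01)
```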

While it might seem straightforward, building the robot and then, more importantly, tuning the PID controller took [Kamal] over two months before he could rival professional FPS players on the aim trainer. It’s an impressive build, though, and if one of his omniwheel motors hadn’t burned out it might have exceeded the top human scores on the platform. If you would like a bot that makes you worse at a game instead of better, though, head over to this build which plays Valorant by using two computers to pass game information between them.

Continue reading “Aimbot Does It In Hardware”

Box with a hole. Camera and Raspberry Pi inside.

A Label Maker That Uses AI Really Poorly

[8BitsAndAByte] found herself obsessively labeling items around her house, and, like the rest of the world, wanted to see what simple, routine tasks could be made unnecessarily complicated by using AI. Instead of manually identifying objects using human intelligence, she thought it would be fun to offload that task to our AI overlords and the results are pretty amusing.

She constructed a cardboard enclosure that housed a Raspberry Pi 3B+, a Pi Camera Module V2, and a small thermal printer for making the labels. The enclosure included a hole for the camera and a button for taking the picture. The image taken by the Pi is analyzed by the DeepAI DenseCap API which, in theory, should create a label for each object detected within the image. Unfortunately, it doesn’t seem to do that very well, and [8BitsAndAByte] is left with labels that don’t match any of the objects she took pictures of. In some cases it didn’t even get close; for example, the model thought an apple was a person’s head and a rotary dial phone was a cup. Go figure. It didn’t really seem to bother her though, and she got a pretty good laugh from the whole thing.
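For reference, the API round trip is only a few lines. This sketch assumes DeepAI’s usual REST pattern and response layout, which may have changed since the project was built:

```python
import requests

# Assumed endpoint and auth header, following DeepAI's usual REST pattern
resp = requests.post(
    "https://api.deepai.org/api/densecap",
    files={"image": open("snapshot.jpg", "rb")},
    headers={"api-key": "YOUR_DEEPAI_KEY"},
)
resp.raise_for_status()
captions = resp.json()["output"]["captions"]

# Print the caption the model is most confident about, as the label text
best = max(captions, key=lambda c: c["confidence"])
print(best["caption"])
```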

It appears the model detects all objects in the image, but only prints the label for the object it was most certain about. So maybe part of her problem is there were just too many objects in the background? If that were the case, you could probably improve the model’s accuracy by placing the object against a neutral background, which would confuse the AI a lot less and possibly give better results. Or maybe try a different classifier altogether? Or don’t. Then you could just use it as a fun gag project at your next get-together. That works too.

Cool project [8BitsAndAByte]! Hey, maybe this is a sign the world will still need some human intelligence after all. Who knows?

Continue reading “A Label Maker That Uses AI Really Poorly”

A robot that uses CV to detect villagers in Stardew Valley and display their gift preferences on a screen.

Stardew Valley Preferences Bot Is A Gift To The Player

It seems like most narrative games have some kind of drudgery built in. You know, some tedious and repetitious task that you absolutely must do if you want to succeed. In Stardew Valley, that thing is gift giving, which earns you friendship points just like in real life. More important than the giving itself is that each villager has preferences — things they love, like, and hate to receive as gifts. It’s a lot to remember, and most people don’t bother trying and just look it up in the wiki. Well, except for Abigail, who seems to like certain gemstones so much that she must be eating them. She’s hard to forget.

[kutluhan_aktar]’s villager gift preferences bot is a fun and fantastic use of OpenCV. This bot uses a LattePanda Alpha 864s, which is a single-board computer with an Arduino Leonardo built in. It works using template matching, which is basically a game of Where’s Waldo? for computers.
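Template matching in OpenCV boils down to a couple of calls. Here’s a minimal sketch with placeholder file names and an arbitrary confidence threshold:

```python
import cv2

scene = cv2.imread("game_screenshot.png")        # current game frame
template = cv2.imread("abigail_template.png")    # cropped sprite of the villager

# Slide the template across the scene and score the match at every position
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:                                # empirical confidence threshold
    h, w = template.shape[:2]
    cv2.rectangle(scene, max_loc, (max_loc[0] + w, max_loc[1] + h), (0, 255, 0), 2)
    print(f"Found villager at {max_loc} (score {max_val:.2f})")
```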

Given a screenshot of each villager in various positions, the LattePanda recognizes them within a given game scene, then looks up their birthday and preferences, which the Leonardo displays on a 3.5″ LCD screen. At the same time, it alerts the player with a buzz and a big green LED. Be sure to check it out in action after the break.
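The lookup-and-display half is little more than a dictionary and a serial write to the Leonardo. A sketch assuming pyserial, a made-up message format, and a tiny slice of the preferences table:

```python
import serial

# A small slice of the preferences table (illustrative, not exhaustive)
VILLAGERS = {
    "Abigail": {"birthday": "Fall 13", "loves": "Amethyst, Pumpkin"},
    "Linus":   {"birthday": "Winter 3", "loves": "Yam, Coconut"},
}

leonardo = serial.Serial("/dev/ttyACM0", 115200, timeout=1)  # port is an assumption

def show_preferences(name):
    info = VILLAGERS[name]
    # Made-up line format; the Leonardo sketch would parse this and draw the LCD
    leonardo.write(f"{name}|{info['birthday']}|{info['loves']}\n".encode())

show_preferences("Abigail")
```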

In Animal Crossing, the drudgery amounts to pressing the A button while catching shooting stars. That’s not a huge problem for a Teensy.

Continue reading “Stardew Valley Preferences Bot Is A Gift To The Player”