Need A Snack From Across Town? Send Spot!

[Dave Niewinski] clearly knows a thing or two about robots, judging from his YouTube channel. Usually the projects involve robot arms mounted on some sort of wheeled platform, but this time it’s to the tune of some pretty famous yellow robot legs, in the shape of Spot from Boston Dynamics. The premise is simple — tell the robot what snacks you want, entirely by voice command, and off he goes to fetch them. But we’re not talking about navigating to the fridge in the same room. We’re talking about trotting out the front door, down the street, and crossing roads to visit a favorite restaurant. Spot will order the snacks and bring them back, fully autonomously.

Spot’s depth cameras provide localized navigation and object avoidance information
Local AI vision system handles avoiding those pesky moving objects

There are multiple things going on here, all of which are pretty big computational tasks. Firstly, there is no cloud-based voice control, à la Google Assistant or Alexa. The robot works on the premise of full autonomy, which means no internet connectivity for any aspect. All voice recognition, voice-to-text, and speech synthesis are performed locally using the NVIDIA Riva GPU-based AI speech SDK, running on the NVIDIA Jetson AGX Orin carried on Spot’s back. A front-facing webcam supplies the audio feed. The voice recognition application listens for the wake phrase, then turns the snack order into text for later replay when the robot reaches its destination.

Navigation is taken care of with a MicroStrain RTK GNSS module, which has all the needed robustness, such as dual antennas and an inertial fallback for regions with spotty signal. Navigation on its own is no use out in the real world, which is where Spot’s depth-sensing cameras come in. These enable local obstacle avoidance, as per the usual Spot behavior we’ve all seen before.

But what about crossing the road without getting tens of thousands of dollars of someone else’s hardware crushed by a passing truck? Spot’s onboard streaming cameras are fed into NVIDIA’s DashCamNet AI model, which enables real-time recognition of moving obstacles such as cars, humans, and anything else that might be wandering around and get in the way. All in all, a cool project showing the future potential of AI in robotics for important tasks, like fetching me a beer when I most need it, even if it comes from the local corner shop.
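
Stripped of the heavy lifting that Riva does, the voice side boils down to a listen, remember, and replay loop. Here’s a minimal Python sketch of that flow; the `transcribe()` and `speak()` helpers, along with the wake phrase, are hypothetical stand-ins for the local Riva ASR and TTS services rather than [Dave]’s actual code.

```python
# A minimal sketch of the listen -> remember -> replay flow described above.
# transcribe() and speak() are hypothetical placeholders for the Riva ASR and
# TTS services running locally on the Jetson AGX Orin; the wake phrase is also
# an assumption, used purely for illustration.

WAKE_PHRASE = "hey spot"

def transcribe(audio_chunk) -> str:
    """Placeholder: return the text for one chunk of microphone audio."""
    raise NotImplementedError

def speak(text: str) -> None:
    """Placeholder: synthesize speech through the robot's speaker."""
    raise NotImplementedError

def listen_for_order(mic_stream) -> str:
    """Wait for the wake phrase, then capture the snack order as text."""
    awake = False
    for chunk in mic_stream:
        text = transcribe(chunk).lower()
        if not awake:
            awake = WAKE_PHRASE in text
        elif text.strip():
            return text.strip()   # first utterance after waking is the order
    return ""

def place_order(order: str) -> None:
    """Replay the stored order once navigation reports arrival at the shop."""
    speak(f"Hi! Could I please get {order}?")
```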

We love robots around here. Robots can mow your lawn, navigate inside your house with a little help from invisible QR Codes, even help out with growing your food. The robot-assisted future, long promised, may now be looking more like the present.

Continue reading “Need A Snack From Across Town? Send Spot!”

Learn Sign Language Using Machine Vision

Learning a new language is a great way to exercise the mind and explore a different culture, and having a native speaker around improves the experience. Without one, it’s still possible to learn via videos, books, and software. The task gets much trickier, though, when the language isn’t spoken at all, as with American Sign Language. This project allows users to learn the ASL alphabet with the help of computer vision and some machine learning algorithms.

The build uses a computer vision model based on MobileNetV2, trained on each sign in the ASL alphabet. A sign is shown to the user on a screen, and the user must demonstrate it back to the camera in order to progress. To do this, OpenCV running on a Raspberry Pi with a Pi Camera analyzes frames of the user in real time. The user is shown pictures of the correct sign and is rewarded when the correct sign is made.
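
The per-frame check amounts to grabbing a frame, resizing it to the classifier’s input size, and comparing the predicted letter against the one being taught. Here’s a rough Python sketch of that loop; the model file name, letter set, and class ordering are assumptions for illustration, not the Glasgow team’s actual code.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Assumed letter set and class order: static signs only (J and Z need motion).
LETTERS = list("ABCDEFGHIKLMNOPQRSTUVWXY")

# Assumed filename for a MobileNetV2-based classifier fine-tuned on ASL letters.
model = load_model("asl_alphabet.h5")

def predict_letter(frame: np.ndarray) -> str:
    """Resize a BGR camera frame to the model's input and return the top letter."""
    img = cv2.resize(frame, (224, 224))                # MobileNetV2's default input size
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0
    probs = model.predict(img[np.newaxis], verbose=0)[0]
    return LETTERS[int(np.argmax(probs))]

target = "A"                                           # letter currently being taught
cap = cv2.VideoCapture(0)                              # Pi Camera exposed as a V4L2 device
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if predict_letter(frame) == target:
        print(f"Correct! That was {target}.")
        break
cap.release()
```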

While this only works for alphabet signs in ASL currently, the team at the University of Glasgow that built this project is planning on expanding it to include other signs as well. We have seen other machines built to teach ASL in the past, like this one which relies on a specialized glove rather than computer vision.

Continue reading “Learn Sign Language Using Machine Vision”

Aimbot Does It In Hardware

Anyone who has played an online shooter game in the past two or three decades has almost certainly come across a person or machine that cheats at the game by auto-aiming. For newer games with anti-cheat, this is less of a problem, but older games like Team Fortress 2 have been effectively ruined by these aimbots. These types of cheats are usually done in software, though, and [Kamal] wondered if he would be able to build an aimbot that works directly in hardware instead.

First, we’ll remind everyone frustrated with the state of games like TF2 that this is a proof-of-concept robot that is unlikely to make aimbots any worse or more common in any game. This is mostly because [Kamal] is training his machine to work in Aim Lab, a first-person shooter training simulation, and not in a real multiplayer videogame. The robot works by grabbing screenshots in Python and passing them through a computer vision algorithm that recognizes high-contrast targets. From there, a PID controller drives a set of omniwheels that physically move the mouse, and when the crosshair lands on the target’s hitbox, a mouse click is triggered.
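
The control loop itself is classic PID: the error is just the pixel offset between the detected target and the center of the screen, and the controller output becomes a speed command for the omniwheels nudging the mouse. Below is a simplified, single-axis Python sketch of that idea; `find_target_x()`, `set_wheel_speed()`, and the gains are hypothetical stand-ins, not [Kamal]’s values.

```python
import time

class PID:
    """Textbook PID controller; the gains used below are illustrative only."""
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def find_target_x() -> float:
    """Placeholder for the CV step: x coordinate of the detected target, in pixels."""
    raise NotImplementedError

def set_wheel_speed(speed: float) -> None:
    """Placeholder for the motor driver that physically pushes the mouse sideways."""
    raise NotImplementedError

def aim_loop(screen_width: int = 1920) -> None:
    """Steer the crosshair (screen center) onto the target along one axis."""
    pid = PID(kp=0.8, ki=0.05, kd=0.1)
    center_x = screen_width / 2
    last = time.monotonic()
    while True:
        now = time.monotonic()
        error = find_target_x() - center_x        # pixels off-center
        set_wheel_speed(pid.update(error, now - last))
        last = now
```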

While it might seem straightforward, building the robot and then, more importantly, tuning the PID controller took [Kamal] over two months before he was able to rival professional FPS players at the aim trainer. It’s an impressive build, though, and if one of his omniwheel motors hadn’t burned out it might have exceeded the top human scores on the platform. If you would like a bot that makes you worse at a game instead of better, head over to this build which plays Valorant by using two computers to pass game information between them.

Continue reading “Aimbot Does It In Hardware”

Box with a hole. Camera and Raspberry Pi inside.

A Label Maker That Uses AI Really Poorly

[8BitsAndAByte] found herself obsessively labeling items around her house, and, like the rest of the world, wanted to see what simple, routine tasks could be made unnecessarily complicated by using AI. Instead of manually identifying objects using human intelligence, she thought it would be fun to offload that task to our AI overlords, and the results are pretty amusing.

She constructed a cardboard enclosure that housed a Raspberry Pi 3B+, a Pi Camera Module V2, and a small thermal printer for making the labels. The enclosure included a hole for the camera and a button for taking the picture. The image taken by the Pi is analyzed by the DeepAI DenseCap API, which, in theory, should create a label for each object detected within the image. Unfortunately, it doesn’t seem to do that very well, and [8BitsAndAByte] is left with labels that don’t match any of the objects she took pictures of. In some cases it didn’t even get close; for example, the model thought an apple was a person’s head and a rotary dial phone was a cup. Go figure. It didn’t really seem to bother her, though, and she got a pretty good laugh from the whole thing.

It appears the model detects all objects in the image but only prints the label for the object it is most certain about. So maybe part of her problem is that there were just too many objects in the background? If that were the case, you could probably improve the accuracy of the model by placing the object against a neutral background. That may confuse the AI a lot less and possibly give you better results. Or maybe try a different classifier altogether? Or don’t. Then you could just use it as a fun gag project at your next get-together. That works too.
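
For anyone wanting to reproduce the fun, the “AI” step is essentially one HTTP request plus a little JSON parsing. The sketch below shows how you might ask DenseCap for captions and keep only the one it’s most confident about; the endpoint URL and response field names are assumptions based on DeepAI’s public API, so check the current docs before trusting them.

```python
import requests

# Assumed endpoint and response layout for DeepAI's DenseCap API; verify both
# against the current DeepAI documentation before relying on them.
DENSECAP_URL = "https://api.deepai.org/api/densecap"

def best_caption(image_path: str, api_key: str) -> str:
    """Send a photo to DenseCap and return the caption it is most confident about."""
    with open(image_path, "rb") as img:
        resp = requests.post(
            DENSECAP_URL,
            files={"image": img},
            headers={"api-key": api_key},
            timeout=30,
        )
    resp.raise_for_status()
    captions = resp.json()["output"]["captions"]          # assumed field names
    top = max(captions, key=lambda c: c["confidence"])    # keep the surest guess
    return top["caption"]

if __name__ == "__main__":
    # The returned string is what would get sent to the thermal printer.
    print(best_caption("snapshot.jpg", api_key="YOUR-DEEPAI-KEY"))
```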

Cool project, [8BitsAndAByte]! Hey, maybe this is a sign the world will still need some human intelligence after all. Who knows?

Continue reading “A Label Maker That Uses AI Really Poorly”

A robot that uses CV to detect villagers in Stardew Valley and display their gift preferences on a screen.

Stardew Valley Preferences Bot Is A Gift To The Player

It seems like most narrative games have some kind of drudgery built in. You know, some tedious and repetitious task that you absolutely must do if you want to succeed. In Stardew Valley, that thing is gift giving, which earns you friendship points just like in real life. More important than the giving itself is that each villager has preferences — things they love, like, and hate to receive as gifts. It’s a lot to remember, and most people don’t bother trying and just look it up in the wiki. Well, except for Abigail, who seems to like certain gemstones so much that she must be eating them. She’s hard to forget.

[kutluhan_aktar]’s villager gift preferences bot is a fun and fantastic use of OpenCV. This bot uses a LattePanda Alpha 864s, which is a single-board computer with an Arduino Leonardo built in. It works using template matching, which is basically a game of Where’s Waldo? for computers.

Given screenshots of each villager in various poses, the LattePanda recognizes them within a given game scene, then looks up their birthday and preferences, which the Leonardo prints on a 3.5″ LCD screen. At the same time, it alerts the player with a buzz and a big green LED. Be sure to check it out in action after the break.
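
At its core, template matching with OpenCV really is nearly a one-liner: slide a reference image over the screenshot and note where the two agree best. A stripped-down sketch of the villager lookup might look like this, with the file names and preference table invented for illustration:

```python
import cv2

# Hypothetical lookup table; the real project stores birthdays and full
# loved/liked/hated gift lists for every villager.
PREFERENCES = {
    "abigail": "Birthday: Fall 13. Loves amethyst, pumpkin, spicy eel.",
    "linus":   "Birthday: Winter 3. Loves yam, coconut, cactus fruit.",
}

def find_villager(scene_path: str, threshold: float = 0.8) -> str | None:
    """Return the name of the first villager whose template is found in the scene."""
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    for name in PREFERENCES:
        template = cv2.imread(f"templates/{name}.png", cv2.IMREAD_GRAYSCALE)
        result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        if max_val >= threshold:          # good enough match -> villager is on screen
            return name
    return None

villager = find_villager("screenshot.png")
if villager:
    print(PREFERENCES[villager])          # the real build sends this to the Leonardo
```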

In Animal Crossing, the drudgery amounts to pressing the A button while catching shooting stars. That’s not a huge problem for a Teensy.

Continue reading “Stardew Valley Preferences Bot Is A Gift To The Player”

Computer Vision Lets You Skip Songs With A Glance

Have you ever wished you could control your home automation devices with nothing more than a withering stare? Well then you’re in luck, as [Norbert Zare] has come up with a clever way of controlling an MP3 player with only your face. Though as you might imagine, the technique could be applied to a whole range of home automation tasks with some minor tweaks.

At the core of this project is the Raspberry Pi, specifically the 3 B+ model, though given the computational demands of computer vision you might want to bump it up to the latest-and-greatest Pi 4. From there you need to load up OpenCV and a model trained for face detection, which, as luck would have it, is a fairly common application for this technology.

With a relatively simple Python script, [Norbert] is able to determine when OpenCV detects he’s looking directly into the camera and fire off one of the Pi’s GPIO pins that’s been connected to the “Skip” button on a physical MP3 player. That’s right, you read that correctly. He’s using a dedicated MP3 player in the year 2021.
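
Conceptually, the script is a bog-standard OpenCV face detection loop toggling a GPIO pin. Here’s a minimal sketch of that idea using the Haar cascade that ships with OpenCV and RPi.GPIO; the pin number and debounce timing are our assumptions, not [Norbert]’s values.

```python
import time
import cv2
import RPi.GPIO as GPIO

SKIP_PIN = 18                      # assumed GPIO pin wired across the player's Skip button
GPIO.setmode(GPIO.BCM)
GPIO.setup(SKIP_PIN, GPIO.OUT, initial=GPIO.LOW)

# The frontal-face cascade bundled with OpenCV only fires when you face the camera.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)
last_skip = 0.0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) and time.time() - last_skip > 2.0:   # crude debounce between skips
        GPIO.output(SKIP_PIN, GPIO.HIGH)               # "press" the Skip button
        time.sleep(0.1)
        GPIO.output(SKIP_PIN, GPIO.LOW)
        last_skip = time.time()
cap.release()
GPIO.cleanup()
```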

In all seriousness, we’re not really sure why [Norbert] went this route rather than simply playing the music on the Pi and controlling it through software, but this does serve as a good example of how you can interface with physical devices if need be. In any event, using the Python script he’s provided, you could easily modify the setup to control other tasks, virtual or otherwise.

While face recognition can be a scary thing out in the wild, we do think it has some interesting applications within the home, so long as the user is the one who is in control of where their data ends up.

Continue reading “Computer Vision Lets You Skip Songs With A Glance”

Astro Pi Mk II, The New Raspberry Pi Hardware Headed To The Space Station

Back in 2015, European Space Agency (ESA) astronaut Tim Peake brought a pair of specially equipped Raspberry Pi computers, nicknamed Izzy and Ed, onto the International Space Station and invited students back on Earth to develop software for them as part of the Astro Pi Challenge. To date, more than 50,000 young people have had their code run on one of the single-board computers, making them arguably the most popular, and surely the most traveled, Raspberry Pis in the solar system.

While Izzy and Ed are still going strong, the ESA has decided it’s about time these veteran Raspberries finally get the retirement they’re due. Set to make the journey to the ISS in December aboard a SpaceX Cargo Dragon, the new Astro Pi MK II hardware looks quite similar to the original 2015 version at first glance. But a peek inside its 6063-grade aluminium flight case reveals plenty of new and improved gear, including a Raspberry Pi 4 Model B with 8 GB RAM.

The beefier hardware will no doubt be appreciated by students looking to push the envelope. While the majority of Python programs submitted to the Astro Pi program did little more than poll the current reading from the unit’s temperature or humidity sensors and scroll messages for the astronauts on the Astro Pi’s LED matrix, some of the more advanced projects were aimed at performing legitimate space research. From using the onboard camera to image the Earth and make weather predictions, to attempting to map the planet’s magnetic field, code submitted by teams of older students will certainly benefit from the improved computational performance and expanded RAM of the newest Pi.
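
For a feel of what those simpler entries look like, the Sense HAT Python library reduces sensor polling and LED scrolling to a handful of lines. A minimal example in that spirit might look like the following (the message text is ours, not from any actual submission):

```python
from sense_hat import SenseHat

sense = SenseHat()

# Poll the environmental sensors, as many of the simpler Astro Pi entries do.
temperature = sense.get_temperature()   # degrees Celsius
humidity = sense.get_humidity()         # percent relative humidity

# Scroll a greeting and the readings across the 8x8 LED matrix for the crew.
sense.show_message(
    f"Hello from Earth! {temperature:.1f}C {humidity:.0f}%",
    scroll_speed=0.08,
)
```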

As with the original Astro Pi, the ESA and the Raspberry Pi Foundation have shared plenty of technical details about these space-rated Linux boxes. After all, students are expected to develop and test their code on essentially the same hardware down here on Earth before it gets beamed up to the orbiting computers. So let’s take a quick look at the new hardware inside Astro Pi MK II, and what sort of research it should enable for students in 2022 and beyond.

Continue reading “Astro Pi Mk II, The New Raspberry Pi Hardware Headed To The Space Station”