A robot that uses CV to detect villagers in Stardew Valley and display their gift preferences on a screen.

Stardew Valley Preferences Bot Is A Gift To The Player

It seems like most narrative games have some kind of drudgery built in. You know, some tedious and repetitious task that you absolutely must do if you want to succeed. In Stardew Valley, that thing is gift giving, which earns you friendship points just like in real life. More important than the giving itself is that each villager has preferences — things they love, like, and hate to receive as gifts. It’s a lot to remember, and most people don’t bother trying and just look it up in the wiki. Well, except for Abigail, who seems to like certain gemstones so much that she must be eating them. She’s hard to forget.

[kutluhan_aktar]’s villager gift preferences bot is a fun and fantastic use of OpenCV. This bot uses a LattePanda Alpha 864s, which is a single-board computer with an Arduino Leonardo built in. It works using template matching, which is basically a game of Where’s Waldo? for computers.

Given a screenshot of each villager in various positions, the LattePanda recognizes them within a given game scene, then looks up their birthday and preferences, which the Leonardo prints on a 3.5″ LCD screen. At the same time, it alerts the player with a buzzer and a big green LED. Be sure to check it out in action after the break.
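For the curious, template matching is only a few lines in OpenCV. Here's a minimal sketch of the Where's Waldo? trick (the filenames and the 0.8 confidence threshold are our own illustrative choices, not necessarily what [kutluhan_aktar] used):

```python
import cv2

# A game screenshot and a cropped reference image of the villager.
scene = cv2.imread("scene.png")
template = cv2.imread("villager.png")

# Slide the template across the scene and score every position.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

# Only accept confident matches; 0.8 is a common starting threshold.
if best_score > 0.8:
    h, w = template.shape[:2]
    cv2.rectangle(scene, best_loc,
                  (best_loc[0] + w, best_loc[1] + h), (0, 255, 0), 2)
    print(f"Villager found at {best_loc} (score {best_score:.2f})")
```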

In Animal Crossing, the drudgery amounts to pressing the A button while catching shooting stars. That’s not a huge problem for a Teensy.

Continue reading “Stardew Valley Preferences Bot Is A Gift To The Player”

This Robot Can’t Keep Its Eyes Off The Money

Some say there’s no treasure quite as valuable as the almighty dollar. [Norbert Zare] likes alt-rock soundtracks on YouTube videos and robots obsessed with money, so he set about building the latter.

The project is fundamentally a simple one. A Raspberry Pi 3B+ is outfitted with a Pi Camera and set up to control twin servo motors attached to a simple pan/tilt assembly. The Pi runs OpenCV configured for face tracking, which lets the robot readily track money in its field of view, as the vast majority of money out there has someone’s face on it. OpenCV reports where the money is in the frame, and the Pi steers the camera towards the cash.
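In broad strokes, the tracking loop looks something like the sketch below. The GPIO pin and the control gain are our assumptions for illustration; [Norbert]'s actual code is what you'll find on GitHub.

```python
import cv2
from gpiozero import AngularServo

# OpenCV ships with pre-trained Haar cascades for frontal faces,
# and the portraits on banknotes are close enough to trigger them.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
servo = AngularServo(17, min_angle=-90, max_angle=90)  # pan servo (pin assumed)
cap = cv2.VideoCapture(0)
angle = 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces):
        x, y, w, h = faces[0]
        # Proportional control: pixel error off-center nudges the servo.
        error = (x + w / 2) - frame.shape[1] / 2
        angle = max(-90.0, min(90.0, angle - 0.05 * error))
        servo.angle = angle
```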

It’s a neat repurposing of OpenCV’s face detection algorithm, and much faster than training your own money-tracking system. However, it seems like the robot would also track regular human faces, too. Perhaps it could be optimised with a color check, such that only greyscale or green faces were followed by the robot.
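That color check could be as simple as rejecting anything too colorful to be cash. A rough sketch, with thresholds pulled out of thin air:

```python
import cv2
import numpy as np

def looks_like_money(patch_bgr):
    """Keep grey or green 'faces', reject skin-toned ones."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    hue, sat = hsv[..., 0], hsv[..., 1]
    mostly_grey = sat.mean() < 40                          # low saturation
    mostly_green = ((hue > 35) & (hue < 85)).mean() > 0.5  # OpenCV hue is 0-179
    return mostly_grey or mostly_green
```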

Does the project do anything useful or important? Arguably no, but if a robot can be this obsessed with money, perhaps we all can learn something. Alternatively, it might just have served as a useful project for [Norbert] to learn about programming and mechatronics. Either way, we dig it. Code is on GitHub for the curious.

Using OpenCV in this way has become common over the years. If you want to detect cats, however, maybe consider giving TensorFlow a try. Video after the break.

Continue reading “This Robot Can’t Keep Its Eyes Off The Money”

[Nick Rehm] explains the workings of a GPS-less, self-guided drone.

Autonomous Drone Dodges Obstacles Without GPS

If you’re [Nick Rehm], you want a drone that can plan its own routes even at low altitudes with unplanned obstacles blocking its way. (Video, embedded below.) And of course, you build it from scratch.

Why? Getting a drone that can fly a path, and even return home when the battery is low, the signal is lost, or on command, is simple enough. Just go to your favorite retailer, search “GPS drone”, and you can get one for a shockingly low dollar amount. This is possible because GPS receivers have become cheap, small, light, and power efficient. But while all of these inexpensive drones can fly a predetermined path, they usually do so by flying over any obstacles rather than around them.

[Nick Rehm] has envisioned a quadcopter that can do all of the things a GPS-enabled drone can do, without the use of a GPS receiver. [Nick] makes this possible by using route-planning algorithms similar to those used by Google Maps, with data coming from a typical IMU, a camera for computer vision, LIDAR for altitude, and an Intel RealSense camera for tracking position and movement. A Raspberry Pi 4 running the Robot Operating System (ROS) runs the autonomous show, and a Teensy takes care of flight control duties.
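Those route-planning algorithms belong to the same family of shortest-path searches that power turn-by-turn navigation. As a flavor of what's involved, here's a compact, textbook A* on an occupancy grid; this is a generic sketch, not [Nick]'s actual planner:

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a 2D occupancy grid (0 = free, 1 = wall)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    g_best = {start: 0}
    came_from = {start: None}
    frontier = [(h(start), start)]
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = [node]                        # walk parents back to start
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g_best[node] + 1 < g_best.get(nxt, 1 << 30)):
                g_best[nxt] = g_best[node] + 1
                came_from[nxt] = node
                heapq.heappush(frontier, (g_best[nxt] + h(nxt), nxt))
    return None  # goal unreachable
```

Feed it grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]] with start (0, 0) and goal (2, 0), and it routes around the wall instead of through it.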

What we really enjoy about [Nick]’s video is his clear presentation of complex technologies, and a great sense of humor about a project that has consumed untold amounts of time, patience, and duct tape.

We can’t help but wonder if DARPA would let [Nick] fly his drone in a Subterranean Challenge event like the one hosted in an unfinished nuclear power plant in 2020.

Continue reading “Autonomous Drone Dodges Obstacles Without GPS”

Computer Vision Lets You Skip Songs With A Glance

Have you ever wished you could control your home automation devices with nothing more than a withering stare? Well then you’re in luck, as [Norbert Zare] has come up with a clever way of controlling an MP3 player with only your face. Though as you might imagine, the technique could be applied to a whole range of home automation tasks with some minor tweaks.

At the core of this project is the Raspberry Pi, specifically the 3B+ model, though with the computational demands of computer vision you might want to bump it up to the latest-and-greatest Pi 4. From there you need to load up OpenCV and a model trained for face detection, which, as luck would have it, tends to be a fairly common application for this technology.

With a relatively simple Python script, [Norbert] is able to determine when OpenCV detects he’s looking directly into the camera and fire off one of the Pi’s GPIO pins that’s been connected to the “Skip” button on a physical MP3 player. That’s right, you read that correctly. He’s using a dedicated MP3 player in the year 2021.

In all seriousness, we’re not really sure why [Norbert] went this route compared to simply playing the music on the Pi and controlling it through software, but this does serve as a good example of how you can interface with physical devices if need be. In any event, using the Python script he’s provided, you could easily modify the setup to control other tasks, virtual or otherwise.
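The core trick fits in a surprisingly small script. A frontal-face Haar cascade mostly fires only when you're looking straight at the camera, which makes it a crude gaze detector. The pin number and timing below are our assumptions, so check [Norbert]'s script for the real details:

```python
import time
import cv2
from gpiozero import DigitalOutputDevice

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
skip_pin = DigitalOutputDevice(18)   # wired across the player's "Skip" button
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if len(cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)):
        skip_pin.on()          # simulate a momentary button press
        time.sleep(0.2)
        skip_pin.off()
        time.sleep(2)          # debounce so one glance skips one song
```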

While face recognition can be a scary thing out in the wild, we do think it has some interesting applications within the home, so long as the user is the one in control of where their data ends up.

Continue reading “Computer Vision Lets You Skip Songs With A Glance”

Skeleton Watches You Intensely Because It’s Halloween, Okay

If you’ve ever seen a painting in which the eyes follow you around the room, you might have found it a bit unsettling. [CuriousInventor] has taken that concept further with a skeleton that literally holds its gaze on anyone in its field of view.

The heart of the system is a Raspberry Pi Zero, fitted with a Pi Camera. Running OpenCV, the Pi tracks humans and turns the skeleton’s head to face any that are detected, via a servo in the skeleton’s neck. A servo bonnet drives the servo without unnecessarily straining the Raspberry Pi.
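Servo bonnets like Adafruit's put a PCA9685 PWM chip between the Pi and the servo, so the Pi only sends the occasional I2C command instead of bit-banging pulses in software. Assuming that style of bonnet, pointing the neck is about as minimal as Python gets:

```python
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)   # 16-channel PCA9685 bonnet
kit.servo[0].angle = 90       # center the skeleton's neck (channel assumed)
```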

The skeleton itself doesn’t look modified in any way, though most of the electronics are mounted inside a pretty obvious plastic box. We’d love to see a version 2 with all the hardware housed neatly inside the skull.

It’s a fun hack that makes for an enjoyable Halloween decoration. OpenCV can do other useful things, too, however, like spotting weeds. Video after the break.

Continue reading “Skeleton Watches You Intensely Because It’s Halloween, Okay”

RC car without a top, showing electronics inside.

Fast Indoor Robot Watches Ceiling Lights, Instead Of The Road

[Andy]’s robot is an autonomous RC car, and he shares the localization algorithm he developed to help the car keep track of itself while it zips crazily around an indoor racetrack. Since a robot like this is perfectly capable of driving faster than it can sense, his localization method is the secret to pouring on additional speed without worrying about the car losing itself.

The regular pattern of ceiling lights makes a good foundation for the system to localize itself.

To pull this off, [Andy] uses a camera with a fisheye lens aimed up towards the ceiling, and the video is processed on a Raspberry Pi 3. His implementation is slick enough that it only takes about 1 millisecond to do a localization update, netting a precision on the order of a few centimeters. It’s sort of like a fast indoor GPS, using math to infer position based on the movement of ceiling lights.
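A vastly simplified version of the idea: threshold the bright lights out of the frame, find their centroids, and match them against the previous frame's centroids to get a pixel-space displacement. [Andy]'s real algorithm also handles the fisheye distortion, rotation, and outlier rejection, but a bare-bones sketch looks something like this:

```python
import cv2
import numpy as np

def light_centroids(gray):
    """Ceiling lights are the brightest blobs in an upward-facing camera."""
    _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    _, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    return centroids[1:]  # row 0 is the background component

def frame_shift(prev_pts, cur_pts):
    """Average displacement of each light to its nearest previous match."""
    shifts = [p - prev_pts[np.argmin(np.linalg.norm(prev_pts - p, axis=1))]
              for p in cur_pts]
    return np.mean(shifts, axis=0)  # (dx, dy) in pixels per frame
```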

To be useful for racing, this localization method needs to be combined with a map of the racetrack itself, which [Andy] cleverly builds by manually driving the car around the track while recording the localization data. Once that is in place, the car has all it needs to zip around autonomously.

Interested in the nitty-gritty details? You’re in luck, because all of the math behind [Andy]’s algorithm is explained on the project page linked above, and the GitHub repository for [Andy]’s autonomous car has all the implementation details.

The system is location-dependent, but it works so well that [Andy] considers track localization a solved problem. Watch the system in action in the two videos embedded below.

Continue reading “Fast Indoor Robot Watches Ceiling Lights, Instead Of The Road”

OAK-D Depth Sensing AI Camera Gets Smaller And Lighter

The OAK-D is an open-source, full-color depth sensing camera with embedded AI capabilities, and there is now a crowdfunding campaign for a newer, lighter version called the OAK-D Lite. The new model does everything the previous one could do, combining machine vision with stereo depth sensing and an ability to run highly complex image processing tasks all on-board, freeing the host from any of the overhead involved.

An example of real-time feature tracking, now in 3D thanks to integrated depth sensing.

The OAK-D Lite camera is actually several elements in one package: a full-color 4K camera, two greyscale cameras for stereo depth sensing, and onboard AI machine vision processing with Intel’s Movidius Myriad X processor. Tying it all together is an open-source software platform called DepthAI that wraps the camera’s functions and capabilities into a unified whole.

The goal is to give embedded systems access to human-like visual perception in real-time, which at its core means detecting things, and identifying where they are in physical space. It does this with a combination of traditional machine vision functions (like edge detection and perspective correction), depth sensing, and the ability to plug in pre-trained convolutional neural network (CNN) models for complex tasks like object classification, pose estimation, or hand tracking in real-time.

So how is it used? Practically speaking, the OAK-D Lite is a USB device intended to be plugged into a host (running any OS), and the team has put a lot of work into making it as easy as possible. With the help of a downloadable application, the hardware can be up and running with examples in about half a minute. Integrating the device into other projects or products can be done in Python with the help of the DepthAI SDK, which provides functionality with minimal coding and configuration (and for more advanced users, there is also a full API for low-level access). Since the vision processing is all done on-board, even a Raspberry Pi Zero can be used effectively as a host.
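As a taste of that lower-level access, here's a minimal pipeline that streams color frames to the host. This sketch is based on the depthai 2.x Python library, and exact node names may differ between SDK versions:

```python
import depthai as dai

# Build a minimal pipeline: color camera frames streamed out over USB.
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("video")
cam.video.link(xout.input)

# Everything in the pipeline runs on the camera; the host just reads frames.
with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="video", maxSize=4, blocking=False)
    frame = queue.get().getCvFrame()  # a regular OpenCV BGR image
    print("Got a", frame.shape, "frame from the OAK-D Lite")
```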

There’s one more thing that improves the ease-of-use situation, and that’s the fact that support for the OAK-D Lite (as well as the previous OAK-D) has been added to a software suite called the Cortic Edge Platform (CEP). CEP is a block-based visual coding system that runs on a Raspberry Pi, and is aimed at anyone who wants to rapidly prototype with AI tools in a primarily visual interface, providing yet another way to glue a project together.

Earlier this year we saw the OAK-D used in a system to visually identify weeds and estimate biomass in agriculture, and it’s exciting to see a new model being released. If you’re interested, the OAK-D Lite is available at a considerable discount during the Kickstarter campaign.