Accelerometer, OLED, and PocketBeagle create a gesture-controlled calculator

The Calculator Charm: Calculatorium Leviosa!

Have you ever tried waving your hand around like a magic wand and summoning a calculator? We would guess not, since you’d probably look a little silly doing so. That is, unless you had [Andrei]’s nifty gesture-controlled calculator. [Andrei] thought it would be helpful to use a calculator in his research lab without having to take his gloves off, and the results are pretty cool.

His hardware consists of a PocketBeagle, an OLED display, and an MPU6050 inertial measurement unit, whose accelerometer and gyroscope capture his hand motions. The hardware is pretty straightforward, so the beauty of this project lies in its machine learning implementation.

[Andrei] first captured a few example datasets to train his algorithm by recreating the hand gesture for each number, 0-9, and recording the resulting accelerometer and gyroscope outputs. He then processed the data with a wavelet transform. The intent of the transform was twofold. First, it allowed him to reduce the number of samples in his datasets while preserving the shape of the accelerometer and gyroscope signals, the key features in the machine learning classification. Second, it increased the number of features available to the classifier, since the wavelet transform yields both approximation and detail coefficients, both of which can be fed into the algorithm.
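For the curious, a single-level discrete wavelet transform does both jobs at once. Here’s a minimal sketch of the idea using the PyWavelets library; the wavelet family, window length, and channel layout are our assumptions, not necessarily [Andrei]’s:

```python
# Minimal sketch: wavelet-based feature extraction for one gesture.
# Window length (256 samples) and wavelet ("db4") are assumptions.
import numpy as np
import pywt

# Pretend this is one recorded gesture: 6 channels of IMU data
# (3-axis accelerometer + 3-axis gyroscope), 256 samples each.
gesture = np.random.randn(6, 256)

features = []
for channel in gesture:
    # A single-level discrete wavelet transform roughly halves the
    # sample count while preserving the overall signal shape...
    approx, detail = pywt.dwt(channel, "db4")
    # ...and yields two coefficient sets per channel, doubling the
    # feature types available to the classifier.
    features.extend([approx, detail])

feature_vector = np.concatenate(features)
print(feature_vector.shape)  # far fewer values than the raw samples
```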

Because he had a small dataset, he used the Stratified Shuffle Split technique instead of a plain train/test split, which is generally better suited to larger datasets. The Stratified Shuffle Split ensured approximately the same proportion of train and test samples for each gesture. He was also very conscious of optimizing his model for a portable processing unit like the PocketBeagle, so he spent some time tuning the parameters of his algorithm and ultimately converted his model to a TensorFlow Lite model using the built-in “TFLiteConverter” function within TensorFlow.
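Both steps take only a few lines in scikit-learn and TensorFlow. The sketch below is our own illustration, with placeholder data and a placeholder network rather than [Andrei]’s actual model:

```python
# Hedged sketch: stratified split for a small dataset, then TFLite
# conversion. Data shapes and the network itself are placeholders.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedShuffleSplit

X = np.random.randn(200, 64).astype("float32")  # wavelet feature vectors
y = np.random.randint(0, 10, 200)               # gesture labels 0-9

# Each split keeps the ten gesture classes represented in roughly
# equal proportion in both the train and test sets.
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(sss.split(X, y))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X[train_idx], y[train_idx], epochs=5, verbose=0)

# Shrink the trained model for the PocketBeagle with the built-in
# TFLiteConverter mentioned above.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("gestures.tflite", "wb") as f:
    f.write(converter.convert())
```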

Finally, in true open-source fashion, all his code is available on GitHub, so feel free to give it a go yourself. Calculatorium Leviosa!

Continue reading “The Calculator Charm: Calculatorium Leviosa!”

Machine Learning Shushes Stressed Dogs

If there’s one demographic that has benefited from people being stuck at home during Covid lockdowns, it would be dogs. Having their humans around 24/7 meant more belly rubs, more table scraps, and more attention. Of course, for many dogs, especially those who found their homes during quarantine, this has led to attachment issues as their human counterparts have begun to return to work and school.

[Clairette] has had a particularly difficult time adapting to her friends leaving every day, but thankfully her human [Nathaniel Felleke] was able to come up with a clever solution. He trained a TinyML neural net to detect when she barked and used an Arduino to play a sound bite to soothe her. The sound bites in question are recordings of [Nathaniel]’s mom either praising or scolding [Clairette], and as you can see from the video below, they seem to work quite well. To train the network, [Nathaniel] worked with several datasets to avoid overfitting, including one he created himself using actual recordings of barks and ambient sounds within his own house. He used Eon Tuner, a tool by Edge Impulse, to help find the best model and perform the training. He uploaded the trained network to an Arduino Nano 33 BLE Sense running Mbed OS, and a second Arduino handled playing the sound bites via an Adafruit Music Maker FeatherWing.
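Building that kind of custom dataset mostly comes down to slicing labelled recordings into fixed-length windows before handing them to the training tool. A rough sketch of that step, with a hypothetical file layout and window length:

```python
# Rough sketch: turn household recordings into labelled one-second
# windows. Directory names and window length are assumptions; this
# assumes mono WAV files.
from pathlib import Path
from scipy.io import wavfile

WINDOW_S = 1.0  # one-second training windows

def windows(path, label):
    rate, audio = wavfile.read(path)
    step = int(rate * WINDOW_S)
    for start in range(0, len(audio) - step, step):
        yield audio[start:start + step], label

dataset = []
for wav in Path("recordings/barks").glob("*.wav"):
    dataset.extend(windows(wav, "bark"))
for wav in Path("recordings/ambient").glob("*.wav"):
    dataset.extend(windows(wav, "ambient"))

print(f"{len(dataset)} labelled windows ready for training")
```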

While machine learning may sound like a bit of an extreme solution to curb your dog’s barking, it’s certainly innovative, and even appears to have been successful. Paired with this web-connected treat dispenser, you could keep a dog entertained for hours.

Continue reading “Machine Learning Shushes Stressed Dogs”

Computer Vision Lets You Skip Songs With A Glance

Have you ever wished you could control your home automation devices with nothing more than a withering stare? Well then you’re in luck, as [Norbert Zare] has come up with a clever way of controlling an MP3 player with only your face. Though as you might imagine, the technique could be applied to a whole range of home automation tasks with some minor tweaks.

At the core of this project is the Raspberry Pi, specifically the 3 B+ model, though with the computational demands of computer vision you might want to bump it up to the latest-and-greatest Pi 4. From there you need to load up OpenCV and a model trained for face detection, which, as luck would have it, tends to be a fairly common application for this technology.

With a relatively simple Python script, [Norbert] is able to determine when OpenCV detects he’s looking directly into the camera and fire off one of the Pi’s GPIO pins that’s been connected to the “Skip” button on a physical MP3 player. That’s right, you read that correctly. He’s using a dedicated MP3 player in the year 2021.
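For a sense of how little code this takes, here’s a hedged sketch along the same lines, using OpenCV’s stock frontal-face Haar cascade and the RPi.GPIO library. The pin number and timing are our guesses, not [Norbert]’s actual values:

```python
# Sketch: detect a face looking at the camera, then pulse a GPIO pin
# wired across the MP3 player's "Skip" button. Pin and delays assumed.
import time
import cv2
import RPi.GPIO as GPIO

SKIP_PIN = 17  # hypothetical pin wired to the Skip button
GPIO.setmode(GPIO.BCM)
GPIO.setup(SKIP_PIN, GPIO.OUT, initial=GPIO.LOW)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The frontal-face cascade only fires when a face is turned
    # toward the camera, which approximates "looking at it".
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) > 0:
        GPIO.output(SKIP_PIN, GPIO.HIGH)   # "press" the button
        time.sleep(0.1)
        GPIO.output(SKIP_PIN, GPIO.LOW)
        time.sleep(2)  # debounce so one glance means one skip
```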

In all seriousness, we’re not really sure why [Norbert] went this route compared to simply playing the music on the Pi and controlling it through software, but this does serve as a good example of how you can interface with physical devices if need be. In any event, using the Python script he’s provided, you could easily modify the setup to control other tasks, virtual or otherwise.

While face recognition can be a scary thing out in the wild, we do think it has some interesting applications within the home, so long as the user is the one who is in control of where their data ends up.

Continue reading “Computer Vision Lets You Skip Songs With A Glance”

The MicroGPS pipeline

MicroGPS Sees What You Overlook

GPS is an incredibly powerful tool that allows devices such as your smartphone to know roughly where they are, with an accuracy of around a meter in some cases. However, that’s still too coarse for many use cases, and accuracy drops considerably indoors, which is why warehouse robots often rely on barcodes on the floor instead. In response, researchers [Linguang Zhang], [Adam Finkelstein], and [Szymon Rusinkiewicz] at Princeton have developed a system they refer to as MicroGPS that uses pictures of the ground to determine its location with sub-centimeter accuracy.

The system has a downward-facing monochrome camera with a light shield to control exposure. The camera output feeds into an Nvidia Jetson TX1 platform for processing. The idea is actually quite similar to that of an optical mouse, as they are often little more than a downward-facing low-resolution camera with some clever processing. But rather than capturing relative position like a mouse, the researchers are trying to capture absolute position. Imagine picking up your mouse, dropping it on a different spot on your mousepad, and having the cursor snap to a different part of the screen. To our eyes, far from the surface, asphalt, tarmac, concrete, and carpet all look quite uniform. But to a macro camera, there are cracks, fibers, and imperfections that are distinct and recognizable.

They sample the surface ahead of time, creating a globally consistent map of all the images stitched together. Then, while moving around, they extract features and use a voting method to filter out numerous false positives. The system is robust enough to work on an outdoor road even a month after the initial dataset was captured. They put leaves on the ground to try and fool the system, but navigation remained remarkably stable.
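As a toy illustration of that voting idea (the authors’ actual pipeline lives in their paper and code), imagine each feature in the live frame matching against a pre-surveyed map and voting for a coarse position cell:

```python
# Toy illustration of feature voting, not the authors' code. ORB
# descriptors stand in for the paper's features; the map files and
# grid size are assumptions.
from collections import Counter
import cv2
import numpy as np

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

# Built during the offline survey: one ORB descriptor per map feature,
# plus its global (x, y) position in meters.
map_descriptors = np.load("map_desc.npy")
map_positions = np.load("map_pos.npy")

def localize(frame, cell_size=0.05):
    """Return the 5 cm grid cell that collects the most votes."""
    _, desc = orb.detectAndCompute(frame, None)
    if desc is None:
        return None
    votes = Counter()
    for m in matcher.match(desc, map_descriptors):
        x, y = map_positions[m.trainIdx]
        votes[(int(x / cell_size), int(y / cell_size))] += 1
    # Scattered false matches rarely agree on the same cell, so the
    # top-voted cell is a robust position estimate.
    return votes.most_common(1)[0][0]
```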

Their paper, code, and dataset are all available online. We’re looking forward to fusion systems that combine GPS, Wi-Fi triangulation, and MicroGPS to provide robust and accurate positioning.

Video after the break.

Continue reading “MicroGPS Sees What You Overlook”

Overengineering A Smart Doorbell

Fresh from the mediaeval splendour of the Belgian city of Gent, we bring you more from the Newline hacker conference organised by Hackerspace Gent. [Victor Sonck] works at the top of his house, and thus needed a doorbell notifier. His solution was unexpected and, as he admits, overengineered: machine learning on an audio stream from a microphone to detect the doorbell’s sound.

Having established that selling his soul to Amazon with a Ring doorbell wasn’t an appropriate solution, he next looked at his existing doorbell. Some of us might connect directly to its power to sense when the button was pressed, but we’re kinda glad he went the overengineered route, because it means we’re treated to a run-down of how machine learning works and how it can be applied to audio. The end result can sometimes be triggered by a spoon hitting a cereal plate, but since he was able to demonstrate it working, we think it can be called a success. Should you wish to dive in further, you can find more in his GitHub repository.
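The listening loop itself can be pleasingly small once a model has been trained. A minimal sketch, assuming a classifier trained offline and exported to TensorFlow Lite; the model file, input shape, and threshold are our inventions, not [Victor]’s setup:

```python
# Minimal sketch: record one-second chunks and classify them with a
# TFLite model. "doorbell.tflite" and its output layout are assumed.
import sounddevice as sd
import tensorflow as tf

RATE = 16000
interpreter = tf.lite.Interpreter(model_path="doorbell.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

while True:
    # Grab one second of microphone audio.
    audio = sd.rec(RATE, samplerate=RATE, channels=1, dtype="float32")
    sd.wait()
    interpreter.set_tensor(inp["index"], audio.reshape(inp["shape"]))
    interpreter.invoke()
    doorbell_prob = interpreter.get_tensor(out["index"])[0][0]
    if doorbell_prob > 0.9:
        print("Ding dong! (or possibly a spoon on a cereal plate)")
```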

How would you overengineer a doorbell? Use GNU Radio and filters? Or maybe a Rube Goldberg machine involving string and pulleys? As always, the comments are open.

Continue reading “Overengineering A Smart Doorbell”

Flamethrower weedkiller mounted on a robot arm riding a tank-tracked base

Don’t Sleep On The Lawn, There’s An AI-Powered, Flamethrower-Wielding Robot About

You know how it goes: you’re just hanging out in the yard, there aren’t enough hours in the day, and weeding the lawn is just such a drag. Then an idea pops into your head. How about we attach a gas-powered flamethrower to a robot arm, drive it around on a tank-tracked robotic base, and have it operate autonomously with an AI brain? Yes, that sounds like a good idea. Let’s do that. And so, [Dave Niewinski] did exactly that with his Ultimate Weed Killing Robot.

And you thought the robot overlords might take a more subtle approach and take over the world one coffee machine at a time? No, straight for the fully-autonomous flamethrower it is then.

This build uses a Kinova Robots Gen 3 six-axis arm, mounted to an Agile-X Robotics Bunker base. Control is via a Connect Tech Rudi-NX box, which contains an Nvidia Jetson Xavier NX Edge AI computing engine. Wow, that was a mouthful!

Connectivity from the controller to the base is via CAN bus, but sadly there’s no mention of how the robot arm controller is hooked up. At least this particular arm sports an effector-mounted camera system, which can feed straight into the Jetson, simplifying the build somewhat.

To start the software side of things, [Dave] took a video of his lawn using his mobile phone while walking around. Next he used Roboflow to highlight image stills containing weeds, which were in turn used to help train a vision AI system. The actual AI training was written in Python using Google Colaboratory, which is itself based on the awesome Jupyter Notebook (see also JupyterLab on the main site; if you haven’t tried that yet, and you do any data science at all, you’ll kick yourself for not doing so!). Colaboratory wouldn’t be all that useful for this by itself, except that it gives you direct, free GPU access via the cloud, so you can use it for AI workloads without needing fancy (and currently hard to get) GPU hardware on your desk.
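The write-up doesn’t spell out the detector architecture, but a Roboflow-labelled dataset commonly feeds an off-the-shelf object detector such as YOLOv5, which would make using Colab-trained weights look something like this (the weights file name is hypothetical):

```python
# Hedged sketch: run a custom-trained YOLOv5 weed detector on a camera
# frame. "weeds.pt" is a hypothetical filename for the trained weights.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="weeds.pt")

results = model("lawn_frame.jpg")  # one frame from the robot's camera
for *box, confidence, cls in results.xyxy[0]:
    if confidence > 0.5:
        # Hand the bounding box to the arm controller for treatment.
        print(f"weed at {[round(float(v)) for v in box]}")
```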

Details of the hardware may be a little sparse, but at least the software required can be found on the WeedBot GitHub. It’s not like most of us will have this exact hardware lying around anyway. For a more complete description of this terrifying contraption, check out the video after the break.

Continue reading “Don’t Sleep On The Lawn, There’s An AI-Powered, Flamethrower-Wielding Robot About”

Espresso maker with added nixie flair

AI-Powered Coffee Maker Knows A Bit Too Much About You

People keep warning that Skynet and the great robot uprising are not that far away, what with all this AI and machine-learning malarkey getting so much attention lately. But we think going straight for a terminator robot army is not a very smart approach, not least due to a lack of subtlety. We think it’s a much better bet to take over the world one home appliance at a time, and this AI-powered coffee maker might well be part of that master plan.

PCB stackup with the Raspberry Pi Zero sat atop the nixie tube driver / PSU PCBs

[Mark Smith] has taken a standard semi-auto espresso maker and jazzed it up a bit, with a sweet bar graph nixie tube the only obvious addition, at least from the front of the unit. Inside, a Raspberry Pi Zero sits atop his own nixie tube hat and associated power supply. The whole assembly is dropped into a 3D printed case and lives snuggled up to the water pump.

The Pi runs a web application written with the excellent Flask framework, plus an additional control application written in Python. This allows the user to connect to the machine via Ethernet and see its status. The smarts come in the form of a simple self-grading machine learning algorithm that takes a time series as its input (in this case, when you take your shots of espresso) and, after a few weeks of data, is able to make a reasonable prediction as to when you might want it in the future. It then automatically heats up in time for you to use the machine when you usually do, then cools back down to save energy. No more pointless wandering over to see if the machine is hot enough yet, as you can just check the web page from the comfort of your desk.
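The real self-grading algorithm is [Mark]’s own, but the core idea can be sketched in a few lines: log when shots are pulled, then schedule the boiler a little before the average shot time for that day of the week. The warm-up margin below is an assumed value:

```python
# Back-of-the-envelope sketch of usage-time prediction, not [Mark]'s
# actual algorithm. The 20-minute warm-up margin is an assumption.
from datetime import datetime, timedelta
from statistics import mean

shot_log = []  # datetimes appended every time a shot is pulled

def boiler_on_time(now: datetime, warmup=timedelta(minutes=20)):
    """Return when to switch the boiler on, or None with no history."""
    same_weekday = [t for t in shot_log if t.weekday() == now.weekday()]
    if not same_weekday:
        return None
    # Average minute-of-day of past shots on this weekday.
    avg_minute = mean(t.hour * 60 + t.minute for t in same_weekday)
    shot_time = now.replace(hour=int(avg_minute // 60),
                            minute=int(avg_minute % 60),
                            second=0, microsecond=0)
    return shot_time - warmup
```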

But that’s not all [Mark] has done. He also improved the temperature control of the water boiler and added an interlock that prevents the machine from pulling a shot until the water temperature is just so. The water level is indicated by the glorious bar graph nixie tube, which also serves a few other user-indication duties when appropriate. All in all, a pretty sweet build, but we do add a word of caution: if your toaster starts making an unreasonable number of offers of toasted teacakes, give it a wide berth.