
The Calculator Charm: Calculatorium Leviosa!

Have you ever tried waving your hand around like a magic wand and summoning a calculator? We would guess not, since you’d probably look a little silly doing so. That is, unless you had [Andrei’s] cool gesture-controlled calculator. [Andrei] thought it would be helpful to be able to use a calculator in his research lab without having to take his gloves off, and the results are pretty impressive.

His hardware consists of a PocketBeagle, an OLED, and an MPU6050 inertial measurement unit for capturing his hand motions using an accelerometer and gyroscope. The hardware is pretty straightforward, so the beauty of this project lies in its machine learning implementation.

[Andrei] first captured a few example datasets to train his algorithm by recreating the hand gestures for each number, 0-9, and recording the resulting accelerometer and gyroscope outputs. He then processed the data with a wavelet transform. The intent of the transform was twofold. First, the transform allowed him to reduce the number of samples in his datasets while preserving the shape of the accelerometer and gyroscope signals, the key features in the machine learning classification. Second, he was able to increase the number of features for the classification, since the wavelet transform produces both approximation and detail coefficients, which can both be fed into the algorithm.
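The write-up doesn’t say which wavelet family or library [Andrei] used, but a minimal sketch of that pre-processing step might look something like this in Python (PyWavelets and the Daubechies-4 wavelet are our assumptions):

```python
# A minimal sketch of the wavelet pre-processing step described above,
# assuming PyWavelets and a Daubechies-4 ('db4') wavelet (both assumptions).
import numpy as np
import pywt

def extract_features(gesture_window):
    """Turn one recorded gesture (N samples x 6 IMU channels: ax, ay, az,
    gx, gy, gz) into a flat feature vector of wavelet coefficients."""
    features = []
    for channel in gesture_window.T:            # loop over the six signals
        # A single-level discrete wavelet transform roughly halves the
        # number of samples while preserving the overall signal shape.
        approx, detail = pywt.dwt(channel, 'db4')
        # Both the approximation and detail coefficients become features,
        # doubling the information handed to the classifier.
        features.extend(approx)
        features.extend(detail)
    return np.array(features)

# Example: a fake 128-sample gesture recording with six IMU channels.
gesture = np.random.randn(128, 6)
print(extract_features(gesture).shape)
```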

Because he had a small dataset, he used the Stratified Shuffle Split technique instead of the more common train-test split method, which is generally better suited to larger datasets. The Stratified Shuffle Split ensured each gesture was represented in roughly the same proportion in both the train and test sets. He was also very conscious of optimizing his model to run on a portable processing unit like the PocketBeagle. He spent some time tuning the parameters of his algorithm and ultimately converted his model to a TensorFlow Lite model using the built-in “TFLiteConverter” function within TensorFlow.
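As a rough sketch of how those two pieces fit together (the feature sizes, network architecture, and training settings below are placeholders, not [Andrei]’s actual values), the scikit-learn split and the TensorFlow Lite conversion look roughly like this:

```python
# A rough sketch of the split-and-convert workflow; feature sizes, network
# architecture, and training settings are placeholders, not [Andrei]'s.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedShuffleSplit

X = np.random.randn(200, 804).astype("float32")   # wavelet feature vectors
y = np.random.randint(0, 10, size=200)            # gesture labels 0-9

# StratifiedShuffleSplit keeps each of the ten gestures represented in the
# same proportion in both the train and test portions of a small dataset.
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(sss.split(X, y))
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(X.shape[1],)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=20, validation_data=(X_test, y_test))

# Shrink the trained model for the PocketBeagle with the built-in converter.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("gestures.tflite", "wb") as f:
    f.write(tflite_model)
```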

Finally, in true open-source fashion, all his code is available on GitHub, so feel free to give it a go yourself. Calculatorium Leviosa!


Mind-Controlled Flamethrower

Mind control might seem like something out of a sci-fi show, but like the tablet computer, universal translator, or virtual reality device, it is actually a technology that has made it into the real world. While these devices often require advanced and expensive equipment to interpret brain waves properly, with the right machine learning system it’s possible to do things like this mind-controlled flamethrower on a much smaller budget. (Video, embedded below.)

[Nathaniel F] was already experimenting with brain-computer interfaces and machine learning, and wanted to see if he could build something practical combining these two technologies. Instead of turning to an EEG machine to read brain patterns, he picked up a much less expensive Mindflex and paired it with a machine learning system running TensorFlow to make up for some of its shortcomings. The processing is done by a Raspberry Pi 4, which sends commands to an Arduino to fire the flamethrower when it detects the proper thought patterns. Don’t forget the flamethrower part of this build either: it was designed and built entirely by [Nathaniel F] as well, using gas and an arc lighter.
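The details live in [Nathaniel F]’s own write-up, but the control loop boils down to something like the sketch below; the EEG-reading helper, model file, serial port, and command byte are all hypothetical stand-ins, not his real protocol:

```python
# A hypothetical sketch of the control loop: EEG features in, a TensorFlow
# classification, and a serial command out to the Arduino. The helper
# read_eeg_features(), the model file, the serial port, and the 'F' command
# byte are all stand-ins, not [Nathaniel F]'s actual protocol.
import time
import numpy as np
import serial
import tensorflow as tf

model = tf.keras.models.load_model("mind_model.h5")        # assumed file name
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)   # assumed port

def read_eeg_features():
    """Placeholder for reading one window of Mindflex EEG band powers."""
    return np.random.rand(1, 8).astype("float32")

while True:
    features = read_eeg_features()
    # Probability that the current thought pattern is the "fire" class.
    fire_prob = float(model.predict(features, verbose=0)[0][0])
    if fire_prob > 0.9:
        arduino.write(b"F")    # tell the Arduino to trigger the flamethrower
    time.sleep(0.5)
```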

The build took many hours of training to gather enough data for the neural network, and it works as the proof of concept he was hoping for, but [Nathaniel F] notes that it could be improved by replacing the outdated Mindflex with a better EEG. For now, though, we appreciate seeing sci-fi in the real world in projects like this, or in other mind-controlled projects like this one which converts a prosthetic arm into a mind-controlled music synthesizer.


Machine Learning Current Sensor Snoops On MCUs

Anyone who’s ever tried their hand at reverse engineering a piece of hardware has wished there was some kind of magic wand you could tap on a PCB to understand what it’s doing and why. We imagine that’s what put security researcher [Mark C] on the path to developing CurrentSense-TinyML, a fascinating proof of concept that uses machine learning and sensitive current measurements to try to determine what a microcontroller is up to.

Energy consumption as the LED blinks.

The idea is simple enough: just place an INA219 current sensor between the power supply and the microcontroller under observation, and record the resulting measurements as it goes about its business. Of course, in this case [Mark] knew what the target Arduino Nano was doing, because he wrote the code that blinks its onboard LED.
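A minimal Python logging sketch of that recording step might look like the following; the pi-ina219 library, the 0.1 Ω shunt value, and the sample rate are our assumptions, and [Mark]’s own capture setup may differ:

```python
# A minimal logging sketch for the measurement setup, assuming the pi-ina219
# Python library, a 0.1 ohm shunt, and an I2C-connected INA219 in series with
# the target board's supply. [Mark]'s own capture setup may differ.
import csv
import time
from ina219 import INA219

SHUNT_OHMS = 0.1                    # shunt resistor value on the breakout

ina = INA219(SHUNT_OHMS)
ina.configure()

with open("current_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "current_mA"])
    while True:
        # One sample of the target's supply current per row.
        writer.writerow([time.time(), ina.current()])
        time.sleep(0.01)            # roughly 100 samples per second
```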

This allowed him to create training data for TensorFlow, which was ultimately optimized into a model that could fit onto the Arduino Nano 33 BLE Sense, which stands in for our magic wand. The end result is that the model can accurately predict when the Nano has fired up its LED based on the amount of power it’s using. [Mark] has done a fantastic job of documenting the whole process, which also doubles as a great intro for putting machine learning to work on a microcontroller.
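A toy version of that training step might look like the sketch below; the window length, network size, and label file are assumptions, since the write-up only says the model was optimized to fit on the Nano 33 BLE Sense:

```python
# A toy version of the training step: slice the current log into windows,
# fit a tiny classifier, and shrink it with post-training quantization so it
# fits on a microcontroller. Window length, network size, and the label file
# are assumptions, not [Mark]'s actual pipeline.
import numpy as np
import tensorflow as tf

WINDOW = 64                                     # current samples per example

current = np.loadtxt("current_log.csv", delimiter=",", skiprows=1, usecols=1)
labels = np.loadtxt("led_state.csv")            # one 0/1 LED label per window (hypothetical)

# Slice the continuous current trace into fixed-length, non-overlapping windows.
X = np.array([current[i:i + WINDOW]
              for i in range(0, len(current) - WINDOW, WINDOW)])
y = labels[:len(X)]

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(WINDOW,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # LED on or off
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=30)

# Post-training quantization keeps the converted model small enough for the Nano.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("currentsense.tflite", "wb") as f:
    f.write(converter.convert())
```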

Now we already know what you’re thinking: obviously the current would go up when the LED was lit, so the machine learning aspect is completely unnecessary. That may be true in this limited context, but remember, this is just a proof of concept to base further work on. In the future, with more training data, this technique could potentially be used to identify a whole range of nuanced activities. You’d be able to see when the MCU was sitting idle, when it was writing to flash, or when it was reading from sensors. In fact, with a good enough model, it might even be possible to identify the individual sensors that are being polled.

These are early days, but we’re very interested in seeing where this research goes. It might not be magic, but if analyzing the current draw of a coffee maker can tell you how much everyone in the office is drinking, then maybe it can help us figure out what all these unlabeled ICs are doing.

Open Source Self-Driving Smartphone Robot

Our smartphones are incredibly powerful computers in their own right, yet we don’t often see them directly integrated into projects. Intel Intelligent Systems Lab has done exactly that with the release of OpenBot, an open source, smartphone-based self-driving robot.

Most of the magic happens on the smartphone, which runs an app built on TensorFlow Lite and fuses the phone’s camera and sensor array with data from the ultrasonic sensors and wheel encoders on the robot. The robot itself is relatively simple: four geared DC motors and motor drivers wired to an Arduino Nano, which interfaces with an Android phone over serial.

The app created by the Intel ISL team comes preloaded with three AI models that can do either person following or two different modes of autonomous navigation. By connecting a Bluetooth controller to the smartphone and driving the robot around manually while collecting data, you can train a custom autonomous driving policy to suit your specific environment.
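OpenBot ships its own training pipeline, so the snippet below is only a toy illustration of the behavior-cloning idea; the file names, image size, and network are placeholders rather than the project’s actual code:

```python
# A toy illustration of the behavior-cloning idea: camera frames plus the
# manually-driven control commands go in, a small network that predicts those
# commands comes out. File names, image size, and the network are placeholders;
# OpenBot's real training pipeline lives in its repository.
import numpy as np
import tensorflow as tf

frames = np.load("frames.npy")        # (N, 96, 256, 3) uint8 camera frames (hypothetical)
controls = np.load("controls.npy")    # (N, 2) normalized left/right wheel commands

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=frames.shape[1:]),
    tf.keras.layers.Conv2D(16, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),         # predicted left/right control values
])
model.compile(optimizer="adam", loss="mse")
model.fit(frames, controls, epochs=10, batch_size=32)

# Convert the learned policy for on-phone inference with TensorFlow Lite.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("driving_policy.tflite", "wb") as f:
    f.write(tflite_model)
```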

This looks like an excellent way to get a taste of autonomous robots on a small budget while still being a viable base for more demanding applications. We’ve seen only a few smartphone-based robots, like DriveMyPhone and SmartiPresense, which don’t have AI capabilities but are intended for telepresence applications. We’ve always wondered why we don’t see more projects built around cellphones, so we welcome the example.


Background Substitution, No Green Screen Required

All this working from home that people have been doing has a natural but unintended consequence: revealing your dirty little domestic secrets on a video conference. Face time can come at a high price if the only room you have available for work is the bedroom, with piles of dirty laundry or perhaps the incriminating contents of one’s nightstand on full display for your coworkers.

There has to be a tech fix for this problem, and many of the commercial video conferencing platforms support virtual backgrounds. But [Florian Echtler] would rather air his dirty laundry than go near Zoom, so he built a machine-learning background substitution app that works with just about any video conferencing platform. Awkwardly dubbed DeepBackSub (he’s working on a better name), the system does the hard work of finding the person in the frame with TensorFlow Lite. After identifying everything in the frame that’s a person, OpenCV replaces everything that’s not with whatever background you choose, and the modified scene is piped through a virtual video device to the videoconferencing software. He’s tested it on Firefox, Skype, and guvcview so far, all running on Linux. The resolution and frame rates are limited, but such is the cost of keeping your secrets and establishing a firm boundary between work life and home life.
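A heavily condensed sketch of the approach (not [Florian]’s actual code; the model file, its 257×257 input size, and the person class index are assumptions) looks something like this:

```python
# A heavily condensed sketch of the approach: segment the person with a TFLite
# model, then composite with OpenCV. The model file, its 257x257 input size,
# and the person class index (15) are assumptions; [Florian]'s pipeline and
# the virtual-device plumbing live in his repository.
import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="deeplabv3_257.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

background = cv2.imread("beach.jpg")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]

    # Resize and normalize the frame for the segmentation network.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    small = cv2.resize(rgb, (257, 257)).astype(np.float32) / 127.5 - 1.0
    interpreter.set_tensor(inp["index"], small[np.newaxis, ...])
    interpreter.invoke()
    seg = interpreter.get_tensor(out["index"])[0]      # per-pixel class scores

    # Keep pixels whose most likely class is "person"; swap out the rest.
    person = (np.argmax(seg, axis=-1) == 15).astype(np.uint8)
    mask = cv2.resize(person, (w, h), interpolation=cv2.INTER_NEAREST)
    composite = np.where(mask[..., None] == 1, frame, cv2.resize(background, (w, h)))

    # The real project pipes this frame to a virtual video device; here we just preview it.
    cv2.imshow("background substitution preview", composite)
    if cv2.waitKey(1) == 27:                           # press Esc to quit
        break
```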

[Florian] has taken the need for a green screen out of what’s formally known as chroma key compositing, which [Tom Scott] did a great primer on a few years back. A physical green screen is the traditional way to do this, but we honestly think this technique is great and can’t wait to try it out with our Hackaday colleagues at the weekly videoconference.