Machine Learning Current Sensor Snoops On MCUs

Anyone who’s ever tried their hand at reverse engineering a piece of hardware has wished there was some kind of magic wand you could tap on a PCB to understand what it’s doing and why. We imagine that’s what put security researcher [Mark C] on the path to developing CurrentSense-TinyML, a fascinating proof of concept that uses machine learning and sensitive current measurements to try and determine what a microcontroller is up to.

Energy consumption as the LED blinks.

The idea is simple enough: just place an INA219 current sensor between the power supply and the microcontroller under observation, and record the resulting measurements as it goes about its business. Of course, in this case, [Mark] knew what the target Arduino Nano was doing because he wrote the code that blinks its onboard LED.
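If you want to play along at home, the logging side can be as simple as polling the sensor over I2C and writing timestamped readings to a file. Here’s a minimal sketch assuming a host with I2C access and Adafruit’s CircuitPython INA219 driver; the wiring, sample rate, and file name are our own assumptions for illustration, not [Mark]’s actual setup.

```python
# Minimal sketch: log INA219 current readings for later training.
# Assumes a host with I2C access and Adafruit's CircuitPython INA219 driver;
# this is an illustration, not the code from CurrentSense-TinyML.
import csv
import time

import board
import adafruit_ina219

i2c = board.I2C()                      # board's default SCL/SDA pins
sensor = adafruit_ina219.INA219(i2c)   # default I2C address 0x40

with open("current_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp_s", "current_mA"])
    while True:
        writer.writerow([time.monotonic(), sensor.current])  # current in mA
        time.sleep(0.001)  # sample as quickly as the bus comfortably allows
```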

This allowed him to create training data for TensorFlow, which was ultimately optimized into a model small enough to fit onto the Arduino Nano 33 BLE Sense that stands in for our magic wand. The end result is that the model can accurately predict when the Nano has fired up its LED based on the amount of power it’s using. [Mark] has done a fantastic job of documenting the whole process, which also doubles as a great intro for putting machine learning to work on a microcontroller.
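For the curious, the general shape of that TensorFlow-to-microcontroller pipeline looks something like the sketch below. The window length, file names, and network layers are illustrative assumptions on our part rather than [Mark]’s actual model.

```python
# Rough sketch of the training/conversion pipeline, not [Mark]'s actual code.
# Assumes windows.npy holds fixed-length windows of current samples and
# labels.npy holds 0/1 labels (LED off / LED on); the window size is made up.
import numpy as np
import tensorflow as tf

WINDOW = 64
X = np.load("windows.npy")   # shape (num_examples, WINDOW), hypothetical file
y = np.load("labels.npy")    # shape (num_examples,)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=20, validation_split=0.2)

# Convert to a quantized TensorFlow Lite model small enough for the Nano 33 BLE Sense.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```

From there, the usual move is to dump the .tflite file to a C array (xxd -i model.tflite does the trick) and compile it into the Arduino sketch alongside TensorFlow Lite for Microcontrollers.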

Now we already know what you’re thinking: obviously the current would go up when the LED was lit, so the machine learning aspect is completely unnecessary. That may be true in this limited context, but remember, this is just a proof of concept to base further work on. In the future, with more training data, this technique could potentially be used to identify a whole range of nuanced activities. You’d be able to see when the MCU was sitting idle, when it was writing to flash, or when it was reading from sensors. In fact, with a good enough model, it might even be possible to identify the individual sensors that are being polled.

These are early days, but we’re very interested in seeing where this research goes. It might not be magic, but if analyzing the current draw of a coffee maker can tell you how much everyone in the office is drinking, then maybe it can help us figure out what all these unlabeled ICs are doing.

Open Source Self-Driving Smartphone Robot

Our smartphones are incredibly powerful computers in their own right, yet we don’t often see them directly integrated into projects. Intel Intelligent Systems Lab has done exactly that with the release of OpenBot, an open source, smartphone-based self-driving robot.

Most of the magic happens on the smartphone, which runs an app built on TensorFlow Lite and combines the phone’s camera and sensor array with data from the ultrasonic sensors and wheel encoders on the robot. The robot itself is relatively simple: four geared DC motors and motor drivers wired to an Arduino Nano, which interfaces with the Android phone over serial.
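To get a feel for that division of labor, the sketch below shows the sort of serial bridge involved. The port name and message format here are invented for illustration and shouldn’t be mistaken for OpenBot’s actual protocol.

```python
# Illustrative phone/host-to-Arduino serial bridge in the spirit of OpenBot.
# The port name and message format are made up, NOT OpenBot's real protocol.
import serial  # pyserial

link = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.1)

def drive(left_pwm: int, right_pwm: int) -> None:
    """Send a left/right motor command for the firmware to parse."""
    link.write(f"c{left_pwm},{right_pwm}\n".encode())

def read_telemetry() -> str:
    """Read one line of sensor feedback (e.g. encoder ticks, sonar distance)."""
    return link.readline().decode(errors="ignore").strip()

drive(128, 128)           # roll forward at roughly half throttle
print(read_telemetry())   # whatever the firmware reports back
```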

The app created by the Intel ISL team comes preloaded with three AI models covering person following and two different modes of autonomous navigation. By connecting a Bluetooth controller to the smartphone and driving the robot around manually in your specific environment while collecting data, you can train a custom autonomous driving policy to suit your surroundings.
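Under the hood, that custom training is essentially behavior cloning: a network learns to map camera frames to the control inputs you supplied while driving. The snippet below is a heavily simplified sketch of that idea, with invented file names and network dimensions rather than anything lifted from the OpenBot codebase.

```python
# Simplified behavior-cloning sketch, not OpenBot's training code.
# Assumes frames.npy holds logged camera frames (N, 96, 96, 3) and
# controls.npy the matching left/right commands from manual driving.
import numpy as np
import tensorflow as tf

frames = np.load("frames.npy") / 255.0   # hypothetical logged images
controls = np.load("controls.npy")       # shape (N, 2): left, right throttle

policy = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(16, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),            # predicted left/right commands
])
policy.compile(optimizer="adam", loss="mse")
policy.fit(frames, controls, epochs=10, validation_split=0.1)
```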

This looks like an excellent way to get a taste of autonomous robots on a small budget, while still being a viable base for more demanding applications. We’ve seen only a few smartphone-based robots, like DriveMyPhone and SmartiPresense, which don’t have AI capabilities but are intended for telepresence applications. We’ve always wondered why we don’t see more projects built around cellphones, so we welcome the example.


Background Substitution, No Green Screen Required

All this working from home that people have been doing has a natural but unintended consequence: revealing your dirty little domestic secrets on a video conference. Face time can come at a high price if the only room you have available for work is the bedroom, with piles of dirty laundry or perhaps the incriminating contents of one’s nightstand on full display for your coworkers.

There has to be a tech fix for this problem, and many of the commercial video conferencing platforms support virtual backgrounds. But [Florian Echtler] would rather air his dirty laundry than go near Zoom, so he built a machine-learning background substitution app that works with just about any video conferencing platform. Awkwardly dubbed DeepBackSub (he’s working on a better name), the system does the hard work of finding the person in the frame with TensorFlow Lite. After identifying everything in the frame that’s a person, OpenCV replaces everything that’s not with whatever background you choose, and the modified scene is piped over a virtual video device to the videoconferencing software. He’s tested it on Firefox, Skype, and guvcview so far, all running on Linux. The resolution and frame rates are limited, but such is the cost of keeping your secrets and establishing a firm boundary between work life and home life.
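Conceptually, the compositing step is simple once you have a per-pixel person mask. Here’s a hedged sketch of that stage using OpenCV and NumPy, with the segmentation model stubbed out by a dummy mask; DeepBackSub gets its real mask from TensorFlow Lite and writes frames to a virtual video device rather than a preview window.

```python
# Sketch of the compositing step: given a person mask, swap out the background.
# The segmentation is stubbed with a dummy mask; DeepBackSub uses TensorFlow Lite.
import cv2
import numpy as np

def person_mask(frame: np.ndarray) -> np.ndarray:
    """Placeholder mask that keeps a centered rectangle.
    A real implementation would run a segmentation model here."""
    h, w = frame.shape[:2]
    mask = np.zeros((h, w, 1), dtype=np.float32)
    mask[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 1.0
    return mask

cap = cv2.VideoCapture(0)
background = cv2.imread("beach.jpg")   # hypothetical replacement background

while True:
    ok, frame = cap.read()
    if not ok:
        break
    bg = cv2.resize(background, (frame.shape[1], frame.shape[0]))
    mask = person_mask(frame)                       # (H, W, 1) for broadcasting
    composite = (frame * mask + bg * (1 - mask)).astype(np.uint8)
    cv2.imshow("composite", composite)              # DeepBackSub instead pipes this
    if cv2.waitKey(1) == 27:                        # to a virtual /dev/video device
        break
```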

[Florian] has taken the need for a green screen out of what’s formally known as chroma key compositing, which [Tom Scott] did a great primer on a few years back. A physical green screen is the traditional way to do this, but we honestly think this technique is great and can’t wait to try it out with our Hackaday colleagues at the weekly videoconference.