Compact, Gesture-Based Remote Control Over Bluetooth

[AlexMiller11] shared a project for a DIY gesture-sensing remote control that acts like a Bluetooth keyboard, capable of controlling media and presentations on a computer with a high degree of accuracy.

The device recognizes eight different gestures and controls a host PC over Bluetooth.

The hardware is a Silicon Labs xG24 dev kit, a small IoT-focused board that can run from a CR2032 coin cell. Part of what makes it all work is the six-axis IMU, but the rest is the software that interprets the motion data and figures out what gesture the user is making. That happens with a Neuton.AI model and SDK, a tiny but effective machine learning framework for small devices.

How does it actually work? The device acts as a Bluetooth HID and connects to a PC in the same way as a regular Bluetooth keyboard. Once that’s done, recognized gestures are printed to the serial port as well as sent via Bluetooth to the host machine, which can then play and pause media, adjust the volume, step through presentations, and more. More details are on the project’s GitHub repository. There’s also a demo video that explains exactly what’s going on, embedded below the page break.
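To make that concrete, here is a minimal sketch of what a gesture-to-HID mapping might look like. The gesture names, the ble_hid_send_consumer() stub, and which gesture gets which key are our own assumptions, not the project's code; only the USB HID Consumer Page usage values (0xCD Play/Pause, 0xE9/0xEA Volume, 0xB5/0xB6 Next/Previous) come from the HID usage tables.

```cpp
// Hypothetical gesture-to-media-key mapping; the BLE send is a stand-in stub.
#include <cstdint>
#include <cstdio>

enum Gesture : uint8_t { SWIPE_RIGHT, SWIPE_LEFT, SWIPE_UP, SWIPE_DOWN, DOUBLE_TAP, UNKNOWN };

// Stand-in for the real BLE HID stack: the report would carry a Consumer Control usage.
static void ble_hid_send_consumer(uint16_t usage) {
    std::printf("BLE HID consumer report: 0x%04X\n", static_cast<unsigned>(usage));
}

void handle_gesture(Gesture g) {
    switch (g) {
        case DOUBLE_TAP:  ble_hid_send_consumer(0x00CD); break; // Play/Pause
        case SWIPE_UP:    ble_hid_send_consumer(0x00E9); break; // Volume Up
        case SWIPE_DOWN:  ble_hid_send_consumer(0x00EA); break; // Volume Down
        case SWIPE_RIGHT: ble_hid_send_consumer(0x00B5); break; // Next track / next slide
        case SWIPE_LEFT:  ble_hid_send_consumer(0x00B6); break; // Previous track / slide
        default: break; // ignore low-confidence or unknown gestures
    }
}

int main() {
    handle_gesture(DOUBLE_TAP);   // toggles playback on the host
    handle_gesture(SWIPE_RIGHT);  // advances to the next track or slide
}
```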

Machine learning is a way of using software to solve the kinds of problems humans are not very good at writing programs to solve, and accurate gesture recognition is a good example. Not all such applications require heaps of overheating GPUs, either. We’ve seen the concept of a neural network stripped down to its bare essentials running on an Arduino Uno, for those who would like to better appreciate the fundamentals.

Continue reading “Compact, Gesture-Based Remote Control Over Bluetooth”

Tiny Machine Learning On As Little As 2 KB Of RAM

All of the machine learning stuff coming out lately doesn’t affect you if you are developing with embedded microcontrollers, right? Perhaps not. Microsoft Research India wants you to use their EdgeML tool to do machine learning tasks such as gesture recognition on tiny devices like the Arduino Uno. According to the developers, you might need as little as 2 KB of RAM. There’s no network connection required, and the work uses TensorFlow underneath, so it is compatible with much of what you’ll find for bigger computers.
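If 2 KB sounds implausible, here is a back-of-the-envelope sketch of how a classifier can fit: trained weights live in flash as constants, and run-time RAM only has to hold a small feature buffer and one score per class. This is purely an illustration of the budget, not EdgeML's actual API or algorithms (which include things like Bonsai and ProtoNN).

```cpp
// Toy quantized linear classifier sized for a microcontroller RAM budget.
#include <cstdint>
#include <cstdio>

constexpr int kFeatures = 32;   // e.g. windowed accelerometer statistics
constexpr int kClasses  = 4;    // gesture labels

// 8-bit weights: 32 * 4 = 128 bytes, stored in flash (const) on an MCU.
const int8_t  kWeights[kClasses][kFeatures] = { /* filled in by training */ };
const int32_t kBias[kClasses]               = { /* filled in by training */ };

int classify(const int8_t feat[kFeatures]) {
    int best = 0;
    int32_t best_score = INT32_MIN;
    for (int c = 0; c < kClasses; ++c) {
        int32_t score = kBias[c];
        for (int f = 0; f < kFeatures; ++f) score += int32_t(kWeights[c][f]) * feat[f];
        if (score > best_score) { best_score = score; best = c; }
    }
    return best;
}

int main() {
    int8_t features[kFeatures] = {0};   // ~32 bytes of working RAM
    std::printf("predicted class: %d\n", classify(features));
}
```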

If you add processing power, you can get more capability. For example, one of the demonstrations is a wake-word recognizer on a Raspberry Pi Zero (although the page for that demo seems to be missing at the moment; try the GesturePod, instead).

The system generally uses Python, but there are efficient C++ implementations for selected algorithms. The code lives on GitHub, along with a number of research papers about each tool. There’s also a recent paper on MinUn, an attempt to make things even more efficient for ARM microcontrollers. In particular, MinUn can store approximate numbers to save space, allows for variable precision of tensors, and tries to reduce memory fragmentation, an important feature for CPUs that don’t have memory management units.
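The "approximate numbers" idea is roughly the familiar quantization trick: store low-bit integers plus a shared scale, and pick the bit width per tensor depending on how much precision that tensor actually needs. The sketch below is our own illustration of that trade-off, not MinUn's actual number format.

```cpp
// Per-tensor quantization: real_value ≈ data[i] * scale.
#include <cstdint>
#include <cstdio>
#include <vector>

struct QuantTensor {
    std::vector<int8_t> data;  // could be int16_t for a tensor that needs more precision
    float scale;               // shared by every element of the tensor
};

QuantTensor quantize(const std::vector<float>& x, float max_abs) {
    QuantTensor t{std::vector<int8_t>(x.size()), max_abs / 127.0f};
    for (size_t i = 0; i < x.size(); ++i)
        t.data[i] = static_cast<int8_t>(x[i] / t.scale);
    return t;
}

int main() {
    QuantTensor w = quantize({0.50f, -0.25f, 0.10f}, 1.0f);
    for (size_t i = 0; i < w.data.size(); ++i)
        std::printf("approx[%zu] = %f\n", i, w.data[i] * w.scale);
}
```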

If you haven’t studied TensorFlow yet, start here. Why use something like this with a microcontroller? How about smarter robots?


Illuminate Your Benched Things With This Death Stranding Lamp

[Pinkman] creates a smart RGB table lamp based on the “Odradek device” robot arm from the video game “Death Stranding”.

[Pinkman] uses a XIAO BLE nRF52840 Sense, which brings Bluetooth, a microphone, and TinyML capability. The nRF52840 pushes data to the five WS2812 strips, one for each “blade” of the lamp, and also connects to a TTP223 capacitive touch controller for touch input. On-device keyword spotting lets the lamp be trained to turn on with a custom voice command ([Pinkman] uses “Bling Bling”). [Pinkman] has also provided Bluetooth control, allowing the color and pattern to be changed from a phone application.
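For a sense of the I/O path, here is a minimal Arduino-style sketch assuming one combined WS2812 chain, a TTP223 on a digital pin, and a keyword_detected() stub standing in for the on-device model. The pin numbers and LED counts are placeholders, not [Pinkman]'s actual wiring, and it uses the stock Adafruit NeoPixel library rather than whatever driver the project uses.

```cpp
#include <Adafruit_NeoPixel.h>

const int LED_PIN   = 2;        // data line feeding the blade strips (assumed)
const int TOUCH_PIN = 3;        // TTP223 output, goes HIGH while touched (assumed)
const int NUM_LEDS  = 5 * 12;   // five blades, LED count per blade assumed

Adafruit_NeoPixel strip(NUM_LEDS, LED_PIN, NEO_GRB + NEO_KHZ800);
bool lampOn = false;

bool keyword_detected() { return false; }  // stand-in for the keyword-spotting model

void setLamp(uint32_t color) {
  for (int i = 0; i < NUM_LEDS; i++) strip.setPixelColor(i, color);
  strip.show();
}

void setup() {
  pinMode(TOUCH_PIN, INPUT);
  strip.begin();
  strip.show();                  // start with all LEDs off
}

void loop() {
  // Toggle on a touch event or on the spoken wake word ("Bling Bling").
  if (digitalRead(TOUCH_PIN) == HIGH || keyword_detected()) {
    lampOn = !lampOn;
    setLamp(lampOn ? strip.Color(80, 0, 120) : 0);  // purple when on, dark otherwise
    delay(300);                  // crude debounce
  }
}
```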

The lamp is 3D printed, with the build based on [Nils Kal]’s Printables files. Each of the five blades has a white 3D-printed diffuser plate to smooth out the hot spots from the LED strip. The lamp is fully adjustable and has cavities, channels, and access points for “invisible” wiring. [Pinkman] has also upgraded the original 3D files to allow for the three wires needed to drive the WS2812s, instead of the two wires that [Nils] had allotted in the original.

[Pinkman] has all of the code, STL files and training data available for download, so be sure to check it out. Lamps are a favorite of ours and we’ve featured our fair share, including 3D printed Shoji lamps and RGB wall lamps.

Video after the break!

Continue reading “Illuminate Your Benched Things With This Death Stranding Lamp”

Wearable Sensor Trained To Count Coughs

There are plenty of problems that are easy for humans to solve but almost impossibly difficult for computers. Even with modern computing power being what it is, things like identifying objects in images remain fairly difficult. Similarly, identifying specific sounds within audio samples remains problematic, and as [Eivind] found, is holding up a lot of medical research to boot. To solve one specific problem, he created a system for counting the coughs of medical patients.

This was built with the idea of helping people with chronic obstructive pulmonary disease (COPD). Most of the existing methods for studying the disease and treating patients with it involve manually counting the number of coughs on an audio recording. While there are some software solutions to this problem to save some time, this device seeks to identify coughs in real time as they happen. It does this by training a tinyML model to identify coughs and reject cough-like sounds. Everything runs on an Arduino Nano with BLE for communication.
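The counting loop itself can be quite small once the model is in place. Below is a hedged sketch of what it might look like: cough_probability() stands in for the trained classifier (we don't know its actual inference call), the UUIDs are made up, and only the ArduinoBLE calls are the library's real API.

```cpp
#include <ArduinoBLE.h>

// Made-up custom UUIDs for illustration only.
BLEService coughService("19B10010-E8F2-537E-4F6C-D104768A1214");
BLEUnsignedIntCharacteristic coughCount("19B10011-E8F2-537E-4F6C-D104768A1214",
                                        BLERead | BLENotify);

unsigned int count = 0;

float cough_probability() { return 0.0f; }   // stand-in for the tinyML model

void setup() {
  BLE.begin();
  BLE.setLocalName("CoughCounter");
  BLE.setAdvertisedService(coughService);
  coughService.addCharacteristic(coughCount);
  BLE.addService(coughService);
  coughCount.writeValue(count);
  BLE.advertise();
}

void loop() {
  BLE.poll();
  // One audio window per pass; only count when the model is confident,
  // which is what rejects the "cough-like" sounds mentioned above.
  if (cough_probability() > 0.8f) {
    count++;
    coughCount.writeValue(count);
    delay(500);   // refractory period so one cough isn't counted twice
  }
}
```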

While the only data the model has been trained on are sounds from [Eivind], the existing prototypes do seem to show promise. With more sound data this could be a powerful tool for patients with this disease. And even though this uses machine learning on a small platform, we have seen before that Arduinos are plenty capable of being effective machine learning solutions with the right tools on board.

Machine Learning Shushes Stressed Dogs

If there’s one demographic that has benefited from people being stuck at home during Covid lockdowns, it would be dogs. Having their humans around 24/7 meant more belly rubs, more table scraps, and more attention. Of course, for many dogs, especially those who found their homes during quarantine, this has led to attachment issues as their human counterparts have begun to return to work and school.

[Clairette] has had a particularly difficult time adapting to her friends leaving every day, but thankfully her human [Nathaniel Felleke] was able to come up with a clever solution. He trained a TinyML neural net to detect when she barked and used an Arduino to play a sound bite to soothe her. The sound bites in question are recordings of [Nathaniel]’s mom either praising or scolding [Clairette], and as you can see from the video below, they seem to work quite well. To train the network, [Nathaniel] worked with several datasets to avoid overfitting, including one he created himself using actual recordings of barks and ambient sounds within his own house. He used the EON Tuner, a tool by Edge Impulse, to help find the best model and perform the training. He uploaded the trained network to an Arduino Nano 33 BLE Sense running Mbed OS, and a second Arduino handled playing sound bites via an Adafruit Music Maker FeatherWing.
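Here is a sketch of just the playback half of that two-board setup, assuming the detection board raises a digital "bark detected" line. The pin assignments, the trigger wiring, and the file names are our assumptions rather than [Nathaniel]'s code; the library calls are Adafruit's stock VS1053 driver for the Music Maker FeatherWing.

```cpp
#include <SPI.h>
#include <SD.h>
#include <Adafruit_VS1053.h>

const int VS1053_CS   = 6;    // VS1053 chip select (assumed Feather wiring)
const int VS1053_DCS  = 10;   // VS1053 data select
const int VS1053_DREQ = 9;    // VS1053 data request interrupt
const int CARDCS      = 5;    // SD card chip select
const int BARK_PIN    = 12;   // line driven HIGH by the detection board (assumed)

Adafruit_VS1053_FilePlayer player(-1, VS1053_CS, VS1053_DCS, VS1053_DREQ, CARDCS);

void setup() {
  pinMode(BARK_PIN, INPUT);
  player.begin();
  SD.begin(CARDCS);
  player.setVolume(20, 20);   // lower numbers are louder
}

void loop() {
  if (digitalRead(BARK_PIN) == HIGH) {
    // Could alternate or randomize between the "praise" and "scold" recordings.
    player.playFullFile("/praise01.mp3");
  }
}
```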

While machine learning may sound like a bit of an extreme solution to curb your dog’s barking, it’s certainly innovative, and even appears to have been successful. Paired with this web-connected treat dispenser, you could keep a dog entertained for hours.

Continue reading “Machine Learning Shushes Stressed Dogs”

An RP2040 Board Designed For Machine Learning

Machine learning (ML) typically conjures up ideas of fancy code requiring oodles of storage and tons of processing power. However, there are some ML models that, once trained, can readily be run on much more spartan hardware – even a microcontroller! The RP2040, star of the Raspberry Pi Pico, is one such chip up to the task, and [Arducam] have announced a board aiming to employ it to those ends – the Pico4ML.

The board goes heavy on the hardware, equipping the RP2040 with plenty of tools useful for machine learning tasks. There’s a QVGA camera on board, as well as a tiny 0.96″ TFT display, and the camera feed can even be streamed live to the screen if so desired. A microphone to capture audio and an IMU are baked into the board as well. This puts object, speech, and gesture recognition well within the purview of the Pico4ML.

Running ML models on a board like the Pico4ML isn’t about heavy-duty, high-performance computing. Instead, it’s intended for applications where low power and portability are key. If you’ve got some ideas on what the Pico4ML could do and do well, sound off in the comments. We’d probably hook it up to a network so we could have it automatically place an order when we yell out for pizza. We’ve covered machine learning on microcontrollers before, too – with a great Remoticon talk on how to get started!

Remoticon Video: How To Use Machine Learning With Microcontrollers

Going from a microcontroller blinking an LED to one that blinks the LED in response to voice commands, using a neural network trained on your own data set, is a “now draw the rest of the owl” problem. Lucky for us, Shawn Hymel walks us through the entire process in his Tiny ML workshop from the 2020 Hackaday Remoticon. The video has just now been published and can be viewed below.

This is truly an end-to-end Hello World for getting machine learning up and running on a microcontroller. Shawn covers the process of collecting and preparing the audio samples, training the data set, and getting it all onto the microcontroller. At the end of two hours, he’s able to show the STM32 recognizing and responding to two different spoken words. Along the way he pauses to discuss the context of what’s happening in every step, which will help you go back and expand on those areas later to suit your own project needs.
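The last step of that pipeline boils down to acting on the model's two output scores. The sketch below is our own minimal illustration of that decision logic, with run_keyword_model() standing in for whatever inference call the deployed model exposes on the STM32; nothing here is from Shawn's workshop code.

```cpp
#include <cstdio>

struct Scores { float word_on; float word_off; };

// Stand-in for the trained keyword-spotting network.
Scores run_keyword_model(const short* audio_window, int len) {
    (void)audio_window; (void)len;
    return {0.0f, 0.0f};
}

void handle_window(const short* audio_window, int len) {
    Scores s = run_keyword_model(audio_window, len);
    const float kThreshold = 0.8f;             // only act on confident detections
    if (s.word_on > kThreshold)       std::puts("LED on");    // e.g. drive a GPIO high
    else if (s.word_off > kThreshold) std::puts("LED off");   // drive it low
}

int main() {
    static short window[16000] = {0};          // one second of 16 kHz audio
    handle_window(window, 16000);
}
```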

Continue reading “Remoticon Video: How To Use Machine Learning With Microcontrollers”