[Image: Flow chart from the paper "Assessment of the Feasibility of Using Noninvasive Wearable Biometric Monitoring Sensors to Detect Influenza and the Common Cold Before Symptom Onset".]

Wearables Can Detect The Flu? Well…Maybe…

Surprisingly, there are no pre-symptomatic screening methods for the common cold or the flu, which lets these viruses spread unbeknownst to the infected. But if we could detect an infection before symptoms appear, we could do a lot more to contain the flu or common cold and possibly save lives. Well, that's what the researchers behind this highly collaborative study set out to accomplish using data from wearable devices.

Participants in the study were given an E4 wristband, a research-grade wearable that measures heart rate, skin temperature, electrodermal activity, and movement. They wore the E4 before and after inoculation with either influenza or rhinovirus, and the researchers used 25 binary random forest classification models to predict whether participants were infected based on the physiological data reported by the sensor. Their results are pretty lengthy, so I'll only highlight a few major discussion points. In one particular analysis, they found that at 36 hours after inoculation their model had an accuracy of 89%, with 100% sensitivity and 67% specificity. Those aren't exactly world-shaking numbers, but the researchers thought they were pretty promising nonetheless.
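
If you're curious what that modeling step looks like in practice, here's a minimal sketch of one binary random forest classifier built with scikit-learn. The features and data below are synthetic stand-ins, not the paper's actual pipeline:

```python
# Minimal sketch of one binary infection classifier over windowed E4
# features (heart rate, skin temperature, EDA, movement). The data here
# is synthetic; the real study trained 25 such models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200  # hypothetical number of observation windows
X = np.column_stack([
    rng.normal(70, 8, n),     # mean heart rate (bpm)
    rng.normal(33, 0.7, n),   # skin temperature (deg C)
    rng.normal(0.4, 0.2, n),  # electrodermal activity (microsiemens)
    rng.normal(1.0, 0.3, n),  # accelerometer magnitude (g)
])
y = np.repeat([0, 1], n // 2)  # 1 = infected, 0 = not infected

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Sensitivity is recall on the positive class, specificity on the negative.
print("sensitivity:", recall_score(y_test, y_pred))
print("specificity:", recall_score(y_test, y_pred, pos_label=0))
```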

One major consideration for the accuracy of their model is the quality of the data reported by the wearable. Namely, if the data reported by the wearable isn't reliable, no model derived from it can be trusted either. We've discussed those points here at Hackaday before. Another major consideration is the lack of a control group. You definitely need to know whether the model is simply tagging everyone as "infected" (which the specificity figure does hint at, to be fair), and a control group of participants who were not inoculated with either virus would be one way to answer that question. Fortunately, the researchers acknowledge this limitation of their work, and we hope they'll remedy it in future studies.
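
To see why specificity matters so much here, consider the degenerate model that flags every participant as infected: it scores a perfect 100% sensitivity while its specificity collapses to zero. A toy calculation (with invented counts) makes that concrete:

```python
# Toy numbers: a model that labels every participant "infected" looks
# perfect on sensitivity but is useless on specificity.
infected, healthy = 10, 10   # hypothetical cohort
tp, fn = infected, 0         # every infected person flagged: nothing missed
tn, fp = 0, healthy          # but every healthy person flagged too

sensitivity = tp / (tp + fn)  # 1.0 -> 100%
specificity = tn / (tn + fp)  # 0.0 -> 0%
print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}")
```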

Studies like this are becoming increasingly common, and the ongoing pandemic has motivated physiological monitoring research even further. It seems like wearables are here to stay, as the academic work involving these devices intensifies by the day. We'd love to see what kind of data could be obtained by a community-developed device, as we've seen some pretty impressive DIY biosensor projects over the years.

[Image: The Empatica E4 wristband, worn on the arm, measures location, temperature, skin conductance, sleep, and more.]

Wearable Sensor For Detecting Substance Use Disorder

Oftentimes, the feature set for our typical fitness-focused wearables feels a bit empty. Push notifications on your wrist? OK, fine. Counting your steps? Sure, why not. But how useful are those capabilities anyway? Well, what if wearables could be used for a more dignified purpose like helping people in recovery from substance use disorder (SUD)? That’s what the researchers at the University of Massachusetts Medical School aimed to find out.

In their paper, they used a wrist-worn wearable to measure locomotion, heart rate, skin temperature, and electrodermal activity of 38 SUD patients during their everyday lives. They wanted to detect periods of stress and craving, as these parameters are possible triggers of substance use. Furthermore, they had patients self-report times during the day when they felt stressed or had cravings, and used those reports to calibrate their model.
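
Turning scattered self-reports into training labels is a classic preprocessing chore. Here's a rough pandas sketch of one way to do it; the column names, window size, and timestamps are our invention, not the paper's actual pipeline:

```python
# Rough sketch: label fixed-length sensor windows using the times at
# which a patient self-reported feeling stressed. All values invented.
import pandas as pd

# One row per 60-second sensor window (features omitted for brevity).
windows = pd.DataFrame({
    "start": pd.date_range("2021-06-01 09:00", periods=6, freq="60s"),
})
windows["end"] = windows["start"] + pd.Timedelta(seconds=60)

# Patient self-reports: "I felt stressed around these times."
reports = pd.to_datetime(["2021-06-01 09:02:30", "2021-06-01 09:04:10"])

# A window is labeled positive if any self-report falls inside it.
windows["stress"] = windows.apply(
    lambda w: int(any(w["start"] <= t < w["end"] for t in reports)), axis=1)
print(windows)
```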

They tried a number of classification models, such as decision trees, discriminant analysis, logistic regression, and others, but found the most success using support vector machines, though they didn't discuss why that might be. In the end, they found that they could detect stress vs. non-stress with an accuracy of 81.3% and craving vs. no-craving with an accuracy of 82.1%. Not amazing accuracy, but given the dire need for medical advancements in SUD treatment, it's something to keep an eye on. Interestingly enough, they found that locomotion data alone had an accuracy of approximately 75% when it came to indicating stress and cravings.
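
That kind of bake-off is easy to reproduce in spirit with scikit-learn. This sketch runs the same four classifier families over synthetic stand-in features; only the cross-validation scaffolding, not the study's real data, is shown:

```python
# Spirit-of-the-paper model bake-off: four classifier families scored by
# cross-validated accuracy. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 4))  # stand-ins: locomotion, HR, skin temp, EDA
y = np.tile([0, 1], 75)        # stress vs. non-stress

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "logistic regression": LogisticRegression(),
    "support vector machine": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} mean accuracy")
```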

Much ado has been made about wearables being insufficiently accurate for medical diagnoses, particularly those that measure activity and heart rate. Maybe their model would perform better if trained on real-time measurements of cortisol, a more accurate physiological measure of stress.

Finally, what really stood out to us about this study was how willing patients were to use a wearable in their treatment strategy. It's sad that society oftentimes has a very negative perception of SUD patients, which leads to fewer treatment options. But hopefully, with technological advancements such as this, we're one step closer to a more equitable future of healthcare.

[Image: An accelerometer, an OLED, and a PocketBeagle make up the gesture-controlled calculator.]

The Calculator Charm: Calculatorium Leviosa!

Have you ever tried waving your hand around like a magic wand and summoning a calculator? We would guess not, since you'd probably look a little silly doing so. That is, unless you had [Andrei's] cool gesture-controlled calculator. [Andrei] thought it would be helpful to use a calculator in his research lab without having to take his gloves off, and the results are pretty slick.

His hardware consists of a PocketBeagle, an OLED, and an MPU6050 inertial measurement unit for capturing his hand motions using an accelerometer and gyroscope. The hardware is pretty straightforward, so the beauty of this project lies in its machine learning implementation.

[Andrei] first captured a few example datasets to train his algorithm by recreating the hand gestures for each digit, 0-9, and recording the resulting accelerometer and gyroscope outputs. He then processed the data with a wavelet transform. The intent of the transform was twofold. First, it allowed him to reduce the number of samples in his datasets while preserving the shape of the accelerometer and gyroscope signals, the key features in the machine learning classification. Second, it increased the number of features available for classification, since the wavelet transform produces both approximation and detail coefficients, and both can be fed into the algorithm.
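
If you want to play with the idea, a single-level discrete wavelet transform in PyWavelets shows both effects at once: fewer samples out than in, plus two coefficient sets per signal. The choice of wavelet below is our assumption; the write-up doesn't say which one [Andrei] used:

```python
# Single-level discrete wavelet transform of a fake gesture signal: the
# output has roughly half the samples, and we get approximation AND
# detail coefficients to feed the classifier. The db4 wavelet is our
# assumption, not necessarily [Andrei]'s choice.
import numpy as np
import pywt

t = np.linspace(0, 1, 256)
accel_x = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(256)

cA, cD = pywt.dwt(accel_x, "db4")      # approximation, detail coefficients
print(len(accel_x), len(cA), len(cD))  # 256 -> 131 and 131
features = np.concatenate([cA, cD])    # both sets become classifier inputs
```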

Because he had a small dataset, he used the Stratified Shuffle Split technique instead of a plain train/test split, which is generally better suited to larger datasets. The Stratified Shuffle Split ensures each gesture is represented in roughly the same proportion in both the training and test sets. He was also very conscious of optimizing his model for running on a portable processing unit like the PocketBeagle. He spent some time tuning the parameters of his algorithm and ultimately converted his model to a TensorFlow Lite model using TensorFlow's built-in "TFLiteConverter".
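
Both tools are a few lines apiece. Here's a hedged sketch with a trivial stand-in network, not [Andrei]'s actual model:

```python
# StratifiedShuffleSplit keeps each gesture class proportionally
# represented in train and test sets; TFLiteConverter then shrinks the
# trained model for a small board like the PocketBeagle. The network
# here is a trivial stand-in.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedShuffleSplit

X = np.random.randn(100, 32).astype("float32")  # fake wavelet features
y = np.repeat(np.arange(10), 10)                # ten samples per digit 0-9

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(sss.split(X, y))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X[train_idx], y[train_idx], epochs=5, verbose=0)
print(model.evaluate(X[test_idx], y[test_idx], verbose=0))  # [loss, acc]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("gestures.tflite", "wb") as f:
    f.write(converter.convert())
```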

Finally, in true open-source fashion, all his code is available on GitHub, so feel free to give it a go yourself. Calculatorium Leviosa!

Continue reading “The Calculator Charm: Calculatorium Leviosa!”

Machine Learning Shushes Stressed Dogs

If there’s one demographic that has benefited from people being stuck at home during Covid lockdowns, it would be dogs. Having their humans around 24/7 meant more belly rubs, more table scraps, and more attention. Of course, for many dogs, especially those who found their homes during quarantine, this has led to attachment issues as their human counterparts have begin to return to work and school.

[Clairette] has had a particularly difficult time adapting to her friends leaving every day, but thankfully her human [Nathaniel Felleke] was able to come up with a clever solution. He trained a TinyML neural net to detect when she barked and used an Arduino to play a sound bite to soothe her. The sound bites in question are recordings of [Nathaniel]'s mom either praising or scolding [Clairette], and as you can see from the video below, they seem to work quite well. To train the network, [Nathaniel] worked with several datasets to avoid overfitting, including one he created himself using actual recordings of barks and ambient sounds within his own house. He used Eon Tuner, a tool by Edge Impulse, to help find the best model and perform the training. He uploaded the trained network to an Arduino Nano 33 BLE Sense running Mbed OS, and a second Arduino handled playing the sound bites via an Adafruit Music Maker FeatherWing.
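
The real inference loop lives in Edge Impulse's generated Arduino code, but the detect-then-respond logic is simple enough to sketch as a desktop Python analogue. The model file, label index, input shape, and audio clip below are all invented for illustration:

```python
# Hypothetical desktop analogue of the bark-triggered playback loop; the
# real project runs an Edge Impulse model on an Arduino Nano 33 BLE
# Sense. We assume the model takes a one-second 16 kHz float window and
# that label index 0 means "bark".
import sounddevice as sd
import soundfile as sf
import tflite_runtime.interpreter as tflite

RATE = 16000
interp = tflite.Interpreter(model_path="bark_model.tflite")
interp.allocate_tensors()
inp = interp.get_input_details()[0]
out = interp.get_output_details()[0]

praise, praise_rate = sf.read("good_dog.wav")  # mom's recorded praise

while True:
    audio = sd.rec(RATE, samplerate=RATE, channels=1, dtype="float32")
    sd.wait()                                  # block until the window is in
    interp.set_tensor(inp["index"], audio.reshape(1, RATE))
    interp.invoke()
    probs = interp.get_tensor(out["index"])[0]
    if probs[0] > 0.8:                         # bark detected with confidence
        sd.play(praise, praise_rate)
        sd.wait()
```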

While machine learning may sound like a bit of an extreme solution to curb your dog's barking, it's certainly innovative, and even appears to have been successful. Pair it with this web-connected treat dispenser, and you could keep a dog entertained for hours.

Continue reading “Machine Learning Shushes Stressed Dogs”

OpenGL Machine Learning Runs On Low-End Hardware

If you’ve looked into GPU-accelerated machine learning projects, you’re certainly familiar with NVIDIA’s CUDA architecture. It also follows that you’ve checked the prices online, and know how expensive it can be to get a high-performance video card that supports this particular brand of parallel programming.

But what if you could run machine learning tasks on a GPU using nothing more exotic than OpenGL? That’s what [lnstadrum] has been working on for some time now, as it would allow devices as meager as the original Raspberry Pi Zero to run tasks like image classification far faster than they could using their CPU alone. The trick is to break down your computational task into something that can be performed using OpenGL shaders, which are generally meant to push video game graphics.
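
The mapping is easier to see in a toy form. Every output pixel of a convolution can be computed independently by gathering its neighbors, which is exactly the per-fragment access pattern a GLSL shader provides. Here's that view of a 3×3 convolution emulated in NumPy; the real thing would run this per fragment on the GPU:

```python
# Emulating the fragment-shader view of a 3x3 convolution: each output
# pixel independently "texture fetches" its neighborhood and dots it with
# the kernel, just as a GLSL fragment shader would.
import numpy as np

def shader_style_conv(img, kernel):
    h, w = img.shape
    padded = np.pad(img, 1)          # like clamping texture lookups at edges
    out = np.empty_like(img)
    for y in range(h):               # on a GPU, every (x, y) runs in parallel
        for x in range(w):
            patch = padded[y:y + 3, x:x + 3]  # neighborhood fetch
            out[y, x] = np.sum(patch * kernel)
    return out

edge_kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
img = np.random.rand(8, 8)
print(shader_style_conv(img, edge_kernel).shape)
```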

An example of X2’s neural net upscaling.

[lnstadrum] explains that OpenGL releases from the last decade or so actually include so-called compute shaders specifically for running arbitrary code. But unfortunately that’s not an option on boards like the Pi Zero, which only meets the OpenGL for Embedded Systems (GLES) 2.0 standard from 2007.

Constructing the neural net in such a way that it would be compatible with these more constrained platforms was much more difficult, but the end result has far more interesting applications to show for it. During tests, both the Raspberry Pi Zero and several older Android smartphones were able to run a pre-trained image classification model at a respectable rate.

This isn’t just some thought experiment, [lnstadrum] has released an image processing framework called Beatmup using these concepts that you can play around with right now. The C++ library has Java and Python bindings, and according to the documentation, should run on pretty much anything. Included in the framework is a simple tool called X2 which can perform AI image upscaling on everything from your laptop’s integrated video card to the Raspberry Pi; making it a great way to check out this fascinating application of machine learning.

Truth be told, we’re a bit behind the ball on this one, as Beatmup made its first public release back in April of this year. It might have flown under the radar until now, but we think there’s a lot of potential for this project, and hope to see more of it once word gets out about the impressive results it can wring out of even the lowliest hardware.

[Thanks to Ishan for the tip.]

Astro Pi Mk II, The New Raspberry Pi Hardware Headed To The Space Station

Back in 2015, European Space Agency (ESA) astronaut Tim Peake brought a pair of specially equipped Raspberry Pi computers, nicknamed Izzy and Ed, onto the International Space Station and invited students back on Earth to develop software for them as part of the Astro Pi Challenge. To date, more than 50,000 young people have had their code run on one of the single-board computers, making them arguably the most popular, and surely the most traveled, Raspberry Pis in the solar system.

While Izzy and Ed are still going strong, the ESA has decided it’s about time these veteran Raspberries finally get the retirement they’re due. Set to make the journey to the ISS in December aboard a SpaceX Cargo Dragon, the new Astro Pi MK II hardware looks quite similar to the original 2015 version at first glance. But a peek inside its 6063-grade aluminium flight case reveals plenty of new and improved gear, including a Raspberry Pi 4 Model B with 8 GB RAM.

The beefier hardware will no doubt be appreciated by students looking to push the envelope. While the majority of Python programs submitted to the Astro Pi program did little more than poll the current reading from the unit’s temperature or humidity sensors and scroll messages for the astronauts on the Astro Pi’s LED matrix, some of the more advanced projects were aimed at performing legitimate space research. From using the onboard camera to image the Earth and make weather predictions to attempting to map the planet’s magnetic field, code submitted from teams of older students will certainly benefit from the improved computational performance and expanded RAM of the newest Pi.
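
For reference, that entry-level tier of project really is just a few lines against the Sense HAT's Python API, something like this:

```python
# The archetypal entry-level Astro Pi program: poll the Sense HAT's
# environment sensors and scroll a message across its LED matrix. Runs
# on any Pi with a Sense HAT (or the Sense HAT emulator) back on Earth.
from sense_hat import SenseHat

sense = SenseHat()
temp = sense.get_temperature()   # degrees Celsius
humidity = sense.get_humidity()  # percent relative humidity

sense.show_message(f"Hello ISS! {temp:.1f}C {humidity:.0f}%RH",
                   scroll_speed=0.05)
```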

As with the original Astro Pi, the ESA and the Raspberry Pi Foundation have shared plenty of technical details about these space-rated Linux boxes. After all, students are expected to develop and test their code on essentially the same hardware down here on Earth before it gets beamed up to the orbiting computers. So let’s take a quick look at the new hardware inside Astro Pi MK II, and what sort of research it should enable for students in 2022 and beyond.

Continue reading “Astro Pi Mk II, The New Raspberry Pi Hardware Headed To The Space Station”

Mastering Stop Motion Through Machine Learning

Stop motion animation is notoriously difficult to pull off well, in large part because it’s a mind-numbingly slow process. Each frame in the final video is a separate photograph, and for each one of those, the characters and props need to be moved the appropriate amount so that the final result looks smooth. You don’t even want to know how long Ben Wyatt spent working on Requiem for a Tuesday, though to be fair, it might still get done before the next Avatar.

But [Nick Bild] thinks his latest project might be able to improve on the classic technique with a dash of artificial intelligence provided by a Jetson Xavier NX. Basically, the Jetson watches the live feed from the camera, and using a hand pose detection model, waits until there’s no human hand in the frame. Once the coast is clear, it takes a shot and then goes back to waiting for the next hands-free opportunity. With the photographs being taken automatically, you’re free to focus on getting your characters moving around in a convincing way.
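
The write-up doesn't pin down which hand pose model [Nick] used, but with MediaPipe Hands as a stand-in, the shutter logic boils down to a loop like this sketch:

```python
# Sketch of the hands-free shutter loop, using MediaPipe Hands as a
# stand-in for the hand pose model (our assumption). One frame is
# captured per hands-free interval, after a short debounce so the
# shutter doesn't fire between two quick adjustments.
import time
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2)
cam = cv2.VideoCapture(0)
clear_since, armed, shot = None, True, 0

while True:
    ok, frame = cam.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:          # a hand is still in the scene
        clear_since, armed = None, True       # re-arm for the next frame
        continue
    if not armed:
        continue                              # already shot this interval
    clear_since = clear_since or time.monotonic()
    if time.monotonic() - clear_since > 1.0:  # hands-free for a full second
        cv2.imwrite(f"frame_{shot:04d}.png", frame)
        shot += 1
        armed = False
```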

If it’s still not clicking for you, check out the video below. [Nick] first shows the raw unedited video, which primarily consists of him moving three LEGO figures around, and then the final product produced by his system. All the images of him fiddling with the scene have been automatically trimmed, leaving behind a short animated clip of the characters moving on their own.

Now don't be fooled, it's still going to take a while. By our count, it took two solid minutes of moving around Minifigs to produce just a few seconds of animation. So while we can say it's a quicker pace than traditional stop motion production, it certainly isn't fast.

Machine learning isn’t the only modern technology that can simplify stop motion production. We’ve seen a few examples of using 3D printed objects instead of manually-adjusted figures. It still takes a long time to print, and of course it eats up a ton of filament, but the mechanical precision of the printed scenes makes for a very clean final result.

Continue reading “Mastering Stop Motion Through Machine Learning”