Weather Station Predicts Air Quality

Measuring air quality at any particular location isn’t too complicated. Just a sensor or two and a small microcontroller is generally all that’s needed. Predicting the upcoming air quality is a little more complicated, though, since so many factors determine how safe it will be to breathe the air outside. Luckily, we don’t need to know all of these factors and their complex interactions in order to predict air quality. We can train a computer to do that for us, as [kutluhan_aktar] demonstrates with a machine learning-capable air quality meter.

The build is based around an Arduino Nano 33 BLE connected to a small weather station outside. It specifically monitors ozone concentration as a benchmark for overall air quality, but also uses an anemometer and a BMP180 precision pressure and temperature sensor to assist in training the algorithm. The weather data is sent over Bluetooth to a Raspberry Pi running TensorFlow. Once the neural network was trained, the model was sent back to the Arduino, which can now use it to make much more accurate predictions of future air quality.
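
The write-up doesn’t reproduce the training script, but the general workflow of fitting a small Keras model on logged sensor data and shrinking it down for a microcontroller looks roughly like the sketch below. The feature set, file names, and network size here are assumptions for illustration, not the project’s actual code:

```python
# Minimal sketch (not the project's actual code): train a tiny regressor on
# logged weather data, then convert it to TensorFlow Lite for the Arduino.
import numpy as np
import tensorflow as tf

# Hypothetical dataset: rows of [ozone, wind_speed, pressure, temperature]
# with a future air-quality reading as the label.
X = np.load("weather_features.npy")    # shape (n_samples, 4) -- assumed file
y = np.load("future_air_quality.npy")  # shape (n_samples,)   -- assumed file

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=100, validation_split=0.2)

# Convert to a .tflite model small enough to run on the Nano 33 BLE.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("air_quality.tflite", "wb") as f:
    f.write(converter.convert())
```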

The build goes into quite a bit of detail on setting up the models, training them, and then using them on the Arduino. It’s an impressive build capped off with a fun 3D-printed case that resembles an old windmill. Using machine learning to help predict the weather is starting to become more commonplace as well, as we have seen before with this weather station that can predict rainfall intensity.

People in meeting, with highlights of detected phones and identities

Machine Learning Detects Distracted Politicians

[Dries Depoorter] has a knack for highly technical projects with a solid artistic bent to them, and this piece is no exception. The Flemish Scrollers is a software system that watches live-streamed sessions of the Flemish government and uses Python and machine learning to identify and highlight politicians who pull out phones and start scrolling. The results? Pushed out live on Twitter and Instagram, naturally. The project started back in July 2021 and has been dutifully running ever since, so by now we expect that holding one’s phone where the camera can see it is probably considered a rookie mistake.

This project can also be considered a good example of how to properly handle confidence in results depending on the application. In this case, false negatives (a politician is using a phone, but the software doesn’t detect it) are much more acceptable than false positives (a member is incorrectly identified, or is wrongly called out for using a mobile device when they are not).

Keras, an open-source machine learning library, is used for the object detection and facial recognition (the GitHub repository for Keras is here). We’ve seen it used in everything from bat detection to automatic trash sorting, so if you’re interested in machine learning applications, give it a peek.
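
If you haven’t tried Keras before, the barrier to entry is low. The snippet below is a minimal sketch of the library’s flavor using plain image classification with a pretrained network, not the detection and face-recognition models this project builds on top of it, and the image filename is an assumption:

```python
# Minimal Keras sketch: classify a single frame with a pretrained network.
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")

img = image.load_img("frame.jpg", target_size=(224, 224))  # assumed filename
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # top-3 guesses with confidences
```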

Box with a hole. Camera and Raspberry Pi inside.

A Label Maker That Uses AI Really Poorly

[8BitsAndAByte] found herself obsessively labeling items around her house, and, like the rest of the world, wanted to see what simple, routine tasks could be made unnecessarily complicated by using AI. Instead of manually identifying objects using human intelligence, she thought it would be fun to offload that task to our AI overlords and the results are pretty amusing.

She constructed a cardboard enclosure that housed a Raspberry Pi 3B+, a Pi Camera Module V2, and a small thermal printer for making the labels. The enclosure included a hole for the camera and a button for taking the picture. The image taken by the Pi is analyzed by the DeepAI DenseCap API which, in theory, should create a label for each object detected within the image. Unfortunately, it doesn’t seem to do that very well, and [8BitsAndAByte] is left with labels that don’t match any of the objects she took pictures of. In some cases it didn’t even get close; for example, the model thought an apple was a person’s head and a rotary-dial phone was a cup. Go figure. It didn’t really seem to bother her though, and she got a pretty good laugh from the whole thing.
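
The glue between camera and printer is essentially one web request. A minimal sketch along these lines (assuming DeepAI’s usual REST interface, with the API key and image path as placeholders rather than details from the project) would fetch the caption data that ends up on the label:

```python
# Minimal sketch (not [8BitsAndAByte]'s actual script): send the captured
# photo to DeepAI's DenseCap endpoint and dump whatever captions come back.
# The exact response layout may differ, so this just prints the raw JSON.
import requests

API_KEY = "your-deepai-api-key"        # placeholder
IMAGE_PATH = "/home/pi/capture.jpg"    # assumed path of the Pi camera shot

with open(IMAGE_PATH, "rb") as image_file:
    response = requests.post(
        "https://api.deepai.org/api/densecap",
        files={"image": image_file},
        headers={"api-key": API_KEY},
    )

print(response.json())  # the highest-confidence caption becomes the label
```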

It appears the model detects all objects in the image but only prints the label for the object it was most certain about. So maybe part of her problem is that there were just too many objects in the background? If that were the case, you could probably improve the accuracy by placing the object against a neutral background, which might confuse the AI a lot less and give better results. Or maybe try a different classifier altogether? Or don’t. Then you could just use it as a fun gag project at your next get-together. That works too.

Cool project [8BitsAndAByte]! Hey, maybe this is a sign the world will still need some human intelligence after all. Who knows?


Accelerometer, OLED, and PocketBeagle create a gesture-controlled calculator

The Calculator Charm: Calculatorium Leviosa!

Have you ever tried waving your hand around like a magic wand and summoning a calculator? We would guess not, since you’d probably look a little silly doing so. That is, unless you had [Andrei’s] cool gesture-controlled calculator. [Andrei] thought it would be helpful to be able to use a calculator in his research lab without having to take his gloves off, and the results are pretty cool.

His hardware consists of a PocketBeagle, an OLED, and an MPU6050 inertial measurement unit for capturing his hand motions using an accelerometer and gyroscope. The hardware is pretty straightforward, so the beauty of this project lies in its machine learning implementation.
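
As a generic illustration of the data-capture side (this is not [Andrei]’s code; the I2C bus number, sample rate, and full-scale ranges are assumptions), polling raw accelerometer and gyro samples from an MPU6050 under Linux only takes a few lines:

```python
# Minimal sketch: poll an MPU6050 over I2C using its documented register map,
# printing raw accelerometer (g) and gyro (deg/s) samples for a gesture log.
import time
from smbus2 import SMBus

MPU_ADDR = 0x68
PWR_MGMT_1 = 0x6B
ACCEL_XOUT_H = 0x3B  # start of the 14-byte accel/temp/gyro block

def to_signed(hi, lo):
    val = (hi << 8) | lo
    return val - 65536 if val > 32767 else val

with SMBus(1) as bus:                                # bus number depends on the board
    bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)     # wake the sensor up
    while True:
        raw = bus.read_i2c_block_data(MPU_ADDR, ACCEL_XOUT_H, 14)
        ax, ay, az = (to_signed(raw[i], raw[i + 1]) / 16384.0 for i in (0, 2, 4))
        gx, gy, gz = (to_signed(raw[i], raw[i + 1]) / 131.0 for i in (8, 10, 12))
        print(ax, ay, az, gx, gy, gz)
        time.sleep(0.01)                             # ~100 Hz sampling, assumed
```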

[Andrei] first captured a few example datasets to train his algorithm by recreating the hand gestures for each number, 0-9, and recording the resulting accelerometer and gyroscope outputs. He then processed the data with a wavelet transform. The intent of the transform was twofold. First, it allowed him to reduce the number of samples in his datasets while preserving the shape of the accelerometer and gyroscope signals, the key features in the machine learning classification. Second, it increased the number of features available for classification, since the wavelet transform produces both approximation and detail coefficients, and both can be fed into the algorithm.
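
As a rough illustration of that step (the wavelet family, decomposition level, and data layout below are guesses, not [Andrei]’s actual parameters), a single-level discrete wavelet transform in Python looks like this:

```python
# Minimal sketch of the wavelet step: a single-level DWT roughly halves the
# sample count per axis while keeping the signal's shape, and yields two
# coefficient sets (approximation + detail) per axis as classifier features.
import numpy as np
import pywt

gesture = np.load("gesture_sample.npy")   # assumed shape: (n_samples, 6 axes)

features = []
for axis in gesture.T:
    cA, cD = pywt.dwt(axis, "db4")        # approximation and detail coefficients
    features.extend([cA, cD])

feature_vector = np.concatenate(features)  # flattened input for the classifier
print(feature_vector.shape)
```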

Because he had a small dataset, he used the Stratified Shuffle Split technique instead of the more common train-test split, which is generally better suited to larger datasets. The Stratified Shuffle Split ensured each gesture was proportionally represented in both the training and test sets. He was also very conscious of optimizing his model to run on a portable processing unit like the PocketBeagle. He spent some time optimizing the parameters of his algorithm and ultimately converted his model to a TensorFlow Lite model using TensorFlow’s built-in “TFLiteConverter”.
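
In scikit-learn terms that split is only a couple of lines. The sketch below is a generic example rather than [Andrei]’s code, with the feature files and split ratio assumed:

```python
# Minimal sketch: stratifying on the gesture label keeps every digit
# represented in both the training and test sets, even with only a handful
# of recordings per gesture.
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

X = np.load("features.npy")   # wavelet feature vectors (assumed file)
y = np.load("labels.npy")     # gesture labels 0-9 (assumed file)

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(sss.split(X, y))
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]

# After training a Keras model on X_train/y_train, conversion for the
# PocketBeagle is one more step:
# tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
```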

Finally, in true open-source fashion, all his code is available on GitHub, so feel free to give it a go yourself. Calculatorium Leviosa!


Machine Learning Shushes Stressed Dogs

If there’s one demographic that has benefited from people being stuck at home during Covid lockdowns, it would be dogs. Having their humans around 24/7 meant more belly rubs, more table scraps, and more attention. Of course, for many dogs, especially those who found their homes during quarantine, this has led to attachment issues as their human counterparts have begun to return to work and school.

[Clairette] has had a particularly difficult time adapting to her friends leaving every day, but thankfully her human [Nathaniel Felleke] was able to come up with a clever solution. He trained a TinyML neural net to detect when she barked and used an Arduino to play a sound bite to soothe her. The sound bites in question are recordings of [Nathaniel]’s mom either praising or scolding [Clairette], and as you can see from the video below, they seem to work quite well. To train the network, [Nathaniel] worked with several datasets to avoid overfitting, including one he created himself using actual recordings of barks and ambient sounds within his own house. He used Eon Tuner, a tool by Edge Impulse, to help find the best model and perform the training. He uploaded the trained network to an Arduino Nano 33 BLE Sense running Mbed OS, and a second Arduino handled playing sound bites via an Adafruit Music Maker FeatherWing.
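
The heavy lifting here is done by Edge Impulse’s tooling and its generated Arduino library, but if you’re curious what the underlying task boils down to, a hand-rolled Python sketch of the same idea is below. This is emphatically not [Nathaniel]’s pipeline; the directory layout, clip length, and network size are all assumptions:

```python
# Rough sketch of a bark-vs-ambient classifier: turn one-second clips into
# MFCC features and train a tiny dense network on them. (The actual project
# used Edge Impulse's Eon Tuner and its generated Arduino library instead.)
import glob
import numpy as np
import librosa
import tensorflow as tf

def mfcc_features(path):
    # Assumes each clip is at least one second long so feature sizes match.
    audio, sr = librosa.load(path, sr=16000, duration=1.0)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).flatten()

bark_files = glob.glob("barks/*.wav")      # assumed directory of bark clips
noise_files = glob.glob("ambient/*.wav")   # assumed directory of house sounds

X = np.array([mfcc_features(f) for f in bark_files + noise_files])
y = np.array([1] * len(bark_files) + [0] * len(noise_files))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(X.shape[1],)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=50, validation_split=0.2)
```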

While machine learning may sound like a bit of an extreme solution to curb your dog’s barking, it’s certainly innovative, and even appears to have been successful. Paired with this web-connected treat dispenser, you could keep a dog entertained for hours.


Computer Vision Lets You Skip Songs With A Glance

Have you ever wished you could control your home automation devices with nothing more than a withering stare? Well then you’re in luck, as [Norbert Zare] has come up with a clever way of controlling an MP3 player with only your face. Though as you might imagine, the technique could be applied to a whole range of home automation tasks with some minor tweaks.

At the core of this project is the Raspberry Pi, specifically the 3B+ model, though with the computational demands of computer vision you might want to bump it up to the latest-and-greatest Pi 4. From there you need to load up OpenCV and a model trained for face detection, which, as luck would have it, tends to be a fairly common application for this technology.

With a relatively simple Python script, [Norbert] is able to determine when OpenCV detects he’s looking directly into the camera and fire off one of the Pi’s GPIO pins that’s been connected to the “Skip” button on a physical MP3 player. That’s right, you read that correctly. He’s using a dedicated MP3 player in the year 2021.
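
[Norbert]’s own script is the one to grab, but the core loop is simple enough to sketch from scratch. Something like the following covers the idea, with the GPIO pin number, cascade parameters, and debounce delay as guesses rather than his actual values:

```python
# Minimal sketch: detect a roughly frontal face with OpenCV's Haar cascade
# and pulse a GPIO pin wired across the MP3 player's "skip" button.
import time
import cv2
import RPi.GPIO as GPIO

SKIP_PIN = 17  # assumed pin; depends on the wiring
GPIO.setmode(GPIO.BCM)
GPIO.setup(SKIP_PIN, GPIO.OUT, initial=GPIO.LOW)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) > 0:
        GPIO.output(SKIP_PIN, GPIO.HIGH)   # "press" the skip button
        time.sleep(0.2)
        GPIO.output(SKIP_PIN, GPIO.LOW)
        time.sleep(2)                      # crude debounce so one glance = one skip
```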

In all seriousness, we’re not really sure why [Norbert] went this route compared to simply playing the music on the Pi and controlling it through software, but this does serve as a good example of how you can interface with physical devices if need be. In any event, using the Python script he’s provided, you could easily modify the setup to control other tasks, virtual or otherwise.

While face recognition can be a scary thing out in the wild, we do think it has some interesting applications within the home, so long as the user is the one who is in control of where their data ends up.


the microGPS pipeline

MicroGPS Sees What You Overlook

GPS is an incredibly powerful tool that allows devices such as your smartphone to know roughly where they are, with an accuracy of around a meter in some cases. However, that’s still too coarse for many use cases, and the accuracy drops considerably indoors, which rules it out for applications like warehouse robots that instead rely on barcodes on the floor. In response, researchers [Linguang Zhang], [Adam Finkelstein], and [Szymon Rusinkiewicz] at Princeton have developed a system they refer to as MicroGPS that uses pictures of the ground to determine its location with sub-centimeter accuracy.

The system has a downward-facing monochrome camera with a light shield to control the exposure. The camera output feeds into an Nvidia Jetson TX1 for processing. The idea is actually quite similar to that of an optical mouse, as those are often little more than a downward-facing low-resolution camera with some clever processing. Rather than capturing relative position like a mouse, though, the researchers are trying to capture absolute position. Imagine picking up your mouse, dropping it on a different spot on your mousepad, and having the cursor snap to a different part of the screen. To our eyes, far from the surface, asphalt, tarmac, concrete, and carpet all look quite uniform. But to a macro camera, there are cracks, fibers, and imperfections that are distinct and recognizable.

They sample the surface ahead of time, creating a globally consistent map of all the images stitched together. Then, while moving around, they extract features and use a voting method to filter out numerous false positives. The system is robust enough to work on an outdoor road even a month after the initial dataset was created. They put leaves on the ground to try and fool the system, but navigation remained remarkably stable.
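
The paper describes a considerably more sophisticated pipeline, but the match-and-vote core can be sketched in a few lines of OpenCV. The snippet below is a simplified stand-in, not the researchers’ code: it uses ORB rather than the paper’s features, made-up map files, and a crude grid vote:

```python
# Rough sketch of the core idea: match features from a downward-facing frame
# against a pre-built map of features with known world positions, then let
# each match vote for a coarse map cell. The densest cell gives the location.
import numpy as np
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Assumed pre-built map data: binary descriptors plus the world (x, y) of each.
map_descriptors = np.load("map_descriptors.npy")   # shape (N, 32), dtype uint8
map_positions = np.load("map_positions.npy")       # shape (N, 2), meters

def locate(frame, cell_size=0.5):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return None
    matches = matcher.match(descriptors, map_descriptors)
    votes = {}
    for m in matches:
        x, y = map_positions[m.trainIdx]
        cell = (int(x // cell_size), int(y // cell_size))
        votes[cell] = votes.get(cell, 0) + 1       # each match votes for its cell
    return max(votes, key=votes.get)               # coarse location; refine from inliers
```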

Their paper, code, and dataset are all available online. We’re looking forward to fusion systems that combine GPS, Wi-Fi triangulation, and MicroGPS to provide a robust and accurate position.

Video after the break.
