Two assembled 1 dollar TinyML boards

$1 TinyML Board For Your “AI” Sensor Swarm

You might be under the impression that machine learning costs thousands of dollars to work with. That might be true in many cases, but there’s more to machine learning than you might think. For instance, what if you could blanket just about anything in a network of cheap machine-learning-enabled sensors? The 1 dollar TinyML project by [Jon Nordby] allows you to do just that. These tiny boards host an STM32-like MCU, a BLE module, lithium-ion power circuitry, and some nice sensor options — an accelerometer, a pair of microphones, and a light sensor.

What could you do with these sensors? [Jon] has talked a bit about a few commercial and non-commercial applications he’s worked on in his ML career, and tells us that the accelerometer alone lets you do human presence detection, sleep tracking, personal activity monitoring, or vibration pattern sensing, for a start. As for the sound input, there are tasks ranging from gunshot or clapping detection, to coffee roasting process tracking, voice and speech detection, and surely much more. Just a few years ago, we saw machine learning used to comfort a barking dog while its owner was away.
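
To give a flavor of what those accelerometer tasks boil down to, here’s a minimal sketch (our own illustration, not [Jon]’s code) that slices a vibration trace into windows and computes the kind of cheap features a tiny classifier can chew on; the signal here is synthetic.

```python
import numpy as np

def window_features(accel, window=128):
    """Split a 1-D acceleration trace into windows and compute simple
    features (RMS, peak-to-peak, zero-crossing rate) per window."""
    feats = []
    for start in range(0, len(accel) - window + 1, window):
        w = accel[start:start + window]
        rms = np.sqrt(np.mean(w ** 2))
        p2p = w.max() - w.min()
        zcr = np.mean(np.diff(np.sign(w - w.mean())) != 0)
        feats.append([rms, p2p, zcr])
    return np.array(feats)

# Fake vibration trace standing in for real sensor data
t = np.linspace(0, 10, 2560)
trace = 0.5 * np.sin(2 * np.pi * 30 * t) + 0.1 * np.random.randn(t.size)
print(window_features(trace).shape)  # (20, 3): 20 windows, 3 features each
```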

Bottom line is, you ought to get a few of these in your hands and start playing with ML. You still might need beefier hardware to train your models, but deployment gets that much easier once you have a network of sensors waiting for your command. Plus, since it’s an open source project, you’ll have a much easier time adding any extra capabilities your particular application might need.

These boards are pretty cost-optimized, which makes it possible to order a couple dozen without breaking the bank. The $1 target refers to BOM cost, and it’s easiest to hit if you opt not to include one of the pricier sensors. You can assemble these boards yourself, or get them assembled at a fab of your choice for barely any increase in cost. As for software, they work with the emlearn framework.
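
For a rough idea of the workflow, here’s a sketch of training a scikit-learn classifier on made-up data and converting it to C with emlearn. The convert/save calls follow emlearn’s documented flow, but double-check them against the current release before relying on them.

```python
import numpy as np
import emlearn
from sklearn.ensemble import RandomForestClassifier

# Toy training data standing in for real accelerometer features
X = np.random.randn(200, 3)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=10, max_depth=5).fit(X, y)

# Convert to portable C code that can be compiled for the MCU
cmodel = emlearn.convert(model)
cmodel.save(file='activity_model.h', name='activity_model')
```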

Everything is on GitHub — from KiCad sources to Jupyter notebooks. As for Hackaday.io, there are five worklogs of impressive insight — the microphone worklog alone will teach you about microphone amplification under low-power constraints while keeping the cost down. Not as price-constrained and want to try your hand at some image processing tasks? Here’s a beautiful Pi Pico ArduCam board with a camera and a TFT screen.

Full Self-Driving, On A Budget

Self-driving is currently the Holy Grail in the automotive world, with a number of companies racing to build general-purpose autonomous vehicles that can get from point A to point B with no user input. While no one has brought one to market yet, at least one company has promised the feature and taken customers’ money for it, only to continually move the goalposts for delivery because of how challenging the problem has turned out to be. But it doesn’t need to be that hard or expensive to solve, at least in some situations.

The situation in question is driving on a single stretch of highway, and the system focuses solely on steering, leaving the accelerator and brake pedals alone. The highway is driven normally, using a webcam to take images of the route and an Arduino to capture data about the steering angle. The idea is that with enough training, the system could eventually steer the car on its own. But first, some math needs to happen on the training data: since the steering wheel spends most of its time not turning the car, the samples have to be rebalanced so that actual steering events aren’t written off as statistical anomalies. After the training, the system does a surprisingly good job at “driving” based on this data, and does it on a budget not much larger than a laptop, a microcontroller, and a webcam.
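
That rebalancing step is easy to picture in code. Here’s a toy sketch (ours, not the project’s actual code) that throws away most of the near-straight samples so real turns carry weight during training; the “frames” and angles here are stand-ins for the webcam and Arduino data.

```python
import numpy as np

rng = np.random.default_rng(0)

def balance_steering(frames, angles, straight_thresh=2.0, keep_frac=0.1):
    """Drop most samples where the wheel is essentially straight, so rare
    real steering events aren't drowned out by straight-line driving."""
    angles = np.asarray(angles)
    straight = np.abs(angles) < straight_thresh
    keep = ~straight | (rng.random(len(angles)) < keep_frac)
    return frames[keep], angles[keep]

# Fake dataset: 1000 frames, mostly straight driving
frames = np.zeros((1000, 64, 64))        # stand-in for webcam images
angles = rng.normal(0, 1, 1000)          # degrees of steering input
angles[::50] += rng.normal(0, 20, 20)    # occasional real turns

f_bal, a_bal = balance_steering(frames, angles)
print(len(angles), "->", len(a_bal))
```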

Admittedly, this project was a proof-of-concept to investigate machine learning, neural networks, and other statistical algorithms used in these sorts of systems, and it doesn’t actually drive any cars on any roadways. Even the creator says he wouldn’t trust it himself, but that he was pleasantly surprised by the results of such a simple system. It could also be expanded to handle the brake and accelerator pedals with separate neural networks. It’s not our first budget-friendly self-driving system, either. This one makes it happen with the enormous computing resources of a single Android smartphone.

Continue reading “Full Self-Driving, On A Budget”

Text Compression Gets Weirdly Efficient With LLMs

It used to be that memory and storage space were such precious and limited resources that handling nontrivial amounts of text was a serious problem. Text compression was a highly practical application of computing power.

Today it might be a solved problem, but that doesn’t mean it doesn’t attract new or unusual solutions. [Fabrice Bellard] released ts_zip, which uses Large Language Models (LLMs) to attain text compression ratios higher than any other tool can offer.

LLMs are the technology behind natural language AIs, and applying them in this way seems effective. The tradeoff? Unlike typical compression tools, lossless decompression isn’t exactly guaranteed when an LLM is involved. Lossy compression methods are in fact quite useful; JPEG, for instance, discards data that humans don’t readily perceive in order to make a smaller file, but that approach isn’t usually applied to text. If you absolutely require lossless compression, [Fabrice] has that covered with NNCP, a neural-network powered lossless data compressor.
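
The intuition behind the LLM trick is that the better a model predicts the next symbol, the fewer bits an entropy coder needs to encode it. Here’s a toy illustration of that idea: a character bigram model stands in for the LLM, and ideal code lengths stand in for a real arithmetic coder like the one ts_zip pairs its model with.

```python
import math
from collections import Counter, defaultdict

text = "the theory is that the better the model, the shorter the message"

# Toy stand-in for an LLM: a character bigram model with add-one smoothing
counts = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    counts[a][b] += 1
vocab = len(set(text))

def prob(ctx, ch):
    c = counts[ctx]
    return (c[ch] + 1) / (sum(c.values()) + vocab)

# Ideal entropy-coded size: -log2 of each predicted probability, summed
bits = sum(-math.log2(prob(a, b)) for a, b in zip(text, text[1:]))
print(f"{bits / 8:.1f} bytes vs {len(text)} raw bytes")
```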

Do neural networks and LLMs sound far too serious and complicated for your text compression needs? As long as you don’t mind a mild amount of definitely noticeable data loss, check out [Greg Kennedy]’s Lossy Text Compression which simply, brilliantly, and amusingly uses a thesaurus instead of some fancy algorithms. Yep, it just swaps longer words for shorter ones. Perhaps not the best solution for every need, but between that and [Fabrice]’s brilliant work we’re confident there’s something for everyone who craves some novelty with their text compression.
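
If you want to see just how little machinery the thesaurus trick needs, here’s a toy version; the five-entry “thesaurus” below is made up purely for illustration.

```python
# Swap longer words for shorter "synonyms" -- lossy, but the gist survives.
SHORTER = {"purchase": "buy", "utilize": "use", "demonstrate": "show",
           "approximately": "about", "assistance": "help"}

def lossy_compress(text):
    return " ".join(SHORTER.get(w.lower(), w) for w in text.split())

print(lossy_compress("We will demonstrate how to utilize approximately five tools"))
# -> "We will show how to use about five tools"
```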

[Photo by Matthew Henry from Burst]

Physical Neural Network Can Be Trained Like A Digital One

Here’s an unusual concept: a computer-guided mechanical neural network (video, embedded below). Why would one want a mechanical neural network? It’s essentially a tool to explore what it would take to make physical materials work in nonstandard ways. The main part is a lattice of interlinked mechanical components. Apply a certain force in a certain direction on one end, and the lattice deforms in a non-intuitive way on the other end.

To make this happen, individual mechanical elements in the lattice need to have their compliance carefully tuned under the guidance of a computer system. The mechanisms shown can be adjusted on demand while force is applied and cameras monitor the results.

This feedback loop allows researchers to use the same techniques for training neural networks that are used in machine learning applications. Ultimately, a lattice can be configured in such a way that when side A is pressed like this, side B moves like that.
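
In spirit, that loop looks a lot like any other gradient-free optimization. Here’s a toy simulation (a made-up function standing in for the physical lattice and the cameras) that nudges each compliance value and keeps whatever reduces the error against a target motion:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the lattice: output displacements are a fixed but
# unknown-to-the-optimizer function of the joint compliances.
coupling = 0.5 * rng.normal(size=(4, 8))
def lattice_response(compliance):
    return np.tanh(coupling @ compliance)   # the "camera measurement"

target = np.array([0.5, -0.2, 0.1, 0.4])    # how we want side B to move

def loss(c):
    return np.sum((lattice_response(c) - target) ** 2)

# Finite-difference "training": probe each compliance, step downhill --
# the same trial-and-adjust loop, just without physical hardware in it.
c = np.zeros(8)
eps, lr = 1e-3, 0.05
for step in range(1000):
    grad = np.array([(loss(c + eps * e) - loss(c - eps * e)) / (2 * eps)
                     for e in np.eye(8)])
    c -= lr * grad

print(f"final error: {loss(c):.4f}")
```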

We’ve seen compliant structures that move in unexpected ways before, and they are always fascinating. One example is this 3D-printed door latch that translates a twisting motion into a linear one. Research into physical neural networks seems like it might open the door to more complex systems, or provide insights into metamaterial design.

You can watch the video just under the page break, or if you prefer, skip the intro and jump straight to How It Works at [2:32].

Continue reading “Physical Neural Network Can Be Trained Like A Digital One”

Neural Network Helps With Radar Pipeline Diagnostics

Diagnosing pipeline problems is important in industry to avoid costly or dangerous failures from cracked, broken, or damaged pipes. [Kutluhan Aktar] has built a system that uses AI to assist in this difficult task.

The core of the system is an MR60BHA1 60 GHz mmWave radar module, which is most typically used for breathing and heart rate detection. Here, it’s repurposed to detect fluctuating vibrations as a sign that a pipeline may be cracked or damaged. It’s paired with an Arduino Nicla Vision module, whose smart camera can run a neural network model on the captured radar data to flag potential pipe defects and photograph them. The various modules are assembled on a PCB resembling Dragonite, the Dragon/Flying-type Pokémon.
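
As a rough illustration of the idea (not [Kutluhan]’s actual pipeline), here’s a sketch that turns synthetic vibration windows into FFT features and trains a small classifier to tell “healthy” from “suspect”:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def fft_features(window):
    """Low-frequency magnitude spectrum of a vibration window."""
    return np.abs(np.fft.rfft(window))[:32]

# Synthetic stand-in data: healthy pipes hum at one frequency,
# damaged ones pick up an extra fluctuating component.
def synth(damaged, n=200):
    t = np.linspace(0, 1, 256)
    X = []
    for _ in range(n):
        sig = np.sin(2 * np.pi * 12 * t) + 0.2 * rng.normal(size=t.size)
        if damaged:
            sig += 0.5 * np.sin(2 * np.pi * 25 * t + rng.uniform(0, 6))
        X.append(fft_features(sig))
    return np.array(X)

X = np.vstack([synth(False), synth(True)])
y = np.array([0] * 200 + [1] * 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC().fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```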

[Kutluhan] walks us through the whole development process, including the creation of a web interface for the system. Of particular interest is the way the neural network was trained on real defect models that [Kutluhan] built using PVC pipe. We’ve looked at industrial pipelines in detail before, too. Video after the break.

Continue reading “Neural Network Helps With Radar Pipeline Diagnostics”

Voice Without Sound

Voice recognition is becoming more and more common, but anyone who’s ever used a smart device can attest that they aren’t exactly foolproof. They can activate seemingly at random, fail to activate when called, or, most annoyingly, completely fail to understand the voice commands. Thankfully, researchers from the University of Tokyo are looking to improve the performance of devices like these by attempting to use them without any spoken voice at all.

The project is called SottoVoce and uses an ultrasound imaging probe placed under the user’s jaw to detect internal movements of the speaker’s larynx. The imagery generated by the probe is fed into a series of neural networks, trained on hundreds of speech patterns from the researchers themselves. The neural networks then piece together the likely sounds being made and generate an audio waveform, which is played to an unmodified Alexa device. Obviously a few improvements would need to be made to the ultrasonic imaging device to make this usable in real-world situations, but it’s interesting from a research perspective nonetheless.
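
To make the image-to-sound idea a little more concrete, here’s a heavily simplified Keras sketch of the sort of network that could map a single ultrasound frame to one spectral frame. The real SottoVoce system works on sequences of frames with considerably more elaborate networks; the shapes and data below are made up just to show the plumbing.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Map one grayscale ultrasound frame to one spectral frame (80 bins)
model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(80),                  # one spectrogram column
])
model.compile(optimizer="adam", loss="mse")

# Random stand-in data, just to show the shapes line up
frames = np.random.rand(32, 64, 64, 1).astype("float32")
spectra = np.random.rand(32, 80).astype("float32")
model.fit(frames, spectra, epochs=1, verbose=0)
```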

The research paper with all the details is also available (PDF warning). It’s an intriguing approach to improving the performance or quality of voice recognition, especially in situations where the voice may be muffled, non-existent, or buried in background noise. Machine learning like this seems to be one of the more powerful tools for improving speech recognition, as we saw with this robot that can walk across town and order food for you using voice commands alone.

Continue reading “Voice Without Sound”

How To Roll Your Own Custom Object Detection Neural Network

Real-time object detection, which uses neural networks and deep learning to rapidly identify and tag objects of interest in a video feed, is a handy feature with great hacker potential. Happily, it’s also possible to make customized CNNs (convolutional neural networks) tailored for one’s own needs, and that process just got easier thanks to some new documentation for the Vizy “AI camera” by Charmed Labs.

Raspberry Pi-based Vizy camera

Charmed Labs has been making hacker-friendly machine vision devices for a long time, and the Vizy camera impressed us mightily when we checked it out last year. Out of the box, Vizy has a perfectly functional object detector application that runs locally on the device and can detect and tag many common everyday objects in real time. But what if that default application doesn’t quite meet one’s project needs? Good news: it’s possible to create a custom-trained CNN, and that process got a lot more accessible thanks to step-by-step examples of training a model to recognize hands playing rock-paper-scissors.

Person and cat with machine-generated tags identifying them
Default object detection works well, but sometimes one needs custom results.

The basic process is this: start with a variety of images that show the item of interest, then identify and label the item in each photo. These photos (the “training set”) are uploaded to Google Colab, where they are used to train a neural network. The resulting CNN model can then be downloaded and tested to see how well it performs.

Of course things rarely work perfectly the first time around, so at this point it’s pretty common for some refinement to be needed to increase accuracy. Luckily there are a number of tools to help do this without creating a new model from scratch, so it’s just a matter of tweaking until things perform acceptably.

Google Colab is free, and the resulting CNNs are implemented in the TensorFlow Lite framework, meaning it’s possible to use them elsewhere. So if custom object detection has been holding up a project idea of yours, this might be what gets you over that hump.
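
Once you have the downloaded .tflite file, running it somewhere else takes only a few lines. Here’s a minimal sketch using the standard TensorFlow Lite interpreter API; the "model.tflite" filename is just a placeholder for whatever your Colab run produced.

```python
import numpy as np
import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

# Load the model downloaded from the Colab training run
interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One dummy frame shaped to whatever the model expects
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```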