Machine Learning Helps Electron Microscopy

Machine learning is supposed to help us do everything these days, so why not electron microscopy? A team from Ireland has done just that, publishing results on using machine learning to enhance STEM — scanning transmission electron microscopy. The result is important because it targets a very particular use case — low-dose STEM.

The problem is that to get high resolution, you typically need a high electron dose. However, bombarding a delicate, often biological, subject with high-energy electrons may change what you are looking at and damage the sample. Reducing the electron dose avoids that, but results in a poor image due to Poisson noise. The new technique learns how to compensate for the noise and produce a better-quality image even at low doses.
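To get an intuition for why dose matters so much, remember that electron counting is a Poisson process, so the signal-to-noise ratio scales roughly with the square root of the dose. Here's a quick Python sketch (our own illustration, not the team's method) that shows the effect:

```python
# Simulate how Poisson (shot) noise degrades an image as the
# electron dose per pixel drops. Illustration only, not the paper's method.
import numpy as np

def simulate_low_dose(image, dose_per_pixel):
    """image: floats in [0, 1]; dose_per_pixel: mean electron count at full brightness."""
    expected_counts = image * dose_per_pixel           # mean electrons per pixel
    noisy_counts = np.random.poisson(expected_counts)  # Poisson (shot) noise
    return noisy_counts / dose_per_pixel               # rescale back to image units

clean = np.random.rand(64, 64)                         # stand-in "specimen"
for dose in (1000, 100, 10):
    noisy = simulate_low_dose(clean, dose)
    snr = clean.mean() / (noisy - clean).std()
    print(f"dose={dose:5d} e-/px  approx SNR={snr:5.1f}")
```

Cut the dose by a factor of 100 and you lose roughly a factor of 10 in SNR; that's the gap the learned denoiser has to close.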

Continue reading “Machine Learning Helps Electron Microscopy”

Combining Acoustic Bioprinting With Raman Spectroscopy For High-Throughput Identification Of Bacteria

Rapidly analyzing samples for the presence of bacteria and similar organic structures is generally quite a time-intensive process, often requiring a cell culture to be grown. Proposed by Fareeha Safir and colleagues in Nano Letters is a method that combines an acoustic droplet printer with Raman spectroscopy. The chief advantage of this method is its high throughput, which could make analysis of samples at sewage installations, hospitals, and laboratories significantly faster.

Raman spectroscopy works on the principle of Raman scattering, which is the inelastic scattering of photons by matter that imprints a distinct pattern on the scattered light. By starting with a pure light source (that is, a laser), the relatively weak Raman scattering can be captured and the laser light filtered out. The captured signal can then be analyzed and matched against known pathogens. Continue reading “Combining Acoustic Bioprinting With Raman Spectroscopy For High-Throughput Identification Of Bacteria”
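As a rough illustration of that matching step (our own sketch, not the paper's pipeline, and the reference library here is made up), comparing a measured spectrum against known references by cosine similarity might look like this:

```python
# Toy sketch of spectral matching: score a measured Raman spectrum
# against a reference library by cosine similarity. The "library" here
# is random stand-in data; real entries would come from measured cultures.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(42)
library = {                      # species -> preprocessed spectrum
    "E. coli": rng.random(1024),  # intensity per wavenumber bin,
    "S. aureus": rng.random(1024) # baseline-subtracted and normalized
}

def identify(measured_spectrum):
    scores = {name: cosine_similarity(measured_spectrum, ref)
              for name, ref in library.items()}
    return max(scores, key=scores.get), scores

best_match, all_scores = identify(rng.random(1024))
print(best_match, all_scores)
```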

Hands-On: NVIDIA Jetson Orin Nano Developer Kit

NVIDIA’s Jetson line of single-board computers is doing something different in a vast sea of relatively similar Linux SBCs. Designed for edge computing applications, such as a robot that needs to perform high-speed computer vision while out in the field, they provide exceptional performance in a board that’s of comparable size and weight to other SBCs on the market. The only difference, as you might expect, is that they tend to cost a lot more: the current top-of-the-line Jetson AGX Orin Developer Kit is $1999 USD.

Luckily for hackers and makers like us, NVIDIA realized they needed an affordable gateway into their ecosystem, so they introduced the $99 Jetson Nano in 2019. The product proved so popular that just a year later the company refreshed it with a streamlined carrier board that dropped the cost of the kit down to an incredible $59. Looking to expand on that success even further, today NVIDIA announced a new upmarket entry into the Nano family that lies somewhere in the middle.

While the $499 price tag of the Jetson Orin Nano Developer Kit may be a bit steep for hobbyists, there’s no question that you get a lot for your money. Capable of performing 40 trillion operations per second (TOPS), NVIDIA estimates the Orin Nano is a staggering 80X as powerful as the previous Nano. It’s a level of performance that, admittedly, not every Hackaday reader needs on their workbench. But the allure of a palm-sized supercomputer is very real, and anyone with an interest in experimenting with machine learning would do well to weigh (literally and figuratively) the Orin Nano against a desktop computer with a comparable NVIDIA graphics card.
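If you do want to run that comparison yourself, a crude first data point is easy to get with PyTorch. The sketch below just times a big matrix multiply on whatever device it finds, which is nothing like NVIDIA's TOPS methodology, but fine for a sanity check:

```python
# Crude sketch, not a rigorous benchmark: time a large matrix multiply
# on whatever CUDA device PyTorch finds (Jetson and desktop GPU alike).
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

_ = a @ b                      # warm-up pass
if device == "cuda":
    torch.cuda.synchronize()   # wait for queued GPU work before timing
start = time.perf_counter()
for _ in range(10):
    c = a @ b
if device == "cuda":
    torch.cuda.synchronize()
elapsed = time.perf_counter() - start

# Each matmul is roughly 2 * 4096^3 floating-point operations.
print(f"{device}: ~{10 * 2 * 4096**3 / elapsed / 1e12:.2f} TFLOPS")
```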

We were provided with one of the very first Jetson Orin Nano Developer Kits before their official unveiling during NVIDIA GTC (GPU Technology Conference), and I’ve spent the last few days getting up close and personal with the hardware and software. After coming to terms with the fact that this tiny board is considerably more powerful than the computer I’m currently writing this on, I’m left excited to see what the community can accomplish with the incredible performance offered by this pint-sized system.

Continue reading “Hands-On: NVIDIA Jetson Orin Nano Developer Kit”

Voice Without Sound

Voice recognition is becoming more and more common, but anyone who's ever used a smart device can attest that they aren't exactly foolproof. They can activate seemingly at random, fail to activate when called, or, most annoyingly, completely misunderstand voice commands. Thankfully, researchers from the University of Tokyo are looking to improve the performance of devices like these by attempting to use them without any spoken voice at all.

The project is called SottoVoce and uses an ultrasound imaging probe placed under the user’s jaw to detect internal movements in the speaker’s larynx. The imaging generated from the probe is fed into a series of neural networks, trained with hundreds of speech patterns from the researchers themselves. The neural networks then piece together the likely sounds being made and generate an audio waveform which is played to an unmodified Alexa device. Obviously a few improvements would need to be made to the ultrasonic imaging device to make this usable in real-world situations, but it is interesting from a research perspective nonetheless.
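The paper has the real architecture, but the general shape of such a pipeline, a per-frame image encoder feeding a temporal model that emits audio features, looks something like this rough PyTorch sketch with made-up layer sizes:

```python
# Heavily simplified sketch of the pipeline's shape (layer sizes are
# invented; see the paper for the actual architecture): ultrasound
# frames in, a sequence of audio features out, ready for a vocoder.
import torch
import torch.nn as nn

class UltrasoundToAudioFeatures(nn.Module):
    def __init__(self, n_mels=80):
        super().__init__()
        self.encoder = nn.Sequential(               # per-frame image encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.temporal = nn.GRU(32, 64, batch_first=True)  # across frames
        self.head = nn.Linear(64, n_mels)           # e.g., mel-spectrogram frames

    def forward(self, frames):                      # (batch, time, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.temporal(feats)
        return self.head(seq)                       # (batch, time, n_mels)

model = UltrasoundToAudioFeatures()
dummy = torch.randn(1, 20, 1, 64, 64)               # 20 ultrasound frames
print(model(dummy).shape)                            # torch.Size([1, 20, 80])
```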

The research paper with all the details is also available (PDF warning). It’s an intriguing approach to improving the performance of voice interfaces, especially in situations where the voice may be muffled, non-existent, or overlaid with a lot of background noise. Machine learning like this seems to be one of the more powerful tools for improving speech recognition, as we saw with this robot that can walk across town and order food for you using voice commands only.

Continue reading “Voice Without Sound”

Tiny Machine Learning On As Little As 2 KB Of RAM

All of the machine learning stuff coming out lately doesn’t affect you if you are developing with embedded microcontrollers, right? Perhaps not. Microsoft Research India wants you to use their EdgeML tool to do machine learning tasks such as gesture recognition in tiny devices like an Arduino Uno. According to the developers, you might need as little as 2 KB of RAM. There’s no network connection required, and the tools use TensorFlow underneath, so they’re compatible with much of what you’ll find for bigger computers.
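EdgeML gets there with purpose-built algorithms like Bonsai and ProtoNN rather than shrunken-down deep networks. As a toy illustration of the nearest-prototype idea (not the actual EdgeML implementation), something like this fits comfortably inside a 2 KB budget:

```python
# Toy sketch of the nearest-prototype idea behind ProtoNN (an
# illustration, not EdgeML's code): project inputs to a low dimension
# and classify by the closest learned prototype.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_proj, n_protos = 16, 4, 8             # tiny on purpose

W = rng.standard_normal((d_in, d_proj))       # learned projection (random here)
prototypes = rng.standard_normal((n_protos, d_proj))
proto_labels = rng.integers(0, 3, n_protos)   # 3 gesture classes

def predict(x):
    z = x @ W                                 # project to low dimension
    dists = np.linalg.norm(prototypes - z, axis=1)
    return proto_labels[np.argmin(dists)]     # label of nearest prototype

# Budget check: float32 parameters -> bytes of model state
n_params = W.size + prototypes.size + proto_labels.size
print(predict(rng.standard_normal(d_in)), f"~{n_params * 4} bytes of parameters")
```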

If you add processing power, you can get more capability. For example, one of the demonstrations is a wake-word recognizer on a Raspberry Pi Zero (although the page for that demo seems to be missing at the moment; try the GesturePod instead).

The system generally uses Python, but there are efficient C++ implementations for selected algorithms. The code lives on GitHub, where you’ll also find research papers about each tool. There’s also a recent paper on MinUn, an attempt to make things even more efficient for ARM microcontrollers. In particular, MinUn can store approximate numbers to save space, allows for variable precision of tensors, and tries to reduce memory fragmentation, an important feature for CPUs that don’t have memory management units.
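The variable-precision idea is easy to sketch: give each tensor only as many bits as its accuracy can tolerate. The toy example below (our own illustration, not MinUn's actual scheme) picks a per-tensor bit width by measuring quantization error:

```python
# Toy sketch of per-tensor variable precision (not MinUn's scheme):
# quantize each tensor to the fewest bits that keep its worst-case
# error under a tolerance, so sensitive tensors get more bits.
import numpy as np

def quantize(t, bits):
    scale = np.abs(t).max() / (2 ** (bits - 1) - 1)
    q = np.round(t / scale).astype(np.int32)
    return q * scale                  # dequantized view, for measuring error

def pick_bits(t, tol=0.01):
    for bits in (2, 4, 8, 16):
        err = np.abs(quantize(t, bits) - t).max()
        if err <= tol * np.abs(t).max():
            return bits               # smallest width meeting the tolerance
    return 32

rng = np.random.default_rng(1)
for name, t in {"weights": rng.standard_normal(256),
                "bias": rng.standard_normal(8) * 0.01}.items():
    print(name, "->", pick_bits(t), "bits")
```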

If you haven’t studied TensorFlow yet, start here. Why use something like this with a microcontroller? How about smarter robots?

Machine Learning Baby Monitor, Part 2: Learning Sleep Patterns

The first lesson a new parent learns is that the second you think you’ve finally figured out your kid’s patterns — sleeping, eating, pooping, crying endlessly in the middle of the night for no apparent reason, whatever — the kid will change it. It’s the Uncertainty Principle of kids — the mere act of observing the pattern changes it, and you’re back at square one.

As immutable as this rule seems, [Caleb Olson] is convinced he can work around it with this over-engineered sleep pattern tracker. You may recall [Caleb]’s earlier attempts to automate certain aspects of parenthood, like this machine learning system to predict when baby is hungry; and yes, he’s also strangely obsessed with automating his dog’s bathroom habits. All that preliminary work put [Caleb] in a good position to analyze his son’s sleep patterns, which he did with the feed from their baby monitor camera and Google’s MediaPipe library.

This lets him measure how far the baby’s eyes are open, calculate a wakefulness probability, and record the time he wakes up. This worked great right up until the wave function collapsed and the baby suddenly started sleeping on his side, requiring the addition of a general motion-detection function to compensate for the missing eyeball data. Check out the video below for more details, although the less said about the screaming, demon-possessed owl, the better.
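The eye-openness part of such a pipeline is surprisingly approachable. The sketch below is our own rough take, not [Caleb]'s code, and the landmark indices and tuning constants are assumptions; it uses MediaPipe Face Mesh to turn an eye-aspect ratio into a crude wakefulness probability:

```python
# Rough sketch of the eye-openness idea (not [Caleb]'s actual code):
# compute an eye-aspect ratio from MediaPipe Face Mesh landmarks and
# squash it into a wakefulness probability.
import cv2
import mediapipe as mp
import numpy as np

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
# Commonly cited Face Mesh indices for the left eye; treat as assumptions.
L_OUT, L_IN, L_TOP, L_BOT = 33, 133, 159, 145

def wakefulness(bgr_frame):
    results = face_mesh.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None                    # no face found: fall back to motion detection
    lm = results.multi_face_landmarks[0].landmark
    p = lambda i: np.array([lm[i].x, lm[i].y])
    # Eye-aspect ratio: vertical opening relative to eye width
    ear = np.linalg.norm(p(L_TOP) - p(L_BOT)) / np.linalg.norm(p(L_OUT) - p(L_IN))
    # Logistic squash; the threshold and steepness are made-up tuning constants.
    return 1.0 / (1.0 + np.exp(-40 * (ear - 0.18)))
```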

The data [Caleb] has collected has helped him and his wife understand the little fellow’s sleep needs and fine-tune his cycles. There’s a web app, of course, and a really nice graphical representation of total time asleep and awake. No word on naps not taken in view of the camera, though — naps in the car are an absolute godsend for many parents. We suppose those could be logged manually, but we wouldn’t be surprised if [Caleb] has a plan to cover that too.

Continue reading “Machine Learning Baby Monitor, Part 2: Learning Sleep Patterns”

Smart Bike Suspension Tunes Your Ride On The Fly

Riding a bike is a pretty simple affair, but like with many things, technology marches on and adds complications. Where once all you had to worry about was pumping the cranks and shifting the gears, now a lot of bikes have front suspensions that need to be adjusted for different riding conditions. Great for efficiency and ride comfort, but a little tough to accomplish while you’re underway.

Luckily, there’s a solution to that, in the form of this active suspension system by [Jallson S]. The active bit is a servo, which is attached to the adjustment valve on the top of the front fork of the bike. The servo moves the valve between fully locked, for smooth surfaces, and wide open, for rough terrain. There’s also a stop in between, which partially softens the suspension for moderate terrain. The 9-gram hobby servo rotates the valve with the help of a 3D printed gear train.

But that’s not all. Rather than just letting the rider control the ride stiffness from a handlebar-mounted switch, [Jallson S] added a little intelligence into the mix. Ride data from the accelerometer on an Arduino Nano 33 BLE Sense was captured on a smartphone via Arduino Science Journal. The data was processed through Edge Impulse Studio to create models for five different ride surfaces and rider styles. This allows the stiffness to be optimized for current ride conditions — check it out in action in the video below.
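The glue between classifier and suspension is simple enough to sketch. Something like the following (in Python for clarity, since the real build runs on the Arduino; the class names and servo angles here are made up) smooths the rolling predictions and maps the five surface classes onto the three valve positions:

```python
# Sketch of the classifier-to-servo glue (illustrative only; the actual
# project runs on the Arduino). Class names and angles are invented.
from collections import Counter, deque

VALVE_ANGLE = {            # hypothetical servo angles for the three stops
    "smooth": 0,           # fully locked
    "moderate": 45,        # partially open
    "rough": 90,           # wide open
}
SURFACE_TO_VALVE = {       # five ride classes -> three valve positions
    "asphalt": "smooth", "sidewalk": "smooth",
    "gravel": "moderate", "grass": "moderate",
    "trail": "rough",
}

history = deque(maxlen=10)  # smooth over the last 10 inference results

def update(predicted_surface):
    history.append(SURFACE_TO_VALVE[predicted_surface])
    majority = Counter(history).most_common(1)[0][0]
    return VALVE_ANGLE[majority]   # angle to command the valve servo to

for pred in ["asphalt", "gravel", "gravel", "trail", "gravel"]:
    print(pred, "->", update(pred), "deg")
```

Smoothing over a window like this keeps a single misclassified inference from slamming the valve back and forth mid-ride.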

[Jallson S] is quick to point out that this is a prototype, and that niceties like weatherproofing still have to be addressed. But it seems like a solid start — now let’s see it teamed up with an Arduino shifter.

Continue reading “Smart Bike Suspension Tunes Your Ride On The Fly”