Hackaday Prize 2023: EyeBREAK Could Be A Breakthrough

For people who have suffered a stroke or live with other debilitating conditions, control over the eyelids can be one of the last remaining motor functions. Inspired by [Jeremiah Denton] blinking in Morse code during a televised interview, [MBW] designed an ESP32-based device to decode blinks into words.

While an ESP32 offers Bluetooth for simulating a keyboard and has a relatively low power draw, getting a proper blink-detection system to run at 20 frames per second in such a constrained environment is challenging. Earlier attempts used facial landmarks to try to determine, based on ratios, whether an eye was open or closed. A cascade detector combined with an XGBoost classifier offered excellent performance but struggled when the eye wasn’t centered. Ultimately, a four-layer CNN in TensorFlow Lite processes the 50×50 camera frames, producing a single output: eye open or closed. For debugging purposes, it streams camera frames over Wi-Fi with annotations via OpenCV, though getting OpenCV to compile for ESP32 was also nontrivial.
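The write-up doesn’t reproduce the training code, but a small CNN with a 50×50 input and a single open-or-closed output might look roughly like this in Keras. Treating “four-layer” as two convolutional plus two dense layers is our own reading, and the layer widths are guesses, not [MBW]’s actual architecture:

```python
import tensorflow as tf

# Rough sketch of a small eye-state classifier: 50x50 grayscale frame in,
# single "probability the eye is open" out. Two conv and two dense layers
# is one plausible reading of "four-layer"; the widths are guesses.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(50, 50, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # eye open vs. closed
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```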

[MBW] trained the model using the MRL dataset and then quantized it to int8. Getting the Bluetooth and Wi-Fi stacks to run concurrently was a bit of a pain, as was managing RAM. After exhausting SRAM and IRAM, [MBW] had to move to PSRAM. The entire system is built into some lightweight goggles and makes for a fairly comfortable experience.
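For the curious, post-training int8 quantization with the TensorFlow Lite converter generally follows a pattern like this. The representative dataset below is a random stand-in for real MRL eye crops, and model here refers to the Keras classifier from the sketch above:

```python
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Calibration samples for full-integer quantization; random stand-ins
    # for real 50x50 eye crops from the MRL dataset.
    for _ in range(100):
        yield [np.random.rand(1, 50, 50, 1).astype(np.float32)]

# `model` is assumed to be the trained Keras eye-state classifier from above.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("eye_state_int8.tflite", "wb") as f:
    f.write(converter.convert())
```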

While TensorFlow and microcontrollers might seem like a bit of an odd couple, at the end of the day, the inference engine is just doing some math on an array of inputs with some weights. We’ve even seen TensorFlow Lite on a Commodore 64. If you don’t know about [Admiral Jeremiah Denton], we can shed some light on his story for you.

Continue reading “Hackaday Prize 2023: EyeBREAK Could Be A Breakthrough”

AI Creates Killer Drug

Researchers in Canada and the United States have used deep learning to identify an antibiotic that can attack a drug-resistant microbe, Acinetobacter baumannii, which can infect wounds and cause pneumonia. According to the BBC, a paper in Nature Chemical Biology describes how the researchers used training data that measured known drugs’ action on the tough bacterium. The learning algorithm then predicted the effect of 6,680 compounds for which there was no data on their effectiveness against the germ.

In an hour and a half, the program reduced the list to 240 promising candidates. Testing in the lab found that nine of these were effective and that one, now called abaucin, was extremely potent. While doing lab tests on 240 compounds sounds like a lot of work, it is better than testing nearly 6,700.
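The paper’s model is considerably more sophisticated than this, but the overall shape of the screen (train on measured activity data, then rank untested compounds by predicted activity and keep the best few hundred) can be sketched in a handful of lines. The fingerprints and labels below are random stand-ins, not the real dataset or the authors’ model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins: molecular fingerprints for compounds with measured
# activity against the bacterium, plus a 0/1 "inhibits growth" label.
X_known = rng.random((1500, 256))
y_known = rng.integers(0, 2, 1500)

# The ~6,680 compounds with no measured activity.
X_unknown = rng.random((6680, 256))

# A random forest stands in here for the paper's far fancier network.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

# Rank the untested compounds by predicted probability of activity
# and keep the top 240 for testing in the lab.
scores = model.predict_proba(X_unknown)[:, 1]
top_240 = np.argsort(scores)[::-1][:240]
```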

Interestingly, the new antibiotic seems only to be effective against the target microbe, which is a plus. It isn’t available for people yet and may not be for some time — drug testing being what it is. However, this is still a great example of how machine learning can augment human brainpower, letting scientists and others focus on what’s really important.

WHO identified Acinetobacter baumannii as one of the major superbugs threatening the world, so a weapon against it would be very welcome. We can hope that this technique will drastically cut the time involved in developing new drugs. It also makes you wonder if there are other fields where AI techniques could cull out alternatives quickly, allowing humans to focus on the more promising candidates.

Want to catch up on machine learning algorithms? Google can help. Or dive into an even longer course.

Hackaday Prize 2023: Hearing Sirens When Drivers Can’t

[Jan Říha]’s PionEar device is a wonderful entry to the Assistive Tech portion of the 2023 Hackaday Prize. It’s a small unit intended to perch within view of the driver in a vehicle, and it has one job: flash a light whenever a siren is detected. It is intended to provide drivers with a better awareness of emergency vehicles, because they are so often heard well before they are seen, and their presence disrupts the usual flow of the road. [Jan] learned that there was a positive response in the Deaf and hard of hearing communities to a device like this; roads get safer when one has early warning.

Deaf and hard of hearing folks are perfectly capable of driving. After all, not being able to hear is not a barrier to obeying the rules of the road. Even so, a device like this can improve some drivers’ awareness of their surroundings, which translates to greater safety. For people with hearing loss, the higher frequencies tend to be attenuated the most, and that can include high-pitched sirens.

The PionEar leverages embedded machine learning to identify sirens, which is a fantastic application of the technology. Machine learning, after all, excels at exactly the kinds of problems that humans struggle to write explicit programs to solve. Singling out the presence of a siren in live environmental audio definitely qualifies.
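We don’t have [Jan]’s firmware in front of us, but the usual recipe for this sort of audio classification (turn a short window of sound into a log-mel spectrogram and hand it to a small neural network) looks roughly like this. The architecture below is a generic sketch, not the PionEar’s actual model:

```python
import numpy as np
import librosa
import tensorflow as tf

def siren_probability(audio: np.ndarray, sr: int, model: tf.keras.Model) -> float:
    """Score a short audio window for the presence of a siren.

    Generic sketch: log-mel spectrogram in, tiny CNN out. The real PionEar
    runs an embedded model on-device, not this Python pipeline.
    """
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=40)
    log_mel = librosa.power_to_db(mel)
    # Reshape to (batch, time, mels, channels) for the classifier.
    x = log_mel.T[np.newaxis, ..., np.newaxis].astype(np.float32)
    return float(model(x, training=False)[0, 0])

# Tiny binary classifier: siren vs. everything else (architecture is a guess).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 40, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Example: score one second of silence at 16 kHz (untrained, so meaningless).
print(siren_probability(np.zeros(16000, dtype=np.float32), 16000, model))
```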

We also like the clever way that [Jan] embedded an LED light guide into the 3D-printed enclosure: by making a channel and pouring in a small amount of white resin intended for 3D printers. Cure the resin with a UV light, and one is left with an awfully good light guide that doubles as a diffuser. You can see it all in action in a short video, just under the page break.

Continue reading “Hackaday Prize 2023: Hearing Sirens When Drivers Can’t”

Get That Dream Job, With A Bit Of Text Injection

Getting a job has always been a tedious and annoying process, because for all the care put into a CV or resume, it can still be headed for the round file at the whim of some corporate apparatchik. At various times there have also been dubious psychometric tests and other horrors to contend with, and now we have the specter of AI before us. We can be tossed aside simply because some AI model has rejected our CV, with no human involved. If this has made you angry, perhaps it’s time to look at [Kai Greshake]’s work. He’s fighting back by injecting a PDF CV with extra text that fools the AI into seeing the perfect candidate, and even fools AI-based summarizers.

Text injection into a PDF is the same technique used by the less salubrious end of the search engine marketing world: placing text on a page such that a human can’t read it but a machine can. The search engine marketeers put it in tiny white text or push it far outside the viewport, and it turns out the same trick works in a PDF. He’s put the injection in white, in a tiny font, and, interestingly, overlaid it several times.
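[Kai]’s exact tooling isn’t spelled out, but the general trick of overlaying tiny white text on an existing PDF can be reproduced with off-the-shelf Python libraries. The filenames and the injected sentence below are placeholders, purely to illustrate the mechanism:

```python
from io import BytesIO

from reportlab.pdfgen import canvas
from pypdf import PdfReader, PdfWriter

# Placeholder string: visible to machine parsers, invisible to human readers.
INJECTION = "Hidden text that a PDF parser will see but a human reader will not."

# Build a one-page overlay containing the hidden text: white, 1 pt,
# repeated a few times, much as the article describes.
buf = BytesIO()
c = canvas.Canvas(buf, pagesize=(612, 792))  # US Letter, in points
c.setFont("Helvetica", 1)
c.setFillColorRGB(1, 1, 1)  # white on white
for i in range(10):
    c.drawString(10, 10 + i * 2, INJECTION)
c.save()
buf.seek(0)

# Merge the overlay onto each page of the original document (placeholder names).
overlay = PdfReader(buf).pages[0]
reader = PdfReader("cv.pdf")
writer = PdfWriter()
for page in reader.pages:
    page.merge_page(overlay)
    writer.add_page(page)

with open("cv_injected.pdf", "wb") as f:
    writer.write(f)
```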

Using the ChatGPT instance available in the Bing sidebar, he’s then able to fool it into an affirmative reply to questions about whether he should be hired. But it’s not just ChatGPT he’s targeting; another use of AI in recruitment is via summarizing tools. By injecting a lot of text with phrases normally used in the conclusion of a document, he’s able to make Quillbot talk about puppies. Fancy a go yourself? He’s put a summarizer online, at the link above.

So maybe the all-seeing AI isn’t as clever as we’ve been led to believe. Who’d have thought it!

ChatGPT Powers A Different Kind Of Logic Analyzer

If you’re hoping that this AI-powered logic analyzer will help you quickly debug that wonky digital circuit on your bench with the magic of AI, we’re sorry to disappoint you. But you’re in luck if you’re in the market for something to help you detect logical fallacies someone spouts in conversation. With the magic of AI, of course.

First, a quick review: logical fallacies are errors in reasoning that lead to the wrong conclusions from a set of observations. Enumerating the kinds of fallacies has become a bit of a cottage industry in this age of fake news and misinformation, to the extent that many of the common ones have catchy names like “Texas Sharpshooter” or “No True Scotsman”. Each fallacy has its own set of characteristics, and while it can be easy to pick some of them out, analyzing speech and finding them all is a tough job.
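We haven’t dug into the project’s code, but the heart of an LLM-based fallacy spotter is mostly a well-crafted prompt. A minimal sketch against the OpenAI API might look like the following, where the model name, prompt, and fallacy list are our own choices rather than the project’s:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

FALLACIES = [
    "straw man", "ad hominem", "Texas sharpshooter", "no true Scotsman",
    "slippery slope", "appeal to authority",
]

def spot_fallacies(statement: str) -> str:
    """Ask the model to name any logical fallacies in a statement.

    Generic sketch, not the project's actual prompt or model choice.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a logic analyzer. Identify any of these fallacies "
                        f"in the user's statement and explain briefly: {', '.join(FALLACIES)}. "
                        "If none apply, say so."},
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content

print(spot_fallacies("Everyone I asked at the rally agrees, so clearly the whole country does."))
```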

Continue reading “ChatGPT Powers A Different Kind Of Logic Analyzer”

Machine Learning Helps Electron Microscopy

Machine learning is supposed to help us do everything these days, so why not electron microscopy? A team from Ireland has done just that and published their results using machine learning to enhance STEM — scanning transmission electron microscopy. The result is important because it targets a very particular use case — low-dose STEM.

The problem is that to get high resolutions, you typically need to use high electron doses. However, bombarding a delicate, often biological, subject with high-energy electrons may change what you are looking at and damage the sample. But reducing the electron dose results in a poor image due to Poisson noise. The new technique learns how to compensate for the noise and produce a better-quality image even at low doses.
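The paper’s network is more elaborate, but the core idea (train a convolutional network to map low-dose, Poisson-noisy images back to their high-dose counterparts) can be sketched like this. The synthetic data below stands in for real STEM micrographs:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-ins for training pairs: a "clean" high-dose image and a
# low-dose version corrupted with Poisson (shot) noise.
rng = np.random.default_rng(0)
clean = rng.random((256, 64, 64, 1)).astype(np.float32)
dose = 20.0  # mean electron counts per pixel; lower dose means noisier images
noisy = rng.poisson(clean * dose).astype(np.float32) / dose

# A small convolutional denoiser that learns to predict the clean image.
denoiser = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, None, 1)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 3, padding="same"),
])
denoiser.compile(optimizer="adam", loss="mse")
denoiser.fit(noisy, clean, epochs=2, batch_size=32, verbose=0)

# Denoise a low-dose frame.
restored = denoiser.predict(noisy[:1], verbose=0)
```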

Continue reading “Machine Learning Helps Electron Microscopy”

Combining Acoustic Bioprinting With Raman Spectroscopy For High-Throughput Identification Of Bacteria

Rapidly analyzing samples for the presence of bacteria and similar organic structures is generally quite a time-intensive process, often requiring a cell culture to be grown first. In Nano Letters, Fareeha Safir and colleagues propose a method that combines an acoustic droplet printer with Raman spectroscopy. The main advantage of this approach is its high throughput, which could make analyzing samples at sewage installations, hospitals, and laboratories significantly faster.

Raman spectroscopy works on the principle of Raman scattering, which is the inelastic scattering of photons by matter, leaving a distinct pattern in the scattered light. By starting with a pure light source (that is, a laser), the relatively weak Raman scattering can be captured once the laser light is filtered out. The resulting signal can then be analyzed and matched against known pathogens.

Continue reading “Combining Acoustic Bioprinting With Raman Spectroscopy For High-Throughput Identification Of Bacteria”
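As a rough illustration of that last step, matching a measured spectrum against a library of reference spectra can be as simple as a normalized correlation. The reference library below is invented, and real pipelines add baseline correction, smoothing, and usually a trained classifier on top:

```python
import numpy as np

def best_match(spectrum: np.ndarray, library: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the library entry whose spectrum best matches the measurement.

    Toy version using cosine similarity on mean-centered spectra.
    """
    s = (spectrum - spectrum.mean()) / spectrum.std()
    scores = {}
    for name, ref in library.items():
        r = (ref - ref.mean()) / ref.std()
        scores[name] = float(np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r)))
    best = max(scores, key=scores.get)
    return best, scores[best]

# Invented reference spectra, 1000 wavenumber bins each.
rng = np.random.default_rng(0)
library = {"E. coli": rng.random(1000), "S. aureus": rng.random(1000)}
measured = library["E. coli"] + 0.05 * rng.standard_normal(1000)

print(best_match(measured, library))
```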