AI Creates Killer Drug

Researchers in Canada and the United States have used deep learning to derive an antibiotic that can attack a resistant microbe, Acinetobacter baumannii, which can infect wounds and cause pneumonia. According to the BBC, a paper in Nature Chemical Biology describes how the researchers used training data that measured known drugs’ action on the tough bacterium. The learning algorithm then projected the effect of 6,680 compounds for which there was no data on their effectiveness against the germ.

In an hour and a half, the program reduced the list to 240 promising candidates. Testing in the lab found that nine of these were effective and that one, now called abaucin, was extremely potent. While doing lab tests on 240 compounds sounds like a lot of work, it is better than testing nearly 6,700.
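For a feel of how that kind of virtual screen works, here is a minimal sketch in the same spirit: train a classifier on assayed compounds, then rank the unassayed ones by predicted activity. This is not the paper’s model; the feature vectors, dataset sizes, and labels below are stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in data: each row is a molecular feature vector (in practice,
# chemical fingerprints); each label marks growth inhibition in the assay.
X_train = rng.random((2300, 512))        # known drugs, already assayed
y_train = rng.integers(0, 2, 2300)       # 1 = inhibited A. baumannii
X_unknown = rng.random((6680, 512))      # compounds with no assay data

# Train on the assayed compounds, then rank the unknowns by
# predicted probability of activity.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
scores = model.predict_proba(X_unknown)[:, 1]

# Keep the top 240 candidates for lab testing.
top_240 = np.argsort(scores)[::-1][:240]
print(f"Best candidate index: {top_240[0]}, score {scores[top_240[0]]:.2f}")
```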

Interestingly, the new antibiotic seems only to be effective against the target microbe, which is a plus. It isn’t available for people yet and may not be for some time — drug testing being what it is. However, this is still a great example of how machine learning can augment human brainpower, letting scientists and others focus on what’s really important.

The WHO has identified Acinetobacter baumannii as one of the major superbugs threatening the world, so a weapon against it would be very welcome. We can hope that this technique will drastically cut the time involved in developing new drugs. It also makes you wonder if there are other fields where AI techniques could cull out alternatives quickly, allowing humans to focus on the more promising candidates.

Want to catch up on machine learning algorithms? Google can help. Or dive into an even longer course.

Hackaday Prize 2023: Hearing Sirens When Drivers Can’t

[Jan Říha]’s PionEar device is a wonderful entry to the Assistive Tech portion of the 2023 Hackaday Prize. It’s a small unit intended to perch within view of the driver in a vehicle, and it has one job: flash a light whenever a siren is detected. It is intended to provide drivers with a better awareness of emergency vehicles, because they are so often heard well before they are seen, and their presence disrupts the usual flow of the road. [Jan] learned that there was a positive response in the Deaf and hard of hearing communities to a device like this; roads get safer when one has early warning.

Deaf and hard of hearing folks are perfectly capable of driving. After all, not being able to hear is not a barrier to obeying the rules of the road. Even so, a device like this can improve some drivers’ awareness of their surroundings, which translates to greater safety. For people with hearing loss, higher frequencies tend to be attenuated the most, and that can include high-pitched sirens.

The PionEar leverages embedded machine learning to identify sirens, which is a fantastic application of the technology. Machine learning, after all, is a way to solve the kinds of problems that humans are not good at figuring out how to write a program to solve. Singling out the presence of a siren in live environmental audio definitely qualifies.
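As a rough sketch of how an embedded siren classifier might run (this isn’t [Jan]’s code; the model file, input shape, and output layout here are all assumptions), the pipeline is typically microphone window → spectrogram → tiny neural network:

```python
import numpy as np
from scipy.signal import spectrogram
from tflite_runtime.interpreter import Interpreter

SAMPLE_RATE = 16000          # assumption: 16 kHz mono audio windows
MODEL_PATH = "siren.tflite"  # hypothetical trained model file

interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def is_siren(window, threshold=0.8):
    """Classify one window of microphone samples as siren / not siren."""
    # Spectrograms are the usual front end for small audio classifiers.
    _, _, spec = spectrogram(window, fs=SAMPLE_RATE, nperseg=256)
    features = np.log1p(spec).astype(np.float32)[np.newaxis, ...]
    # Assumes the model was trained on spectrograms of this exact shape.
    interpreter.set_tensor(inp["index"], features)
    interpreter.invoke()
    prob = float(interpreter.get_tensor(out["index"])[0][0])
    return prob > threshold
```

A real build would run this continuously on overlapping windows and debounce the result before flashing the light.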

We also like the clever way that [Jan] embedded an LED light guide into the 3D-printed enclosure: by making a channel and pouring in a small amount of white resin intended for 3D printers. Cure the resin with a UV light, and one is left with an awfully good light guide that doubles as a diffuser. You can see it all in action in a short video, just under the page break.

Self-Driving Library For Python

Fully autonomous vehicles perennially seem to be just a few years away, sort of like the automotive equivalent of fusion power. But just because robotic vehicles haven’t made much progress on our roadways doesn’t mean we can’t play with the technology at the hobbyist level. You can embark on your own experimentation right now with this open source self-driving Python library.

Granted, this is a library built for much smaller vehicles, but it’s still quite full-featured. Known as Donkey Car, it’s mostly intended for what would otherwise be remote-controlled cars or robotics platforms. The library is built to be as minimalist as possible with modularity as a design principle, and includes the ability to self-drive with computer vision using machine-learning algorithms. It is capable of logging sensor data and interfacing with various controllers as well, either physical devices or through something like a browser.
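Donkey Car is organized around a vehicle loop that wires “parts” together through named data channels. A minimal sketch of that pattern looks something like the following; part names and signatures vary between versions, and the model path is hypothetical.

```python
import donkeycar as dk
from donkeycar.parts.camera import PiCamera
from donkeycar.parts.keras import KerasLinear

# A Donkey Car "vehicle" is a loop of parts wired together by named channels.
V = dk.vehicle.Vehicle()

# The camera part publishes frames on the 'cam/image_array' channel.
V.add(PiCamera(image_w=160, image_h=120, image_d=3),
      outputs=['cam/image_array'], threaded=True)

# A trained Keras pilot consumes frames and emits steering and throttle.
pilot = KerasLinear()
pilot.load('models/mypilot.h5')  # hypothetical path to a trained model
V.add(pilot, inputs=['cam/image_array'],
      outputs=['pilot/angle', 'pilot/throttle'])

# Run the loop at 20 Hz; actuator parts would consume the pilot outputs.
V.start(rate_hz=20)
```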

Building a complete platform costs around $250 in parts, but most of what’s needed for a Donkey Car-compatible build is easily sourced, and it won’t be too long before your own RC vehicle has more “full self-driving” capability than a Tesla, and potentially less risk of having a major security vulnerability as well.

Hackaday Prize 2023: Finger Tracking Via Muscle Sensors

Whether you want to build a computer interface device, or control a prosthetic hand, having some idea of a user’s finger movements can be useful. The OpenMuscle finger tracking sensor can offer the data you need, and it’s a device you can readily build in your own workshop.

The device consists of a wrist cuff that mounts twelve pressure sensors arranged radially around the forearm. The pressure sensors are a custom design, using magnets, Hall effect sensors, and springs to detect the motion of the muscles in the vicinity of the wrist.

We first looked at this project last year, and since then, it’s advanced by leaps and bounds. The basic data from the pressure sensors now feeds into a trained machine learning model, which then predicts the user’s actual finger movements. The long-term goal is to create a device that can control prosthetic hands based on muscle contractions in the forearm. Ideally, this would be super-intuitive to use, requiring a minimum of practice and training for the end user.
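To illustrate the shape of that problem (this is not the OpenMuscle codebase; the data and dimensions below are stand-ins), mapping twelve pressure readings to finger positions is a classic multi-output regression:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Stand-in training data: 12 cuff sensor readings per sample, paired with
# 5 finger flexion values captured by some ground-truth glove or tracker.
X = rng.random((5000, 12))   # pressure sensor readings
y = rng.random((5000, 5))    # finger positions, normalized 0..1

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500)
model.fit(X, y)

# At runtime: one cuff reading in, five predicted finger positions out.
reading = rng.random((1, 12))
print(model.predict(reading))
```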

It’s great to see machine learning combined with innovative mechanical design to serve a real need. We can’t wait to see where the OpenMuscle project goes next.

Hackaday Links: May 7, 2023

More fallout for SpaceX this week after their Starship launch attempt, but of the legal kind rather than concrete and rebar. A handful of environmental groups filed suit, alleging that the launch generated “intense heat, noise, and light that adversely affects surrounding habitat areas and communities, which included designated critical habitat for federally protected species as well as National Wildlife Refuge and State Park lands,” in addition to “scatter[ing] debris and ash over a large area.”

Specifics of this energetic launch aside, we always wondered about the choice of Boca Chica for a launch facility. Yes, it has all the obvious advantages, like a large body of water directly to the east and being at a relatively low latitude. But the whole area is a wildlife sanctuary, and from what we understand there are still people living pretty close to the launch facility. Then again, you could pretty much say the same thing about the Cape Canaveral and Kennedy Space Center complex, which probably couldn’t be built today. Amazing how a Space Race will grease the wheels of progress.

Thermal Camera Plus Machine Learning Reads Passwords Off Keyboard Keys

An age-old vulnerability of physical keypads is visibly worn keys. For example, a number pad with digits clearly worn from repeated use provides an attacker with a clear starting point. The same concept can be applied to keyboards by pairing a thermal camera with machine learning, though it turns out that some types of keys and typing styles are harder to read than others.

Researchers at the University of Glasgow have shown how machine learning can quickly and effectively pull details from thermal images of recently-used keyboards.

Touching a key with a fingertip imparts a slight amount of body heat, and that small amount of heat can be spotted by a thermal sensor. We’ve seen this basic approach used since at least 2005, and two things have changed since then: thermal cameras have gotten much more common, and researchers discovered that by combining thermal readings with machine learning, it’s possible to eke out details too subtle for the human eye and judgement alone.
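To get a feel for the raw signal involved (a crude illustration, not the researchers’ pipeline; the temperatures and offsets are made up), the attack starts from the fact that recently-pressed keys sit slightly above ambient:

```python
import numpy as np

# Stand-in for one thermal camera frame: a 2D array of temperatures (deg C).
# Keys touched recently read slightly warmer than the rest of the board.
frame = np.random.normal(24.0, 0.2, (120, 160))
frame[40:50, 30:40] += 1.5    # pretend a key was pressed a few seconds ago
frame[70:80, 90:100] += 0.9   # and another one a bit earlier

ambient = np.median(frame)
hot = frame > ambient + 0.5   # simple threshold above ambient

# Residual heat decays over time, so warmer blobs were pressed more
# recently; ordering blobs by peak temperature hints at press order.
print(f"{hot.sum()} warm pixels, peak {frame.max() - ambient:.2f} C above ambient")
```

The machine learning earns its keep on the hard part: recovering key identities and ordering from blobs far fainter and noisier than this toy example.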

Here’s a link to the research and findings from the University of Glasgow, which shows how even a 16-symbol password can be attacked with an average accuracy of 55%. Shorter passwords are much easier to decipher: the system attacked 6- and 8-symbol passwords with accuracies of 92% and 80%, respectively. In the study, thermal readings were taken up to a full minute after the password was entered, but earlier readings yield higher accuracy.

A few factors make things harder for the system. Fast typists spend less time touching keys, and therefore transfer less heat when they do, making things a little more challenging. Interestingly, the material of the keycaps plays a large role: ABS keycaps retain heat far more effectively than PBT (a material we often see in custom keyboard builds like this one). It also turns out that the tiny amount of heat from the LEDs in backlit keyboards runs effective interference when it comes to thermal readings.

Amusingly, this kind of highly modern attack would be entirely useless against a scramblepad. Scramblepads are vintage devices that mix up which numbers go with which buttons each time the pad is used. Thermal imaging and machine learning would be able to tell which buttons were pressed and in what order, but that still wouldn’t help! A reminder that when it comes to security, tech does matter but fundamentals can matter more.

Very Slow Movie Player Avoids E-Ink Ghosting With Machine Learning

[mat kelcey] was so impressed and inspired by the concept of a very slow movie player (a movie played back at an extremely slow rate on what amounts to a DIY photo frame) that he created his own with a high-resolution e-ink display. It shows high-definition frames from Alien (1979) at a rate of about one frame every 200 seconds, but a surprising amount of work went into getting a color film that was meant to look good on a movie screen to also look good on black & white e-ink.

The usual way to display images on a screen that is limited to black or white pixels is dithering, or manipulating relative densities of white and black to give the impression of a much richer image than one might otherwise expect. By itself, a dithering algorithm isn’t a cure-all and [mat] does an excellent job of explaining why, complete with loads of visual examples.
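For reference, the classic Floyd-Steinberg algorithm is the usual starting point: quantize each pixel to black or white, then push the rounding error onto neighbors that haven’t been visited yet. A generic implementation (not [mat]’s code) looks like this:

```python
import numpy as np

def floyd_steinberg(gray):
    """Dither a grayscale image (floats in 0..1) to pure black and white."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0  # snap pixel to black or white
            img[y, x] = new
            err = old - new
            # Push the rounding error onto unvisited neighbors so each
            # region keeps roughly the right average brightness.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img
```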

One consideration is the e-ink display itself. With these displays, changing the screen contents is where all the work happens, and that update can be a visually imperfect process that leaves ghosting behind. A very slow movie player aims to present each frame as cleanly as possible in an artful and stylish way, so rewriting the entire screen for every frame would mean uglier transitions, and that just wouldn’t do.

So the overall challenge [mat] faced was twofold: dither each frame so it looks great, while also minimizing the number of pixels changed from the previous frame. All of a sudden, he had an interesting problem to solve and chose to solve it in an interesting way: training a GAN to generate the dithers, aiming to balance best image quality with minimal pixel change from the previous frame. The results do a great job of delivering quality visuals even when there are sharp changes in scene contrast to deal with. Curious about the code? Here’s the GitHub repository.
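The heart of that tradeoff is the training objective. Here is a sketch of how the two competing goals might be folded into one generator loss (an illustration of the idea, not [mat]’s actual training code; the pooling size and weighting are made up, and a real GAN would add an adversarial term on top):

```python
import torch
import torch.nn.functional as F

def dither_loss(dithered, target_gray, prev_dither, change_weight=0.1):
    """Generator objective: look like the target frame, but also stay
    close to the previous frame's dither to minimize e-ink updates."""
    # Quality term: the dither, locally averaged, should match the
    # locally averaged grayscale target (i.e., correct tone density).
    quality = F.l1_loss(F.avg_pool2d(dithered, kernel_size=4),
                        F.avg_pool2d(target_gray, kernel_size=4))

    # Stability term: penalize pixels that flip relative to the last frame.
    change = (dithered - prev_dither).abs().mean()

    return quality + change_weight * change
```

Tuning the change weight trades image fidelity against how much of the screen must be rewritten on each update.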

Here’s the original Very Slow Movie Player that so inspired [mat], and here’s a color version that helps make every frame a work of art. And as for dithering? It’s been around for ages, but that doesn’t mean there aren’t new problems to solve in that space. For example, making dithering look good in the game Return of the Obra Dinn required a custom algorithm.