AI Creates Killer Drug

Researchers in Canada and the United States have used deep learning to derive an antibiotic that can attack a resistant microbe, acinetobacter baumannii, which can infect wounds and cause pneumonia. According to the BBC, a paper in Nature Chemical Biology describes how the researchers used training data that measured known drugs’ action on the tough bacteria. The learning algorithm then projected the effect of 6,680 compounds with no data on their effectiveness against the germ.

In an hour and a half, the program reduced the list to 240 promising candidates. Testing in the lab found that nine of these were effective and that one, now called abaucin, was extremely potent. While doing lab tests on 240 compounds sounds like a lot of work, it is better than testing nearly 6,700.

Interestingly, the new antibiotic seems only to be effective against the target microbe, which is a plus. It isn’t available for people yet and may not be for some time — drug testing being what it is. However, this is still a great example of how machine learning can augment human brainpower, letting scientists and others focus on what’s really important.

WHO identified acinetobacter baumannii as one of the major superbugs threatening the world, so a weapon against it would be very welcome. You can hope that this technique will drastically cut the time involved in developing new drugs. It also makes you wonder if there are other fields where AI techniques could cull out alternatives quickly, allowing humans to focus on the more promising candidates.

Want to catch up on machine learning algorithms? Google can help. Or dive into an even longer course.

AI Image Generation Gets A Drag Interface

AI image generators have gained new tools and techniques for not just creating pictures, but modifying them in consistent and sensible ways, and it seems that every week brings a fascinating new development in this area. One of the latest is Drag Your GAN, presented at SIGGRAPH 2023, and it’s pretty wild.

It provides a point-dragging interface that modifies images based on their implied structure. A picture is worth a thousand words, so this short animation shows what that means. There are plenty more where that came from at the project’s site, so take a few minutes to check it out.

GAN stands for generative adversarial network, a class of machine learning that features prominently in software like image generation; the “adversarial” part comes from the concept of networks pulling results between different goalposts. Drag Your GAN has a GitHub repository where code is expected to be released in June, but in the meantime, you can read the full paper or brush up on the basics of how AI image generators work, as well as see how image generation can be significantly enhanced with an understanding of a 2D image’s implied depth.

Prompt Injection: An AI-Targeted Attack

For a brief window of time in the mid-2010s, a fairly common joke was to send voice commands to Alexa or other assistant devices over video. Late-night hosts and others would purposefully attempt to activate voice assistants like these en masse and get them to do ridiculous things. This isn’t quite as common of a gag anymore and was relatively harmless unless the voice assistant was set up to do something like automatically place Amazon orders, but now that much more powerful AI tools are coming online we’re seeing that joke taken to its logical conclusion: prompt-injection attacks. Continue reading “Prompt Injection: An AI-Targeted Attack”

A 3D printed copper aerospike engine cutaway showing the intricate, organic-looking channels inside. It is vaguely reminiscent of a human torso and lungs.

3D Printed Aerospike Was Designed By AI

We’re still in the early days of generatively-designed objects, but when combined with the capabilities of 3D printing, we’re already seeing some interesting results. One example is this new copper aerospike engine. [via Fabbaloo]

A collaboration between startups Hyperganic (generative AI CAD) and AMCM (additive manufacturing), this 800 mm long aerospike engine may be the most complicated 3D print yet. It continues the exciting work being done with 3D printing for aerospace applications. The complicated geometries of rocket nozzles of any type let additive manufacturing really shine, so the combination of generative algorithms and 3D printed nozzles could result in some big leaps in coming years.

Aerospikes are interesting as their geometry isn’t pressure dependent like more typical bell-shaped rocket nozzles meaning you only need one engine for your entire flight profile instead of the traditional switching mid-flight. A linear aerospike engine was one of the main selling points for the cancelled VentureStar Space Shuttle replacement.

This isn’t the only generative design headed to space, and we’ve covered a few projects if you’re interested in building your own 3D printed rocket nozzles or aerospike engines. Just make sure you get clearance from your local aviation regulator before your project goes to space!

ChatGPT Makes A 3D Model: The Secret Ingredient? Much Patience

ChatGPT is an AI large language model (LLM) which specializes in conversation. While using it, [Gil Meiri] discovered that one way to create models in FreeCAD is with Python scripting, and ChatGPT could be encouraged to create a 3D model of a plane in FreeCAD by expressing the model as a script. The result is just a basic plane shape, and it certainly took a lot of guidance on [Gil]’s part to make it happen, but it’s not bad for a tool that can’t see what it is doing.

The first step was getting ChatGPT to create code for a 10 mm cube, and plug that in FreeCAD to see the results. After that basic workflow was shown to work, [Gil] asked it to create a simple airplane shape. The resulting code had objects for wing, fuselage, and tail, but that’s about all that could be said because the result was almost — but not quite — completely unlike a plane. Not an encouraging start, but at least the basic building blocks were there. Continue reading “ChatGPT Makes A 3D Model: The Secret Ingredient? Much Patience”

Thermal Camera Plus Machine Learning Reads Passwords Off Keyboard Keys

An age-old vulnerability of physical keypads is visibly worn keys. For example, a number pad with digits clearly worn from repeated use provides an attacker with a clear starting point. The same concept can be applied to keyboards by using a thermal camera with the help of machine learning, but it also turns out that some types of keys and typing styles are harder to read than others.

Researchers at the University of Glasgow show how machine learning can pull details from thermal images like these quickly and effectively.

Touching a key with a fingertip imparts a slight amount of body heat, and that small amount of heat can be spotted by a thermal sensor. We’ve seen this basic approach used since at least 2005, and two things have changed since then: thermal cameras gotten much more common, and researchers discovered that by combining thermal readings with machine learning, it’s possible to eke out slight details too difficult or subtle to spot by human eye and judgement alone.

Here’s a link to the research and findings from the University of Glasgow, which shows how even a 16 symbol password can be attacked with an average accuracy of 55%. Shorter passwords are much easier to decipher, with the system attacking 6 and 8 symbol passwords with an accuracy between 92% and 80%, respectively. In the study, thermal readings were taken up to a full minute after the password was entered, but sooner readings result in higher accuracy.

A few things make things harder for the system. Fast typists spend less time touching keys, and therefore transfer less heat when they do, making things a little more challenging. Interestingly, the material of the keycaps plays a large role. ABS keycaps retain heat far more effectively than PBT (a material we often see in custom keyboard builds like this one.) It also turns out that the tiny amount of heat from LEDs in backlit keyboards runs effective interference when it comes to thermal readings.

Amusingly this kind of highly modern attack would be entirely useless against a scramblepad. Scramblepads are vintage devices that mix up which numbers go with which buttons each time the pad is used. Thermal imaging and machine learning would be able to tell which buttons were pressed and in what order, but that still wouldn’t help! A reminder that when it comes to security, tech does matter but fundamentals can matter more.

An electronic neuron implemented on a purple neuron-shaped PCB

Hackaday Prize 2023: Explore The Basics Of Neuroscience With This Electronic Neuron

Brains are the most complex systems in the universe, but their basic building blocks are surprisingly simple — the complexity arises from billions of neurons, axons and synapses working together. Simulating an entire brain therefore requires vast computing resources, but if it’s just a few cells you’re interested in, you don’t need much: a handful of op-amps and transistors will do the job, as [Sebastian Billaudelle] has demonstrated. He has designed an electronic neuron called Lu.i that does everything a real neuron does, in a convenient package suitable for educational use.

[Sebastian]’s neuron implements what’s known as the leaky integrate-and-fire model, first proposed by [Louis Lapicque] as a simple model for a neuron’s behavior. Basically, the neuron acts as an integrator that stores all incoming charge in a capacitor and generates a spiky output signal once its voltage reaches a certain threshold level. The capacitor is slowly discharged however, which means the neuron will only “fire” when it gets a strong enough input signal.

Two neuron-shaped PCBs exchanging signalsA couple of MCP6004 op-amps implement this model, with an LM339 comparator acting as the threshold detector. The neuron’s inputs are generated by electronic synapses made from logic-level MOSFETS. These circuits route signals between different neurons and can be manually set to either source or sink current, thereby increasing or decreasing the neuron’s voltage level.

All of this is built onto a neat purple PCB in the shape of a nerve cell, with external connections on the tips of its dendrites. The neuron’s internal state is made visible by an LED bar graph, giving the user an immediate feel for what’s going on inside the network. Multiple neurons can be connected together to form reasonably complex networks that can implement things like oscillators or logic functions, examples of which are shown on the project’s GitHub page.

The Lu.i project is a great way to teach the basics of neuroscience, turning dry differential equations into a neat display of signals racing around a network. Neurons are fascinating things that we’re learning more about every day, enabling things like brain-computer interfaces and neuromorphic computing.