For a brief window of time in the mid-2010s, a fairly common joke was to send voice commands to Alexa or other assistant devices over video. Late-night hosts and others would purposefully attempt to activate voice assistants like these en masse and get them to do ridiculous things. This isn’t quite as common of a gag anymore and was relatively harmless unless the voice assistant was set up to do something like automatically place Amazon orders, but now that much more powerful AI tools are coming online we’re seeing that joke taken to its logical conclusion: prompt-injection attacks. Continue reading “Prompt Injection: An AI-Targeted Attack”
Artificial Intelligence258 Articles
3D Printed Aerospike Was Designed By AI
We’re still in the early days of generatively-designed objects, but when combined with the capabilities of 3D printing, we’re already seeing some interesting results. One example is this new copper aerospike engine. [via Fabbaloo]
A collaboration between startups Hyperganic (generative AI CAD) and AMCM (additive manufacturing), this 800 mm long aerospike engine may be the most complicated 3D print yet. It continues the exciting work being done with 3D printing for aerospace applications. The complicated geometries of rocket nozzles of any type let additive manufacturing really shine, so the combination of generative algorithms and 3D printed nozzles could result in some big leaps in coming years.
Aerospikes are interesting as their geometry isn’t pressure dependent like more typical bell-shaped rocket nozzles meaning you only need one engine for your entire flight profile instead of the traditional switching mid-flight. A linear aerospike engine was one of the main selling points for the cancelled VentureStar Space Shuttle replacement.
This isn’t the only generative design headed to space, and we’ve covered a few projects if you’re interested in building your own 3D printed rocket nozzles or aerospike engines. Just make sure you get clearance from your local aviation regulator before your project goes to space!
ChatGPT Makes A 3D Model: The Secret Ingredient? Much Patience
ChatGPT is an AI large language model (LLM) which specializes in conversation. While using it, [Gil Meiri] discovered that one way to create models in FreeCAD is with Python scripting, and ChatGPT could be encouraged to create a 3D model of a plane in FreeCAD by expressing the model as a script. The result is just a basic plane shape, and it certainly took a lot of guidance on [Gil]’s part to make it happen, but it’s not bad for a tool that can’t see what it is doing.
The first step was getting ChatGPT to create code for a 10 mm cube, and plug that in FreeCAD to see the results. After that basic workflow was shown to work, [Gil] asked it to create a simple airplane shape. The resulting code had objects for wing, fuselage, and tail, but that’s about all that could be said because the result was almost — but not quite — completely unlike a plane. Not an encouraging start, but at least the basic building blocks were there. Continue reading “ChatGPT Makes A 3D Model: The Secret Ingredient? Much Patience”
Thermal Camera Plus Machine Learning Reads Passwords Off Keyboard Keys
An age-old vulnerability of physical keypads is visibly worn keys. For example, a number pad with digits clearly worn from repeated use provides an attacker with a clear starting point. The same concept can be applied to keyboards by using a thermal camera with the help of machine learning, but it also turns out that some types of keys and typing styles are harder to read than others.

Touching a key with a fingertip imparts a slight amount of body heat, and that small amount of heat can be spotted by a thermal sensor. We’ve seen this basic approach used since at least 2005, and two things have changed since then: thermal cameras gotten much more common, and researchers discovered that by combining thermal readings with machine learning, it’s possible to eke out slight details too difficult or subtle to spot by human eye and judgement alone.
Here’s a link to the research and findings from the University of Glasgow, which shows how even a 16 symbol password can be attacked with an average accuracy of 55%. Shorter passwords are much easier to decipher, with the system attacking 6 and 8 symbol passwords with an accuracy between 92% and 80%, respectively. In the study, thermal readings were taken up to a full minute after the password was entered, but sooner readings result in higher accuracy.
A few things make things harder for the system. Fast typists spend less time touching keys, and therefore transfer less heat when they do, making things a little more challenging. Interestingly, the material of the keycaps plays a large role. ABS keycaps retain heat far more effectively than PBT (a material we often see in custom keyboard builds like this one.) It also turns out that the tiny amount of heat from LEDs in backlit keyboards runs effective interference when it comes to thermal readings.
Amusingly this kind of highly modern attack would be entirely useless against a scramblepad. Scramblepads are vintage devices that mix up which numbers go with which buttons each time the pad is used. Thermal imaging and machine learning would be able to tell which buttons were pressed and in what order, but that still wouldn’t help! A reminder that when it comes to security, tech does matter but fundamentals can matter more.
Hackaday Prize 2023: Explore The Basics Of Neuroscience With This Electronic Neuron
Brains are the most complex systems in the universe, but their basic building blocks are surprisingly simple — the complexity arises from billions of neurons, axons and synapses working together. Simulating an entire brain therefore requires vast computing resources, but if it’s just a few cells you’re interested in, you don’t need much: a handful of op-amps and transistors will do the job, as [Sebastian Billaudelle] has demonstrated. He has designed an electronic neuron called Lu.i that does everything a real neuron does, in a convenient package suitable for educational use.
[Sebastian]’s neuron implements what’s known as the leaky integrate-and-fire model, first proposed by [Louis Lapicque] as a simple model for a neuron’s behavior. Basically, the neuron acts as an integrator that stores all incoming charge in a capacitor and generates a spiky output signal once its voltage reaches a certain threshold level. The capacitor is slowly discharged however, which means the neuron will only “fire” when it gets a strong enough input signal.
A couple of MCP6004 op-amps implement this model, with an LM339 comparator acting as the threshold detector. The neuron’s inputs are generated by electronic synapses made from logic-level MOSFETS. These circuits route signals between different neurons and can be manually set to either source or sink current, thereby increasing or decreasing the neuron’s voltage level.
All of this is built onto a neat purple PCB in the shape of a nerve cell, with external connections on the tips of its dendrites. The neuron’s internal state is made visible by an LED bar graph, giving the user an immediate feel for what’s going on inside the network. Multiple neurons can be connected together to form reasonably complex networks that can implement things like oscillators or logic functions, examples of which are shown on the project’s GitHub page.
The Lu.i project is a great way to teach the basics of neuroscience, turning dry differential equations into a neat display of signals racing around a network. Neurons are fascinating things that we’re learning more about every day, enabling things like brain-computer interfaces and neuromorphic computing.
Liquid Neural Networks Do More With Less
[Ramin Hasani] and colleague [Mathias Lechner] have been working with a new type of Artificial Neural Network called Liquid Neural Networks, and presented some of the exciting results at a recent TEDxMIT.
Liquid neural networks are inspired by biological neurons to implement algorithms that remain adaptable even after training. [Hasani] demonstrates a machine vision system that steers a car to perform lane keeping with the use of a liquid neural network. The system performs quite well using only 19 neurons, which is profoundly fewer than the typically large model intelligence systems we’ve come to expect. Furthermore, an attention map helps us visualize that the system seems to attend to particular aspects of the visual field quite similar to a human driver’s behavior.

A liquid neural network can implement synaptic weights using nonlinear probabilities instead of simple scalar values. The synaptic connections and response times can adapt based on sensory inputs to more flexibly react to perturbations in the natural environment.
We should probably expect to see the operational gap between biological neural networks and artificial neural networks continue to close and blur. We’ve previously presented on wetware examples of building neural networks with actual neurons and ever advancing brain-computer interfaces.
Chatting With Local AI Moves Directly In-Browser, Thanks To Web LLM
Large Language Models (LLM) are at the heart of natural-language AI tools like ChatGPT, and Web LLM shows it is now possible to run an LLM directly in a browser. Just to be clear, this is not a browser front end talking via API to some server-side application. This is a client-side LLM running entirely in the browser.

Running an AI system like an LLM locally usually leverages the computational abilities of a graphics card (GPU) to accelerate performance. This is true when running an image-generating AI system like Stable Diffusion, and it’s also true when implementing a local copy of an LLM like Vicuna (which happens to be the model implemented by Web LLM.) The thing that made Web LLM possible is WebGPU, whose release we covered just last month.
WebGPU provides a way for an in-browser application to talk to a local GPU directly, and it sure didn’t take long for someone to get the idea of using that to get a local LLM to run entirely within the browser, complete with GPU acceleration. This approach isn’t just limited to language models, either. The same method has been applied to successfully create Web Stable Diffusion as well.
It’s a fascinating (and fast) development that opens up new possibilities and, hopefully, gives people some new ideas. Check out Web LLM’s GitHub repository for a closer look, as well as access to an online demo.






