A 3D printed copper aerospike engine cutaway showing the intricate, organic-looking channels inside. It is vaguely reminiscent of a human torso and lungs.

3D Printed Aerospike Was Designed By AI

We’re still in the early days of generatively designed objects, but when combined with the capabilities of 3D printing, we’re already seeing some interesting results. One example is this new copper aerospike engine. [via Fabbaloo]

A collaboration between startups Hyperganic (generative AI CAD) and AMCM (additive manufacturing), this 800 mm long aerospike engine may be the most complicated 3D print yet. It continues the exciting work being done with 3D printing for aerospace applications. The complicated geometries of rocket nozzles of any type let additive manufacturing really shine, so the combination of generative algorithms and 3D printed nozzles could result in some big leaps in coming years.

Aerospikes are interesting because, unlike the more typical bell-shaped rocket nozzle, their efficiency isn’t tied to a single ambient pressure: the exhaust plume adjusts itself as the rocket climbs, so a single engine can cover the entire flight profile instead of the traditional mid-flight switch between engines optimized for different altitudes. A linear aerospike engine was one of the main selling points of the cancelled VentureStar, the proposed Space Shuttle replacement.

This isn’t the only generative design headed to space, and we’ve covered a few projects if you’re interested in building your own 3D printed rocket nozzles or aerospike engines. Just make sure you get clearance from your local aviation regulator before your project goes to space!

ChatGPT Makes A 3D Model: The Secret Ingredient? Much Patience

ChatGPT is an AI large language model (LLM) that specializes in conversation. While using it, [Gil Meiri] discovered that one way to create models in FreeCAD is with Python scripting, and that ChatGPT could be coaxed into creating a 3D model of a plane in FreeCAD by expressing the model as a script. The result is just a basic plane shape, and it certainly took a lot of guidance on [Gil]’s part to make it happen, but it’s not bad for a tool that can’t see what it is doing.

The first step was getting ChatGPT to create code for a 10 mm cube, and plugging that into FreeCAD to see the results. After that basic workflow was shown to work, [Gil] asked it to create a simple airplane shape. The resulting code had objects for the wing, fuselage, and tail, but that’s about all that could be said, because the result was almost — but not quite — completely unlike a plane. Not an encouraging start, but at least the basic building blocks were there. Continue reading “ChatGPT Makes A 3D Model: The Secret Ingredient? Much Patience”
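For readers who haven’t scripted FreeCAD before, a minimal cube-generating script looks something like the sketch below. To be clear, this isn’t [Gil]’s ChatGPT-generated code, just a hand-written example of the kind of script the model was asked to produce; the document and object names are arbitrary.

    # Minimal FreeCAD script: create a 10 mm cube (run in FreeCAD's Python console)
    import FreeCAD as App

    doc = App.newDocument("ChatCAD")            # fresh, empty document
    cube = doc.addObject("Part::Box", "Cube")   # parametric box primitive
    cube.Length = 10   # mm
    cube.Width = 10    # mm
    cube.Height = 10   # mm
    doc.recompute()    # rebuild the document so the solid actually appears

From there it’s a matter of asking for more primitives for the wing, fuselage, and tail, and nudging their placements into something plane-shaped, which is where the patience comes in.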

Thermal Camera Plus Machine Learning Reads Passwords Off Keyboard Keys

An age-old vulnerability of physical keypads is visibly worn keys. For example, a number pad with digits clearly worn from repeated use gives an attacker a clear starting point. The same concept can be applied to keyboards by pairing a thermal camera with machine learning, but it also turns out that some types of keys and typing styles are harder to read than others.

Researchers at the University of Glasgow show how machine learning can pull details from thermal images like these quickly and effectively.

Touching a key with a fingertip imparts a slight amount of body heat, and that small amount of heat can be spotted by a thermal sensor. We’ve seen this basic approach used since at least 2005, and two things have changed since then: thermal cameras have gotten much more common, and researchers discovered that by combining thermal readings with machine learning, it’s possible to tease out details too difficult or subtle to spot by human eye and judgement alone.
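As a toy illustration of the underlying principle (and emphatically not the Glasgow team’s actual pipeline), suppose a thermal frame has already been segmented into per-key regions. Untouched keys sit near ambient, while the most recently pressed keys are the warmest, so sorting the touched keys by mean temperature gives a crude guess at the typing order. The key map and temperatures below are made up for the example.

    # Toy sketch: guess key-press order from per-key thermal readings.
    # Assumes key regions were already located in the thermal frame;
    # the temperatures here are fabricated purely for illustration.

    mean_temp_c = {   # mean temperature of each key region, in degrees C
        "p": 26.1, "a": 27.4, "s": 26.8, "w": 25.9, "d": 27.1, "o": 26.4,
    }
    ambient_c = 25.0  # untouched keys sit near ambient

    # Keys meaningfully above ambient were probably touched. The first key
    # pressed has had the longest time to cool, so sorting from coolest to
    # warmest approximates the order in which the keys were hit.
    touched = {k: t for k, t in mean_temp_c.items() if t - ambient_c > 0.5}
    guessed_order = sorted(touched, key=touched.get)
    print("".join(guessed_order))   # earliest guessed key first

The real attack replaces this hand-waving with trained models for finding the keys and estimating their order, which is where the accuracy figures below come from.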

Here’s a link to the research and findings from the University of Glasgow, which shows how even a 16-symbol password can be attacked with an average accuracy of 55%. Shorter passwords are much easier to decipher, with the system attacking 6- and 8-symbol passwords with accuracies of 92% and 80%, respectively. In the study, thermal readings were taken up to a full minute after the password was entered, but sooner readings result in higher accuracy.

A few things make life harder for the system. Fast typists spend less time touching keys, and therefore transfer less heat when they do, making things a little more challenging. Interestingly, the material of the keycaps plays a large role. ABS keycaps retain heat far more effectively than PBT (a material we often see in custom keyboard builds like this one). It also turns out that the tiny amount of heat from the LEDs in backlit keyboards runs effective interference when it comes to thermal readings.

Amusingly, this kind of highly modern attack would be entirely useless against a scramblepad. Scramblepads are vintage devices that mix up which numbers go with which buttons each time the pad is used. Thermal imaging and machine learning could still tell which buttons were pressed and in what order, but that wouldn’t help! A reminder that when it comes to security, tech does matter, but fundamentals can matter more.

An electronic neuron implemented on a purple neuron-shaped PCB

Hackaday Prize 2023: Explore The Basics Of Neuroscience With This Electronic Neuron

Brains are the most complex systems in the universe, but their basic building blocks are surprisingly simple — the complexity arises from billions of neurons, axons and synapses working together. Simulating an entire brain therefore requires vast computing resources, but if it’s just a few cells you’re interested in, you don’t need much: a handful of op-amps and transistors will do the job, as [Sebastian Billaudelle] has demonstrated. He has designed an electronic neuron called Lu.i that does everything a real neuron does, in a convenient package suitable for educational use.

[Sebastian]’s neuron implements what’s known as the leaky integrate-and-fire model, first proposed by [Louis Lapicque] as a simple model of a neuron’s behavior. Basically, the neuron acts as an integrator that stores incoming charge in a capacitor and generates a spiky output signal once its voltage reaches a certain threshold level. The capacitor is slowly discharged by a leak, however, which means the neuron only “fires” when it receives enough input within a short enough time.
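For the curious, the leaky integrate-and-fire model boils down to one differential equation plus a reset rule, and it only takes a few lines to simulate. The component values in the sketch below are invented for illustration, not taken from the Lu.i schematic.

    # Leaky integrate-and-fire neuron, simulated with forward Euler steps.
    # Membrane equation:  C * dV/dt = -(V - V_rest) / R + I(t)
    # When V crosses V_th the neuron "spikes" and V is reset.
    # Component values are illustrative, not the ones on the Lu.i board.

    C, R = 100e-9, 1e6                      # capacitance (F) and leak resistance (ohms)
    V_rest, V_th, V_reset = 0.0, 1.0, 0.0   # volts
    dt, T = 1e-4, 0.5                       # time step and total simulated time (s)

    V = V_rest
    spikes = []
    for step in range(int(T / dt)):
        t = step * dt
        I = 1.5e-6 if t > 0.1 else 0.0      # 1.5 uA step input switched on at 100 ms
        V += (-(V - V_rest) / R + I) * dt / C
        if V >= V_th:                       # threshold crossed: spike and reset
            spikes.append(t)
            V = V_reset

    if spikes:
        print(f"{len(spikes)} spikes, the first at {spikes[0]*1e3:.0f} ms")
    else:
        print("no spikes")

Swap the constant step input for pulses arriving from other cells and you have, in software, roughly what the purple PCBs do in analog hardware.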

Two neuron-shaped PCBs exchanging signals

A couple of MCP6004 op-amps implement this model, with an LM339 comparator acting as the threshold detector. The neuron’s inputs are generated by electronic synapses made from logic-level MOSFETs. These circuits route signals between different neurons and can be manually set to either source or sink current, thereby increasing or decreasing the neuron’s voltage level.

All of this is built onto a neat purple PCB in the shape of a nerve cell, with external connections on the tips of its dendrites. The neuron’s internal state is made visible by an LED bar graph, giving the user an immediate feel for what’s going on inside the network. Multiple neurons can be connected together to form reasonably complex networks that can implement things like oscillators or logic functions, examples of which are shown on the project’s GitHub page.

The Lu.i project is a great way to teach the basics of neuroscience, turning dry differential equations into a neat display of signals racing around a network. Neurons are fascinating things that we’re learning more about every day, enabling things like brain-computer interfaces and neuromorphic computing.

Liquid Neural Networks Do More With Less

[Ramin Hasani] and colleague [Mathias Lechner] have been working with a new type of artificial neural network called a liquid neural network, and presented some of their exciting results at a recent TEDxMIT event.

Liquid neural networks are inspired by biological neurons and implement algorithms that remain adaptable even after training. [Hasani] demonstrates a machine vision system that uses a liquid neural network to steer a car and keep it in its lane. The system performs quite well using only 19 neurons, dramatically fewer than in the huge models we’ve come to expect. Furthermore, an attention map shows that the system attends to particular parts of the visual field in much the same way a human driver would.

[Mathias Lechner] and [Ramin Hasani]
The typical scaling law of neural networks suggests that accuracy improves with larger models, which is to say, more neurons. Liquid neural networks may break this law by showing that scale is not the whole story. A smaller model can be computed more efficiently, and a compact model also improves accountability, since decision activity is easier to locate within the network. Surprisingly, liquid neural networks can also offer better generalization, robustness, and fairness.

A liquid neural network can implement synaptic weights using nonlinear probabilities instead of simple scalar values. The synaptic connections and response times can adapt based on sensory inputs to more flexibly react to perturbations in the natural environment.
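To make “adaptable after training” slightly more concrete, the liquid time-constant idea can be caricatured in a few lines: the unit’s decay rate isn’t fixed but depends on its current input, so the same trained parameters respond differently as conditions change. The sketch below is a loose, single-neuron reading of that idea, not the published formulation or [Hasani]’s code; all values are invented, and in the full model the gate also depends on the state itself.

    # Loose single-neuron sketch of a "liquid" (input-dependent) time constant.
    # State update (our reading of the idea):
    #   dx/dt = -(1/tau + f(I)) * x + f(I) * A
    # The effective decay rate 1/tau + f(I) depends on the current input, so how
    # quickly the unit forgets changes with what it is seeing at the moment.
    # Weights and constants here are invented for illustration.

    import math

    def f(I, w=2.0, b=-1.0):
        """Input-dependent gate: a simple sigmoid of the input."""
        return 1.0 / (1.0 + math.exp(-(w * I + b)))

    def step(x, I, tau=1.0, A=1.0, dt=0.01):
        """One Euler step of the liquid-time-constant-style update."""
        gate = f(I)
        return x + (-x / tau - gate * x + gate * A) * dt

    x = 0.0
    for t in range(500):
        I = 1.0 if t < 250 else 0.0   # input present for the first half, then removed
        x = step(x, I)
    print(f"state after input burst and decay: {x:.3f}")

The real networks learn the gate’s parameters and wire many such units together; see the TEDxMIT talk and the group’s papers for the actual equations.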

We should probably expect the operational gap between biological neural networks and artificial neural networks to continue to close and blur. We’ve previously covered wetware examples of building neural networks with actual neurons, as well as ever-advancing brain-computer interfaces.

Continue reading “Liquid Neural Networks Do More With Less”

Chatting With Local AI Moves Directly In-Browser, Thanks To Web LLM

Large language models (LLMs) are at the heart of natural-language AI tools like ChatGPT, and Web LLM shows it is now possible to run an LLM directly in a browser. Just to be clear, this is not a browser front end talking via API to some server-side application. This is a client-side LLM running entirely in the browser.

The ability to run an LLM (natural language AI) directly in-browser means more ways to implement local AI while enjoying GPU acceleration via WebGPU.

Running an AI system like an LLM locally usually leverages the computational abilities of a graphics card (GPU) to accelerate performance. This is true when running an image-generating AI system like Stable Diffusion, and it’s also true when implementing a local copy of an LLM like Vicuna (which happens to be the model implemented by Web LLM). The thing that made Web LLM possible is WebGPU, whose release we covered just last month.

WebGPU provides a way for an in-browser application to talk to a local GPU directly, and it sure didn’t take long for someone to get the idea of using that to get a local LLM to run entirely within the browser, complete with GPU acceleration. This approach isn’t just limited to language models, either. The same method has been applied to successfully create Web Stable Diffusion as well.

It’s a fascinating (and fast) development that opens up new possibilities and, hopefully, gives people some new ideas. Check out Web LLM’s GitHub repository for a closer look, as well as access to an online demo.

A small speaker with an LCD showing chatbot responses

AI-Powered Speaker Is A Chatbot You Can Actually Chat With

AI-powered chatbots are pretty cool, but most still require you to type your question on a keyboard and read an answer from a screen. It doesn’t have to be like that, of course: with a few standard tools, you can turn a chatbot into a machine that literally chats, as [Hoani Bryson] did. He decided to make a standalone voice-operated ChatGPT client that you can actually sit next to and have a conversation with.

The base of the project is a USB speaker, to which [Hoani] added a Raspberry Pi, a Teensy, a two-line LCD and a big red button. When you press the button, the Pi listens to your speech and converts it to text using the OpenAI voice transcription feature. It then sends the resulting text to ChatGPT through its API and waits for its response, which it turns into sound again through the eSpeak speech synthesizer. The LCD, driven by the Teensy, shows the current status of the machine and also provides live subtitles while the machine is talking.
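The project’s own code is written in Go (linked below), but the listen-ask-speak loop is simple enough to sketch in a few lines of Python. The model names, the WAV file handling, and the use of the official openai package are our assumptions for illustration; [Hoani]’s implementation differs in the details.

    # Rough sketch of the button-press -> transcribe -> ChatGPT -> speak loop.
    # Assumes the `openai` Python package, an OPENAI_API_KEY in the environment,
    # a WAV file recorded from the microphone, and eSpeak installed locally.
    # This is an illustrative outline, not [Hoani]'s Go implementation.

    import subprocess
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_once(wav_path: str) -> str:
        # 1. Speech to text with OpenAI's transcription endpoint (Whisper)
        with open(wav_path, "rb") as audio:
            transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

        # 2. Send the transcribed question to the chat completion API
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": transcript.text}],
        )
        return reply.choices[0].message.content

    def speak(text: str) -> None:
        # 3. Text back to speech with the local eSpeak synthesizer
        subprocess.run(["espeak", text], check=True)

    if __name__ == "__main__":
        answer = ask_once("question.wav")  # WAV captured after the big red button press
        print(answer)                       # shown as subtitles on the Teensy-driven LCD
        speak(answer)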

To spice up the AI box’s appearance, [Hoani] also added an LED ring which shows a spectrogram of the audio being generated. This small addition really makes the thing come alive, turning it into what looks like a classic Sci-Fi movie prop. Except that this one’s real, of course – we are actually living in the future, with human-like AI all around us.

All code, mostly written in Go, is freely available on [Hoani]’s GitHub page. It also includes a separate audio processing library called toot that [Hoani] wrote to help him interface with the microphone and do spectral analysis. Anyone with basic electronic skills can now build their own AI companion and talk to it – something that ham radio operators have been doing for a while.

Continue reading “AI-Powered Speaker Is A Chatbot You Can Actually Chat With”