Blind Camera: Visualizing A Scene From Its Sounds Alone

A visualization by the Blind Camera based on recorded sounds and the training data set for the neural network. (Credit: Diego Trujillo Pisanty)

When we see a photograph of a scene, we can likely imagine what sounds would go with it, but what if this gets inverted, and we have to imagine the scene that goes with the sounds? How close would we get to reconstructing the scene in our mind, when the biases of our upbringing and background threaten to render this a near-impossible task? This is essentially the focus of a project by [Diego Trujillo Pisanty] which he calls Blind Camera.

Based on video data recorded in Mexico City, a neural network created with TensorFlow was trained on an RTX 3080 GPU, using a dataset of frames from these videos, each paired with its associated sound. As a result, when the trained neural network is presented with a sound profile (the ‘photo’), it will attempt to reconstruct the scene from this input and its model, all of which has been adapted to run on a single Raspberry Pi 3B board.
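For the curious, here’s a minimal sketch of how such a sound-to-image model could be wired up in Keras. The spectrogram input, layer sizes, and names are our own illustrative assumptions, not [Diego]’s actual architecture:

```python
# Hypothetical sketch: map a log-mel spectrogram of a recorded sound to a
# small reconstructed image. All shapes and layer choices are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

def build_sound_to_image_model(spec_shape=(128, 128, 1)):
    audio_in = tf.keras.Input(shape=spec_shape)      # log-mel spectrogram
    x = layers.Conv2D(32, 3, strides=2, activation="relu")(audio_in)
    x = layers.Conv2D(64, 3, strides=2, activation="relu")(x)
    x = layers.Flatten()(x)
    z = layers.Dense(256, activation="relu")(x)      # compact "scene" code
    # Decode the latent code back into a 64x64 RGB image.
    x = layers.Dense(8 * 8 * 64, activation="relu")(z)
    x = layers.Reshape((8, 8, 64))(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    img_out = layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(audio_in, img_out)

model = build_sound_to_image_model()
model.compile(optimizer="adam", loss="mse")  # train on (spectrogram, frame) pairs
```

Trained only on Mexico City footage, a model like this can do nothing but remix Mexico City, which is exactly the point of the piece.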

However, since all the model knows is the sights and sounds of Mexico City, the resulting image will always be presented as a composite of scenes from this city. As [Diego] himself puts it: for the device, everything is a city. It is an excellent demonstration that not only are neural networks limited by their training data, but so too are we humans.

Continue reading “Blind Camera: Visualizing A Scene From Its Sounds Alone”

An electronic neuron implemented on a purple neuron-shaped PCB

Hackaday Prize 2023: Explore The Basics Of Neuroscience With This Electronic Neuron

Brains are the most complex systems in the universe, but their basic building blocks are surprisingly simple — the complexity arises from billions of neurons, axons and synapses working together. Simulating an entire brain therefore requires vast computing resources, but if it’s just a few cells you’re interested in, you don’t need much: a handful of op-amps and transistors will do the job, as [Sebastian Billaudelle] has demonstrated. He has designed an electronic neuron called Lu.i that does everything a real neuron does, in a convenient package suitable for educational use.

[Sebastian]’s neuron implements what’s known as the leaky integrate-and-fire model, first proposed by [Louis Lapicque] as a simple model for a neuron’s behavior. Basically, the neuron acts as an integrator that stores all incoming charge in a capacitor and generates a spiky output signal once its voltage reaches a certain threshold level. However, the capacitor is also slowly discharged, which means the neuron will only “fire” when it gets a strong enough input signal.
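To get a feel for the model before reaching for a soldering iron, here’s a minimal leaky integrate-and-fire simulation in Python. The time constant, threshold, and input currents are arbitrary values picked for illustration, not measurements from [Sebastian]’s circuit:

```python
# Minimal leaky integrate-and-fire simulation with assumed parameter values.
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Integrate the input, leak charge away, and spike at the threshold."""
    v = v_reset
    trace, spikes = [], []
    for i_in in input_current:
        # The leak pulls v back toward rest; the input charges the "capacitor".
        v += dt * (-v / tau + i_in)
        if v >= v_thresh:              # threshold crossed: fire and reset
            spikes.append(True)
            v = v_reset
        else:
            spikes.append(False)
        trace.append(v)
    return np.array(trace), np.array(spikes)

t = np.arange(0, 0.2, 1e-4)                      # 200 ms of simulated time
_, weak = simulate_lif(np.full_like(t, 20.0))    # saturates below threshold
_, strong = simulate_lif(np.full_like(t, 80.0))  # crosses threshold repeatedly
print(weak.sum(), strong.sum())                  # 0 spikes vs. a regular train
```

The weak input saturates below the threshold and never produces a spike, while the strong input yields a regular spike train; the op-amp integrator and comparator reproduce exactly this behavior in hardware.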

Two neuron-shaped PCBs exchanging signals
A couple of MCP6004 op-amps implement this model, with an LM339 comparator acting as the threshold detector. The neuron’s inputs are generated by electronic synapses made from logic-level MOSFETs. These circuits route signals between different neurons and can be manually set to either source or sink current, thereby increasing or decreasing the neuron’s voltage level.

All of this is built onto a neat purple PCB in the shape of a nerve cell, with external connections on the tips of its dendrites. The neuron’s internal state is made visible by an LED bar graph, giving the user an immediate feel for what’s going on inside the network. Multiple neurons can be connected together to form reasonably complex networks that can implement things like oscillators or logic functions, examples of which are shown on the project’s GitHub page.

The Lu.i project is a great way to teach the basics of neuroscience, turning dry differential equations into a neat display of signals racing around a network. Neurons are fascinating things that we’re learning more about every day, enabling things like brain-computer interfaces and neuromorphic computing.

Liquid Neural Networks Do More With Less

[Ramin Hasani] and colleague [Mathias Lechner] have been working with a new type of Artificial Neural Network called Liquid Neural Networks, and presented some of the exciting results at a recent TEDxMIT.

Liquid neural networks are inspired by biological neurons to implement algorithms that remain adaptable even after training. [Hasani] demonstrates a machine vision system that steers a car to perform lane keeping using a liquid neural network. The system performs quite well with only 19 neurons, far fewer than the massive models we’ve come to expect. Furthermore, an attention map helps visualize that the system attends to particular aspects of the visual field, quite similar to a human driver’s behavior.

[Mathias Lechner] and [Ramin Hasani]
The typical scaling law of neural networks suggests that accuracy improves with larger models, which is to say, more neurons. Liquid neural networks may break this law, showing that scale is not the whole story. A smaller model can be computed more efficiently, and a compact model improves accountability, since decision activity is more readily located within the network. Surprisingly, liquid neural networks can also show improved generalization, robustness, and fairness.

A liquid neural network can implement synaptic weights using nonlinear probabilities instead of simple scalar values. The synaptic connections and response times can adapt based on sensory inputs to more flexibly react to perturbations in the natural environment.
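In code, that idea can be reduced to a single liquid time-constant neuron, loosely following the form dx/dt = -(1/τ + f(x, I))·x + f(x, I)·A from [Hasani]’s papers, where the gate f both drives the state and modulates the effective time constant. All parameter values below are assumptions for illustration:

```python
# Sketch of a single liquid time-constant (LTC) neuron; parameters are
# arbitrary assumptions chosen only to make the dynamics visible.
import numpy as np

def ltc_step(x, i_in, dt=0.01, tau=1.0, A=1.0, w=2.0, b=-1.0):
    # The gate f is a bounded synaptic nonlinearity; note that it appears in
    # the decay term too, so the effective time constant follows the input.
    f = 1.0 / (1.0 + np.exp(-(w * i_in + b)))
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

x = 0.0
for t in range(500):
    i_in = 1.0 if 100 <= t < 300 else 0.0   # a simple step input
    x = ltc_step(x, i_in)
print(round(x, 3))  # the state relaxes toward a lower baseline after the step
```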

We should probably expect the operational gap between biological neural networks and artificial neural networks to continue to close and blur. We’ve previously covered wetware examples of building neural networks with actual neurons, as well as ever-advancing brain-computer interfaces.

Continue reading “Liquid Neural Networks Do More With Less”

Holographic Cellphones Coming Thanks To AI

Isaac Asimov foresaw 3D virtual meetings but gave them the awkward name “tridimensional personification.” While you could almost do this now with VR headsets and 3D cameras, it would be awkward at best. It is easy to envision conference rooms full of computer equipment and scanners, but an MIT student has a method that may do away with all that by using machine learning to simplify hologram generation.

As usual, though, the popular press may have gotten a little carried away. The key breakthrough here is that you can use TensorFlow to generate real-time holograms at a few frames per second, using consumer-grade processing power found in a high-end phone, from images with depth information, which some phones can also capture. There’s still the problem of displaying the hologram on the other side, which your phone can’t do. So any implication that you’ll download an app that enables hologram phone calls is hyperbole, and images of this are firmly in the realm of Photoshop.
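To make that concrete, here’s a hedged sketch of the general approach: a small, fully convolutional network that maps an RGB-D image to a per-pixel phase map for a phase-only hologram. This is emphatically not MIT’s actual network, just the shape of the idea:

```python
# Hypothetical sketch: RGB-D in, hologram phase map out. Layer counts and
# sizes are assumptions; the real MIT network differs.
import tensorflow as tf
from tensorflow.keras import layers

def build_rgbd_to_phase_net(h=192, w=192):
    rgbd = tf.keras.Input(shape=(h, w, 4))           # RGB plus a depth channel
    x = rgbd
    for _ in range(4):                               # shallow, so it stays fast
        x = layers.Conv2D(24, 3, padding="same", activation="relu")(x)
    # One output channel: a phase value in [-pi, pi] per pixel.
    phase = layers.Conv2D(1, 3, padding="same", activation="tanh")(x)
    phase = layers.Lambda(lambda p: p * 3.14159)(phase)
    return tf.keras.Model(rgbd, phase)

net = build_rgbd_to_phase_net()
net.summary()  # roughly 17k parameters: small enough for phone-class hardware
```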

Continue reading “Holographic Cellphones Coming Thanks To AI”

picture of a brambling (a small bird), with "BirdNET-Pi" written above it

Neural Network Identifies Bird Calls, Even On Your Pi

Recently, we stumbled upon the extensive effort that is the BirdNET research platform. BirdNET uses a neural network to identify birds by the sounds they make, and is a joint project between the Cornell Lab of Ornithology and the Chemnitz University of Technology. What strikes us is how featureful and accessible this project is for a variety of applications. No doubt, BirdNET is aiming to become a one-stop shop for identifying birds as they sing.

There are plenty of ways BirdNET can help you. Starting with likely the most popular option among us, there are iOS and Android apps – giving the microphone-enabled “smart” devices in our pockets a feature even the most app-averse hackers can respect. However, the BirdNET team also talks about bringing sound recognition to our browsers, Raspberry Pi and other SBCs, and even microcontrollers. We can’t wait for someone to bring BirdNET to an RP2040! The code’s open-source, the models are freely available – there’s hardly a use case one couldn’t cover with these.

Screenshot of the BirdNET-Pi interface, showing a chart of bird chirp occurrences, and a spectrogram below it
About that Raspberry Pi version! There’s a sister project called BirdNET-Pi – it’s an easy-to-install software package intended for the Raspberry Pi OS. Having equipped your Pi with a USB sound card, you can make it do 24/7 recording and analysis using a “lite” version of BirdNET. Then, you get a web interface you can log into and see bird sounds identified in real-time. Not just that – BirdNET-Pi also processes the sounds and creates spectrograms, keeps the recordings in a database, and can even send you notifications.
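Under the hood, the analysis loop boils down to something like the sketch below, using the generic TensorFlow Lite runtime. The model filename and tensor layout here are assumptions on our part, so consult the BirdNET-Pi repository for the real pipeline:

```python
# Rough sketch of the inference step on a Pi: feed 3-second audio chunks to a
# TFLite model. The model filename and tensor shapes are assumptions here.
import numpy as np
import tflite_runtime.interpreter as tflite

SAMPLE_RATE = 48000
CHUNK_SECONDS = 3

interpreter = tflite.Interpreter(model_path="birdnet_lite.tflite")  # hypothetical name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(chunk):
    """Run one 3 s mono float32 chunk through the model, return class scores."""
    samples = np.asarray(chunk, dtype=np.float32)
    samples = samples.reshape(inp["shape"])          # e.g. (1, 144000)
    interpreter.set_tensor(inp["index"], samples)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]

# In BirdNET-Pi, a loop like this runs 24/7 against the USB sound card input.
scores = classify(np.zeros(SAMPLE_RATE * CHUNK_SECONDS))
print(scores.argmax(), scores.max())
```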

The BirdNET-Pi project is open, too, of course. Better yet, the team emphasizes that everything stays fully local, unless you choose otherwise and decide to share your results with others. Many do make their BirdNET-Pi instances public, and there’s a lovely interactive map that shows bird sounds all across the world!

BirdNET is, undoubtedly, a high-effort project – and a shining example of what a dedicated research team can do with a neural network and an admirable goal in mind. For many of us who feel joy when we hear birds outside, it’s endearing to know that we can plug a USB sound card into our Pi and learn more about them – even if we can’t spot them or recognize them by sight just yet. We’ve covered bird sound recognition on microcontrollers before – also using machine learning.

A purple 3D-printed case with an LCD screen on the front and Pikachu on top

Avoid Repetitive Strain Injury With Machine Learning – And Pikachu

The humble mouse has been an essential part of the desktop computing experience ever since the original Apple Macintosh popularized it in 1984. While mice enabled user-friendly GUIs, thus making computers accessible to more people than ever, they also caused a significant increase in repetitive strain injuries (RSI). Mainly caused by poor posture and stress, RSI can lead to pain, numbness and tingling sensations in the hand and arm, which the user might only notice when it’s too late.

Hoping to catch signs of RSI before it manifests itself, [kutluhan_aktar] built a device that allows him to track mouse fatigue. It does so through two sensors: one that measures galvanic skin response (GSR) and another that performs electromyography (EMG). Together, these two measurements should give an indication of the amount of muscle soreness. The sensor readout circuits are connected to a Wio Terminal, a small ARM Cortex-M4 development board with a 2.4″ LCD.

However, calculating muscle soreness is not as simple as adding a few numbers together; in fact, the link between the sensor data and the muscles’ state of health is complicated enough that [kutluhan] decided to train a TensorFlow artificial neural network (ANN) on stress levels observed in real life. The network ran on the Wio while he used the mouse, pressing buttons to indicate the amount of stress he experienced. After a few rounds of training, he ended up with a network that reached an accuracy of more than 80%.
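For a sense of scale, a classifier of that sort can be tiny. Here’s a hedged Keras sketch, with an assumed two-value feature layout and made-up class labels, standing in for [kutluhan]’s actual network:

```python
# Hedged sketch of a small stress classifier: GSR and EMG readings in, a
# soreness class out. Feature layout and class count are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 3  # e.g. relaxed / tense / sore -- assumed labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),                # [gsr_reading, emg_reading]
    layers.Dense(16, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Real labels would come from the user pressing buttons to self-report stress.
X = np.random.rand(200, 2).astype("float32")   # placeholder sensor samples
y = np.random.randint(0, NUM_CLASSES, 200)
model.fit(X, y, epochs=20, verbose=0)
print(f"training accuracy: {model.evaluate(X, y, verbose=0)[1]:.2f}")
```

After training on real labels instead of the placeholder data, a network this small can be converted to TensorFlow Lite to run on a microcontroller-class board like the Wio.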

[kutluhan] also designed a rather neat 3D printed enclosure to house the sensor readout boards as well as a battery to power the Wio Terminal. Naturally, the case was graced by a 3D rendition of Pikachu on top (get it? a mouse Pokémon that can paralyze its opponents!). We’ve seen [kutluhan]’s fondness for Pokémon-themed projects in his earlier Jigglypuff CO2 sensor.

Although the setup with multiple sensors doesn’t seem too practical for everyday use, the Mouse Fatigue Estimator might be a useful tool to train yourself to keep good posture and avoid stress while using a mouse. If you also use a keyboard (and who doesn’t?), make sure you’re using that correctly as well.

Continue reading “Avoid Repetitive Strain Injury With Machine Learning – And Pikachu”

Researchers Build Neural Networks With Actual Neurons

Neural networks have become a hot topic over the last decade, put to work on jobs from recognizing image content to generating text and even playing video games. However, these artificial neural networks are essentially just piles of maths inside a computer, and while they are capable of great things, the technology hasn’t yet shown the capability to produce genuine intelligence.

Cortical Labs, based in Melbourne, Australia, has a different approach. Rather than rely solely on silicon, their work involves growing real biological neurons on electrode arrays, allowing them to be interfaced with digital systems. Their latest work has shown promise that these real biological neural networks can be made to learn, according to a pre-print paper that has yet to go through peer review.
Continue reading “Researchers Build Neural Networks With Actual Neurons”