Pong In A Petri Dish: Teasing Out How Brains Work

Experimental setup for the EAP hydrogel free energy principle test. (Credit: Vincent Strong et al., Cell Reports Physical Science, 2024)

Of the many big, unanswered questions in this Universe, the ones pertaining to the functioning of biological neural networks are probably among the most intriguing. From the lowliest neurally gifted creatures to us brainy mammals, neural networks allow us to learn, to predict and adapt to our environments, and sometimes even to stand still and puzzle over how any of this works. Such puzzling has led to a number of theories, one of which a team of researchers recently put to the test, as published in Cell Reports Physical Science. The focus here is on Bayesian approaches to brain function, specifically the free energy principle, which postulates that a neural network acts as an inference engine, seeking to minimize the difference between its sensory inputs (i.e. the world as it is perceived) and its internal model of that world.
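One way to picture the free energy principle in miniature, purely as a toy sketch of prediction-error minimization and not the formalism used in the paper, is an internal estimate that gets nudged toward noisy observations until the mismatch (the "surprise") shrinks:

```python
# Toy predictive-coding loop: an internal estimate mu is nudged toward
# noisy observations by gradient descent on the prediction error.
# Illustrative only; the paper's free energy formulation is richer than this.
import random

hidden_state = 3.0   # the "world" the model is trying to infer
mu = 0.0             # internal model's current estimate
learning_rate = 0.1

for step in range(100):
    observation = hidden_state + random.gauss(0, 0.5)  # noisy sensory input
    prediction_error = observation - mu                # mismatch with the model
    mu += learning_rate * prediction_error             # update to reduce mismatch

print(f"final estimate: {mu:.2f} (true value: {hidden_state})")
```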

This is where electroactive polymer (EAP) hydrogel comes into play, as it features free ions that can migrate through the hydrogel in response to inputs. In the experiment, these inputs encode the ball position in a game of Pong. Much like experiments involving biological neurons, the hydrogel is stimulated via electrodes (in a 2×3 grid, matching the 2×3 grid of the game world), with other electrodes serving as outputs. The idea is that over time the hydrogel will ‘learn’ to optimize its outputs through ion migration, so that it ‘plays’ the game better, which should be reflected in the scores (i.e. the rally length).
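To make the input side concrete, here is a hypothetical sketch of how a ball position in a 2×3 game world might be mapped onto a matching 2×3 electrode grid; the function name, drive levels, and encoding are our own illustration, not details from the paper:

```python
# Hypothetical encoding of the Pong ball position onto a 2x3 electrode grid:
# the electrode corresponding to the ball's cell is driven, the rest stay idle.
# Names and values are illustrative, not taken from the paper.
ROWS, COLS = 2, 3

def stimulation_pattern(ball_row: int, ball_col: int) -> list[list[float]]:
    """Return a 2x3 matrix of drive levels: 1.0 at the ball's cell, 0.0 elsewhere."""
    grid = [[0.0] * COLS for _ in range(ROWS)]
    grid[ball_row][ball_col] = 1.0
    return grid

# Example: ball in the top-right cell of the game world.
for row in stimulation_pattern(0, 2):
    print(row)
```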

Based on the results, some improvement in rally length can be observed, which the researchers present as statistically significant. This would imply that the hydrogel displays active inference and memory. Additional tests with incorrect inputs resulted in a marked decrease in performance. Even so, many questions remain about whether this truly constitutes emergent memory, and whether it validates the free energy principle as a Bayesian approach to understanding biological neural networks.

To the average Star Trek enthusiast, the concept of hydrogels, plasmas, and the like displaying the inklings of intelligent life will probably seem familiar, and for good reason. At this point, we do not have a complete understanding of the operation of the many billions of neurons in our own brains. Doing a bit of prodding and poking at some hydrogel and similar substances in a dish might be just the kind of thing we need to get some fundamental answers.

Ecological System Dynamics For Computing

Some of you may remember that the ship’s computer on Star Trek: Voyager contained bioneural gel packs. Researchers have taken us one step closer to a biocomputing future with a study on the potential of ecological systems for computing.

Neural networks are a big deal in the world of machine learning, and it turns out that ecological dynamics exhibit many of the same properties. Reservoir Computing (RC) is a special type of Recurrent Neural Network (RNN) that feeds inputs into a fixed-dynamics reservoir black box with training only occurring on the outputs, drastically reducing the computational requirements of the system. With some research now embodying these reservoirs into physical objects like robot arms, the researchers wanted to see if biological systems could be used as computing resources.
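To make the reservoir idea concrete, here is a minimal echo state network sketch in NumPy. This is a generic toy under our own assumptions, not the setup from the study: the reservoir weights are random and never trained, and only a linear readout is fitted on the recorded reservoir states.

```python
# Minimal echo state network: a fixed random reservoir is driven by the input
# signal, and only the linear readout is trained (here via ridge regression).
# Generic illustration of reservoir computing, not the paper's ecological setup.
import numpy as np

rng = np.random.default_rng(0)
n_reservoir, leak, ridge = 200, 0.3, 1e-6

# Fixed, untrained weights: input -> reservoir and reservoir -> reservoir.
W_in = rng.uniform(-0.5, 0.5, size=n_reservoir)
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
signal = np.sin(0.1 * np.arange(1000))
X = run_reservoir(signal[:-1])   # reservoir states driven by the input
y = signal[1:]                   # target: the next sample

# Train only the readout; the reservoir itself stays untouched.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

In the ecological version, the randomly wired reservoir above is replaced by the population dynamics themselves, with temperature as the input and population measurements standing in for the reservoir states.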

Using both simulated and real populations of the ciliate Tetrahymena thermophila responding to temperature stimuli, the researchers showed that ecological system dynamics have the “necessary conditions for computing (e.g. synchronized dynamics in response to the same input sequences) and can make near-future predictions of empirical time series.” Performance is currently lower than other forms of RC, but the researchers believe this will open up an exciting new area of research.

If you’re interested in some other experiments in biocomputing, check out these RNA-based logic gates, this DNA-based calculator, or this fourteen-legged state machine.

Researchers Build Neural Networks With Actual Neurons

Neural networks have become a hot topic over the last decade, put to work on jobs from recognizing image content to generating text and even playing video games. However, these artificial neural networks are essentially just piles of maths inside a computer, and while they are capable of great things, the technology has yet to show it can produce genuine intelligence.

Cortical Labs, based down in Melbourne, Australia, has a different approach. Rather than rely solely on silicon, their work involves growing real biological neurons on electrode arrays, allowing them to be interfaced with digital systems. Their latest work has shown promise that these real biological neural networks can be made to learn, according to a pre-print paper that is yet to go through peer review.