A group of developers called [OpenWorm] have mapped the 302 neurons of the roundworm Caenorhabditis elegans and created a virtual neural network that can respond to the kinds of problems a worm encounters. Which, when you think about it, aren’t much different from those a floor-crawling robot would be confronted with.
In a demo video released by one of the project’s founders, [Timothy Busbice], the network is used to control a small Lego rover equipped with a forward sonar sensor. The robot senses a wall before hitting it and determines an appropriate response, which may be to stop, back up, or turn. This is all pretty fantastic when you compare these 302 neurons and their connections to any code you’ve ever written to accomplish the same task! It might be a much more complex route to the same outcome, but it’s uniquely organic… which makes watching the little Lego bot fascinating; its stumbling around even looks more like thinking than executing.
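For a sense of the plumbing involved, here’s a minimal sketch (with a made-up threshold and a stubbed-out network – this is not the OpenWorm code) of how a forward sonar ping might be turned into a sensory stimulus and a motor response:

```python
# Illustrative sketch only: the real project propagates activation through
# all 302 simulated neurons; here the "network" is reduced to a stub.

def network_step(nose_touch):
    """Stub for the simulated connectome: returns (left, right) motor drive."""
    if nose_touch:
        return (-0.5, 0.3)   # asymmetric drive: back up while turning
    return (1.0, 1.0)        # nothing ahead: cruise forward

def respond(distance_cm, obstacle_cm=20.0):
    """Treat a close sonar reading as 'nose touch' and pick a response."""
    left, right = network_step(nose_touch=(distance_cm < obstacle_cm))
    if left > 0 and right > 0:
        return "forward"
    if left < 0 and right < 0:
        return "reverse"
    return "turn"
```

In the actual demo, the sonar reading stimulates the worm’s anterior touch neurons and the behavior emerges from the network rather than from an if/else.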
I feel obligated to bring up the implications of this project. Since we’re all thinking about it now, let’s imagine the human brain similarly mapped and able to simulate complex thought processes. If we can pull this off one day, not only will we learn a lot more about how our squishy grey hard drives process information, artificial intelligence will also improve by leaps and bounds. An effort to do this, called the Human Connectome Project, is already underway; however, since there are a few more connections to map than in the C. elegans brain, it’s a feat that will take a while yet.
The project is called “open”worm, which of course means you can download the code from their website and potentially dabble in neuro-robotics yourself. If you do, we want to hear about your wormy brain bot.
yadda yadda ‘skynet’ yadda yadda
Ok, I’ll bite. We’ll send photos.
This is seriously awesome!
I can’t really find a simple explanation of the neural network on the linked pages (mostly lots of links to other pages there); I wonder, is it entirely deterministic? Are real neurons deterministic? I’d think physical neurons would be somewhat probabilistic, considering they work via complex chemical processes, which, as far as I know, are probabilistic at a small scale.
Also, how does this network “implement” short-term memory? Does the previous state of each neuron influence the “next” state, or are the neurons “stateless”?
I know something (but not all that much) about “conventional” neural networks used in machine learning, but it seems like this is something quite different; what I get from the FAQ is that they try to accurately model the actual processes going on inside real neurons.
If anyone can find a writeup that is understandable to people (like me) without any background in biology, I would be very interested!
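On the statefulness question: one common way simulated neurons carry short-term state is the leaky integrate-and-fire model, where each neuron accumulates input into a membrane potential that decays (leaks) over time and fires when it crosses a threshold. A purely illustrative sketch, not necessarily what OpenWorm uses:

```python
class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron: deterministic, but stateful.
    The membrane potential carries information between time steps."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each step
        self.potential = 0.0      # the neuron's "short-term memory"

    def step(self, input_current):
        """Integrate one input; return True if the neuron spikes."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False
```

Fully deterministic, yet the potential carried between steps acts as a crude short-term memory: the same input can produce different outputs depending on what came before.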
The chemical processes inside the neuron are big enough that random noise doesn’t have a significant impact. So, for all practical purposes they are deterministic.
Except that many neurons, especially in feedback loops, behave more or less chaotically, so small variations tend to amplify fast – like how it’s almost impossible to predict the 10th bounce of a real billiard ball, because small differences in the table could send it anywhere.
In cellular automata, such as the Game of Life, it’s been deduced that the information throughput or “computational power” of the system is maximized when the rules almost but not quite tip the system into chaotic behaviour, and so it’s postulated that the “thinking power” of a neural network as well is maximized by teetering it on the brink of breaking down into a chaotic mess, or even alternating between chaos and order.
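For reference, the Game of Life rules mentioned above fit in a few lines; birth on exactly 3 live neighbours and survival on 2 or 3 is what puts the system near that ordered/chaotic boundary. A minimal sketch:

```python
from collections import Counter

def life_step(alive):
    """One Game of Life step. `alive` is a set of (x, y) live cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}
```

Tweaking those birth/survival numbers is exactly the kind of rule change that tips the automaton into frozen order or boiling chaos.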
Measurements from real biological brains support this idea. Real brain waves and the activation patterns of brain-like cellular automata brought to the brink of chaos seem to behave the same way when the simulation is disturbed with random noise. It’s as if the brain alternates between a calm phase where everything works deterministically and neatly, and a disordered chaotic phase where everything gets shuffled around, and the “thinking” seems to be done in the transition from one to the other.
So are you suggesting that psychological disorders among humans may be a simple side effect of a system that operates best on the edge?
I don’t think that they’re the same kind of chaos, but that’s a really interesting idea, to be sure.
Many of the most creative people were also insane. (Van Gogh, Byron, etc.)
Or just where on the edge they are. Or how long the chaotic state lasts. Or spreads. Many variables, but a very interesting avenue I’m sure will generate a few PhD theses.
We don’t have billions of neurons so we can have our actions driven by chaos. If that were a good strategy, it could be implemented with a much smaller brain.
Question is: does it need memory? Perhaps such a simple life form just works with try-success/error-repeat?
No memory in the sense of storing bits and bytes. The robot works entirely from how the simulated neurons are wired (the C. elegans connectome).
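To illustrate that idea (with invented wiring, not the real connectome): each update, activation simply flows along fixed weighted connections from stimulated neurons toward motor neurons; the only “program” is the wiring table itself.

```python
# Toy wiring table standing in for the connectome. All names are made up.
WIRING = {
    "sensor_touch": [("inter_1", 1.0)],
    "inter_1": [("motor_left", 0.5), ("motor_right", -0.5)],
}

def tick(stimulated):
    """Propagate one step of activation along the fixed wiring."""
    activation = {}
    for neuron in stimulated:
        for target, weight in WIRING.get(neuron, []):
            activation[target] = activation.get(target, 0.0) + weight
    return activation
```

Nothing is written back anywhere between ticks; behavior comes entirely from the structure of the graph.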
I know this is a pretty shameless plug, but I’m working on a project related to neurons and pretty recently came from the same place as you–minimal biology background, wanted to understand high-level generic neuron functionality. My attempt at a 5-minute explanation: https://www.youtube.com/watch?v=cXKlhdLGdmc
Ah.. but does it pass the “Caenorhabditis elegans” Turing test… are its decisions indistinguishable from a real “Caenorhabditis elegans” I wonder.
Damn, that made me laugh. I do some work with C. elegans, and they actually do have individual “personalities”, so the remark really hit home.
Amazing. It is surprising to me that creatures with such limited neural capacity would have distinct personalities.
And here I was previously amazed that my aquarium fish, which have much more complex nervous systems, show distinct personalities…
Even bacteria in a monoxenic culture have a degree of “individuality”. My intuition is that simple creatures are more influenced by their environment (a bacterium’s particular fate is largely dictated by its environment), whereas more complex creatures are “individual” largely by virtue of mechanisms that arise from their overall complexity. In fact, complex organisms generally feature mechanisms, such as sexual reproduction, that actually ensure variety.
Of course this is a ridiculous oversimplification, and an unqualified one at that.
“Of course this is a ridiculous oversimplification, and an unqualified one at that.”
Perhaps, but what you say makes sense to me. Thanks for sharing.
Check out this excellent (and growing) book on neural nets, with code on GitHub:
https://github.com/mnielsen/neural-networks-and-deep-learning
It’s 302 neurons and 7000 neural connections (synapses).
If Moore’s law continues, we will have the computing capacity to run a full brain simulation in real time by ~2035. We’ll be able to do it sooner by running it at slower speeds, or by omitting things like the visual cortex. It’s harder to predict when it might be possible to create an AI from scratch, but that could come sooner.
Either way, AI is coming. Although skynet is a dramatization, there’s a risk, however large or small, that such a disruption to society will be harmful to us as a species. I’ve been reading through Nick Bostrom’s “Superintelligence” and he exhaustively lays out what the risks might be, and what humanity might do to survive the invention of a rapidly improving AI.
Moore’s law is unlikely to continue much beyond current levels of miniaturization because chip makers are starting to run up against issues at the molecular level. As it is, some of the connections in today’s ICs are only a small number of atoms wide.
I am of the opinion that future increases in computing performance will come from parallel processing and advances in computer types (quantum, optical, etc.).
Moore’s law is so vaguely defined that one can arbitrarily change what it measures, so it’s not a question of feature size. One can claim Moore’s law continues by stacking chips to add more transistors, which is then assumed to increase the computing power of the chip linearly, which is then assumed to mean it will achieve the computing power of a brain, which is then assumed to mean it will perform the same function as a brain.
Well, if we assume all of that is true and will happen, the problem remains that the power consumption of a computer with the equivalent power of a human brain today is about a billion times greater than that of a brain. In 20 years, i.e. by 2035, at a rate of one halving of power requirements every 18 months, the computer that emulates a human brain in real time will still consume as much power as 10 million real brains.
Running just one of these artificial brains would require about 250 MW, or roughly twice the power required to run the Large Hadron Collider at CERN.
The thing is:
The fastest supercomputer to date, Tianhe-2, has a theoretical peak computing power of 33.86 PFLOPS.
Ray Kurzweil estimated back in the day that the computing power of a brain is on the order of 20 PFLOPS.
If a linear increase in computing power alone were enough to make a true AI possible, we should already have one, or Kurzweil and the futurists were all off by at least an order of magnitude.
The brain isn’t just measured by the number of operations. You also need to have the communication bandwidth to transfer massive amounts of data from neuron to neuron. That’s going to be hard to emulate with Xeons across a room, connected by a network cable. And of course, you need the right kind of programming.
We don’t even know what the architecture of the brain is, beyond, “some parts are extremely parallel” and “yeah, data organization is really weird based on references to other objects. It’s sorta like a database but entirely not like a database”. Estimating the ‘compute power’ of the brain is severely misguided.
Which version of Moore’s law? The 12, 18, or 24 month version? Or some as of yet re-defined version when the previous one stops working and they need to re-re-adjust it to keep up the show?
The problem with the Moore argument is, that while we might have the equivalent computing power in numbers, the numbers themselves are meaningless. Are two cars faster than one car? Of course not. It’s the way in which the memory and the processors are arranged, and how they actually work that determines whether they are able to simulate a human brain.
Sure, AIs are coming. But blindly extrapolating current success with Moore’s law is just stupid. In this case, the limiting factor is not hardware, so Moore’s law does not apply.
FurtherMoore, it is not that hard to make a neural network with a very large number of nodes/connections. Just constructing it will not give you an AI; it will give you artificial gibberish until you devise a way to teach it to react to certain inputs. In biology, that process is called growing up, and it has had millions of generations to perfect itself (to ‘good enough’, mind you).
Neural networks are a nice analogue of the natural solution, but you should not see them as magic to solve all your problems. Getting a meaningful output is very hard work.
My hunch is we need to evolve AIs. Trying to construct them directly – even with neural nets – is going to be a near-impossible task.
We need to devise suitable virtual environments that provide selective advantages for intelligence and communication skills. Humans got so intelligent almost certainly due to a few feedback loops – tool use and communication with other humans being the most likely influences.
In other words; We need to make a species of AI, not just 1 ;)
As momma always said, “if you don’t make an AI that will destroy the world, somebody else will and you won’t get the credit.”
It’s not humanity not surviving AI that’s the worry.
It’s humans using AI that’s the worry.
Long, long before we have truly sentient machines, we will have AI used as tools – and very dangerous tools.
Humans are deeply flawed creatures and we already have a society that seems to put psychopaths in some of the highest positions. The danger is from these people. The stock market is an easy example here.
Truly conscious AI, able to think and make its own decisions? It’s a gamble, but it’s one I’d take over people controlling them any day.
Could this be used with “bigger” robots, like the ones here:
http://easyrobotsimulator.com/
So .. someone had to ask: When do we see the “reproduction” path light up? :-)
Something like 99% of all C. elegans in a normal, wild-type population are hermaphrodites, so maybe there is a wanking network in there somewhere.
The other 1% are males, and they have a few more neurons, every single one of which is there for exactly one purpose ..
Sounds an awful lot like Homo Sapiens. :D
What about stimuli like “hungry”? If this is a copy of the worm’s brain, then it has to have “I am hungry” inputs that you could map to the battery supply. How would it look for food? And can it learn, or are the synapses too simplistic?
At the moment, I think that’s the whistling you may hear in that video or one of the others on the project.
I would think giving it an input that gets stronger as it gets closer to the charging station would work. You could use a simple RF beacon and just use the signal strength. You’d end up with noise and false positives, but that’s actually what would happen to it in real life. It’d be interesting to see it realize, “I’m not eating”, and leave. Use a Qi charging mat to do the actual charging, and have it treat that as being given food.
Yes, as another comment stated, I used a sound sensor to simulate the presence of food: once a certain decibel threshold is met, I stimulate some of the food-sensing sensory neurons. I would whistle, snap my fingers, etc. to reach the level and stimulate the neurons. My wife would call it, and of course when she did, thus stimulating the food sensory neurons, the robot would come to her – this was a little freakish even to me :-)
Quite the ambitious project, they might have wanted to start with something simpler, like the brain from a middle manager.
Why a middle manager? If you cut the ‘me me me’ neurons from a typical CEO you don’t have much more than a slugs brain either.
About the same as my teenage daughter.
Good breeding material, if I were you I’d use it.
THAT is pretty damn cool!
Two words come to mind when speaking of worms and robots: Earthworm Jim.
Meh, why not just train a few million neurons in a virtual environment that matches the robot and its environment, then download that network to the robot? What is the big deal about a worm brain when we can already do much more complex tasks on cheap hardware once the network has been evolved on bigger machines?
The “big deal” is that this “worm brain” (connectome) was scanned in from a real worm. Basically (simplifying things radically here) they froze the worm, then sliced it really thinly, imaging the neurons and their inter-connections – to build up the graph of the connectome.
This connectome is then used to build the neural network. Unfortunately, the connectome does not include the “weights” of the original living organism (no way to image that), so what is actually running can’t be said to be a digital representation of the original worm’s brain – rather, it’s an empty shell that can then be trained. I haven’t read the project details, but it might be possible to obtain the weight values from other C. elegans worms – effectively copying (or maybe transferring, if the worm doesn’t survive the operation?) the brain function of a worm to the connectome.
This is a “big deal” because, for the first time, we have effectively transferred the processing of a brain (albeit a simple one) from a biological substrate to a silicon substrate; in effect, we have done a mind upload. Certainly, it doesn’t likely have much practical value (or maybe it will – perhaps it could lead to better robotic vacuums?) – but if you can’t see the potential implications of this, you may need to think on it more.
“All this has happened before. All this will happen again.” ;)
Has anybody used these 1024-neuron chips?
http://www.cognimem.com/_docs/Datasheet/DS_CM1K.pdf
http://www.cognimem.com/_docs/Technical-Manuals/TM_CM1K_Hardware_Manual.pdf
They claim to be a “Pattern Recognition Chip with 1024 Neurons in Parallel” and are available on Digi-Key for 100 USD.
Could such a chip be used to emulate this roundworm brain?
How long until someone retrofits this into a Roomba? Substitute the dirt sensor for the olfactory sense, and you could use the feeding and navigation circuitry to clean your carpet. It could potentially be very effective, or at least entertaining.
And as a side note, is anyone else reminded of the lobsters from Accelerando?
” artificial intelligence will also improve by leaps and bounds. ”
Not quite sure. It will certainly make studying easier… but by themselves, aren’t these all just black boxes? We would have no more insight into their workings en masse than we have into humans’.
Also, I am not even sure we fully understand the neuron – I read an article a few months back about the brain having mechanical as well as biological and electrical workings. Are we really simulating everything a neuron does?
Nice project. However, I didn’t find a concrete explanation on the website of how the network and the Mindstorms robot work together. Does anyone know where I can get more information so that I can build a robot of my own?