When it comes to something as futuristic-sounding as brain-computer interfaces (BCI), our collective minds tend to zip straight to scenes from countless movies, comics, and other works of science fiction (including its more dystopian scenarios). Our mind’s eye fills with everything from the Borg and neural interfaces of Star Trek, to the neural recording devices with parent-controlled blocking features from Black Mirror, and of course the enslavement of the human race by machines in The Matrix.
And now there’s this Elon Musk guy, proclaiming that he’ll be wiring up people’s brains to computers starting next year, as part of this other company of his: Neuralink. Here the promises and imaginings are truly straight from the realm of sci-fi, ranging from ‘reading and writing’ to the brain, curing brain diseases and merging human minds with artificial intelligence. How much of this is just investor speak? Please join us as we take a look at BCIs, neuroprosthetics and what we can expect of these technologies in the coming years.
How to Interface with Biochemistry
The main issue that makes it so hard to interface computers and other devices with the biochemical structures which compose our bodies is that they’re fundamentally different. Whereas our devices are (generally) powered by electricity, our cells run on ATP (adenosine triphosphate). Much of our metabolism is devoted to producing more ATP, which powers just about every process in our bodies. Since our electronic devices are not biochemical in nature, they essentially speak a different language.
Nerve cells (neurons) are an interesting type of cell, in that they have evolved the ability to communicate using action potentials: electrical impulses generated by the flow of ions across the cell membrane. These impulses travel along the outgrowths of the cell body (axons and dendrites) to other neurons, where at junctions called synapses the signal is passed on chemically (neurotransmission), using special messenger molecules called neurotransmitters. Fortunately for us, this electrical signaling is something we can hack into with our electrical devices.
Much of BCI and neuroprosthetics technology revolves around recording these action potentials from neurons in the central nervous system (CNS, the brain and spinal cord) and the peripheral nervous system. Using electrically conductive probes we can measure the voltages they produce, amplifying these very weak signals until they become usable.
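To give a feel for the signaling described above, here is a minimal sketch of the classic leaky integrate-and-fire model, a textbook approximation of how a neuron's membrane voltage integrates input current and fires an action potential once a threshold is crossed. All parameter values are illustrative, not measured from any real neuron:

```python
# Toy leaky integrate-and-fire neuron: membrane voltage drifts toward its
# resting value, is driven up by input current, and "fires" (spikes) when
# it crosses a threshold. All parameters are illustrative round numbers.

def simulate_lif(input_current, dt=1e-4, v_rest=-70e-3, v_thresh=-50e-3,
                 v_reset=-75e-3, tau=20e-3, r_m=10e6):
    """Return the voltage trace (V) and spike times (s) for a current trace (A)."""
    v = v_rest
    trace, spikes = [], []
    for i, current in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input current.
        v += dt / tau * (v_rest - v + r_m * current)
        if v >= v_thresh:          # threshold crossed: register a spike
            spikes.append(i * dt)
            v = v_reset            # reset (hyperpolarize) after the spike
        trace.append(v)
    return trace, spikes

# A constant 2.5 nA input drives the model neuron to fire repeatedly
# over one simulated second (10,000 steps of 0.1 ms).
trace, spikes = simulate_lif([2.5e-9] * 10000)
```

Real neurons are vastly more complicated, but this is roughly the shape of signal a probe sitting next to one would pick up: a slowly varying voltage punctuated by sharp spikes.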
Patching in New Parts — We’ve Done It Before
The field of neuroprosthetics has been around for a long time, with the first cochlear implant placed in a patient at Stanford University in 1964. These devices are without a doubt the most successful example of a neuroprosthetic device: hundreds of thousands have been implanted in patients, largely restoring their ability to hear. It’s a life-changing thing for every recipient of a successful cochlear implant, and it seems obvious that we should look for other ways this technology can be used to help those in need.
These devices are fairly simple in concept: a number of microphones capture environmental sound, which a DSP chip processes and converts into the appropriate signals for the implanted part of the device. These signals are transmitted to the internal implant via inductive coupling and are used to stimulate the cochlear nerve, which the patient then perceives as sound.
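The DSP stage can be sketched in a few lines: split the incoming audio into frequency bands, estimate each band's energy, and map that to a stimulation level for the matching electrode. The band edges and level mapping below are purely illustrative, not taken from any real device:

```python
import numpy as np

# Toy sketch of a cochlear implant's DSP stage: one audio frame goes in,
# one stimulation level per electrode comes out. Band edges and the
# compression/quantization scheme are illustrative assumptions.

def frame_to_stimulation(frame, sample_rate, band_edges_hz, levels=256):
    """Map one audio frame to per-electrode stimulation levels (0..levels-1)."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    energies = []
    for lo, hi in zip(band_edges_hz, band_edges_hz[1:]):
        energies.append(spectrum[(freqs >= lo) & (freqs < hi)].sum())
    energies = np.array(energies)
    # Compress dynamic range (hearing is roughly logarithmic), then
    # quantize to the stimulator's discrete current levels.
    compressed = np.log1p(energies)
    if compressed.max() > 0:
        compressed = compressed / compressed.max()
    return (compressed * (levels - 1)).astype(int)

# A pure 1 kHz tone should mostly drive the electrode for the 700-1500 Hz band.
rate = 16000
t = np.arange(0, 0.02, 1.0 / rate)           # one 20 ms frame
tone = np.sin(2 * np.pi * 1000 * t)
edges = [100, 300, 700, 1500, 3000, 8000]    # five illustrative bands
print(frame_to_stimulation(tone, rate, edges))
```

Actual devices use more sophisticated filtering and stimulation strategies, but the principle is the same: the cochlea's frequency-to-place mapping is recreated electronically, one electrode per band.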
Calling neuroprosthetics ‘BCI’ would be somewhat inaccurate, however. The goal of neuroprosthetics is not to establish communication between a computer system and the brain, but merely the restoration of lost functionality, such as one’s hearing, vision, a functional arm or leg, or bridging the damaged part of a paraplegic person’s spinal cord. The presence of a computer system in this path (like the DSP in the cochlear implant) is merely there to enable this functionality, not as an end-goal by itself.
BCI is a much wider field, with many experimental projects and fundamental research that goes far beyond these practical implementations.
This Won’t Hurt a Bit: Neuralink’s Implant Technology
Brain-computer interfacing concerns itself primarily with intercepting the signals from neurons in the brain in order to map and understand the signaling. In order to accomplish this, probes have to be placed as close as possible to the neurons in the area which one intends to monitor. Three approaches are possible:
- non-invasive: the skin is not breached, purely external measurements are made.
- partially invasive: the device is placed on top of the brain itself, inside the skull and underneath the dura mater.
- invasive: the device is directly implanted into the brain tissue.
Obviously the invasive approach will yield the best results, as the measurements are being taken as close to the neurons as possible, allowing the sensor system to differentiate between small groups of neurons instead of averaging over groups of thousands of neurons or more. This is also the reasoning behind Neuralink’s work on different probes, details of which are shared in their recently published paper.
Neuralink developed so-called ‘threads’ which incorporate dozens of individual contacts along their length, with 32 contacts per thread being quoted. By inserting such a thread into the grey matter of the target area, they get readings from neurons along its entire length, at various depths. These threads are connected to the device that does the actual sampling. So far the system has only been tested in rats, where having the animal survive the experiment is of secondary concern. Ergo the device protrudes from the animal’s skull (there’s a photograph in Neuralink’s paper, for the less squeamish), with a USB-C port for easy data access and power delivery.
Assuming that Neuralink can shrink the device so that it slots easily within one’s skull, and make it wireless, this invasive procedure would then allow for readings from hundreds to thousands of locations within the brain. With the dozens to hundreds of threads suggested by Neuralink officials, one would get literally thousands to tens of thousands of readings.
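The arithmetic behind those numbers is straightforward, using the quoted 32 contacts per thread; the thread counts below are illustrative picks from the "dozens to hundreds" range, not figures from Neuralink:

```python
# Back-of-the-envelope recording-site counts for the thread approach:
# 32 contacts per thread (the quoted figure), times some number of
# threads. Thread counts here are illustrative, not from the paper.
CONTACTS_PER_THREAD = 32

for threads in (24, 96, 300):
    sites = threads * CONTACTS_PER_THREAD
    print(f"{threads:>4} threads -> {sites:>6} recording sites")
```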
The burning question then is of course, how useful are these readings?
Making Sense of the Data is Very Very Difficult
No matter how you collect the data, be it through an external electroencephalogram (EEG) or invasive sensors, the signals must be processed to work out what the brain is doing. Peaks in the recorded voltage indicate that one or more neurons near a probe experienced an action potential. Each such event is then interpreted in the context of its location within the brain: a spike in the motor cortex could mean that the person thought of moving their arm, for example.
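The first processing step, picking those peaks out of a noisy voltage trace, can be sketched with a simple thresholding scheme. The noise estimator used here (median absolute deviation divided by 0.6745) is a common choice in the spike-sorting literature; the threshold multiplier and the synthetic data are illustrative:

```python
import numpy as np

# Minimal spike detection: estimate the noise level of an extracellular
# voltage trace, set a threshold at a multiple of it, and report where
# the trace first crosses that threshold. Parameters are illustrative.

def detect_spikes(trace, threshold_sigmas=4.5):
    """Return sample indices where the trace crosses the spike threshold."""
    noise_sigma = np.median(np.abs(trace)) / 0.6745
    above = np.abs(trace) > threshold_sigmas * noise_sigma
    # Keep only the first sample of each crossing, not every sample above it.
    return np.flatnonzero(above & ~np.roll(above, 1))

# Synthetic trace: Gaussian background noise with three large "spikes".
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 5000)
for pos in (1000, 2500, 4000):
    trace[pos] += 12.0
print(detect_spikes(trace))
```

Real pipelines go much further (spike sorting assigns each detected spike to a specific neuron based on its waveform shape), but everything downstream starts from detections like these.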
The main issue here is that although we have a rough idea of where certain functionality is located in the brain, based on prior experiments (and on people suffering brain damage in accidents, such as the famous case of Phineas Gage), there’s still a lot we do not know. This is greatly complicated when moving from one person to the next: human beings are not perfect carbon copies of each other, and our brains continually rewire themselves. Pinning down exactly where a given function lives is therefore tricky.
Research teams at universities around the world have been trying to map spoken words from people who had electrodes embedded in their brains while undergoing surgery, or as part of epilepsy treatment. As an article in Science magazine covering this research notes, the task is far from easy. Even though the researchers could hear what the subjects were saying during the invasive monitoring of brain activity, attempts at decoding the brain signals only reached somewhere between 40% and 80% accuracy.
Such decoding relies on extended training sessions in which the recorded brain activity can be matched against the words actually spoken. Without that ground truth (when a person cannot speak at all, for instance), it would be nigh impossible to map vocalizations to specific brain patterns. This is also because speech isn’t produced by a single part of the brain, but by regions distributed throughout it, from the motor cortex to the language centers to areas involved in speech planning.
Essentially, we’re still only beginning to figure out what is needed to understand these signals which we are receiving from the probes embedded in the brains of animals and ourselves. We can already do amazing things with training and calibration, but we’re clearly not even close to the type of brain-computer interface we see in science fiction.
The Difference Between Hype and Science
Our understanding of the human brain is unfortunately rather limited. Although we have a basic understanding of how neurons work, and how they combine into larger networks, it’s only quite recently that we have begun to discover the structure of the networks that make up, for example, the outer part of the brain (cerebral cortex) where the higher-level functions of language and consciousness are thought to originate.
There are between 14 and 16 billion neurons in the cerebral cortex alone. Even if we focused our BCI efforts on just this part of the brain, that is still a staggering number of neurons to monitor. Set against the channel counts of Neuralink’s probes, the difference in scale makes it obvious that all we can do at this point is monitor groups of neurons and try to integrate and interpret their collective activity.
This is where one can objectively say that what Neuralink has achieved is an innovative approach to increasing the resolution of such embedded probe arrays, along with a very interesting-looking surgical robot to insert these arrays into brain tissue. This becomes apparent when reading the earlier linked paper as well: in the Discussion section, they refer to the system as ‘a research platform for use in rodents and serves as a prototype for future human implants’.
While there is a lot to look forward to in BCI research, many brilliant minds have worked in this field since the 1950s, and progress remains understandably slow given the complexity of what one wishes to achieve and the obvious ethical limits on research. Neuroprosthetics will likely see the most progress in the coming years.
When Musk mentions ‘merging humans with AIs’, one also has to take a step back into reality and realize that today’s artificial neural networks are hugely simplified models of what happens inside our skulls right now. True artificial intelligence, if it comes, would emerge either through sheer accident, or through a sudden leap in our collective knowledge of how biological brains work.
As much fun as it is to dream about this sci-fi future, the reality is that there is still a lot of hard, tedious science to be done before we can reach that future.