Neuromorphic Computing: What Is It And Where Are We At?

For the last hundred or so years, we as humanity have collectively been dreaming, thinking, writing, singing, and producing movies about a machine that could think, reason, and be intelligent in a similar way to us. Stories such as Samuel Butler’s “Erewhon,” published in 1872, Edgar Allan Poe’s “Maelzel’s Chess Player,” and the 1927 film “Metropolis” explored the idea that a machine could think and reason like a person, not in a magical or fantastical way. They drew from the automata of ancient Greece and Egypt and combined them with the notions of philosophers such as Aristotle, Ramon Llull, Hobbes, and thousands of others.

Their notions of the human mind led them to believe that all rational thought could be expressed as algebra or logic. Later, the arrival of circuits, computers, and Moore’s law led to continual speculation that human-level intelligence was just around the corner. Some have heralded it as the savior of humanity, while others portray a calamity in which a second intelligent entity rises to crush the first (humans).

The flame of computerized artificial intelligence has burned brightly a few times before, such as in the 1950s, 1980s, and 2010s. Unfortunately, both prior AI booms were followed by an “AI winter,” in which the field fell out of fashion after failing to deliver on expectations. These winters are often blamed on a lack of computing power, an inadequate understanding of the brain, or hype and over-speculation. In the midst of our current AI summer, most AI researchers focus on using the steadily increasing computer power available to increase the depth of their neural nets. Despite their name, neural nets are only loosely inspired by the neurons in the brain and share just surface-level similarities with them.

Some researchers believe that human-level general intelligence can be achieved by simply adding more and more layers to these simplified convolutional systems fed by an ever-increasing trove of data. This point is backed up by the incredible things these networks can produce, and they get a little better every year. However, despite the wonders deep neural nets produce, they still specialize and excel at just one thing. A superhuman Atari-playing AI cannot make music or reason about weather patterns without a human adding those capabilities. Furthermore, the quality of the input data dramatically impacts the quality of the net, and the ability to make inferences is limited, producing disappointing results in some domains. Some think that neural nets, recurrent or otherwise, will never gain the sort of general intelligence and flexibility that our brains offer.

However, some researchers are trying to create something more brainlike by, you guessed it, more closely emulating a brain. Given that we are in a golden age of computer architecture, now seems like the time to create new hardware. This type of hardware is known as neuromorphic hardware.

Engineers Develop A Brain On A Chip

Our abilities to multitask, to quickly learn complex maneuvers, and to instantly recognize objects even as infants are just some of the ways that human brains make use of our billions of synapses. Biologically, our brain requires fluid-filled cavities, nerve fibers, and numerous other cells and connections in order to function. This isn’t the case with a new kind of brain recently announced by a team of MIT engineers in Nature Nanotechnology: unlike a typical human brain, this new “brain-on-a-chip” fits on a piece of confetti.

When you take a look at the chip, it resembles a tiny metal carving more than any neurological organ. The chip’s design is based on memristors – silicon-based components that mimic the signal transmissions of synapses. A portmanteau of “memory” and “resistor,” they are passive circuit elements that maintain a relationship between the time integrals of current and voltage across an element. As the resistance varies, tiny read charges are able to access a history of applied voltage. This behavior arises from hysteresis and other non-linear properties of passive circuitry.
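For the curious, the textbook formulation (following Leon Chua’s original definition) makes the “time integrals” idea precise; this is the standard general relation, not anything specific to the MIT device:

\varphi(t) = \int_{-\infty}^{t} v(\tau)\,d\tau, \qquad q(t) = \int_{-\infty}^{t} i(\tau)\,d\tau, \qquad M(q) = \frac{d\varphi}{dq}

so that the device obeys v(t) = M(q(t))\,i(t): the effective resistance M depends on the entire history of charge that has flowed through the element, which is exactly the memory effect described above.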

These properties are best observed at the nanoscale, where they aren’t dwarfed by other electronic and field effects. A tiny positive electrode and a tiny negative electrode are separated by a “switching medium”, the space between the two electrodes. A voltage applied to one end causes ions to flow through the medium, forming a conduction channel to the other end. These ions make up the electrical signal transmitted through the circuit.
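To make that state-dependent behavior concrete, here is a minimal Python sketch of the well-known linear ion-drift memristor model (Strukov et al., 2008). The parameter values are illustrative assumptions for a generic device, not measurements from the MIT chip:

import numpy as np

# Illustrative, assumed parameters for a generic memristor -- not the MIT device.
R_ON, R_OFF = 100.0, 16000.0  # fully-formed / fully-dissolved channel resistance (ohms)
D = 10e-9                     # switching-medium thickness (m)
MU_V = 1e-14                  # ion mobility (m^2 / (V*s))

def simulate(v, dt, w0=0.5):
    """Linear ion-drift model: w in [0, 1] is the normalized length
    of the conduction channel grown through the switching medium."""
    w, i_out = w0, []
    for v_t in v:
        m = R_ON * w + R_OFF * (1.0 - w)    # memristance depends on channel state
        i_t = v_t / m                       # Ohm's law at this instant
        w += MU_V * R_ON / D**2 * i_t * dt  # ion drift grows/shrinks the channel
        w = min(max(w, 0.0), 1.0)
        i_out.append(i_t)
    return np.array(i_out)

t = np.linspace(0.0, 2.0, 2000)
v = np.sin(2.0 * np.pi * t)       # 1 V sine drive
i = simulate(v, dt=t[1] - t[0])   # plotting i against v traces a pinched hysteresis loop

The pinched current-voltage loop that falls out of this little simulation is the signature mentioned above: the same applied voltage produces different currents depending on how much charge has already flowed through the device.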

In order to fabricate these memristors, the researchers used alloys of silver for the positive electrode and copper alongside silicon for the negative electrode. They sandwiched an amorphous medium between the two electrodes and patterned the stack on a silicon chip tens of thousands of times to create an array of memristors. To train the memristors, they ran the chips through visual tasks: storing images and reproducing them until cleaner versions were produced. These devices join a growing body of research into neuromorphic computing – electronics that function similarly to the way the brain’s neural architecture operates.
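The payoff of arranging memristors in an array is that reading it performs an analog vector-matrix multiply – the core operation of a neural network – in one step, via Ohm’s law and Kirchhoff’s current law. A minimal Python sketch of that idea, with made-up conductance values standing in for programmed memristor states:

import numpy as np

# Hypothetical 3x3 crossbar: each entry is a memristor's programmed
# conductance in siemens (values are illustrative, not from the paper).
G = np.array([[1.0, 0.2, 0.1],
              [0.3, 0.9, 0.2],
              [0.1, 0.4, 0.8]]) * 1e-3

def crossbar_read(v_rows):
    """Each column current is the dot product of the row voltages with that
    column's conductances, so one read is a whole vector-matrix multiply."""
    return v_rows @ G  # currents (A) = voltages (V) x conductances (S)

v = np.array([0.2, 0.0, 0.5])  # read voltages applied to the rows
print(crossbar_read(v))        # resulting column currents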

The prospect of electronics capable of making instantaneous decisions without consulting other devices or the Internet spells the possibility of portable artificial intelligence systems. Though we already have software systems capable of simulating synaptic behavior, developing neuromorphic computing devices could vastly increase the capability of devices to perform tasks once thought to belong solely to the human brain.