Neuromorphic Computing: What Is It And Where Are We At?

For the last hundred and fifty or so years, collectively as humanity, we’ve been dreaming, thinking, writing, singing, and producing movies about a machine that could think, reason, and be intelligent in a similar way to us. Stories like Samuel Butler’s “Erewhon” (1872), Edgar Allan Poe’s “Maelzel’s Chess Player,” and the 1927 film “Metropolis” explored the idea that a machine could think and reason like a person, not in a magical or fantastical way. They drew from the automata of ancient Greece and Egypt and combined the notions of philosophers such as Aristotle, Ramon Llull, Hobbes, and thousands of others.

Their notions of the human mind led them to believe that all rational thought could be expressed as algebra or logic. Later, the arrival of circuits, computers, and Moore’s law led to continual speculation that human-level intelligence was just around the corner. Some have heralded it as the savior of humanity, while others portray a calamity in which a second intelligent entity rises to crush the first (humans).

The flame of computerized artificial intelligence has burned brightly a few times before, such as in the 1950s, 1980s, and 2010s. Unfortunately, both prior AI booms were followed by an “AI winter,” in which the field fell out of fashion after failing to deliver on expectations. These winters are often blamed on a lack of computing power, an inadequate understanding of the brain, or hype and over-speculation. In the midst of our current AI summer, most AI researchers focus on using the steadily increasing computing power available to increase the depth of their neural nets. Despite their name, neural nets are only inspired by the neurons in the brain and share just surface-level similarities with them.

Some researchers believe that human-level general intelligence can be achieved by simply adding more and more layers to these simplified convolutional systems, fed by an ever-increasing trove of data. This point is backed up by the incredible things these networks can produce, and they get a little better every year. However, whatever wonders deep neural nets produce, they still specialize and excel at just one thing. A superhuman Atari-playing AI cannot make music or reason about weather patterns without a human adding those capabilities. Furthermore, the quality of the input data dramatically impacts the quality of the net, and the ability to generalize from an inference is limited, producing disappointing results in some domains. Some think that recurrent neural nets will never gain the sort of general intelligence and flexibility that our brains offer.

However, some researchers are trying to create something more brainlike by, you guessed it, more closely emulating the brain. Given that we are in a golden age of computer architecture, now seems the time to create new hardware. This type of hardware is known as neuromorphic hardware.

Continue reading “Neuromorphic Computing: What Is It And Where Are We At?”

A Gesture Recognizing Armband

Gesture recognition usually involves some sort of optical system watching your hands, but researchers at UC Berkeley took a different approach. Instead, they monitor the electrical signals in the forearm that control the muscles, and use a machine learning model to recognize hand gestures.

The sensor system is a flexible PET armband with 64 electrodes screen-printed onto it in silver conductive ink, attached to a standalone AI processing module. Since everyone’s arm is slightly different, the system needs to be trained for a specific user, but that also means the specific electrical signals don’t have to be isolated, since the model learns to recognize each user’s patterns.
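The paper itself is paywalled, so we can’t show the actual pipeline, but the general shape of per-user training on multi-channel EMG is straightforward. Here’s a minimal sketch in Python using scikit-learn, where the window size and the feature (mean absolute value per electrode) are our own assumptions for illustration, not details from the paper:

```python
# Minimal sketch of per-user EMG gesture training (hypothetical pipeline,
# not the one from the paper). Assumes raw EMG windows from 64 electrodes.
import numpy as np
from sklearn.linear_model import SGDClassifier

N_ELECTRODES = 64
WINDOW = 200  # samples per window; an assumption, not from the paper

def features(window):
    """Mean absolute value per electrode -- a common, simple EMG feature."""
    return np.mean(np.abs(window), axis=0)  # shape: (64,)

def train_user_model(recordings, labels):
    """recordings: list of (WINDOW, 64) arrays; labels: gesture IDs."""
    X = np.stack([features(w) for w in recordings])
    y = np.asarray(labels)
    clf = SGDClassifier(loss="log_loss")  # supports incremental updates later
    clf.fit(X, y)
    return clf
```

Because the model is trained per user, it simply learns whatever patterns that particular arm produces; no channel-by-channel signal isolation is required.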

The challenging part is that the patterns don’t remain constant over time, and will change depending on factors such as sweat, arm position, and even biological changes. To deal with this, the model can update itself on the device as the signals change. Another part of this research that we appreciate is that all the inferencing, training, and updating happens locally on the AI chip in the armband. There is no need to send data to an external device or the “cloud” for processing, updating, or third-party data mining. Unfortunately, the research paper with all the details is behind a paywall.
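Again, the paywall keeps us from seeing exactly how the on-chip updates work, but the idea of adapting a model as the signal drifts maps naturally onto incremental learning. Continuing the sketch above, and purely as a generic stand-in for whatever the armband’s chip really does, scikit-learn’s partial_fit can fold newly labeled windows into the existing model without retraining from scratch:

```python
import numpy as np  # features() and clf come from the sketch above

def adapt(clf, new_windows, new_labels):
    """Fold freshly labeled windows into the existing model -- an
    incremental update standing in for the device's real algorithm."""
    X = np.stack([features(w) for w in new_windows])
    clf.partial_fit(X, np.asarray(new_labels))
    return clf
```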

Continue reading “A Gesture Recognizing Armband”

Carbon Monoxide: Hunting A Silent Killer

Walt and Molly Weber had just finished several long weeks of work. He was an FBI agent on an important case. She had a management job at Houghton Mifflin. On a sunny Friday evening in February of 1995, the two embarked on a much-needed weekend skiing getaway, driving five hours to the Mammoth Lakes ski area in California’s Sierra Nevada. It was a last-minute trip, so most of the nicer hotels were booked, and the tired couple checked in at a lower-cost motel at around 11:30 pm on Friday night. They quickly settled in and went to bed, planning for an early start with a 7 am wakeup call Saturday morning.

When the front desk called on Saturday, no one answered the phone. The desk manager figured they had gotten an early start and were already on the slopes. Sunday was the same. It wasn’t until a maid went to check on the room that the couple were found, still in bed and unresponsive.

Continue reading “Carbon Monoxide: Hunting A Silent Killer”

Humanoid Robot Kinects With Its Environment

[Malte Ahlers] from Germany, after completing a PhD in neurobiology, decided to build a human-sized humanoid robot torso. [Malte] has an interest in robotics and wanted to showcase some of his skills. The project is still in its early stages, but as you will see in the video, he has achieved a nice build so far.

A1 consists of a human-sized torso with two arms, each with five (or six, including the gripper) axes of rotation, based on the robolink joints from the German company igus.de. The joints are tendon-driven by stepper motors with planetary gear heads attached. Using an experimental controller he has built, [Malte] can track the position of each axis via the encoders embedded in the joints.
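Closed-loop control of tendon-driven joints like these boils down to commanding steps and checking the encoders to see where the joint actually ended up. We don’t have [Malte]’s controller firmware, but a minimal Python sketch of that loop might look like the following, with read_encoder() and step_motor() as hypothetical stand-ins for whatever his hardware interface provides:

```python
# Hypothetical closed-loop positioning for one tendon-driven joint.
# read_encoder() and step_motor() are stand-ins for the real hardware I/O.
TOLERANCE = 0.5  # assumed acceptable positioning error, in degrees

def move_joint_to(target_deg, read_encoder, step_motor):
    """Step toward target_deg until the joint encoder agrees."""
    while True:
        error = target_deg - read_encoder()
        if abs(error) <= TOLERANCE:
            return  # close enough; stop stepping
        direction = 1 if error > 0 else -1
        step_motor(direction)  # one step; encoder feedback closes the loop
```

The encoder feedback is what makes tendon drive workable here: any slack or stretch in the tendons shows up as positioning error at the joint, and the loop simply keeps stepping until the measured angle matches the target.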

The A1 torso features a head with two degrees of freedom, equipped with a Microsoft Kinect sensor and two Logitech QuickCam Pro 9000 cameras. With these, the head can spatially “see” and “hear”. The head also has speakers for voice output, which can be accompanied by an animated gesture on the LCD screen, lip movements for example. The hands feature a simple gripping tool based on the FESTO FinGripper finger, to allow the picking up of miscellaneous items.