AI Watches You Sleep; Knows When You Dream

If you’ve never been a patient at a sleep laboratory, you may not know that monitoring a person as they sleep is an involved process of wires, sensors, and discomfort. Seeking a better method, MIT researchers — led by [Dina Katabi] and in collaboration with Massachusetts General Hospital — have developed a device that can non-invasively identify the stages of sleep in a patient.

Approximately the size of a laptop and mounted on a wall near the patient, the device measures the minuscule changes in reflected low-power RF signals. The reflections are analyzed by a deep neural network that predicts the patient’s sleep stages — light, deep, and REM sleep — eliminating the need to manually comb through the data. Despite the sensitivity of the device, it is able to filter out irrelevant motion and interference, focusing on the patient’s breathing and pulse.

What’s novel here isn’t so much the hardware as it is the processing methodology. The researchers use both convolutional and recurrent neural networks along with what they call an adversarial training regime:

Our training regime involves 3 players: the feature encoder (CNN-RNN), the sleep stage predictor, and the source discriminator. The encoder plays a cooperative game with the predictor to predict sleep stages, and a minimax game against the source discriminator. Our source discriminator deviates from the standard domain-adversarial discriminator in that it takes as input also the predicted distribution of sleep stages in addition to the encoded features. This dependence facilitates accounting for inherent correlations between stages and individuals, which cannot be removed without degrading the performance of the predictive task.
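
For the curious, here is roughly what that three-player setup might look like in code. This is a minimal PyTorch sketch of the idea in the quote above (a CNN-RNN feature encoder, a sleep-stage predictor, and a per-subject source discriminator that is fed the predicted stage distribution alongside the encoded features), not the researchers’ actual implementation; the layer sizes, the confusion-style adversarial term, and the loss weighting are illustrative guesses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_STAGES, N_SUBJECTS, FEAT = 4, 10, 64   # hypothetical sizes

class Encoder(nn.Module):                 # CNN front end + RNN over time
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3)
        self.rnn = nn.GRU(16, FEAT, batch_first=True)
    def forward(self, x):                             # x: (batch, 1, samples)
        h = F.relu(self.conv(x)).transpose(1, 2)      # (batch, time, 16)
        _, h_n = self.rnn(h)
        return h_n.squeeze(0)                          # (batch, FEAT)

encoder = Encoder()
predictor = nn.Linear(FEAT, N_STAGES)                  # sleep-stage head
discriminator = nn.Sequential(                         # sees features + stage probs
    nn.Linear(FEAT + N_STAGES, 64), nn.ReLU(), nn.Linear(64, N_SUBJECTS))

opt_ep = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def train_step(rf, stage, subject, adv_weight=0.1):
    feats = encoder(rf)
    probs = F.softmax(predictor(feats), dim=1)
    d_in = torch.cat([feats, probs], dim=1)

    # 1) discriminator tries to identify the subject from features + stage probs
    d_loss = F.cross_entropy(discriminator(d_in.detach()), subject)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) encoder + predictor: predict stages AND fool the discriminator (minimax)
    stage_loss = F.cross_entropy(predictor(feats), stage)
    fool_loss = -F.cross_entropy(discriminator(d_in), subject)
    loss = stage_loss + adv_weight * fool_loss
    opt_ep.zero_grad(); loss.backward(); opt_ep.step()
    return stage_loss.item(), d_loss.item()

# one step on random stand-in data
rf = torch.randn(8, 1, 256)
print(train_step(rf, torch.randint(0, N_STAGES, (8,)), torch.randint(0, N_SUBJECTS, (8,))))
```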

Anyone out there want to give this one a try at home? We’d love to see a HackRF and GNU Radio used to record RF data. The researchers compare the RF to WiFi, so repurposing a 2.4 GHz radio to send out repeating uniform transmissions is a good place to start. Dump it into TensorFlow and report back.
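
If you do record something, getting from a raw capture to a breathing-rate estimate is a reasonable first milestone before worrying about neural networks. GNU Radio’s file sink writes raw complex64 IQ samples, which numpy can read directly; the sketch below loads such a capture, decimates the magnitude down to a slow envelope, and looks for a dominant component in the breathing band. The file name, sample rate, and frequency band are assumptions for illustration.

```python
import numpy as np

SAMPLE_RATE = 1_000_000            # whatever rate you recorded at (assumed)
iq = np.fromfile("capture.iq", dtype=np.complex64)   # hypothetical GNU Radio file sink capture

# Reflected motion shows up as slow amplitude changes, so decimate hard:
# average the magnitude down to roughly 20 samples per second.
DECIM = SAMPLE_RATE // 20
mag = np.abs(iq[: len(iq) // DECIM * DECIM]).reshape(-1, DECIM).mean(axis=1)
mag -= mag.mean()

# Find the strongest component in the breathing band (~0.1-0.5 Hz) via an FFT.
freqs = np.fft.rfftfreq(len(mag), d=1 / 20)
spectrum = np.abs(np.fft.rfft(mag))
band = (freqs > 0.1) & (freqs < 0.5)
if band.any():
    f = freqs[band][spectrum[band].argmax()]
    print(f"Dominant breathing-band component: {f:.2f} Hz (~{f*60:.0f} breaths/min)")
```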

Continue reading “AI Watches You Sleep; Knows When You Dream”

Design and 3D Print Robots with Interactive Robogami

Internals of 3D printed “print and fold” robot. [Image source: MIT CSAIL]
Robot design traditionally separates the body geometry from the mechanics of the gait, but they both have a profound effect upon one another. What if you could play with both at once, and crank out useful prototypes cheaply using just about any old 3D printer? That’s where Interactive Robogami comes in. It’s a tool from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) that aims to let people design, simulate, and then build simple robots with a “3D print, then fold” approach. The idea behind the system is partly to take advantage of the rapid prototyping afforded by 3D printers, but mainly it’s to change how the design work is done.

To make a robot, the body geometry and limb design are worked out and simulated in the Robogami tool, where different combinations can have a wild effect on locomotion. Once a design is chosen, the end result is a 3D-printable flat pack, which is then assembled into the final form with a power supply, an Arduino, and servo motors.
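
Robogami’s own gait planner is considerably more sophisticated, but a toy sketch makes the point that geometry and gait are coupled parameters you can sweep together: each leg servo follows a phase-offset sine wave, and changing the printed leg length changes how far one gait cycle carries the body. Everything here, from the servo function to the leg lengths, is invented for illustration.

```python
import math

def servo_angles(t, n_legs=4, amplitude_deg=25, period_s=1.0):
    """Commanded angle for each leg at time t (alternating phase pairs)."""
    return [amplitude_deg * math.sin(2 * math.pi * t / period_s + math.pi * (i % 2))
            for i in range(n_legs)]

def stride_per_cycle(leg_length_m, amplitude_deg):
    """Rough horizontal sweep of a leg tip over one gait cycle."""
    return 2 * leg_length_m * math.sin(math.radians(amplitude_deg))

for leg in (0.03, 0.05, 0.08):   # try a few printable leg lengths
    print(f"leg {leg*1000:.0f} mm -> stride per cycle ~ {stride_per_cycle(leg, 25)*1000:.0f} mm")
print(servo_angles(0.25))
```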

A white paper is available online and a demonstration video is embedded below. It’s debatable whether these devices on their own qualify as “robots” since they have no sensors, but as a tool to quickly prototype robot body geometries and gaits it’s an excitingly clever idea.

Continue reading “Design and 3D Print Robots with Interactive Robogami”

The PDP-1: The Machine that Started Hacker Culture

One of my bucket list destinations is the Computer History Museum in Mountain View, California — I know, I aim high. I’d be chagrined to realize that my life has spanned a fair fraction of the Information Age, but I think I’d get a kick out of seeing the old machines, some of which I’ve actually laid hands on. But the machines I’d most like to see are the ones that predate me, and the ones that contributed to the birth of the hacker culture in which I and a lot of Hackaday regulars came of age.

If you were to trace hacker culture back to its beginning, chances are pretty good that the machine you’d find at the root of it all is the Digital Equipment Corporation’s PDP-1. That’s a tall claim for a machine that was introduced in 1959 and only sold 53 units, compared to contemporary offerings from IBM that sold tens of thousands of units. And it’s true that the leading edge of the explosion of digital computing in the late 50s and early 60s was mainly occupied by “big iron” machines, and that mainframes did a lot to establish the foundations for all the advances that were to come.

Continue reading “The PDP-1: The Machine that Started Hacker Culture”

How To Telepathically Tell A Robot It Screwed Up

Training machines to effectively complete tasks is an ongoing area of research. This can be done in a variety of ways, from complex programming interfaces to systems that understand commands in natural language. A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) wanted to see if it was possible for humans to communicate more directly when training a robot. Their system allows a user to correct a robot’s actions using only their brain.

The concept is simple – using an EEG cap to detect brainwaves, the system measures a special type of brain signal called an “error-related potential”. The human simply noticing the robot making a mistake allows the robot to correct itself and, as a nice extra touch, blush in embarrassment.

This interface allows for a very intuitive way of working with a robot – upon the human noticing a mistake, the robot is able to automatically stop or correct its behaviour. Currently the system can only be used for very simple tasks – the video shows the robot sorting objects of two types into corresponding bins. The robot knows that if the human has detected an error, it must simply place the object in the other bin. Further research seeks to expand the possibilities of using this automatic brainwave feedback to train robots for more complex tasks. You can read the research paper here.
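
As a thought experiment, the feedback loop can be boiled down to a few lines: classify a short EEG epoch as containing an error-related potential or not, and if an error is detected, flip the robot’s binary choice. The sketch below is not CSAIL’s pipeline; the epoch windowing, the crude features, and the classifier are all stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 256                                   # assumed EEG sample rate (Hz)

def epoch_features(epoch):
    """Crude features: mean and peak amplitude per channel in the 200-400 ms
    window where error-related potentials typically appear."""
    window = epoch[:, int(0.2 * FS):int(0.4 * FS)]
    return np.hstack([window.mean(axis=1), window.max(axis=1)])

# "Train" on stand-in labelled epochs (epochs x channels x samples).
rng = np.random.default_rng(0)
train_epochs = rng.normal(size=(100, 8, FS))        # 100 fake epochs, 8 channels
train_labels = rng.integers(0, 2, size=100)         # 1 = observer saw an error
clf = LogisticRegression(max_iter=1000).fit(
    np.array([epoch_features(e) for e in train_epochs]), train_labels)

def correct_choice(initial_bin, live_epoch):
    """Flip the robot's binary bin choice when an ErrP is detected."""
    error_detected = clf.predict(epoch_features(live_epoch)[None, :])[0] == 1
    return (1 - initial_bin) if error_detected else initial_bin

print(correct_choice(0, rng.normal(size=(8, FS))))
```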

MIT’s CSAIL works on lots of exciting projects – their video microphone technology is truly astounding.

[Thanks to Adam Connor-Simmons for the tip!]

Owning Hacker As A Word

To a casual observer it might seem as though our community is in the news rather a lot at the moment. It’s all about hacks on our TV screens in the soap opera of Washington politics, who hacked this, whether those people over there helped that lot hack the other lot, or even whether that person’s emails could have been hacked on that server. Keeping up with it as an outsider can become a full-time job.

XKCD 932 says it all. (CC BY-NC 2.5)

Of course, as we all know even if the mainstream journalists (or should I refer to them colloquially as “hacks”?) don’t, it’s not us they’re talking about. Their hackers are computer criminals, while we are people with some of the hardware and software skills to bend technology to our will, even beyond what its designers might have intended. And that divergence between the way we use the word in a sense of reappropriation and the way they use it in disapprobation sometimes puts us in an odd position. Explaining, as the director of a hackspace, to a sober-suited businessman that no, we’re not *those* hackers can sometimes feel like skating on thin ice.

Continue reading “Owning Hacker As A Word”

Nylon Fibre Artificial Muscles — Powered by Lasers!

If only we had affordable artificial muscles, we might see rapid advances in prosthetic limbs, robots, exo-skeletons, implants, and more. With cost being one of the major barriers — in addition to replicating the marvel of our musculature that many of us take for granted — a workable solution seems a long way off. A team of researchers at MIT presents a potential answer to these problems by showing that nylon fibres can be used as synthetic muscles.

Some polymer fibre materials have the curious property of increasing in diameter while decreasing in length when heated. Taking advantage of this, the team at MIT were able to sculpt nylon fibre and — using a number of heat sources, namely lasers — direct it to bend in a specific direction. More complex movement requires an array of heat sources, which isn’t practical — yet — but seeing a nylon fibre dance tickles the imagination.
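
The reason one-sided heating produces a bend is the same reason a bimetallic strip curls: the heated side contracts while the cooler side does not, and the strain mismatch across the diameter forces a curve toward the heat. A back-of-the-envelope estimate, with every number below assumed rather than taken from the MIT work, looks like this:

```python
import math

diameter = 0.8e-3          # fibre diameter, m (assumed)
heated_length = 30e-3      # length of fibre the laser sweeps over, m (assumed)
contraction = 0.02         # fractional length contraction of the hot side (assumed)

curvature = contraction / diameter            # 1/radius, from the strain gradient
bend_angle = math.degrees(curvature * heated_length)
print(f"Radius of curvature: {1/curvature*1000:.0f} mm, bend angle: {bend_angle:.0f} deg")
```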

Continue reading “Nylon Fibre Artificial Muscles — Powered by Lasers!”

Umbrella Drones — Jellyfish Of The Sky

Mount an umbrella to a drone and there you go, you have a flying umbrella. When [Alan Kwan] tried to do just that, he found it wasn’t quite so simple. Once he’d worked it out, though, the result is haunting. You get an uneasy feeling, like you’re underwater watching jellyfish float around you.

A grad student in MIT’s ACT (Art, Culture and Technology) program, [Alan’s] idea was to produce a synesthesia-like result in the viewer by having an inanimate object, an umbrella, appear as an animate object, a floating jellyfish. He first tried simply attaching the umbrella to an off-the-shelf drone. Since electronics occupy the center of the drone, the umbrella had to be mounted off-center. But he discovered that drones want most of their mass in the center and so that didn’t work. With the help of a classmate and input from peers and faculty he made a new drone with carbon fiber and metal parts that allowed him to mount the umbrella in the center. To further help with stability, the batteries were attached to the very bottom of the umbrella’s pole.
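
A rough bit of arithmetic shows why the off-center mount was a non-starter: hanging the payload away from the drone’s center creates a constant torque that the motors must cancel with differential thrust, eating into control authority. The masses and distances below are invented purely for illustration.

```python
payload_mass = 0.4      # umbrella + fittings, kg (assumed)
offset = 0.10           # payload distance from the drone's center, m (assumed)
arm = 0.25              # distance from center to each motor, m (assumed)
g = 9.81

torque = payload_mass * g * offset       # N*m the frame must constantly cancel
extra_thrust = torque / (2 * arm)        # rough differential thrust per motor pair
print(f"Correction torque: {torque:.2f} N*m "
      f"(~{extra_thrust*1000/g:.0f} g of extra thrust on one side)")
```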

In addition to just making them fly, [Alan] also wanted the umbrellas to gently undulate like jellyfish, slowly opening and closing a little. He tried mounting servo motors inside the umbrella for the task. These turned out to be too heavy, but also unnecessary. Once flying outside at just the right propeller speed, the umbrellas undulated on their own. Watch them doing this in the video below, accompanied by haunting music that makes you feel you’re watching a scene from Blade Runner.

Continue reading “Umbrella Drones — Jellyfish Of The Sky”