If you’ve never been a patient at a sleep laboratory, you may not know that monitoring a person as they sleep is an involved process of wires, sensors, and discomfort. Seeking a better method, MIT researchers — led by [Dina Katabi] and in collaboration with Massachusetts General Hospital — have developed a device that can non-invasively identify the stages of sleep in a patient.
Approximately the size of a laptop and mounted on a wall near the patient, the device measures the minuscule changes in reflected low-power RF signals. A deep neural network analyzes the wireless signals and predicts the patient’s sleep stages — light, deep, and REM sleep — doing away with the task of manually combing through the data. Despite the sensitivity of the device, it is able to filter out irrelevant motion and interference, focusing on the patient’s breathing and pulse.
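To get a feel for why breathing is recoverable from RF at all, here’s a deliberately simplified sketch — a plain continuous-wave reflection model, emphatically not the researchers’ actual system. Chest motion changes the reflected path length, which modulates the phase of the received baseband signal, and the breathing rate falls out of a Fourier transform. The sample rate and file name are assumptions (the capture sketch later in this post produces such a file):

```python
import numpy as np

# Toy illustration only: a plain CW reflection model, NOT the researchers' system.
fs = 100.0                                             # assumed decimated sample rate, Hz
iq = np.fromfile("rf_capture.iq", dtype=np.complex64)  # assumed IQ recording

phase = np.unwrap(np.angle(iq))      # path-length changes live in the phase
phase -= phase.mean()
spectrum = np.abs(np.fft.rfft(phase * np.hanning(len(phase))))
freqs = np.fft.rfftfreq(len(phase), d=1 / fs)

band = (freqs > 0.1) & (freqs < 0.5)  # plausible breathing band: 6-30 breaths/minute
rate = freqs[band][spectrum[band].argmax()] * 60
print(f"Estimated breathing rate: {rate:.1f} breaths/min")
```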
What’s novel here isn’t so much the hardware as it is the processing methodology. The researchers use both convolutional and recurrent neural networks along with what they call an adversarial training regime:
Our training regime involves 3 players: the feature encoder (CNN-RNN), the sleep stage predictor, and the source discriminator. The encoder plays a cooperative game with the predictor to predict sleep stages, and a minimax game against the source discriminator. Our source discriminator deviates from the standard domain-adversarial discriminator in that it takes as input also the predicted distribution of sleep stages in addition to the encoded features. This dependence facilitates accounting for inherent correlations between stages and individuals, which cannot be removed without degrading the performance of the predictive task.
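To make that three-player game concrete, here’s a minimal training-loop sketch in TensorFlow. The layer sizes, the trade-off weight `lam`, and the subject count are illustrative assumptions, not the authors’ published architecture:

```python
import tensorflow as tf

N_STAGES, N_SOURCES = 4, 25  # wake/light/deep/REM; subject count is an assumption

# Player 1: CNN-RNN feature encoder
encoder = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.GRU(64),
])
# Player 2: sleep stage predictor
predictor = tf.keras.layers.Dense(N_STAGES, activation="softmax")
# Player 3: source discriminator -- note it sees the predicted stage
# distribution concatenated with the encoded features, per the quote above
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_SOURCES, activation="softmax"),
])

ce = tf.keras.losses.SparseCategoricalCrossentropy()
opt_ep, opt_d = tf.keras.optimizers.Adam(), tf.keras.optimizers.Adam()
lam = 0.1  # minimax trade-off weight (assumed)

def train_step(rf_window, stage_label, source_label):
    """One update on a batch of (batch, time, channels) RF windows."""
    with tf.GradientTape(persistent=True) as tape:
        feats = encoder(rf_window)
        stage_probs = predictor(feats)
        src_probs = discriminator(tf.concat([feats, stage_probs], axis=-1))
        pred_loss = ce(stage_label, stage_probs)   # cooperative game
        disc_loss = ce(source_label, src_probs)    # discriminator's objective
        enc_loss = pred_loss - lam * disc_loss     # minimax: encoder tries to fool it
    ep_vars = encoder.trainable_variables + predictor.trainable_variables
    opt_ep.apply_gradients(zip(tape.gradient(enc_loss, ep_vars), ep_vars))
    d_vars = discriminator.trainable_variables
    opt_d.apply_gradients(zip(tape.gradient(disc_loss, d_vars), d_vars))
    del tape
```

The key line is the subtraction in `enc_loss`: the encoder is rewarded for features the discriminator can’t trace back to an individual, while the discriminator’s own update keeps training it to try.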
Anyone out there want to give this one a try at home? We’d love to see a HackRF and GNU Radio used to record RF data. The researchers compare the RF they use to WiFi, so repurposing a 2.4 GHz radio to send out repeating, uniform transmissions is a good place to start. Dump it into TensorFlow and report back.
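On the capture side, a sketch like this should get raw IQ samples from a HackRF onto disk via gr-osmosdr; the center frequency, sample rate, gain, and file name are placeholder guesses:

```python
from gnuradio import gr, blocks
import osmosdr

class RFCapture(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self, "RF sleep capture")
        src = osmosdr.source(args="hackrf=0")
        src.set_sample_rate(2e6)        # placeholder values
        src.set_center_freq(2.45e9)     # mid-ISM band, like the WiFi comparison above
        src.set_gain(20)
        sink = blocks.file_sink(gr.sizeof_gr_complex, "rf_capture.iq")
        self.connect(src, sink)

if __name__ == "__main__":
    tb = RFCapture()
    tb.start()
    input("Recording... press Enter to stop.")
    tb.stop()
    tb.wait()
```

From there, spectrograms of the recorded file would be the natural thing to feed into TensorFlow.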
Continue reading “AI Watches You Sleep; Knows When You Dream”
We keep seeing more and more TensorFlow neural network projects. We also keep seeing more and more things running in the browser. You don’t have to be Mr. Spock to see this one coming. TensorFire runs neural networks in the browser and claims that WebGL allows it to run as quickly as it would on the user’s desktop computer. The main page is a demo that stylizes images, but if you want more detail you’ll probably want to visit the project page instead. You might also enjoy the video from one of the creators, [Kevin Kwok], below.
TensorFire has two parts: a low-level language for writing massively parallel WebGL shaders that operate on 4D tensors, and a high-level library for importing models from Keras or TensorFlow. The authors claim it will work on any GPU and, in some cases, will actually be faster than running native TensorFlow.
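That import path means training can stay entirely in Python. Here’s a minimal sketch of saving a Keras model as the kind of artifact an importer like TensorFire’s consumes (the WebGL conversion itself is TensorFire’s own tooling and isn’t shown; the toy architecture is an arbitrary assumption):

```python
from tensorflow import keras

# A toy stand-in model; TensorFire's demos import real trained networks,
# but any Keras model saved this way is the same kind of artifact.
model = keras.Sequential([
    keras.Input(shape=(224, 224, 3)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),
])
model.save("model.h5")  # architecture + weights in one HDF5 file
```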
Continue reading “Neural Nets In The Browser: Why Not?”
As a fun project, I thought I’d put Google’s Inception-v3 neural network on a Raspberry Pi to see how well it does at recognizing objects firsthand. It turned out to be not only fun to implement, but the way I’d implemented it also made for loads of fun for everyone I showed it to, mostly folks at hackerspaces and similar gatherings. And yes, some of it bordered on the pornographic — cheeky hackers.
An added bonus, as many pointed out, is that once installed, it needs no internet access. This is state-of-the-art, standalone object recognition with no Big Brother knowing what you’ve been up to, unlike with that nosy Alexa.
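For the impatient, here’s a minimal sketch of the same trick using the pre-trained Inception-v3 that ships with tf.keras — not the exact setup from the article. The ImageNet weights download once and are cached, after which no network connection is needed; the image file name is a placeholder:

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = InceptionV3(weights="imagenet")            # 299x299 input, 1000 ImageNet classes
img = image.load_img("photo.jpg", target_size=(299, 299))  # placeholder file name
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
for _, label, score in decode_predictions(model.predict(x), top=5)[0]:
    print(f"{label}: {score:.3f}")
```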
But will it lead to widespread useful AI? If a neural network can recognize every object around it, will that lead to human-like skills? Read on. Continue reading “DIY Raspberry Neural Network Sees All, Recognizes Some”
From The Forbin Project to HAL 9000 to WarGames, movies are replete with smart computers that decide to put humans in their place. If you study literature, you’ll find that science fiction usually isn’t about the future; it’s about the present disguised as the future, and smart computers usually stand in for something like robots taking your job or nuclear weapons destroying your town.
Lately, though, I’ve been seeing something disturbing. [Elon Musk], [Bill Gates], [Steve Wozniak], and [Stephen Hawking] have all gone on record warning us that artificial intelligence is dangerous. I’ll grant you, all of those people are probably smarter than I am. I’ll even stipulate that my knowledge of AI techniques is a little behind the times. But still: unless I’ve been asleep at the keyboard for far too long, we are nowhere near the kind of AI that any reasonable person would worry about being genuinely dangerous in the ways they imagine.
Smart Guys Posturing
Keep in mind, I’m interpreting their comments as saying (essentially): “Soon machines will think, and then they will out-think us and be impossible to control.” It is easy to imagine a complex AI making a bad decision while driving a car or an airplane, sure. But the computer that parallel parks your car isn’t going to suddenly take over your neighborhood and put brain implants in your dogs and cats. Anyone who thinks that is simply not thinking about how these things work. Given the current state of computer programming, that’s about as likely as my car deciding to fly us to Paris. Ain’t happening.
Continue reading “Kids! Don’t Try This At Home! Robot Destroys Mankind”