AI: This Decade’s Worst Buzz Word

In hacker circles, the “Internet of Things” is often the object of derision. Do we really need the IoT toaster? But there’s one phrase that — while not new — is really starting to annoy me in its current incarnation: AI or Artificial Intelligence.

The problem isn’t the phrase itself. It used to mean a collection of techniques used to make a computer look smart enough to, say, play a game or hold a simulated conversation. Of course, in the movies it means HAL 9000. Lately, though, companies have been overselling the concept and otherwise normal people are taking the bait.

The Alexa Effect

Not to pick on Amazon, but all of the home assistants like Alexa and Google Now tout themselves as AI. By the most classic definition, that’s true: AI techniques include matching natural language to predefined templates, and that’s really all these devices are doing today. Granted, the neural nets that allow for great speech recognition and reproduction are impressive, but they aren’t true intelligence, nor are they even necessarily direct analogs of a human brain.
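To make "matching natural language to predefined templates" concrete, here's a toy Python sketch of the classic technique. It has nothing to do with Amazon's or Google's actual pipelines; the intents, phrasings, and slot names are all invented for illustration.

```python
import re

# Toy intent templates: (pattern, intent name). Purely illustrative.
TEMPLATES = [
    (re.compile(r"turn (on|off) the (?P<device>\w+)", re.I), "SwitchDevice"),
    (re.compile(r"what('s| is) the weather( like)?( in (?P<city>\w+))?", re.I), "GetWeather"),
    (re.compile(r"set a timer for (?P<minutes>\d+) minutes?", re.I), "SetTimer"),
]

def match_intent(utterance):
    """Return (intent, slots) for the first template that matches, else None."""
    for pattern, intent in TEMPLATES:
        m = pattern.search(utterance)
        if m:
            # Slots are whatever named groups the template captured.
            slots = {k: v for k, v in m.groupdict().items() if v}
            return intent, slots
    return None

print(match_intent("Alexa, set a timer for 10 minutes"))
# ('SetTimer', {'minutes': '10'})
```

Once the speech recognizer has turned audio into text, this kind of pattern lookup is doing most of the "understanding", which is exactly why it feels less like intelligence the closer you look.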

Continue reading “AI: This Decade’s Worst Buzz Word”

Raspberry Pi AI Plays Piano

[Zack] watched a video of [Dan Tepfer] using a computer with a MIDI keyboard to do some automatic fills while playing. He decided he wanted to do better and set out to create an AI that would learn, in real time, how to insert style-appropriate fills in the gaps the human performer leaves.

If you want the code, you can find it on GitHub. However, the really interesting part is the log of his experiences, successes, and failures. If you want to see the result, check out the video below, where he riffs for about 30 seconds and the AI starts taking over the melody when the performer stops.
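To give a flavor of the idea (this is our own crude sketch, not [Zack]'s code), you can get surprisingly musical fills from something as simple as a Markov chain of note transitions learned while the human plays:

```python
import random
from collections import defaultdict

# Crude illustration: learn note-to-note transitions from the human's playing,
# then sample from them to fill the silence when the performer stops.
class FillGenerator:
    def __init__(self):
        self.transitions = defaultdict(list)  # note -> notes that followed it
        self.last_note = None

    def observe(self, note):
        """Call for every note the human plays (e.g. from incoming MIDI)."""
        if self.last_note is not None:
            self.transitions[self.last_note].append(note)
        self.last_note = note

    def fill(self, length=8):
        """Generate a short fill in the style learned so far."""
        note, out = self.last_note, []
        for _ in range(length):
            choices = self.transitions.get(note)
            if not choices:
                break
            note = random.choice(choices)
            out.append(note)
        return out

gen = FillGenerator()
for n in [60, 62, 64, 62, 60, 62, 64, 65, 64, 62]:  # a simple riff (MIDI note numbers)
    gen.observe(n)
print(gen.fill())  # e.g. [64, 62, 60, 62, 64, ...]
```

[Zack]'s project goes well beyond this, of course; the point is only that the "learn in real time, then fill the gap" loop doesn't require anything exotic to get started.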

Continue reading “Raspberry Pi AI Plays Piano”

AI Watches You Sleep; Knows When You Dream

If you’ve never been a patient at a sleep laboratory, monitoring a person as they sleep is an involved process of wires, sensors, and discomfort. Seeking a better method, MIT researchers — led by [Dina Katabi] and in collaboration with Massachusetts General Hospital — have developed a device that can non-invasively identify the stages of sleep in a patient.

Approximately the size of a laptop and mounted on a wall near the patient, the device measures minuscule changes in reflected low-power RF signals. The wireless signals are analyzed by a deep neural network that predicts the patient's sleep stage (light, deep, or REM), negating the task of manually combing through the data. Despite its sensitivity, the device is able to filter out irrelevant motion and interference, focusing on the patient's breathing and pulse.

What’s novel here isn’t so much the hardware as it is the processing methodology. The researchers use both convolutional and recurrent neural networks along with what they call an adversarial training regime:

Our training regime involves 3 players: the feature encoder (CNN-RNN), the sleep stage predictor, and the source discriminator. The encoder plays a cooperative game with the predictor to predict sleep stages, and a minimax game against the source discriminator. Our source discriminator deviates from the standard domain-adversarial discriminator in that it takes as input also the predicted distribution of sleep stages in addition to the encoded features. This dependence facilitates accounting for inherent correlations between stages and individuals, which cannot be removed without degrading the performance of the predictive task.
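To make the three-player game a little more concrete, here is a rough PyTorch sketch of how such a setup could be wired together. This is not the researchers' code: the GRU stand-in for their CNN-RNN encoder, the layer sizes, the number of stages and subjects, and the loss weighting are all our own assumptions.

```python
import torch
import torch.nn as nn

# Rough sketch of the three-player setup described in the quote (assumed shapes).
N_STAGES, N_SUBJECTS, FEAT = 4, 25, 64

encoder = nn.GRU(input_size=128, hidden_size=FEAT, batch_first=True)  # stand-in for the CNN-RNN
predictor = nn.Linear(FEAT, N_STAGES)                                  # sleep-stage predictor
# The discriminator sees the encoded features *and* the predicted stage
# distribution, as the quote emphasizes.
discriminator = nn.Sequential(nn.Linear(FEAT + N_STAGES, 64), nn.ReLU(),
                              nn.Linear(64, N_SUBJECTS))

ce = nn.CrossEntropyLoss()
lam = 0.1  # trade-off between the cooperative and adversarial games (assumed)

def training_step(rf_windows, stage_labels, subject_labels):
    """rf_windows: (batch, time, 128) spectrogram features; labels: (batch,)."""
    _, h = encoder(rf_windows)
    feats = h[-1]                                  # (batch, FEAT)
    stage_logits = predictor(feats)
    stage_probs = stage_logits.softmax(dim=1)

    disc_logits = discriminator(torch.cat([feats, stage_probs], dim=1))

    # Cooperative game: encoder + predictor try to get the stage right.
    # Minimax game: encoder tries to *fool* the subject discriminator, so the
    # adversarial term enters the encoder loss with a minus sign.
    loss_stage = ce(stage_logits, stage_labels)
    loss_disc = ce(disc_logits, subject_labels)
    loss_encoder = loss_stage - lam * loss_disc
    return loss_encoder, loss_disc

# In a real loop you'd update the discriminator on loss_disc and the
# encoder/predictor on loss_encoder with separate optimizers.
x = torch.randn(8, 30, 128)
le, ld = training_step(x, torch.randint(0, N_STAGES, (8,)), torch.randint(0, N_SUBJECTS, (8,)))
```

The upshot is that the encoder is pushed to keep whatever correlates with sleep stage while throwing away whatever identifies the individual, which is how the model generalizes across patients.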

Anyone out there want to give this one a try at home? We’d love to see a HackRF and GNU Radio used to record RF data. The researchers compare their RF signals to WiFi, so repurposing a 2.4 GHz radio to send out repeating, uniform transmissions is a good place to start. Dump it into TensorFlow and report back.
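If you did capture reflections with a HackRF, the first step on the TensorFlow side might look something like this. The file name, window sizes, and the magnitude-then-STFT preprocessing are our own assumptions, not the researchers' pipeline; GNU Radio's file sink does write complex float32 samples, which numpy reads directly as complex64.

```python
import os
import numpy as np
import tensorflow as tf

# Hypothetical capture file from a GNU Radio file sink; fall back to noise so
# the sketch still runs without hardware.
if os.path.exists("sleep_capture.cfile"):
    iq = np.fromfile("sleep_capture.cfile", dtype=np.complex64)
else:
    iq = (np.random.randn(500_000) + 1j * np.random.randn(500_000)).astype(np.complex64)

# Turn the raw IQ stream into a log-magnitude spectrogram a network can chew on.
frame_len, frame_step = 1024, 256
stft = tf.signal.stft(tf.constant(np.abs(iq), dtype=tf.float32),
                      frame_length=frame_len, frame_step=frame_step)
spectrogram = tf.math.log(tf.abs(stft) + 1e-6)   # (frames, 513)

# From here you'd slice the spectrogram into 30-second epochs and feed them to
# a classifier; the sleep-stage labels would have to come from a real study.
dataset = tf.data.Dataset.from_tensor_slices(spectrogram).batch(32)
print(spectrogram.shape)
```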

Continue reading “AI Watches You Sleep; Knows When You Dream”

Neural Nets in the Browser: Why Not?

We keep seeing more and more TensorFlow neural network projects. We also keep seeing more and more things running in the browser. You don’t have to be Mr. Spock to see this one coming. TensorFire runs neural networks in the browser and claims that WebGL allows it to run as quickly as it would on the user’s desktop computer. The main page is a demo that stylizes images, but if you want more detail you’ll probably want to visit the project page instead. You might also enjoy the video from one of the creators, [Kevin Kwok], below.

TensorFire has two parts: a low-level language for writing massively parallel WebGL shaders that operate on 4D tensors, and a high-level library for importing models from Keras or TensorFlow. The authors claim it will work on any GPU and, in some cases, will actually be faster than running native TensorFlow.
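The browser side is TensorFire's own JavaScript, so we won't guess at its API here, but the Python side of the workflow is just a saved Keras model for the importer to pick up. A trivial, made-up example (the architecture and file name are arbitrary):

```python
from tensorflow import keras

# A toy image model; anything Keras can save could, in principle, be handed
# to a browser-side importer like TensorFire's.
model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.save("style_net.h5")   # the saved model is what gets converted for the browser
```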

Continue reading “Neural Nets in the Browser: Why Not?”

A Neural Network Can Now be Your Writing Assistant

Writing is a difficult job; though, as a primarily word-based site, we may be a little biased here at Hackaday. Not only does a writer have to know the basics, like what a semicolon is and when to use one, they also need to build sentences that convey information in a manner that is pleasant to read. As many commenters like to point out, even we struggle with this on occasion (lauded and scholarly as we are).

Wouldn’t it be better if we could let our computers do the heavy lifting for us? After all, a monkey with infinite time will eventually write Shakespeare and all that. Surely, a computer can be programmed to do all that fancy word assembly while we sit back and enjoy some coffee. Well, that’s what [Robin Sloan] set out to do with a recurrent neural network-powered writing assistant.

Alright, so it doesn’t actually write completely on its own. Instead, [Robin’s] software takes advantage of [JC Johnson’s] torch-rnn project, and integrates it into Atom to autocomplete sentences. [Robin] trained his neural network on hundreds of old issues of the sci-fi magazines Galaxy and IF Magazine, which are available at the Internet Archive. Once the server and corresponding Atom package are installed, a writer can simply push the Tab key and the sentence will be completed.
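The editor integration boils down to the Atom package asking a local torch-rnn server to continue whatever text sits before the cursor. Here's a purely hypothetical Python client showing the shape of that exchange; the endpoint, port, and parameter names are invented for illustration and are not rnn-writer's real API.

```python
import requests

def complete(prefix, n_chars=120, temperature=0.8):
    """Ask a (hypothetical) local torch-rnn server to continue `prefix`.
    Requires the server to be running; endpoint and params are made up."""
    resp = requests.get("http://localhost:8080/generate",
                        params={"start_text": prefix,
                                "length": n_chars,
                                "temperature": temperature},
                        timeout=5)
    resp.raise_for_status()
    return prefix + resp.json()["completion"]

print(complete("The ship drifted toward the"))
```

The temperature knob is the interesting part in practice: low values give safe, repetitive continuations, while high values produce the "deranged parrot" effect [Robin] describes.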

The results are interesting. [Robin] himself says “it’s like writing with a deranged but very well-read parrot on your shoulder.” While it’s not likely to be used as a serious writing tool anytime soon, the potential is certainly intriguing. When trained on relevant source material, the integration into software like Atom could be very useful. If a neural network can compose music, surely it can write some silly tech articles.

[thanks to Tim Trzepacz for the tip!]

Typewriter image: LjL (Public domain).

Wrap Your Mind Around Neural Networks

Artificial Intelligence is playing an ever-increasing role in the lives of civilized nations, though most citizens probably don’t realize it. It’s now commonplace to speak with a computer when calling a business. Facebook is becoming scarily accurate at recognizing faces in uploaded photos. Physical interaction with smartphones is becoming a thing of the past… with Apple’s Siri and Google Speech, it’s slowly but surely becoming easier to simply talk to your phone and tell it what to do than to type or touch an icon. Try this if you haven’t before: if you have an Android phone, say “OK Google”, followed by “Lumos”. It’s magic!

Advertisements for products we’re interested in pop up on our social media accounts as if something is reading our minds. Truth is, something is reading our minds… though it’s hard to pin down exactly what that something is. An advertisement might pop up for something that we want, even though we never realized we wanted it until we see it. This is not coincidental, but stems from an AI algorithm.

At the heart of many of these AI applications lies a process known as Deep Learning. There has been a lot of talk about Deep Learning lately, not only here on Hackaday, but all over the interwebs. And like most things related to AI, it can be a bit complicated and difficult to understand without a strong background in computer science.

If you’re familiar with my quantum theory articles, you’ll know that I like to take complicated subjects, strip away the complication as best I can, and explain them in a way that anyone can understand. The goal of this article is to apply a similar approach to this idea of Deep Learning. If neural networks make you cross-eyed and machine learning gives you nightmares, read on. You’ll see that “Deep Learning” sounds like a daunting subject, but is really just a $20 term for something whose underpinnings are relatively simple.
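As a tiny preview of just how simple those underpinnings are, here is a single artificial neuron in a few lines of plain Python (our own toy example, not from the article): a weighted sum of inputs plus a bias, squashed through an activation function. Deep learning is, at bottom, a lot of these stacked in layers with the weights tuned automatically.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum + bias, through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

print(neuron([0.5, 0.9], weights=[0.8, -0.2], bias=0.1))  # ~0.58
```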

Continue reading “Wrap Your Mind Around Neural Networks”

Neural Networks Walk Better Than Humans for Game Animation

Modern-day video games have come a long way from Mario the plumber hopping across the screen. The incredibly intricate environments of today’s games are part of the lure for new gamers, and that experience is brought to life by the characters interacting with the scene. However, the illusion of the virtual world is broken when figures move unnaturally while performing actions such as turning around suddenly or climbing a hill.

To remedy the abrupt movements, [Daniel Holden et al.] recently published a paper (PDF) and a video showing a method to greatly improve the real-time character control mechanism. The proposed system uses a neural network that has been trained on a large data set of walking, jumping, and other sequences on various terrains. The key is breaking down the process of bipedal movement and its cyclic behaviour into a series of sub-steps or phases. Each phase translates to a natural posture for the character while moving. The system precomputes the next phases offline to conserve computational resources at runtime. Then, considering the user's control input, the character's previous pose (including joint positions), and the terrain geometry, the next frame of the animation is computed. The computation is done by a regression network that calculates the future positions of the joints, and a blending function is used for Motion Matching, as described in a presentation (PDF) and video by [Simon Clavet]. A toy sketch of that runtime loop appears below.

Continue reading “Neural Networks Walk Better Than Humans for Game Animation”
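For a feel of what the per-frame regression step looks like, here is a heavily simplified numpy sketch. The real phase-functioned network blends entire sets of network weights according to the phase and is trained on motion-capture data; the dimensions, the single hidden layer, and the untrained random weights below are all our own stand-ins.

```python
import numpy as np

# Invented dimensions: pose vector, user control (e.g. stick direction/speed),
# and a coarse terrain height sample around the character.
POSE_DIM, CTRL_DIM, TERRAIN_DIM = 93, 6, 12
IN_DIM = POSE_DIM + CTRL_DIM + TERRAIN_DIM + 1   # +1 for the phase variable

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((256, IN_DIM)) * 0.01, np.zeros(256)
W2, b2 = rng.standard_normal((POSE_DIM + 1, 256)) * 0.01, np.zeros(POSE_DIM + 1)

def step(pose, control, terrain, phase):
    """Regress the next pose (and phase advance) from the current state."""
    x = np.concatenate([pose, control, terrain, [phase]])
    h = np.maximum(0.0, W1 @ x + b1)              # hidden layer, ReLU
    out = W2 @ h + b2
    next_pose, dphase = out[:POSE_DIM], out[-1]
    next_phase = (phase + dphase) % (2 * np.pi)   # phase is cyclic, like the gait
    return next_pose, next_phase

pose, phase = np.zeros(POSE_DIM), 0.0
for _ in range(3):                                # a few frames of the runtime loop
    pose, phase = step(pose, np.zeros(CTRL_DIM), np.zeros(TERRAIN_DIM), phase)
```

Conditioning every frame on a cyclic phase variable is what keeps the network from averaging together incompatible poses, which is the usual cause of the floaty, unnatural motion the paper sets out to fix.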