DIY Raspberry Neural Network Sees All, Recognizes Some

As a fun project I thought I’d put Google’s Inception-v3 neural network on a Raspberry Pi to see firsthand how well it does at recognizing objects. It turned out to be not only fun to implement, but the way I implemented it also made for loads of fun for everyone I showed it to, mostly folks at hackerspaces and similar gatherings. And yes, some of it bordered on the pornographic (cheeky hackers).

An added bonus many pointed out is that, once installed, no internet access is required. This is state-of-the-art, standalone object recognition with no big brother knowing what you’ve been up to, unlike with that nosey Alexa.
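For a rough idea of what that kind of local, offline classification looks like, here is a minimal sketch using the Keras build of Inception-v3 rather than the original tutorial script; the image file name is just a placeholder, and the pretrained weights are downloaded once and then cached.

```python
# A minimal sketch of offline image classification with Inception-v3.
# This uses the Keras application model; the original project used Google's
# stock TensorFlow image-recognition tutorial, but the idea is the same.
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = InceptionV3(weights="imagenet")   # fetched once, then cached locally

img = image.load_img("snapshot.jpg", target_size=(299, 299))  # placeholder file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

for _, label, score in decode_predictions(model.predict(x), top=5)[0]:
    print(f"{label}: {score:.2%}")
```

After that first weight download, everything runs on the Pi itself.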

But will it lead to widespread useful AI? If a neural network can recognize every object around it, will that lead to human-like skills? Read on. Continue reading “DIY Raspberry Neural Network Sees All, Recognizes Some”

Self-Driving RC Cars with TensorFlow; Raspberry Pi or MacBook Onboard

You might think that you do not have what it takes to build a self-driving car, but you’re wrong. The mistake you’ve made is assuming that you’ll be controlling a two-ton death machine. Instead, you can give it a shot without the danger and on a relatively light budget. [Otavio] and [Will] got into self-driving vehicles using radio controlled (RC) cars.

[Otavio] slapped a MacBook Pro on an RC car to do the heavy lifting and called it carputer. The computer reads Hall effect sensor data from the motor to establish distance traveled (this can be used to calculate speed) and watches the stream from a webcam perched on the chassis. These two sources are fed into a neural network using TensorFlow. You train the system by driving the vehicle manually through the course a few times and then let it drive itself.
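The carputer code itself is [Otavio]’s, but the general shape of this kind of end-to-end setup is easy to sketch: a small convolutional network takes a downsampled camera frame plus the measured speed and regresses a steering command, trained on data logged while you drive the course by hand. Something like this, with layer sizes made up rather than taken from the actual project:

```python
# A rough sketch of the end-to-end idea: camera frame + speed in, steering out.
# Not [Otavio]'s actual carputer code -- just the shape of a behavioral-cloning
# model you could train on frames and steering values logged from manual laps.
import tensorflow as tf
from tensorflow.keras import layers

frame = tf.keras.Input(shape=(120, 160, 3), name="camera")   # downsampled webcam frame
speed = tf.keras.Input(shape=(1,), name="speed")             # from the Hall effect sensor

x = layers.Conv2D(24, 5, strides=2, activation="relu")(frame)
x = layers.Conv2D(36, 5, strides=2, activation="relu")(x)
x = layers.Conv2D(48, 3, strides=2, activation="relu")(x)
x = layers.Flatten()(x)
x = layers.concatenate([x, speed])
x = layers.Dense(64, activation="relu")(x)
steering = layers.Dense(1, activation="tanh", name="steering")(x)  # -1 .. +1

model = tf.keras.Model(inputs=[frame, speed], outputs=steering)
model.compile(optimizer="adam", loss="mse")
# model.fit([frames, speeds], steering_labels, epochs=10)  # data logged from manual driving
```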

In the video interview below, you get a look at the car and [Otavio] gives commentary on how the system works as we watch playback of a few races, including the SparkFun 2016 Autonomous Vehicle Competition. I apologize for the poor audio; they lost the booth lottery and were next door to an incredibly noisy robot band (video proof), so we were basically shouting at each other. But I think you’ll agree it’s worth it to get a look at the races. Continue reading “Self-Driving RC Cars with TensorFlow; Raspberry Pi or MacBook Onboard”

Neural Networks: You’ve Got It So Easy

Neural networks are all the rage right now with increasing numbers of hackers, students, researchers, and businesses getting involved. The last resurgence was in the 80s and 90s, when there was little or no World Wide Web and few neural network tools. The current resurgence started around 2006. From a hacker’s perspective, what tools and other resources were available back then, what’s available now, and what should we expect for the future? For myself, a GPU on the Raspberry Pi would be nice.

Continue reading “Neural Networks: You’ve Got It So Easy”

Introduction To TensorFlow

I had great fun writing neural network software in the 90s, and I have been anxious to try creating some using TensorFlow.

Google’s machine intelligence framework is the new hotness right now, and when TensorFlow became installable on the Raspberry Pi, working with it became very easy. In a short time I made a neural network that counts in binary, so I thought I’d pass on what I’ve learned so far. Hopefully this makes it easier for anyone else who wants to try it, or for anyone who just wants some insight into neural networks.
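One way to read “counts in binary” is a network that learns to map a 3-bit number to the next one in the sequence. Here is a compact toy sketch of that idea, my own version rather than the article’s exact code, which builds the graph by hand:

```python
# A compact sketch of the "counts in binary" idea: given a 3-bit number,
# the network learns to output the next number (111 wraps around to 000).
# This is a toy stand-in for the article's hand-built graph, using Keras instead.
import numpy as np
import tensorflow as tf

bits = 3

def to_bits(n):
    """Big-endian list of bits, e.g. 5 -> [1, 0, 1]."""
    return [(n >> i) & 1 for i in range(bits - 1, -1, -1)]

X = np.array([to_bits(n) for n in range(2 ** bits)], dtype=np.float32)
Y = np.array([to_bits((n + 1) % 2 ** bits) for n in range(2 ** bits)], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="tanh", input_shape=(bits,)),
    tf.keras.layers.Dense(bits, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.05), loss="mse")
model.fit(X, Y, epochs=2000, verbose=0)

print(np.round(model.predict(X)))   # should step through 001, 010, ... and wrap to 000
```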

Continue reading “Introduction To TensorFlow”

Ten Minute TensorFlow Speech Recognition

Like a lot of people, we’ve been pretty interested in TensorFlow, Google’s neural network software. If you want to experiment with using it for speech recognition, you’ll want to check out [Silicon Valley Data Science’s] GitHub repository, which promises a fast setup for a speech recognition demo. It even covers what you need to install whether or not you have a CUDA GPU to accelerate processing.

Another interesting thing is the use of TensorBoard to visualize the resulting neural network. This tool serves up a page in your browser that shows what’s really going on inside the network. There’s also speech data in the repository, so it is practically a one-stop shop for getting started. If you haven’t seen TensorBoard in action, you might enjoy the video from Google, below.
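If you are curious how little it takes to point TensorBoard at a model of your own, here is a generic sketch: a throwaway toy model (nothing to do with the speech demo) trained with the TensorBoard callback, after which you aim the tensorboard command at the log directory.

```python
# A minimal, generic sketch of wiring up TensorBoard: train any Keras model
# with the TensorBoard callback, then browse the logs it writes out.
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 10).astype("float32")       # throwaway toy data
y = (x.sum(axis=1) > 5).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5,
          callbacks=[tf.keras.callbacks.TensorBoard(log_dir="logs")])

# Then, from a shell:  tensorboard --logdir logs   and open http://localhost:6006
```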

Continue reading “Ten Minute TensorFlow Speech Recognition”

Creepy Speaking Neural Networks

Tech artist [Alexander Reben] has shared some work in progress with us. It’s a neural network trained on various famous peoples’ speech (YouTube, embedded below). [Alexander]’s artistic goal is to capture the “soul” of a person’s voice, in much the same way as death masks of centuries past. Of course, listening to [Alexander]’s Rob Boss is no substitute for actually watching an old Bob Ross tape — indeed it never even manages to say “happy little trees” — but it is certainly recognizable as the man himself, and now we can generate an infinite amount of his patter.

Behind the scenes, he’s using WaveNet to train the networks. Basically, the algorithm splits an audio stream into chunks and tries to predict the next chunk based on what came before. Some pre-editing of the training audio was necessary (removing the laughter and applause from the Colbert track, for instance), but otherwise it was just plugged right in.
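WaveNet itself is a serious piece of machinery, but the core autoregressive trick, predicting the next audio sample from the samples before it using causal, dilated convolutions, fits in a short sketch. This toy version is nowhere near the real thing and is not [Alexander]’s setup; it just shows the skeleton:

```python
# A toy version of the WaveNet skeleton: causal, dilated 1-D convolutions that
# predict the next audio sample from everything that came before it.
import tensorflow as tf
from tensorflow.keras import layers

samples = tf.keras.Input(shape=(None, 1))           # a stretch of past audio samples
x = samples
for dilation in (1, 2, 4, 8, 16):                    # the real WaveNet stacks far more layers
    x = layers.Conv1D(32, kernel_size=2, dilation_rate=dilation,
                      padding="causal", activation="relu")(x)
next_sample = layers.Conv1D(256, kernel_size=1, activation="softmax")(x)  # 256 quantized levels

model = tf.keras.Model(samples, next_sample)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Training pairs: the quantized waveform as input, the same waveform shifted
# one sample ahead as the target; generation then feeds predictions back in.
```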

The network seems to over-emphasize sibilants; we’ve never heard Barack Obama hiss quite like that in real life. Feeding noise into machines that are set up as pattern-recognizers tends to push them to the limits. But in keeping with the name of this series of projects, the “unreasonable humanity of algorithms”, it does pretty well.

He’s also done the same thing with multiple speakers (also YouTube), in this case 110 people with different genders and accents. The variation across people leads to a smoother, more human sound, but it’s also not clearly anyone in particular. It’s meant to be continuously running out of a speaker inside a sculpture’s mouth. We’re a bit creeped out, in a good way.

We’ve covered some of [Alexander]’s work before, from the wince-inducing “Robot Bites Man” to the intellectual-conceptual “All Prior Art”. Keep it coming, [Alexander]!

Continue reading “Creepy Speaking Neural Networks”

Google Machine Learning Made Simple(r)

If you’ve looked at machine learning, you may have noticed that a lot of the examples are interesting but hard to follow. That’s why [Jostmey] created Naked Tensor, a bare-minimum example of using TensorFlow. The example is simple, just doing some straight line fits on some data points. One example shows how it is done in series, one in parallel, and another for an 8-million point dataset. All the code is in Python.
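Naked Tensor is [Jostmey]’s code, but the gist of a TensorFlow straight-line fit is only a few lines: make the slope and intercept variables, define a squared-error loss, and let gradient descent grind away. Here is a sketch along those lines with made-up data, not the repository’s exact code:

```python
# The gist of a TensorFlow straight-line fit (a sketch, not Naked Tensor itself):
# treat slope and intercept as variables and minimize squared error by gradient descent.
import numpy as np
import tensorflow as tf

xs = np.linspace(0, 1, 100).astype("float32")
ys = (2.0 * xs + 1.0 + np.random.normal(0, 0.05, 100)).astype("float32")  # made-up data

m = tf.Variable(0.0)   # slope
b = tf.Variable(0.0)   # intercept
opt = tf.keras.optimizers.SGD(learning_rate=0.5)

for _ in range(500):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((m * xs + b - ys) ** 2)
    opt.apply_gradients(zip(tape.gradient(loss, [m, b]), [m, b]))

print(m.numpy(), b.numpy())   # should land near the true slope 2.0 and intercept 1.0
```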

If you haven’t run into it yet, TensorFlow is an open source library from Google. To quote from its website:

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.

Continue reading “Google Machine Learning Made Simple(r)”