Perceptrons in C++

Last time, I talked about a simple kind of neural net called a perceptron that you can train to compute simple functions. For experimenting, I coded a simple example in Excel. That’s handy for changing things on the fly, but not so handy for putting the code in a microcontroller. This time, I’ll show you how the code looks in C++ and also tell you more about what you can do when faced with a more complex problem.
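
As a preview, here’s a minimal sketch of the idea (illustrative only, not the article’s actual listing): a perceptron is just a weighted sum pushed through a threshold, and the classic learning rule nudges each weight by the prediction error times its input. The example below trains one to behave as a two-input AND gate.

// Minimal perceptron sketch (illustrative, not the article's code):
// a weighted sum with a hard threshold, trained on the AND truth table.
#include <cstdio>

struct Perceptron {
    double w[2] = {0.0, 0.0};   // one weight per input
    double bias = 0.0;
    double rate = 0.1;          // learning rate

    int predict(const double x[2]) const {
        double sum = bias + w[0] * x[0] + w[1] * x[1];
        return sum > 0.0 ? 1 : 0;
    }

    // Classic perceptron rule: nudge weights by (target - output) * input.
    void train(const double x[2], int target) {
        int err = target - predict(x);
        w[0] += rate * err * x[0];
        w[1] += rate * err * x[1];
        bias += rate * err;
    }
};

int main() {
    const double inputs[4][2] = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    const int targets[4] = {0, 0, 0, 1};   // AND truth table

    Perceptron p;
    for (int epoch = 0; epoch < 20; ++epoch)
        for (int i = 0; i < 4; ++i)
            p.train(inputs[i], targets[i]);

    for (int i = 0; i < 4; ++i)
        std::printf("%g AND %g -> %d\n", inputs[i][0], inputs[i][1], p.predict(inputs[i]));
    return 0;
}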

Continue reading “Perceptrons in C++”

Machine Learning: Foundations

When you want a person to do something, you train them. When you want a computer to do something, you program it. However, there are ways to make computers learn, at least in some situations. One technique that makes this possible is the perceptron learning algorithm. A perceptron is a computer simulation of a neuron, and there are various ways to change the perceptron’s behavior based on either example data or a method to determine how good (or bad) some outcome is.

What’s a Perceptron?

I’m no biologist, but apparently a neuron has a bunch of inputs, and if the stimulation on those inputs reaches a certain level, the neuron “fires,” which means it stimulates the input of another neuron further down the line. Not all inputs are created equal: in the mathematical model, each input gets its own weight. Input A might be on a hair trigger, while it might take inputs B and C active together to wake up the neuron in question.
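
To put rough numbers on that (made up for illustration, not taken from the article): give input A a big weight and B and C smaller ones, and A can trip the threshold on its own while B and C only manage it together.

// Illustrative only: hand-picked weights and threshold, not learned values.
#include <cstdio>

bool fires(double a, double b, double c) {
    const double wA = 1.0, wB = 0.6, wC = 0.6;   // A is on a hair trigger
    const double threshold = 0.9;
    return wA * a + wB * b + wC * c >= threshold;
}

int main() {
    std::printf("A alone : %s\n", fires(1, 0, 0) ? "fires" : "quiet");   // fires
    std::printf("B alone : %s\n", fires(0, 1, 0) ? "fires" : "quiet");   // quiet
    std::printf("B and C : %s\n", fires(0, 1, 1) ? "fires" : "quiet");   // fires
    return 0;
}
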
Continue reading “Machine Learning: Foundations”

Self-Driving R/C Car Uses An Intel NUC

Self-driving cars are something we are continually told will be the Next Big Thing. It’s nothing new; we’ve seen several decades of periodic demonstrations of the technology as it has evolved. Now we have real prototype cars on real roads rather than test tracks, and though they are billion-dollar research vehicles from organisations with deep pockets and a long view, it is starting to seem that this is a technology we have a real chance of seeing at a consumer level.

A self-driving car may seem beyond the abilities of a Hackaday reader, but while it might be difficult to produce safe collision avoidance for a full-sized car on public roads, it’s certainly not impossible to produce something with slightly more modest capabilities. [Jaimyn Mayer] and [Kendrick Tan] have done just that, creating a self-driving R/C car that can follow a complex road pattern without human intervention.

The NUC’s-eye view. The green line is a human’s steering, the blue line the computed steering.

Unexpectedly they have eschewed the many ARM-based boards, instead going for an Intel NUC mini-PC powered by a Core i5 as the brains of the unit. It’s powered by a laptop battery bank and takes input from a webcam. Direction and throttle are computed by the NUC and sent to an Arduino, which handles the car control. There is also a radio control channel allowing the car to be switched between autonomous, human-controlled, and emergency-stop modes.
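
The Arduino’s end of that link can be very small. Here’s a hedged sketch of what it might look like (the pin numbers and the “steering,throttle” serial format are assumptions for illustration, not the project’s actual protocol):

// Hypothetical receiver sketch: the NUC sends "steering,throttle\n" pairs
// over USB serial; the Arduino forwards them to the steering servo and ESC.
#include <Servo.h>

Servo steering;    // steering servo
Servo throttle;    // ESC driven with servo-style pulses

void setup() {
  Serial.begin(115200);
  steering.attach(9);    // assumed servo pin
  throttle.attach(10);   // assumed ESC pin
}

void loop() {
  if (Serial.available()) {
    int steerAngle = Serial.parseInt();   // 0-180 degrees
    int throttleUs = Serial.parseInt();   // 1000-2000 microseconds
    if (Serial.read() == '\n') {
      steering.write(constrain(steerAngle, 0, 180));
      throttle.writeMicroseconds(constrain(throttleUs, 1000, 2000));
    }
  }
}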

They go into detail on the polarizing and neutral density filters they used with their webcam, something that may make worthwhile reading for anyone interested in machine vision. All their code is open source and can be found linked from their write-up. Meanwhile, the video below the break shows their machine on their test circuit, completing it with varying levels of success.

Continue reading “Self-Driving R/C Car Uses An Intel NUC”

TensorFlow Robot Recognizes Objects

Children can do lots of things that robots and computers have trouble with. Climbing stairs, for example, is a tough thing for a robot. Recognizing objects is another area where humans are generally much better than robots. Kids can recognize blocks, shapes, colors, and extrapolate combinations and transformations.

Google’s open-source TensorFlow software can help. It is a machine learning system used in Google’s own speech recognition, search, and other products. It is also used in quite a few non-Google projects. [Lukas Biewald] recently built a robot around some stock pieces (including a Raspberry Pi) and enlisted TensorFlow to allow the robot to recognize objects. You can see a video of the device below.
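
The project itself runs TensorFlow on the Raspberry Pi (typically through the Python API). Purely as a hedged sketch, classifying a camera frame against a frozen graph looks roughly like this from TensorFlow’s C++ API; the file name and the “input”/“output” tensor names are placeholders that depend on the model you export.

// Rough sketch of TensorFlow C++ inference against a frozen graph.
// "model.pb", "input" and "output" are placeholder names, and the dummy
// zero image stands in for a real camera frame.
#include <memory>
#include <vector>
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
    using namespace tensorflow;

    GraphDef graph_def;
    if (!ReadBinaryProto(Env::Default(), "model.pb", &graph_def).ok())
        return 1;   // couldn't load the frozen model

    std::unique_ptr<Session> session(NewSession(SessionOptions()));
    if (!session->Create(graph_def).ok())
        return 1;

    // A dummy 224x224 RGB image; the robot would fill this from its camera.
    Tensor image(DT_FLOAT, TensorShape({1, 224, 224, 3}));
    image.flat<float>().setZero();

    std::vector<Tensor> outputs;
    session->Run({{"input", image}}, {"output"}, {}, &outputs);

    // outputs[0] now holds per-class scores; the highest one is the guess.
    return 0;
}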

Continue reading “TensorFlow Robot Recognizes Objects”

Hallucinating Machines Generate Tiny Video Clips

Hallucination is the erroneous perception of something that’s actually absent, or in other words: a possible interpretation of training data. Researchers from MIT and UMBC have developed and trained a generative machine-learning model that learns to generate tiny videos at random. The hallucination-like, 64×64 pixel clips are somewhat plausible, but also a bit spooky.

The machine-learning model behind these artificial clips is capable of learning from unlabeled “in-the-wild” training videos and relies mostly on the temporal coherence of subsequent frames as well as the presence of a static background. It learns to disentangle foreground objects from the background and extracts the overall dynamics from the scenes. The trained model can then be used to generate new clips at random (as shown above), or from a static input image (as shown in pairs below).
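
As far as the description goes, that disentangling boils down to the network predicting a per-pixel mask that blends a moving foreground over a static background for each frame. Here is a sketch of that compositing step, with hypothetical buffers, just to make the idea concrete:

// Hypothetical compositing step: blend generated foreground pixels over a
// static background using a per-pixel mask in [0, 1] (1 = foreground).
// In the actual model, something like this happens inside the network for
// every generated frame.
#include <cstddef>
#include <vector>

std::vector<float> composite(const std::vector<float>& mask,
                             const std::vector<float>& foreground,
                             const std::vector<float>& background) {
    std::vector<float> frame(mask.size());
    for (std::size_t i = 0; i < mask.size(); ++i)
        frame[i] = mask[i] * foreground[i] + (1.0f - mask[i]) * background[i];
    return frame;
}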

Currently, the team limits the clips to a resolution of 64×64 pixels and a duration of 32 frames in order to decrease the amount of required training data, which still comes to 7 TB. Despite obvious deficiencies in terms of photorealism, the little clips have been judged “more realistic” than real clips by about 20 percent of the participants in a psychophysical study the team conducted. The code for the project (Torch7/LuaJIT) can already be found on GitHub, together with a pre-trained model. The project will also be shown in December at the 2016 NIPS conference.

Neural Network Targets Cats with a Sprinkler System

It’s overkill, but it’s really cool. [Bob Bond] took an NVIDIA Jetson TX1 single-board computer and a webcam and wirelessly combined them with his lawn sprinklers. Now, when his neighbors’ cats come to poop in his yard, a carefully trained neural network detects them and gets them wet.

Sure, this could have been done with a simple motion sensor, but if the neural network discriminates sufficiently well between cats and (for instance) his wife, this is an improved solution for sure. Because the single-board computer he’s chosen for the project has a ridiculous amount of horsepower, he can afford to do a lot of image processing, so there’s a chance that everyone on two legs will stay dry. And the code is up on GitHub for you to see, if you’re interested.
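
The control loop presumably boils down to something like the hedged sketch below; detect_cat() and set_sprinkler() are hypothetical stand-ins for the trained network and the valve hardware, and the 0.8 confidence threshold is made up.

// Hedged sketch of the watch-and-squirt loop. Only the OpenCV capture calls
// are real; detect_cat() and set_sprinkler() are hypothetical placeholders.
#include <opencv2/opencv.hpp>
#include <chrono>
#include <cstdio>
#include <thread>

float detect_cat(const cv::Mat& frame) {
    (void)frame;
    return 0.0f;   // replace with inference using the Jetson's trained network
}

void set_sprinkler(bool on) {
    std::printf("sprinkler %s\n", on ? "ON" : "off");   // replace with valve I/O
}

int main() {
    cv::VideoCapture cam(0);   // the webcam watching the yard
    cv::Mat frame;
    while (cam.read(frame)) {
        bool cat = detect_cat(frame) > 0.8f;   // made-up confidence threshold
        set_sprinkler(cat);
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    return 0;
}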

[Bob] promises more detail about the neural network in the future. We can’t wait. (And we’d love to see a sentry-turret style build in the future. Think of the water savings!)

Via the NVIDIA blog, and thanks [Jaqen] for the tip!

World’s Tiniest Violin Uses Radar and Machine Learning

The folks at [Design I/O] have come up with a way for you to play the world’s tiniest violin by rubbing your fingers together and actually have it play a violin sound. For those who don’t know: when you want to express mock sympathy for someone’s complaints, you can rub your thumb and index finger together and say, “You hear that? It’s the world’s smallest violin and it’s playing just for you.” Except that now they can actually hear the violin, while your gestures control the volume and playback.

[Design I/O] combined a few technologies to accomplish this. The first is Google’s Project Soli, a tiny radar on a chip. Project Soli’s goal is to do away with physical controls by using a miniature radar for touchless gesture interaction. Sliding your thumb across the side of your outstretched index finger, for example, can be interpreted as moving a slider to change the numerical value of something, perhaps turning up the air conditioner in your car. Check out Google’s cool demo video of their radar and gestures below.

Project Soli’s radar is the input side for the other intriguing technology here: the Wekinator, a free, open-source machine-learning tool intended for artists and musicians. The examples on their website paint an exciting picture. You give Wekinator inputs and outputs and then tell it to train its model.

The output side in this case is violin music. The input is whatever the radar detects. Wekinator does the heavy lifting for you: just give it input such as radar-monitored finger movements, and it’ll learn your chosen gestures and produce the appropriately trained output.

[Design I/O] is likely doing more than just using Wekinator’s front end, as they’re also using openFrameworks, an open-source C++ toolkit. Also interesting is Wekinator’s use of the Open Sound Control (OSC) protocol for communicating over the network to get its inputs and deliver its outputs. You can see [Design I/O]’s end result demonstrated in the video below.
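
To give a flavor of how light that input side is, here’s a hedged C++ sketch (not [Design I/O]’s code) that pushes a few feature values at Wekinator over OSC using the oscpack library; the /wek/inputs address and port 6448 are Wekinator’s usual defaults, and the three numbers stand in for whatever the radar front end reports.

// Hedged sketch: send one frame of input features to Wekinator over OSC
// using oscpack. Address/port are Wekinator's usual defaults; the feature
// values are made-up placeholders for radar-derived finger-motion data.
#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

int main() {
    UdpTransmitSocket socket(IpEndpointName("127.0.0.1", 6448));

    char buffer[1024];
    osc::OutboundPacketStream packet(buffer, sizeof(buffer));
    packet << osc::BeginMessage("/wek/inputs")
           << 0.42f << 0.07f << 0.91f   // e.g. three gesture features
           << osc::EndMessage;

    socket.Send(packet.Data(), packet.Size());
    return 0;
}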

Continue reading “World’s Tiniest Violin Uses Radar and Machine Learning”