Google Builds A Synthesizer With Neural Nets And Raspberry Pis.

AI is the new hotness! It’s 1965 or 1985 all over again! We’re in the AI Renaissance Mk. 2, and Google, in an attempt to showcase how AI can allow creators to be more… creative, has released a synthesizer built around neural networks.

The NSynth Super is an experimental physical interface from Magenta, a research group within the Big G that explores how machine learning tools can create art and music in new ways. The NSynth Super does this by mashing together a Kaoss Pad, samples that sound like General MIDI patches, and a neural network.

Here’s how the NSynth works: The NSynth hardware accepts MIDI signals from a keyboard, DAW, or whatever. These MIDI commands are fed into an openFrameworks app that uses pre-compiled (with Machine Learning™!) samples from various instruments. This openFrameworks app combines and mixes these samples according to whatever the user inputs via the NSynth controller. If you’ve ever wanted to hear what the combination of a snare drum and a bassoon sounds like, this does it. Basically, you’re looking at a Kaoss-pad-controlled rompler that takes four samples and combines them, with the power of Neural Networks. The project comes with a set of pre-compiled, neural-network-generated samples, but you can use this interface to mix your own samples, provided you have a beefy computer with an expensive GPU.
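To make the four-corner idea concrete, here’s a minimal sketch of Kaoss-pad-style blending in Python. This is not the NSynth Super’s openFrameworks code, and on the real device the interesting interpolation happens in the network’s latent space before the samples are ever rendered; this only illustrates the touch-position mixing:

```python
import numpy as np

# Hedged sketch: one instrument sample per corner, blended by
# bilinear weights derived from an (x, y) touch position in [0, 1]^2.
# Function and variable names are illustrative, not from the project.

def mix_corners(nw, ne, sw, se, x, y):
    """Blend four equal-length sample buffers; x, y in [0, 1]."""
    return ((1 - x) * (1 - y) * nw + x * (1 - y) * ne +
            (1 - x) * y * sw + x * y * se)

# Touching dead center gives an equal blend of all four "instruments"
# (stand-in sine waves here, since we don't have the real samples).
t = np.linspace(0, 1, 48000)
snare, bassoon, flute, organ = (np.sin(2 * np.pi * f * t)
                                for f in (200, 230, 440, 110))
blend = mix_corners(snare, bassoon, flute, organ, 0.5, 0.5)
```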

Not to undermine the work that went into this project, but thousands of synth heads will be disappointed by it. The creation of new audio samples requires training with a GPU; the hardest and most computationally expensive part of neural networks is the training, not the performance. Without a nice graphics card, you’re limited to whatever samples Google has provided here.

Since this is Open Source, all the files are available, and because the project is built around a Raspberry Pi in a laser-cut enclosure, there is huge demand for this machine learning Kaoss pad. The good news is that there’s a group buy on, and there’s already a seller on Tindie should you want a bare PCB. You can, of course, roll your own, and the Digikey cart for all the SMD parts comes to about $40 USD. This doesn’t include the OLED ($2 from China), the Raspberry Pi, or the laser-cut enclosure, but it’s a start. Of course, for those of you who haven’t passed the 0805 SMD solder test, it looks like a few people will be selling assembled versions (less Pi) for $50-$60.

Is it cool? Yes, but a basement-bound producer who wants to add this to a track will quickly learn that training machine learning algorithms costs far more than playing with them. The hardware is neat, but brace yourself for disappointment, just as AI fans did in the late 60s and the late 80s. We’re in the AI Renaissance Mk. 2, after all.

Continue reading “Google Builds A Synthesizer With Neural Nets And Raspberry Pis.”

This Radio Gets Pour Reception

When was the last time you poured water onto your radio to turn it on?

Designed collaboratively by [Tore Knudsen], [Simone Okholm Hansen] and [Victor Permild], Pour Reception seeks to challenge what constitutes an interface, and how elements of play can create a new experience for a relatively everyday object.

Lacking buttons or knobs of any kind, Pour Reception appears to be an inert acrylic box with two glasses resting on top. A detachable instruction card cues the need for water, and pouring some into the glasses wakes the radio.

Continue reading “This Radio Gets Pour Reception”

AI Listens to Radio

We’ve seen plenty of examples of neural networks listening to speech, reading characters, or identifying images. KickView had a different idea. They wanted a neural network to recognize radio signals. Not just any radio signals, but Orthogonal Frequency Division Multiplexing (OFDM) waveforms.

OFDM is a modulation method used by WiFi, cable systems, and many other systems. In particular, they look at an 802.11g signal with a bandwidth of 20 MHz. The question is: given a receiver for 802.11g, how can you reliably detect that an 802.11ac signal (which can be up to 160 MHz wide) is using your channel? To demonstrate the technique, they decided to detect 20 MHz signals using only a 5 MHz bandwidth.
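The general trick is to turn the RF problem into an image problem. Here’s a hedged sketch of that approach in Python, not KickView’s actual code: render complex IQ captures as log-power spectrograms and train a small CNN to flag whether a wideband OFDM signal overlaps the narrow slice you can see. The sample rate matches the demonstration; the FFT size, time window, and network shape are illustrative assumptions:

```python
import numpy as np
from scipy.signal import spectrogram
import tensorflow as tf

FS = 5e6       # 5 MHz capture bandwidth, per the demonstration
N_FFT = 128    # frequency bins (assumed)
N_TIME = 128   # time bins per training example (assumed)

def iq_to_image(iq):
    """Turn a buffer of complex IQ samples into a spectrogram 'image'."""
    _, _, sxx = spectrogram(iq, fs=FS, nperseg=N_FFT, noverlap=0,
                            return_onesided=False)  # two-sided for complex input
    img = np.log10(sxx[:, :N_TIME] + 1e-12)         # log power
    return img[..., np.newaxis].astype(np.float32)

# Small binary classifier: "is a wideband OFDM signal present?"
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu',
                           input_shape=(N_FFT, N_TIME, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
```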

Continue reading “AI Listens to Radio”

Neural Networking: Robots Learning From Video

Humans are very good at watching others and imitating what they do. Show someone a video of flipping a switch to turn on a CNC machine and after a single viewing they’ll be able to do it themselves. But can a robot do the same?

Bear in mind that we want the demonstration video to be of a human arm and hand flipping the switch. When the robot does it, the camera that is its eye will be seeing its robot arm and gripper. So somehow it’ll have to know that its robot parts are equivalent to the human parts in the demonstration video. Oh, and the switch in the demonstration video may be a different model and make, and the CNC machine may be a different one, though we’ll at least put the robot within reach of its switch.

Sound difficult?

Researchers from Google Brain and the University of Southern California have done it. In the paper describing how, they talk about a few different experiments, but we’ll focus on just one: getting a robot to imitate pouring a liquid from a container into a cup.

Continue reading “Neural Networking: Robots Learning From Video”

AI Prosthesis Is Music To Our Ears

Prostheses are a great help to those who have lost limbs, or who never had them in the first place. Over the past few decades there has been a great deal of research done to make these essential devices more useful, creating prostheses that are capable of movement and more accurately recreating the functions of human body parts. At Georgia Tech, they’re working on just that, with the help of AI.

[Jason Barnes] lost his arm in a work accident, which prevented him from playing the piano the way he used to. The researchers at Georgia Tech worked with him, eventually producing a prosthetic arm that, unlike most, actually has individual finger control. This is achieved through the use of an ultrasound probe, which is used to detect muscle movements elsewhere on his body, with enough detail to allow the control of individual fingers. This is done through a TensorFlow-based neural network which analyzes the ultrasound data to determine which finger the user is trying to move. The use of ultrasound was the major breakthrough which made this possible; previous projects have often relied on electromyogram sensors to read muscle impulses, but these lack the resolution required.
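For flavor, here’s a minimal sketch of the kind of classifier described: ultrasound frames in, a probability per finger out. The frame size, layer sizes, and five-way output are assumptions for illustration, not the Georgia Tech model:

```python
import tensorflow as tf

NUM_FINGERS = 5  # assumed: one class per finger

# Hedged sketch: a small CNN that maps a single-channel ultrasound
# frame (assumed 128x128) to a softmax over intended finger movements.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 5, activation='relu',
                           input_shape=(128, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(NUM_FINGERS, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```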

The prosthesis is nicknamed the “Skywalker arm”, after its similarities to the prostheses seen in the Star Wars films. It’s not [Jason]’s first advanced prosthetic, either – Georgia Tech has also equipped him with an advanced drumming prosthesis. This allows him to use two sticks with a single arm, the second stick using advanced AI routines to drum along with the music in the room.

It’s great to see music being used as a driver to create high-performance prosthetics and push the state of the art forward. We’re sure [Jason] enjoys performing with the new hardware, too. But perhaps you’d like to try something similar, even though you’ve got two hands already? Try this on for size.

Continue reading “AI Prosthesis Is Music To Our Ears”

Tensorflow Tutorial Uses Python

Around the Hackaday secret bunker, we’ve been talking quite a bit about machine learning and neural networks. There’s been a lot of renewed interest in the topic recently because of the success of TensorFlow. If you are adept at Python and remember your high school algebra, you might enjoy [Oliver Holloway’s] tutorial on getting started with TensorFlow in Python.

[Oliver] gives links on how to do the setup with notes on Python versions. Then he shows some basic setup operations. From there, he has the software “learn” how to classify random points that either fall into a circle or don’t. Granted, this is easy enough to do with traditional programming, so it isn’t a great practical example, but it is illustrative for learning purposes.

Given that it is easy to algorithmically decide which points are in the circle and which are not, it is simple to develop training data. It is also easy to look at the result and see how close it is to the actual circle. You’ll see that it takes a lot of slow learning before the result space looks like a circle and not a triangle or some other odd shape.
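If you want the flavor of the exercise without following along line by line, here’s a hedged sketch of the same idea (the details differ from [Oliver]’s tutorial): generate labeled points, where the label says whether a point falls inside the unit circle, and let a small network learn the boundary:

```python
import numpy as np
import tensorflow as tf

# Training data is trivially easy to generate algorithmically:
# label each random point by whether it lies inside the unit circle.
rng = np.random.default_rng(0)
x = rng.uniform(-1.5, 1.5, size=(5000, 2)).astype(np.float32)
y = (np.sum(x**2, axis=1) < 1.0).astype(np.float32)

# A tiny fully-connected network learns the circular boundary.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(2,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(x, y, epochs=20, batch_size=64, verbose=0)
```

Plot the model’s predictions over a grid of points as training progresses and you can watch the jagged decision boundary slowly round itself into a circle.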

Continue reading “Tensorflow Tutorial Uses Python”

Smarter Phones In Your Hacks With TensorFlow Lite

One way to run a compute-intensive neural network on a hack has been to put a decent laptop onboard. But wouldn’t it be great if you could go smaller and cheaper by using a phone instead? If your neural network was written using Google’s TensorFlow framework then you’ve had the option of using TensorFlow Mobile, but it doesn’t use any of the phone’s accelerated hardware, and so it might not have been fast enough.

[Image: TensorFlow Lite architecture]

Google has just released a new solution, the developer preview of TensorFlow Lite for iOS and Android, and announced plans to support the Raspberry Pi 3. On Android, the bottom layer is the Android Neural Networks API, which makes use of the phone’s DSP, GPU and/or any other specialized hardware to speed up computations. Failing that, it falls back on the CPU.

Currently, fewer operators are supported than with TensorFlow Mobile, but more will be added. (Most of what you do in TensorFlow is done through operators, or ops. See our introduction to TensorFlow article if you need a refresher on how TensorFlow works.) The Lite version is intended to be the successor to Mobile. As with Mobile, you’d only do inference on the device. That means you’d train the neural network elsewhere, perhaps on a GPU-rich desktop or on a GPU farm over the network, and then make use of the trained network on your device.
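That train-elsewhere, infer-on-device split looks roughly like this in practice. Here’s a minimal sketch in Python; note that the converter API has changed across TensorFlow versions, so this uses the modern tf.lite.TFLiteConverter rather than the tooling that shipped with the original developer preview, and the model filename is hypothetical:

```python
import numpy as np
import tensorflow as tf

# Desktop side: convert a trained Keras model into a .tflite
# flatbuffer that you'd ship to the phone.
model = tf.keras.models.load_model('trained_model.h5')  # hypothetical file
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open('model.tflite', 'wb') as f:
    f.write(converter.convert())

# Device side (or anywhere, for testing): run inference with the
# lightweight interpreter instead of the full TensorFlow runtime.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'],
                       np.zeros(inp['shape'], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out['index']))
```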

What are we envisioning here? How about replacing the MacBook Pro on the self-driving RC cars we’ve talked about with a much smaller, lighter and less power-hungry Android phone? The phone even has a camera and an IMU built-in, though you’d need a way to talk to the rest of the hardware in lieu of GPIO.

You can try out TensorFlow Lite fairly easily by going to their GitHub and downloading a pre-built binary. We suspect that’s what was done to produce the first of the demonstration videos below.

Continue reading “Smarter Phones In Your Hacks With TensorFlow Lite”