Piano Genie Trained a Neural Net to Play 88-Key Piano with 8 Arcade Buttons

Want to sound great on a piano using only your coding skills? Enter Piano Genie, the result of a research project from Google AI and DeepMind. You press any of eight buttons while a neural network makes sure the piano plays something cool, compensating in real time for what’s already been played.

Almost anyone new to playing music who sits down at a piano will produce a sound similar to that of a cat chasing a mouse through a tangle of kitchen pots. Who can blame them, given the sea of 88 inexplicable keys sitting before them? But they’ll quickly realize that playing keys in succession in one direction produces sounds with consistently rising or falling pitch. They’ll also learn that pressing keys for different lengths of time can improve the melody. But there are still 88 of them and plenty more to learn, such as which keys will sound harmonious when played together.

[Diagram: Piano Genie training architecture]

With Piano Genie, gone are the daunting 88 keys, replaced with a 3D-printed box of eight arcade-style buttons which the researchers made by following this Adafruit tutorial. A neural network maps those eight buttons to something meaningful on the 88-key piano keyboard. Being a neural network, the mapping isn’t a fixed one-to-one or even one-to-many. Instead, it’s trained to play something that should sound good given what was played previously, and it won’t necessarily be the same each time.

To train it, they used data from approximately 1,400 performances from the International Piano e-Competition. The result can be quite good, as you can see and hear in the video below. The buttons feed into a computer, but the computer plays the result on an actual piano.

For training, the system actually consists of two neural networks. One is an encoder, in this case a recurrent neural network (RNN), which takes piano sequences and learns to output a vector. In the diagram, the vector is in the middle and has one element for each of the eight buttons. The second network is the decoder, also an RNN. It’s trained to turn that eight-element vector back into the same music that was fed into the encoder.
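To make the shape of that arrangement concrete, here’s a minimal, untrained toy version in PyTorch. This is not the authors’ model: the real Piano Genie quantizes the bottleneck into hard button presses and trains with additional losses, while this sketch uses a soft softmax stand-in, and every size here is made up for illustration.

```python
import torch
import torch.nn as nn

class TinyPianoGenie(nn.Module):
    """Toy encoder-decoder: 88 keys in, an 8-way 'button' bottleneck,
    88 keys out. Sizes are illustrative, not the paper's."""
    def __init__(self, n_keys=88, n_buttons=8, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(n_keys, hidden, batch_first=True)
        self.to_button = nn.Linear(hidden, n_buttons)  # squeeze to 8 buttons
        self.decoder = nn.LSTM(n_buttons, hidden, batch_first=True)
        self.to_key = nn.Linear(hidden, n_keys)        # expand back to 88 keys

    def forward(self, notes_onehot):
        h, _ = self.encoder(notes_onehot)
        # Soft stand-in for the hard button assignment of the real model
        buttons = torch.softmax(self.to_button(h), dim=-1)
        g, _ = self.decoder(buttons)
        return self.to_key(g)  # logits over the 88 keys, per time step

# Toy usage: one random 16-note sequence, one-hot over 88 keys
x = torch.zeros(1, 16, 88)
x[0, torch.arange(16), torch.randint(0, 88, (16,))] = 1.0
logits = TinyPianoGenie()(x)
print(logits.shape)  # torch.Size([1, 16, 88])
# Training would minimize cross-entropy between these logits and the input.
```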

Once trained, only the decoder is used. The eight-button keyboard feeds into the vector, and the decoder outputs suitable notes. The fact that they’re RNNs means that rather than learning a fixed one-to-many mapping, the network takes into account what was previously played in order to come up with something which hopefully sounds pleasing. To give the user a little more creative control, they also trained it to recognize when the user is playing a rising or falling melody and to output the same. See their paper for how they turned polyphonic sound into monophonic and back again.
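Continuing the toy model above, inference looks roughly like this: each button press becomes a one-hot vector, and the decoder’s recurrent state is what lets the same button produce different notes depending on what came before. The model here is untrained, so the printed notes are meaningless; the point is the stateful, step-by-step decoding.

```python
import torch

model = TinyPianoGenie()  # defined in the sketch above
state = None              # LSTM state carries the playing history
for button in [0, 2, 4, 7, 4, 2]:  # hypothetical button presses
    b = torch.zeros(1, 1, 8)
    b[0, 0, button] = 1.0
    h, state = model.decoder(b, state)        # one time step at a time
    note = model.to_key(h).argmax(dim=-1).item()
    print(f"button {button} -> key {note}")
```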

If you prefer a different style of music, you can train it on a MIDI collection of your own choosing using their open-sourced model. Or you can try it out as-is right now through their web interface. I’ll admit, I started out just banging on it, producing the same noise I’d get if I hammered away randomly on a piano. Then I switched to thinking in terms of melodies and the result started sounding better. So some musical background and practice still help. For the video below, the researcher admits to having already played for a few hours.

This isn’t the first project we’ve covered from these Google researchers. Another was this music synthesizer, again using neural networks, but this time with a Raspberry Pi. And if our discussion of recurrent neural networks went a bit over your head, check out our overview of neural networks.

Continue reading “Piano Genie Trained a Neural Net to Play 88-Key Piano with 8 Arcade Buttons”

Google Discovers Google+ Servers Are Still Running

Google is pulling the plug on their social network, Google+. Users still have the better part of a year to say their goodbyes, but if the fledgling social network was a ghost town before, news of its imminent shutdown isn’t likely to liven the place up. A quick check of the site as of this writing reveals many users are already posting their farewell messages, and while there’s some rallying behind petitions to keep the lights on, the majority realize that once Google has fallen out of love with a project there’s little chance of a reprieve.

To say that this is a surprise would be disingenuous. We’d wager a lot of you already thought it was gone, honestly. It’s no secret that Google’s attempt at a “Facebook Killer” was anything but, and while there was a group of dedicated users to be sure, it never attained anywhere near the success of its competition.

According to a blog post from Google, the network’s anemic user base isn’t the only reason they’ve decided to wind down the service. A previously undisclosed security vulnerability also hastened its demise, a revelation which will particularly sting those who joined for the privacy-first design Google touted. While this fairly transparent postmortem allows us to answer what ended Google’s grand experiment in social networking, there’s still one question left unanswered: where are the soon-to-be-orphaned Google+ users supposed to go?

Continue reading “Google Discovers Google+ Servers Are Still Running”

It’s The Web, Basically

If you are of a certain age, you probably learned to program in Basic. Even if you aren’t, a lot of microcontroller hobbyists got started on the Basic Stamp, and there are plenty of other places where the venerable language still hides out. But if you want to write cool browser applications, you have to write JavaScript, right? Not anymore: Google will now let you code your web pages in Basic. Known as WWWBasic, this is, of course, a JavaScript hack that you can load remotely into a web page and then have your page use Basic for customization. You can even import the thing into Node.js and use Basic inside your JavaScript, although it is hard to think of why you’d want to.
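Judging by the project’s README, dropping Basic into a page looks something like this; the script URL and the text/basic type are taken from the repo’s examples, so double-check them against the current documentation:

```html
<!DOCTYPE html>
<!-- Load WWWBasic, then write the page logic in Basic -->
<script src="https://google.github.io/wwwbasic/wwwbasic.js"></script>
<script type="text/basic">
PRINT "Hello, World!"
FOR i = 1 TO 5
  PRINT i
NEXT i
</script>
```

For Node.js, the documentation shows importing the module and handing it Basic source as a string; see the repo for the details.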

According to the project’s documentation — which is pretty sparse so far, we’re afraid — the Basic program is compiled into JavaScript on page load. There are a few examples, so you can generally pick up what’s available to use. There are graphics, the ability to read a keyboard key, and a way to handle the mouse.

Continue reading “It’s The Web, Basically”

Don’t Look Now, But Your Necklace is Listening

There was a time when the average person was worried about the government or big corporations listening in on their every word. It was a quaint era, full of whimsy and superstition. Today, a good deal of us are paying for the privilege to have constantly listening microphones in multiple rooms of our house, largely so we can avoid having to use our hands to turn the lights on and off. Amazing what a couple years and a strong advertising push can do.

So if we’re going to be funneling everything we say to one or more of our corporate overlords anyway, why not make it fun? For example, check out this speech-to-image necklace developed by [Stephanie Nemeth]. As you speak, the necklace listens in and finds (usually) relevant images to display. Conceptually this could be used as an assistive communication technology, but we’re cool with it being a meme display device for now.

Hardware-wise, the necklace is just a Raspberry Pi 3, a USB microphone, and a HyperPixel 4.0 touch screen. The Pi Zero would arguably be the better choice for hanging around your neck, but [Stephanie] notes that there are some compatibility issues with Node.js on the Zero’s ARMv6 processor. She details a workaround, but says there’s no guarantee it will work with her code.

The JavaScript software records audio from the microphone with SoX, and then runs it through the Google Cloud Speech-to-Text service to figure out what the wearer is saying. Finally, it does a Google image search on the captured words using the Custom Search JSON API to find pictures to show on the display. There’s a user-supplied list of words to ignore so it doesn’t try looking up images for function words (such as “and” or “however”), though presumably it can also be used to blacklist certain imagery you might not want popping up on your chest in mixed company.
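[Stephanie]’s code is JavaScript, but the pipeline is easy to sketch in Python. Everything below is a hedged approximation rather than her implementation: it assumes sox is installed, Google Cloud credentials are configured for the google-cloud-speech client, and API_KEY and CSE_ID are placeholder Custom Search credentials.

```python
import subprocess
import requests
from google.cloud import speech

IGNORE = {"and", "the", "a", "of", "however"}  # user-supplied stop words

def record(path="clip.wav", seconds=3):
    # SoX: grab a few seconds of 16 kHz mono audio from the default mic
    subprocess.run(["sox", "-d", "-r", "16000", "-c", "1", path,
                    "trim", "0", str(seconds)], check=True)

def transcribe(path):
    # Google Cloud Speech-to-Text on the recorded clip
    client = speech.SpeechClient()
    audio = speech.RecognitionAudio(content=open(path, "rb").read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US")
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

def first_image_url(query):
    # Custom Search JSON API, restricted to image results
    r = requests.get("https://www.googleapis.com/customsearch/v1",
                     params={"key": "API_KEY", "cx": "CSE_ID",
                             "q": query, "searchType": "image", "num": 1})
    items = r.json().get("items", [])
    return items[0]["link"] if items else None

record()
words = [w for w in transcribe("clip.wav").split() if w.lower() not in IGNORE]
if words:
    print(first_image_url(" ".join(words)))  # URL to show on the display
```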

We’d be interested in seeing somebody implement this software on a Raspberry Pi powered digital frame to display artwork that changes based on what the people in the room are talking about. Like in Antitrust, but without Tim Robbins offing anyone.

[Thanks to DarkSim905 for the tip.]

Modern Wizard Summons Familiar Spirit

In European medieval folklore, a practitioner of magic may call for assistance from a familiar spirit, who takes the form of an animal as a disguise. [Alex Glow] is our modern-day Merlin who invoked the magical incantations of 3D printing, Arduino, and Raspberry Pi to summon her familiar Archimedes: The AI Robot Owl.

The key attraction in this build is Google’s AIY Vision kit, specifically the vision processing unit that tremendously accelerates image classification tasks running on an attached Raspberry Pi Zero W. Instead of taking several seconds to analyze each image, classification can now run several times per second, all performed locally, with no connection to Google’s cloud required. (See our earlier coverage for more technical details.) The default demo application of a Google AIY Vision kit is a “joy detector” that looks for faces and attempts to determine if a face is happy or sad. We’ve previously seen this functionality mounted on a robot dog.
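Condensed from the kit’s stock joy detection demo, the Pi-side loop looks roughly like this. It assumes the AIY system image, whose aiy Python library drives the vision processing unit; treat the exact names as approximate and check the kit’s bundled examples.

```python
from picamera import PiCamera
from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

# The vision unit runs the face model on the camera feed;
# the Pi Zero W just reads back the results.
with PiCamera() as camera:
    with CameraInference(face_detection.model()) as inference:
        for result in inference.run():
            faces = face_detection.get_faces(result)
            if faces:
                joy = max(face.joy_score for face in faces)
                print(f"joy score: {joy:.2f}")  # ~1.0 happy, ~0.0 sad
```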

[Alex] aimed to go beyond the default app (and default box) to create Archimedes, who was to reward happy people with a sticker. As a moving robotic owl, Archimedes had far more crowd appeal than the vision kit’s default cardboard box. All the kit components have been integrated into Archimedes’ head. One eye is the expected Pi camera, the other eye is actually the kit’s piezo buzzer. The vision kit’s LED-illuminated button now tops the dapper owl’s hat.

Archimedes was created to join in Google’s promotional efforts. Their presence at this Maker Faire consisted of two tents: an introductory “Learn to Solder” tent where people could create a blinky LED badge, and a second tent focused on their line of AIY kits like this vision kit, filled with demos of what the kits can do aside from really cool robot owls.

Hopefully these promotional efforts helped many AIY kits find new homes in the hands of creative makers. It’s pretty exciting that such a powerful and inexpensive neural net processor is now widely available, and we look forward to many more AI-powered hacks to come.

Continue reading “Modern Wizard Summons Familiar Spirit”

Location Sharing with Google Home

With Google’s near-monopoly on the internet, it can be difficult to get around in cyberspace without encountering at least some aspect of this monolithic, data-gathering giant. It usually takes a concerted effort, but it is technically possible to do. While [Mat] is still using some Google products, he has at least figured out a way to get Google Home to work with location data without actually sharing that data with Google, which is a step in the right direction.

[Mat]’s goal was to use Google’s location sharing features through Google Home, but without the creepiness factor of Google knowing everything about his life, and also without the hassle of having to use Google Maps. He’s using a few things to pull this off: a Node-RED server running on a Raspberry Pi Zero, a free account from If This Then That (IFTTT), Tasker with the AutoRemote plugin, and a Google Maps API key. With all of that put together, and some configuration of IFTTT, he can ask his Google Assistant (or Google Home) for location data, all without sharing that data with Google.
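[Mat]’s glue logic lives in Node-RED flows, but as a rough Python stand-in for the Pi-side piece, the idea is an endpoint that accepts a lat/lon pushed by Tasker/AutoRemote, reverse-geocodes it with the Google Maps Geocoding API, and hands the address back when IFTTT asks. The endpoint path and MAPS_KEY are made up for illustration.

```python
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
last_location = {"address": "unknown"}

@app.route("/location", methods=["POST"])
def update_location():
    # Tasker/AutoRemote posts the phone's coordinates here
    lat, lon = request.form["lat"], request.form["lon"]
    r = requests.get("https://maps.googleapis.com/maps/api/geocode/json",
                     params={"latlng": f"{lat},{lon}", "key": "MAPS_KEY"})
    results = r.json().get("results", [])
    if results:
        last_location["address"] = results[0]["formatted_address"]
    return "ok"

@app.route("/location", methods=["GET"])
def read_location():
    # IFTTT (triggered by the Assistant) fetches this to build its reply
    return jsonify(last_location)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```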

This project is a great implementation of Google’s tools and a powerful use of IFTTT. And, as a bonus, it gets around some of the creepiness factor that Google tends to incorporate in their quest to know all the data.

Continue reading “Location Sharing with Google Home”

Google Lowers The Artificial Intelligence Bar With Complete DIY Kits

Last year, Google released an artificial intelligence kit aimed at makers, in two different flavors: Vision, to recognize people and objects, and Voice, to create a smart speaker. Now, Google is back with a new version to make it even easier to get started.

The main difference in this year’s (v1.1) kits is that they include some basic hardware, such as a Raspberry Pi and an SD card. While this might not be very useful to most Hackaday readers, who probably have a spare Pi (or 5) lying around, this is invaluable for novice makers or the educational market. These audiences now have access to an all-in-one solution to build projects and learn more about artificial intelligence.

We’ve previously seen toys, phones, and intercoms get upgrades with an AIY kit, but would love to see more! [Mike Rigsby] has used one in his robot dog project to detect when people are smiling. These updated kits are available at Target (Voice, Vision). If the kit is too expensive, our own [Inderpreet Singh] can show you how to build your own.

Via [BGR].