The Naughty AIs That Gamed The System

Artificial intelligence (AI) has been undergoing something of a renaissance over the last few years. There’s been plenty of research into neural networks and other technologies, often based around teaching an AI system to achieve certain goals or targets. However, this method of training is fraught with danger, because just like in the movies – the computer doesn’t always play fair.

It’s often very much a case of the AI doing exactly what it’s told, rather than exactly what you intended. Like a devious child who will gladly go to bed in the literal sense but not actually sleep, this can produce unexpected and often quite hilarious results. [Victoria] has created a master list of scholarly references regarding exactly this.

The list spans a wide range of cases. There’s the amusing evolutionary algorithm designed to create creatures capable of high-speed movement, which merely spawned very tall creatures that generated these speeds by falling over. More worryingly, there’s the AI trained to identify toxic and edible mushrooms, which simply picked up on the fact that it was presented with the two types in alternating order. This ended up being an unreliable model in the real world. Similarly, the model designed to assess malignancy of skin cancers determined that lesions photographed with rulers for scale were more likely to be cancerous.
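
To make that “tall creature” exploit concrete, here is a toy sketch (my own illustration, not the original researchers’ code) of how an optimizer games a sloppy fitness function. The intended goal is fast locomotion, but fitness is measured as the peak speed of any body point; since a rigid body of height h tipping over from upright reaches a tip speed of roughly sqrt(3·g·h), the “evolution” simply grows taller:

```python
import math
import random

G = 9.81  # gravitational acceleration, m/s^2

def peak_tip_speed(height):
    """Peak speed of the top of a uniform 'creature' of the given height
    as it tips over from upright (energy conservation for a rod pivoting
    about its base gives v = sqrt(3 * g * h))."""
    return math.sqrt(3 * G * height)

def fitness(height):
    # Intended: reward fast locomotion.
    # Actually specified: reward the fastest-moving body point, however achieved.
    return peak_tip_speed(height)

# A bare-bones hill climber standing in for the evolutionary algorithm.
height = 1.0  # metres
for generation in range(50):
    mutant = max(0.1, height + random.gauss(0, 0.2))
    if fitness(mutant) > fitness(height):
        height = mutant

print(f"Evolved 'creature' height: {height:.1f} m")
print(f"Peak speed achieved by falling over: {fitness(height):.1f} m/s")
# The specification is satisfied (high speed) while the intent (walking) is not.
```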

[Victoria] refers to this as “specification gaming”. One can draw parallels to classic sci-fi stories around the “Laws of Robotics”, where robots take such laws to their literal extremes, often causing great harm in the process. It’s an interesting discussion of the difficulty in training artificially intelligent systems to achieve their set goals without undesirable side effects.

We’ve seen plenty of work in this area before – like this use of evolutionary algorithms in circuit design.

Piano Genie Trained a Neural Net to Play 88-Key Piano with 8 Arcade Buttons

Want to sound great on a piano using only your coding skills? Enter Piano Genie, the result of a research project from Google AI and DeepMind. You press any of eight buttons while a neural network makes sure the piano plays something cool — compensating in real time for what’s already been played.

Almost anyone new to playing music who sits down at a piano will produce a sound similar to that of a cat chasing a mouse through a tangle of kitchen pots. Who can blame them, given the sea of 88 inexplicable keys sitting before them? But they’ll quickly realize that playing keys in succession in one direction will produce sounds with consistently increasing or decreasing pitch. They’ll also learn that pressing keys for different lengths of time can improve the melody. But there are still 88 of them and plenty more to learn, such as which keys will sound harmonious when played together.

[Diagram: Piano Genie training architecture]

With Piano Genie, gone are the daunting 88 keys, replaced with a 3D-printed box of eight arcade-style buttons which they made by following this Adafruit tutorial. A neural network maps those eight buttons to something meaningful on the 88-key piano keyboard. Being a neural network, the mapping isn’t a fixed one-to-one or even one-to-many. Instead, it’s trained to play something which should sound good taking into account what was played previously, and won’t necessarily be the same each time.

To train it, they use data from approximately 1,400 performances of the International Piano e-Competition. The result can be quite good, as you can see and hear in the video below. The buttons feed into a computer, but the computer plays the result on an actual piano.

For training, the neural network really consists of two networks. One is an encoder, in this case a recurrent neural network (RNN) which takes piano sequences and learns to output a vector. In the diagram, the vector is in the middle and has one element for each of the eight buttons. The second network is the decoder, also an RNN. It’s trained to turn that eight-element vector back into the same music which was fed into the encoder.

Once trained, only the decoder is used. The eight-button keyboard feeds into the vector, and the decoder outputs suitable notes. The fact that they’re RNNs means that rather than learning a fixed one-to-many mapping, the network takes into account what was previously played in order to come up with something which hopefully sounds pleasing. To give the user a little more creative control, they also trained it to recognize when the user is playing a rising or falling melody and to output the same. See their paper for how they turned polyphonic sound into monophonic and back again.
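
As a rough mental model of that encoder/decoder pairing, here is a heavily simplified sketch in PyTorch. It is not the project’s actual code (Piano Genie is built on TensorFlow/Magenta and adds the quantization and extra loss terms described in the paper), but it shows the shape of the idea: the encoder squeezes each note into a score over the eight buttons, and the decoder learns to recover the original notes from that button stream.

```python
import torch
import torch.nn as nn

NUM_KEYS = 88     # piano keys (decoder output classes)
NUM_BUTTONS = 8   # arcade buttons (bottleneck size)

class Encoder(nn.Module):
    """RNN that reads a sequence of piano notes and emits, for each step,
    a score over the eight buttons (the 'vector' in the diagram)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(NUM_KEYS, 64)
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.to_buttons = nn.Linear(hidden, NUM_BUTTONS)

    def forward(self, notes):                  # notes: (batch, time)
        h, _ = self.rnn(self.embed(notes))
        return self.to_buttons(h)              # (batch, time, 8)

class Decoder(nn.Module):
    """RNN that turns the button sequence back into piano notes.
    At performance time, only this half runs, fed by the real buttons."""
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(NUM_BUTTONS, hidden, batch_first=True)
        self.to_notes = nn.Linear(hidden, NUM_KEYS)

    def forward(self, buttons):                # buttons: (batch, time, 8)
        h, _ = self.rnn(buttons)
        return self.to_notes(h)                # (batch, time, 88)

# Training pairs the two: reconstruct the input notes through the bottleneck.
encoder, decoder = Encoder(), Decoder()
notes = torch.randint(0, NUM_KEYS, (1, 32))             # stand-in for real data
button_vector = torch.softmax(encoder(notes), dim=-1)    # soft stand-in for quantization
note_logits = decoder(button_vector)
loss = nn.functional.cross_entropy(note_logits.reshape(-1, NUM_KEYS), notes.reshape(-1))
loss.backward()
```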

If you prefer a different style of music you can train it on a MIDI collection of your own choosing using their open-sourced model. Or you can try it out as is right now through their web interface. I’ll admit, I started out just banging on it, producing the same noise I would get if I just hammered away randomly on a piano. Then I switched to thinking of making melodies and the result started sounding better. So some music background and practice still helps. For the video below, the researcher admits to having already played for a few hours.
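
If you do retrain it, the first chore is flattening a MIDI collection into note sequences. Below is a minimal sketch using the pretty_midi library; the function name and the simplifications are mine, and the project’s own pipeline additionally handles things like timing and reducing polyphony to a single melodic line.

```python
import glob
import pretty_midi

def midi_to_note_sequence(path):
    """Flatten one MIDI file into a time-ordered list of (pitch, start, end)."""
    pm = pretty_midi.PrettyMIDI(path)
    notes = []
    for instrument in pm.instruments:
        if instrument.is_drum:
            continue
        for note in instrument.notes:
            notes.append((note.pitch, note.start, note.end))
    return sorted(notes, key=lambda n: n[1])

dataset = [midi_to_note_sequence(f) for f in glob.glob("my_midi_collection/*.mid")]
print(f"Loaded {len(dataset)} performances")
```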

This isn’t the first project we’ve covered by these Google researchers. Another was this music synthesizer again using neural networks but this time with a Raspberry Pi. And if our discussion of recurrent neural networks went a bit over your head, check out our overview of neural networks.

Continue reading “Piano Genie Trained a Neural Net to Play 88-Key Piano with 8 Arcade Buttons”

Kinetic Sculpture Achieves Balance Through Machine Learning

We all know how important it is to achieve balance in life, or at least so the self-help industry tells us. How exactly to achieve balance is generally left as an exercise to the individual, however, with varying results. But what about our machines? Will there come a day when artificial intelligences and their robotic bodies become so stressed that they too will search for an elusive and ill-defined sense of balance?

We kid, but only a little; who knows what the future field of machine psychology will discover? Until then, this kinetic sculpture that achieves literal balance might hold lessons for human and machine alike. Dubbed In Medio Stat Virtus, or “In the middle stands virtue,” [Astrid Kraniger]’s kinetic sculpture explores how a simple system can find a stable equilibrium with machine learning. The task seems easy: keep a ball centered on a track suspended by two cables. The length of the cables is varied by stepper motors, while the position of the ball is detected by the difference in weight between the two cables using load cells scavenged from luggage scales. The motors raise and lower each side to even out the forces on each, eventually achieving balance.

The twist here is that rather than a simple PID loop or another control algorithm, [Astrid] chose to apply machine learning to the problem using the Q-Behave library. The system detects when the difference between the two weights is decreasing and “rewards” the algorithm so that it learns what is required of it. The result is a system that gently settles into equilibrium. Check out the video below; it’s strangely soothing.
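
For those curious what “rewarding the algorithm” looks like in practice, the sketch below shows the same reward-on-improvement idea as plain tabular Q-learning in Python. It is an illustration rather than the sculpture’s actual Q-Behave code, and the state bins, actions, and reward scheme are assumptions on my part: the state is the binned weight difference between the load cells, the actions nudge one cable or the other, and the reward is positive whenever the imbalance shrinks.

```python
import random
from collections import defaultdict

ACTIONS = ["raise_left", "raise_right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration rate

q_table = defaultdict(float)              # (state, action) -> expected reward

def discretize(weight_diff):
    """Bin the left-minus-right load-cell difference into a small state space."""
    return max(-5, min(5, int(weight_diff)))

def choose_action(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)     # occasionally explore
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])

def control_step(diff_before, apply_action, read_diff):
    """One loop iteration: act, measure, reward improvement, learn."""
    state = discretize(diff_before)
    action = choose_action(state)
    apply_action(action)                  # step the chosen motor a small amount
    diff_after = read_diff()
    reward = 1.0 if abs(diff_after) < abs(diff_before) else -1.0
    update(state, action, reward, discretize(diff_after))
    return diff_after
```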

We’ve seen self-balancing systems before, from ball-balancing Stewart platforms to Segway-like two-wheel balancers. One wonders if machine learning could be applied to these systems as well.

Continue reading “Kinetic Sculpture Achieves Balance Through Machine Learning”

Redeem Your Irresponsible 90s Self

If you were a youth in the 90s, odds are good that you were a part of the virtual pet fad and had your very own beeping Tamagotchi to take care of, much to the chagrin of your parents. Without the appropriate amount of attention each day, the pets could become sick or die, and the only way to prevent this was to sneak the toy into class and hope it didn’t make too much noise. A more responsible solution to this problem would have been to build something to take care of your virtual pet for you.

An art installation in Moscow is using an Arduino to take care of five Tamagotchis simultaneously in a virtual farm of sorts. The system is directly wired to all five toys to simulate button presses, and behaves ideally to make sure all the digital animals are properly cared for. Although no source code is provided, it seems to have some sort of machine learning capability in order to best care for all five pets at the same time. The system also prints out the statuses on a thermal printer, so you can check up on the history of all of the animals.

The popularity of these toys leads to a lot of in-depth investigation of what really goes on inside them, and a lot of other modifications to the original units and to the software. You can get a complete ROM dump of one, build a giant one, or even take care of an infinite number of them. Who would have thought a passing fad would have so much hackability?

Continue reading “Redeem Your Irresponsible 90s Self”

AI Finds More Space Chatter

Scientists don’t know exactly what fast radio bursts (FRBs) are. What they do know is that they come from a long way away. In fact, one that occurs regularly comes from a galaxy 3 billion light years away. They could form from neutron stars or they could be extraterrestrials phoning home. The other thing is — thanks to machine learning — we now know about a lot more of them. You can see a video from Berkeley below, and find more technical information, raw data, and [Danielle Futselaar’s] killer project graphic (seen above) at their site.

The first FRB came to the attention of [Duncan Lorimer] and [David Narkevic] in 2007 while sifting through data from 2001. These broadband bursts are hard to identify since they last a matter of milliseconds. Researchers at Berkeley trained software using previously known FRBs. They then gave the software 5 hours of recordings of activity from one part of the sky and found 72 previously unknown FRBs.
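
The general shape of that search is a classifier run over chunks of the dynamic spectrum (the frequency-versus-time “waterfall”). The sketch below is an illustrative PyTorch stand-in rather than the Berkeley team’s actual network or preprocessing, which are described in their paper.

```python
import torch
import torch.nn as nn

class BurstClassifier(nn.Module):
    """Small CNN that labels a frequency-vs-time waterfall chunk as
    burst / no burst. Purely illustrative; the real network and data
    handling used on the Breakthrough Listen recordings are more involved."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classify = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2)
        )

    def forward(self, waterfall):          # (batch, 1, freq_bins, time_bins)
        return self.classify(self.features(waterfall))

model = BurstClassifier()
chunk = torch.randn(1, 1, 64, 256)         # stand-in for a real spectrogram chunk
scores = model(chunk)                       # high scores[:, 1] => candidate burst
print(scores)
```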

Continue reading “AI Finds More Space Chatter”

Katherine Scott: Earth’s Daily Photo Through 200 Cubesat Cameras

Every year at Supercon there is a critical mass of awesome people, and last year Sophi Kravitz was able to sneak away from the festivities for this interview with Katherine Scott. Kat was a judge for the 2017 Hackaday Prize. She specializes in computer vision, robotics, and manufacturing and was the image analytics team lead at Planet Labs when this interview was filmed.

You’re going to chuckle at the beginning of the video as Kat and Sophi recount the kind of highjinks going on at the con. In the hardware hacking area there were impromptu experiments in melting aluminum with gallium, and one of the afternoon’s organized workshops combined wood and high voltage to create Lichtenberg figures. Does anyone else smell burning? Don’t forget to grab your 2018 Hackaday Superconference tickets and join in the fun this year!

Below you’ll find the interview which dives into Kat’s work with satellite imaging.

Continue reading “Katherine Scott: Earth’s Daily Photo Through 200 Cubesat Cameras”

Hummingbirds, 3D Printing, and Deep Learning

Setting camera traps in your garden to see what local wildlife is around is quite popular. But [Chris Lam] has just one subject in mind: the hummingbird. He devised a custom setup to capture the footage he wanted using some neat tech.

To attract the hummingbirds, [Chris] used an off-the-shelf feeder — no need to re-invent the wheel there. To obtain the closeup footage required, a 4K action cam was used. This was attached to the feeder with a 3D-printed mount that [Chris] designed.

When it came to detecting the presence of a hummingbird in the video, there were various approaches that could have been considered. On the hardware side, PIR and ultrasonic distance sensors are popular for projects of this kind, but [Chris] wanted a pure software solution. The commonly used motion detection libraries for this type of project might have fallen over here, since the whole feeder was swinging in the air on a string, so [Chris] opted for machine learning.

A ResNet architecture was used to run a classification on each frame, determining whether or not the image contained a hummingbird. The initial attempt was not greatly successful, but after cropping the image to a smaller area around the feeder, classification accuracy greatly increased. After a bit of FFmpeg magic, the selected snippets were concatenated to make one video containing all the interesting parts; you can see the result in the clip after the break.
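
Something in the same spirit can be mocked up with an off-the-shelf pretrained classifier from torchvision. This is not [Chris]’s code (he trained his own model and cropped around the feeder), but it shows the per-frame classify-and-keep loop; conveniently, the standard ImageNet label set already includes a hummingbird class.

```python
import cv2
import torch
from torchvision import models, transforms

# Pretrained ImageNet classifier; class index 94 is "hummingbird" in the
# standard ImageNet-1k labels (worth double-checking against your label file).
HUMMINGBIRD = 94
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
prep = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def frame_has_bird(frame_bgr, threshold=0.5):
    """Classify one video frame and report whether a hummingbird is likely present."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        probs = torch.softmax(model(prep(rgb).unsqueeze(0)), dim=1)
    return probs[0, HUMMINGBIRD].item() > threshold

# Scan the footage and note which seconds contain a visitor.
cap = cv2.VideoCapture("feeder.mp4")
step = max(1, int(cap.get(cv2.CAP_PROP_FPS)))   # check roughly one frame per second
hits, index = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0 and frame_has_bird(frame):
        hits.append(index / step)
    index += 1
print("Hummingbird seen around seconds:", hits)
```

From the resulting timestamps, FFmpeg can cut and concatenate the interesting snippets into a single highlight reel, much as [Chris] describes.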

It seems that machine learning and wildlife cams are a match made in heaven. We’ve already written about a proof-of-concept project which identifies different animals in the footage when motion is detected.

Continue reading “Hummingbirds, 3D Printing, and Deep Learning”