The Naughty AIs That Gamed The System

Artificial intelligence (AI) has been undergoing something of a renaissance over the last few years. There’s been plenty of research into neural networks and other technologies, often based around teaching an AI system to achieve certain goals or targets. However, this method of training is fraught with danger, because just like in the movies, the computer doesn’t always play fair.

It’s often very much a case of the AI doing exactly what it’s told, rather than what you intended. Like a devious child who will gladly go to bed in the literal sense but won’t actually sleep, an AI that follows the letter of its instructions can produce unexpected, and often hilarious, results. [Victoria] has created a master list of scholarly references regarding exactly this.

The list spans a wide range of cases. There’s the amusing evolutionary algorithm designed to create creatures capable of high-speed movement, which merely spawned very tall creatures that achieved high speeds by falling over. More worryingly, there’s the AI trained to tell toxic mushrooms from edible ones, which simply picked up on the fact that the two types were presented in alternating order; that shortcut made the model useless in the real world. Similarly, a model designed to assess the malignancy of skin cancers instead learned that lesions photographed with rulers for scale were more likely to be cancerous.
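To make the failure mode concrete, here’s a toy sketch of how an evolutionary algorithm games a mis-specified goal (our own hypothetical example, not one from [Victoria]’s list). We intend to evolve fast walkers, but the fitness function only measures peak body speed, so evolution happily discovers that being tall and falling over scores best:

```python
# Toy "specification gaming" demo: we *want* walkers, but fitness only
# measures peak body speed, so the winning strategy is to be tall and
# tip over. Physics is deliberately simplistic.
import random

G = 9.81

def fitness(height_m):
    # Peak tip speed of a rigid body of this height falling over:
    # v = sqrt(3 * g * h). No walking required!
    return (3 * G * height_m) ** 0.5

def evolve(pop_size=50, generations=40):
    population = [random.uniform(0.1, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 5]        # keep the top 20%
        population = [max(0.1, random.choice(parents) + random.gauss(0, 0.1))
                      for _ in range(pop_size)]
    return max(population)

print(f"Evolved 'walker' height: {evolve():.1f} m")  # keeps growing with more generations
```

The cure is to reward what you actually want, such as sustained horizontal distance covered, rather than a proxy that can be gamed.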

[Victoria] refers to this as “specification gaming”. One can draw parallels to classic sci-fi stories around the “Laws of Robotics”, where robots take such laws to their literal extremes, often causing great harm in the process. It’s an interesting discussion of the difficulty in training artificially intelligent systems to achieve their set goals without undesirable side effects.

We’ve seen plenty of work in this area before – like this use of evolutionary algorithms in circuit design.

Open Data Cam Combines Camera, GPU, and Neural Network in an Artisanal DIY Cereal Box

The engineers and product designers at [moovel lab] have created the Open Data Cam – an AI camera platform that can identify and count objects as they move through its field of view – along with an open source guide for making your own.

Step one: get out your ruler and utility knife. In this world of ubiquitous 3D printers, they’ve taken a decidedly low-tech approach to the project’s enclosure: a cut, folded, and zip-tied plastic box, with a cardboard frame inside to hold the electronic bits. It’s “splash proof” and certainly cheap to make, but we’re a little worried about cooling and physical protection for the electronics inside, as these aren’t exactly cheap or rugged components.

So what’s inside? An Nvidia Jetson TX2 board, a LiPo battery with some charging circuitry, and a standard webcam. The special sauce, however, is the software, which is available on GitHub. [Moovel lab]’s engineers have put together a nice-looking, WiFi-accessible mobile UI for marking the areas where you’d like the software to identify and tally objects. The actual object detection and identification tasks are performed by the speedy YOLO neural network, a task for which the Nvidia board’s GPU is of course well suited.

As the Open Data Cam’s unblinking glass eye gazes upon our urban environments, it will log its observations in an ancient and mysterious language: CSV. It’s up to you, human, to interpret this information and use it for good.
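The counting idea itself is straightforward: take per-frame detections from the neural network, watch for objects crossing a line or zone the user marks in the UI, and tally them. Here’s a minimal Python sketch of that logic (our own simplification, not the actual moovel lab code, which uses proper object tracking):

```python
# Minimal sketch of the Open Data Cam idea: a YOLO-style detector yields
# per-frame detections; we count objects crossing a user-defined line
# and log the running totals to CSV.
import csv
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # e.g. "car", "bicycle"
    x: float        # horizontal center of the bounding box, in pixels

COUNT_LINE_X = 320  # stand-in for the line drawn in the mobile UI

def count_crossings(frames, writer):
    """frames: one list of Detection objects per video frame."""
    last_x = {}   # label -> previous x (a real tracker would use track IDs)
    counts = {}
    for frame_no, detections in enumerate(frames):
        for det in detections:
            prev = last_x.get(det.label)
            if prev is not None and prev < COUNT_LINE_X <= det.x:
                counts[det.label] = counts.get(det.label, 0) + 1
                writer.writerow([frame_no, det.label, counts[det.label]])
            last_x[det.label] = det.x
    return counts

# Fake detections standing in for real YOLO output:
frames = [[Detection("car", 300)], [Detection("car", 340)]]
with open("counts.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["frame", "label", "running_total"])
    print(count_crossings(frames, w))   # {'car': 1}
```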

A summary video and build time lapse are embedded after the break.

Continue reading “Open Data Cam Combines Camera, GPU, and Neural Network in an Artisanal DIY Cereal Box”

Kinetic Sculpture Achieves Balance Through Machine Learning

We all know how important it is to achieve balance in life, or at least so the self-help industry tells us. How exactly to achieve balance is generally left as an exercise to the individual, however, with varying results. But what about our machines? Will there come a day when artificial intelligences and their robotic bodies become so stressed that they too will search for an elusive and ill-defined sense of balance?

We kid, but only a little; who knows what the future field of machine psychology will discover? Until then, this kinetic sculpture that achieves literal balance might hold lessons for human and machine alike. Dubbed In Medio Stat Virtus, or “In the middle stands virtue,” [Astrid Kraniger]’s kinetic sculpture explores how a simple system can find a stable equilibrium with machine learning. The task seems easy: keep a ball centered on a track suspended by two cables. Stepper motors vary the length of each cable, while the ball’s position is inferred from the difference in weight between the two cables, measured with load cells scavenged from luggage scales. The motors raise and lower each side to even out the forces, eventually achieving balance.

The twist here is that rather than a simple PID loop or another control algorithm, [Astrid] chose to apply machine learning to the problem using the Q-Behave library. The system detects when the difference between the two weights is decreasing and “rewards” the algorithm so that it learns what is required of it. The result is a system that gently settles into equilibrium. Check out the video below; it’s strangely soothing.
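For a flavor of what that reward loop looks like, here’s a minimal tabular Q-learning sketch in Python (our own toy version; the actual sculpture uses the Q-Behave library, whose API differs). The state is the discretized weight difference, the actions nudge one cable or the other, and the reward fires whenever the imbalance shrinks:

```python
# Toy tabular Q-learning for the balancing task. State: discretized
# weight difference between the cables. Actions: step one side up/down.
import random

ACTIONS = [-1, +1]               # lower the left side / lower the right side
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
q = {}                           # (state, action) -> learned value

def choose(state):
    if random.random() < EPS:                     # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

imbalance = 8                    # start well off-center
for step in range(5000):
    state = imbalance
    action = choose(state)
    # Toy physics: the action nudges the imbalance, plus a little noise.
    imbalance = max(-10, min(10, imbalance + action + random.choice([-1, 0, 1])))
    # Reward the system when the difference between the weights shrinks.
    reward = 1.0 if abs(imbalance) < abs(state) else -1.0
    best_next = max(q.get((imbalance, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

With enough steps, the learned policy settles into always nudging the system back toward zero imbalance, which is exactly the gentle settling behavior seen in the video.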

We’ve seen self-balancing systems before, from ball-balancing Stewart platforms to Segway-like two-wheel balancers. One wonders if machine learning could be applied to these systems as well.

Continue reading “Kinetic Sculpture Achieves Balance Through Machine Learning”

Jump into AI with a Neural Network of your Own

One of the difficulties in learning about neural networks is finding a problem that is complex enough to be instructive but not so complex as to impede learning. [ThomasNield] had an idea: create a neural network to learn whether you should put a light or dark font on a particular colored background. He has a great video explaining it all (see below) and code in Kotlin.

[Thomas] is very interested in optimization, so his approach is very much based on the mathematics and algorithms of optimization. One thing that’s handy is that there is already an algorithm for making this determination. He found it on Stack Exchange, but we’re sure it’s in a textbook or paper somewhere. The existing algorithm makes the neural network rather redundant, but it also makes training easy, since you can algorithmically generate a labeled training set.
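For reference, the usual rule of thumb looks something like this sketch (one common variant using ITU-R BT.601 luma weights; we’re assuming it’s close to the formula [Thomas] found, and the threshold is tunable). Because the label is computable, generating training data is essentially free:

```python
# Classic luminance rule of thumb for font color, plus free training data.
# The 0.299/0.587/0.114 weights are ITU-R BT.601 luma coefficients; the
# 128 threshold is one common choice (others use ~186).
import random

def light_font(r, g, b):
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    return luminance < 128       # dark background -> use a light font

def random_color():
    return tuple(random.randrange(256) for _ in range(3))

# Algorithmically generated training set: (color, should_use_light_font)
training = [(c, light_font(*c)) for c in (random_color() for _ in range(10_000))]
print(training[0])
```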

Once trained, the neural network works well. He wrote a small GUI and you can even select among various models.

Don’t let the Kotlin put you off. It runs on the same JVM as Java and interoperates with it freely. The code looks very similar, except that Kotlin infers types and adds functional programming tools. Either way, the libraries and the principles employed will work with Java and, in many cases, the concepts will apply no matter what you are doing.

If you want to hardware accelerate your neural networks, there’s a stick for that. If you prefer C and you want something lean and mean, try TINN.

Continue reading “Jump into AI with a Neural Network of your Own”

The Little Cat That Could

Most humans take about a year to learn their first steps, and they are notoriously clumsy. [Hartvik Line] taught a robotic cat to walk [YouTube link] in less time, but this cat had a couple of advantages over a pre-toddler. The first advantage was that it had four legs; the second came from a machine learning technique called a genetic algorithm, which surpassed human fine-tuning in two hours. That’s a pretty good benchmark.

The robot itself is an impressive piece of work, inspired by robots from EPFL, a research institute in Switzerland. A machine like this is not easy for one person to program, much less a student, but that is exactly what happened. “Nixie,” as she is called, is part of [Hartvik]’s master’s thesis at the University of Stavanger in Norway. The machine learning approach quickly outstripped human meddling, and the cat can even relearn to walk if its chassis is damaged.
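While we haven’t picked through [Hartvik]’s code, a genetic algorithm for gait tuning generally looks something like this sketch: encode per-leg gait parameters as a genome, score each genome by how far the robot walks, and breed the best performers:

```python
# Hedged sketch of GA-based gait tuning (not Nixie's actual code).
# Genome: sine-gait parameters per leg. Fitness: distance walked,
# stubbed out here instead of running a robot or simulator.
import random

def random_genome():
    # amplitude, frequency, and phase offset for each of the four legs
    return [random.uniform(0, 1) for _ in range(12)]

def walked_distance(genome):
    # Stand-in for trialing the gait on hardware: a made-up score that
    # rewards large strides with diagonally coordinated leg phases.
    phases = genome[8:12]
    return sum(genome[:4]) - abs(phases[0] - phases[2]) - abs(phases[1] - phases[3])

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

population = [random_genome() for _ in range(30)]
for generation in range(100):
    population.sort(key=walked_distance, reverse=True)
    elite = population[:6]                               # keep the best walkers
    population = elite + [mutate(random.choice(elite)) for _ in range(24)]

best = max(population, key=walked_distance)
print(f"Best simulated distance: {walked_distance(best):.2f}")
```

Relearning after damage falls out naturally: re-run the loop with the broken chassis, and fitness scores steer evolution toward gaits that work with whatever hardware remains.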

We have been watching genetic algorithm programming for more than half a decade, and Skynet hasn’t popped forth; instead, we have a robot kitty taking its first steps.

Continue reading “The Little Cat That Could”

Artistic Collaboration With AI

Ever since Google’s Deep Dream results were made public several years ago, there has been major interest in the application of AI and neural network technologies to artistic endeavors. [Helena Sarin] has been experimenting in just this field, exploring the possibilities of collaborating with the ghost in the machine.

(Image: generated with a landscape model using a dataset of covers of Japanese poetry books.)

The work is centered around the use of Generative Adversarial Networks, or GANs. [Helena] describes using a GAN to create artworks as a sort of game. An apprentice attempts to create new works in the style of their established master, while a critic attempts to determine whether the artworks are created by the master or the apprentice. As the apprentice improves, the critic must become more discerning; as the critic becomes more discerning, the apprentice must improve further. It is through this mechanism that the model improves itself.
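In code, that game is two networks trained in alternation. Here’s a minimal GAN training loop in PyTorch (a generic sketch on toy data, not [Helena]’s CycleGAN setup):

```python
# Minimal GAN loop: the "apprentice" (generator) tries to fool the
# "critic" (discriminator), which learns to spot fakes in turn.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
critic    = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + 2.0    # stand-in for "the master's work"
    fake = generator(torch.randn(64, 8))     # the apprentice's attempt

    # Critic: learn to tell master (label 1) from apprentice (label 0).
    d_loss = loss_fn(critic(real), torch.ones(64, 1)) + \
             loss_fn(critic(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Apprentice: improve until the critic is fooled.
    g_loss = loss_fn(critic(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The two losses pull against each other, and it’s exactly that escalating contest between apprentice and critic that drives the model to improve.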

[Helena] has spent time experimenting with CycleGAN in the artistic realm after first using it in a work project, and has primarily trained it on her own original artworks to create new pieces, with wild and exciting results. She shares several tips on how best to work with the technology, covering the necessary computing and storage requirements as well as ways to step outside the box to create more diverse outputs.

Neural networks are hot lately, with a great deal of research going on in the field. There are plenty of fun projects, too, like this cartoonifying camera we featured recently.

AI Finds More Space Chatter

Scientists don’t know exactly what fast radio bursts (FRBs) are. What they do know is that they come from a long way away. In fact, one that recurs regularly comes from a galaxy 3 billion light years away. They could come from neutron stars, or they could be extraterrestrials phoning home. The other thing is that, thanks to machine learning, we now know about a lot more of them. You can see a video from Berkeley below, and find more technical information, raw data, and [Danielle Futselaar’s] killer project graphic (seen above) at their site.

The first FRB came to the attention of [Duncan Lorimer] and [David Narkevic] in 2007, as they sifted through data recorded in 2001. These broadband bursts are hard to identify, since they last a matter of milliseconds. Researchers at Berkeley trained software on previously known FRBs, then gave it five hours of recordings of activity from one part of the sky. The software found 72 previously unknown FRBs.
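The broad shape of that approach, heavily simplified: train a small classifier on labeled spectrogram snippets cut around known FRBs, then sweep it across new recordings and flag high-scoring windows. A hedged PyTorch sketch (shapes and data here are stand-ins, not the Berkeley pipeline):

```python
# Toy FRB hunter: a small CNN classifies 64x64 spectrogram windows
# (frequency x time) as burst vs. noise, then scans new data.
import torch
import torch.nn as nn

model = nn.Sequential(                     # input: 1 x 64 freq x 64 time
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 1),
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for snippets cut around previously known FRBs (label 1)
# and quiet stretches of sky (label 0):
snippets = torch.randn(128, 1, 64, 64)
labels = torch.randint(0, 2, (128, 1)).float()

for epoch in range(10):
    loss = loss_fn(model(snippets), labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Scanning: flag any window whose score clears a high threshold.
window = torch.randn(1, 1, 64, 64)
print("candidate FRB" if torch.sigmoid(model(window)) > 0.99 else "noise")
```

Millisecond bursts buried in hours of wideband noise are exactly the needle-in-a-haystack problem this kind of sliding-window classifier is good at.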

Continue reading “AI Finds More Space Chatter”