Kinetic Sculpture Achieves Balance Through Machine Learning

We all know how important it is to achieve balance in life, or at least so the self-help industry tells us. How exactly to achieve balance is generally left as an exercise to the individual, however, with varying results. But what about our machines? Will there come a day when artificial intelligences and their robotic bodies become so stressed that they too will search for an elusive and ill-defined sense of balance?

We kid, but only a little; who knows what the future field of machine psychology will discover? Until then, this kinetic sculpture that achieves literal balance might hold lessons for human and machine alike. Dubbed In Medio Stat Virtus, or “In the middle stands virtue,” [Astrid Kraniger]’s kinetic sculpture explores how a simple system can find a stable equilibrium with machine learning. The task seems easy: keep a ball centered on a track suspended by two cables. The length of the cables is varied by stepper motors, while the position of the ball is detected by the difference in weight between the two cables using load cells scavenged from luggage scales. The motors raise and lower each side to even out the forces on each, eventually achieving balance.

The twist here is that rather than a simple PID loop or another control algorithm, [Astrid] chose to apply machine learning to the problem using the Q-Behave library. The system detects when the difference between the two weights is decreasing and “rewards” the algorithm so that it learns what is required of it. The result is a system that gently settles into equilibrium. Check out the video below; it’s strangely soothing.
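If you want a feel for how that reward scheme might look in code, here is a minimal tabular Q-learning sketch in Python. It is not the Q-Behave API or [Astrid]’s actual code; the state bucketing, the action set, and the hardware helpers are assumptions made up purely for illustration.

```python
# Generic tabular Q-learning sketch of the balance task described above.
# NOT the Q-Behave API -- just an illustration of rewarding the system
# whenever the left/right weight difference shrinks. read_load_cells() and
# move_stepper() are hypothetical stand-ins for the real hardware interface.
import random
from collections import defaultdict

ACTIONS = ["raise_left", "raise_right"]      # assumed action set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1        # learning rate, discount, exploration

q_table = defaultdict(float)                 # (state, action) -> estimated value

def discretize(diff):
    """Bucket the signed left/right weight difference into a small state space."""
    return max(-5, min(5, int(diff // 10)))  # e.g. 10 g per bucket

def read_load_cells():
    """Hypothetical: return (left_grams, right_grams) from the two scales."""
    return random.uniform(0, 500), random.uniform(0, 500)

def move_stepper(action):
    """Hypothetical: nudge one stepper motor a fixed number of steps."""
    pass

def choose_action(state):
    if random.random() < EPSILON:            # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

left, right = read_load_cells()
prev_diff = abs(left - right)
state = discretize(left - right)

for step in range(10_000):
    action = choose_action(state)
    move_stepper(action)

    left, right = read_load_cells()
    new_diff = abs(left - right)
    # Reward the move if the imbalance shrank, punish it if it grew.
    reward = 1.0 if new_diff < prev_diff else -1.0
    next_state = discretize(left - right)

    # Standard Q-learning update.
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])

    state, prev_diff = next_state, new_diff
```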

We’ve seen self-balancing systems before, from ball-balancing Stewart platforms to Segway-like two-wheel balancers. One wonders if machine learning could be applied to these systems as well.

Continue reading “Kinetic Sculpture Achieves Balance Through Machine Learning”

Jump Into AI With A Neural Network Of Your Own

One of the difficulties in learning about neural networks is finding a problem that is complex enough to be instructive but not so complex as to impede learning. [ThomasNield] had an idea: create a neural network that learns whether to put a light or dark font on a particular colored background. He has a great video explaining it all (see below) and code in Kotlin.

[Thomas] is very interested in optimization, so his approach leans heavily on the mathematics and algorithms of that field. One handy thing is that there is already a known algorithm for making this determination. He found it on Stack Exchange, but we’re sure it’s in a textbook or paper somewhere. Having an exact algorithm makes the neural network somewhat redundant, but it also makes training easy, since you can generate a labeled training set algorithmically.
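To get a feel for that trick, here is a minimal sketch in Python (the project itself is in Kotlin) that labels random background colors with a common perceived-brightness rule of thumb and trains a tiny network on the result. The exact formula and threshold [Thomas] pulled from Stack Exchange may differ.

```python
# Minimal sketch of "generate labels algorithmically, then train a network".
# The brightness rule below is a common rule of thumb, not necessarily the
# exact Stack Exchange formula the project used.
import random
from sklearn.neural_network import MLPClassifier

def light_font_recommended(r, g, b):
    """Perceived-brightness heuristic: dark backgrounds get a light font."""
    brightness = 0.299 * r + 0.587 * g + 0.114 * b
    return brightness < 128          # True -> light font, False -> dark font

# Build a labeled training set from random background colors.
colors = [[random.randint(0, 255) for _ in range(3)] for _ in range(5000)]
labels = [light_font_recommended(*c) for c in colors]

# Scale RGB to 0..1 and train a small feed-forward network.
X = [[v / 255 for v in c] for c in colors]
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000)
net.fit(X, labels)

# Expect roughly [True, False]: light font on near-black, dark font on pale yellow.
print(net.predict([[0.05, 0.05, 0.05], [0.95, 0.95, 0.6]]))
```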

Once trained, the neural network works well. He wrote a small GUI and you can even select among various models.

Don’t let the Kotlin put you off. It’s a JVM language that interoperates freely with Java, and the code looks very similar, other than that Kotlin infers types and adds functional programming tools. The libraries and the principles employed will work with Java and, in many cases, the concepts will apply no matter what language you are using.

If you want to hardware accelerate your neural networks, there’s a stick for that. If you prefer C and you want something lean and mean, try TINN.

Continue reading “Jump Into AI With A Neural Network Of Your Own”

The Little Cat That Could

Most humans take a year to learn their first steps, and they are notoriously clumsy. [Hartvik Line] taught a robotic cat to walk [YouTube link] in less time, but this cat had a couple of advantages over a pre-toddler. The first advantage was that it had four legs; the second came from a machine learning technique called genetic algorithms, which surpassed human fine-tuning in two hours. That’s a pretty good benchmark.
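We don’t know exactly how [Hartvik] encoded Nixie’s gait or scored it, but the flavor of a genetic algorithm is easy to show. The sketch below evolves a made-up vector of gait parameters with a placeholder fitness function; on the real robot, fitness would come from measuring how far and how smoothly it actually walks.

```python
# Generic genetic-algorithm sketch of gait tuning. The gait encoding and the
# fitness function here are stand-ins, not the thesis project's actual setup.
import random

N_PARAMS = 12          # e.g. amplitude/phase/offset per leg (assumed encoding)
POP_SIZE = 20
MUTATION = 0.1

def random_gait():
    return [random.uniform(-1, 1) for _ in range(N_PARAMS)]

def fitness(gait):
    """Placeholder: reward 'coordinated' parameters; the real score is walking distance."""
    return -sum((gait[i] - gait[i - 4]) ** 2 for i in range(4, N_PARAMS))

def mutate(gait):
    return [g + random.gauss(0, MUTATION) for g in gait]

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)
    return a[:cut] + b[cut:]

population = [random_gait() for _ in range(POP_SIZE)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]            # keep the best half
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))
```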

The robot itself is an impressive piece inspired by robots at EPFL, a research institute in Switzerland. All that Swiss engineering is not easy for one person to program, much less a student, but that is exactly what happened. “Nixie,” as she is called, is part of [Hartvik]’s master’s thesis at the University of Stavanger in Norway. Machine learning efficiency outstripped human meddling very quickly, and the system can even relearn to walk if the chassis is damaged.

We have been watching genetic algorithm programming for more than half a decade, and Skynet hasn’t popped forth; we do, however, have a robot kitty taking its first steps.

Continue reading “The Little Cat That Could”

Artistic Collaboration With AI

Ever since Google’s Deep Dream results were made public several years ago, there has been major interest in the application of AI and neural network technologies to artistic endeavors. [Helena Sarin] has been experimenting in just this field, exploring the possibilities of collaborating with the ghost in the machine.

This image was generated with a landscape model using a dataset containing covers of Japanese poetry books.

The work is centered around the use of Generative Adversarial Networks, or GANs. [Helena] describes using a GAN to create artworks as a sort of game. An apprentice attempts to create new works in the style of their established master, while a critic attempts to determine whether the artworks are created by the master or the apprentice. As the apprentice improves, the critic must become more discerning; as the critic becomes more discerning, the apprentice must improve further. It is through this mechanism that the model improves itself.
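In code, that game boils down to two networks trained against each other. The PyTorch sketch below is a toy vanilla GAN on two-dimensional points rather than the CycleGAN-on-artwork setup [Helena] uses, but the apprentice/critic loop is the same idea.

```python
# Minimal vanilla-GAN loop in PyTorch illustrating the apprentice/critic game.
# This is a toy: the "master's works" are points from a 2-D Gaussian, not images,
# and it is not the CycleGAN setup used in the project.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
critic    = nn.Sequential(nn.Linear(2, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
bce = nn.BCELoss()

def master_works(n):
    """Stand-in 'real' data: samples clustered around (2, -1)."""
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    real = master_works(64)
    fake = generator(torch.randn(64, 8))

    # Train the critic to tell master (label 1) from apprentice (label 0).
    c_opt.zero_grad()
    c_loss = bce(critic(real), torch.ones(64, 1)) + \
             bce(critic(fake.detach()), torch.zeros(64, 1))
    c_loss.backward()
    c_opt.step()

    # Train the apprentice to fool the critic.
    g_opt.zero_grad()
    g_loss = bce(critic(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# The generator's samples should drift toward the "master" cluster at (2, -1).
print(generator(torch.randn(5, 8)))
```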

[Helena] has spent time experimenting with CycleGAN in the artistic realm after first using it in a work project, and has primarily trained it on her own original artworks to create new pieces with wild and exciting results. She shares several tips on how best to work with the technology, covering the necessary computing and storage requirements as well as ways to step outside the box to create more diverse outputs.

Neural networks are hot lately, with plenty of research going on in the field. There are plenty of fun projects, too – like this cartoonifying camera we featured recently.

AI Finds More Space Chatter

Scientists don’t know exactly what fast radio bursts (FRBs) are. What they do know is that they come from a long way away. In fact, one that occurs regularly comes from a galaxy 3 billion light years away. They could form from neutron stars, or they could be extraterrestrials phoning home. The other thing is that, thanks to machine learning, we now know about a lot more of them. You can see a video from Berkeley below, and find more technical information, raw data, and [Danielle Futselaar’s] killer project graphic seen above at their site.

The first FRB came to the attention of [Duncan Lorimer] and [David Narkevic] in 2007 while they were sifting through data from 2001. These broadband bursts are hard to identify since they last a matter of milliseconds. Researchers at Berkeley trained software using previously known FRBs, then gave it five hours of recordings of activity from one part of the sky and found 72 previously unknown FRBs.
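The shape of that pipeline is simple even if the real network isn’t. The sketch below is purely illustrative: it slides a short window over a fake dynamic-spectrum recording and asks a placeholder classifier whether each window looks like a burst, standing in for the trained model the Berkeley team actually used.

```python
# Shape-of-the-pipeline sketch only: slide a short window over a long
# time/frequency recording and ask a classifier whether each window looks
# like a burst. burst_probability() is a placeholder, not the real network.
import numpy as np

SAMPLE_RATE = 1000            # assumed: spectra per second
WINDOW = 64                   # assumed: ~64 ms of data per classification

def burst_probability(window):
    """Placeholder 'trained classifier': flag a strong, brief spike above the noise."""
    return float(window.max() > window.mean() + 6 * window.std())

# Ten fake seconds of noise stand in for the real five hours of recordings.
recording = np.random.randn(10 * SAMPLE_RATE, 256)

candidates = []
for start in range(0, recording.shape[0] - WINDOW, WINDOW // 2):   # 50% overlap
    window = recording[start:start + WINDOW]
    if burst_probability(window) > 0.5:
        candidates.append(start / SAMPLE_RATE)                     # seconds into the file

# Pure noise should yield few or no candidates; real data would get a second look by hand.
print(f"{len(candidates)} candidate windows to inspect")
```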

Continue reading “AI Finds More Space Chatter”

Nvidia Transforms Standard Video Into Slow Motion Using AI

Nvidia is back at it again with another awesome demo of applied machine learning: artificially transforming standard video into slow motion – they’re so good at showing off what AI can do that anyone would think they were trying to sell hardware for it.

Though most modern phones and cameras have an option to record in slow motion, it often comes at the expense of resolution, and always at the expense of storage space. For really high frame rates you’ll need a specialist camera, and you often don’t know that you should be filming in slow motion until after an event has occurred. Wouldn’t it be nice if we could just convert standard video to slow motion after it was recorded?

That’s just what Nvidia has done, all nicely documented in a paper. At its heart, the algorithm must take two frames and artificially create one or more frames in between. This is not a hand-coded algorithm that interpolates frames; it is a fully fledged deep-learning system. The Convolutional Neural Network (CNN) was trained on over a thousand videos – roughly 300k individual frames.

Since none of the parameters of the CNN are time-dependent, it’s possible to generate as many intermediate frames as required, something which sets this solution apart from previous approaches. In some of the shots in their demo video, 30 fps video is converted to 240 fps; this requires the creation of 7 additional frames for every pair of consecutive frames.
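The bookkeeping for that conversion is straightforward; the hard part is the network itself. In the sketch below, a naive cross-fade stands in for Nvidia’s CNN just to show how seven synthetic frames slot in between each pair of originals.

```python
# Bookkeeping sketch for 30 fps -> 240 fps: insert 7 synthetic frames between
# each pair of originals. interpolate() is a naive cross-fade stand-in for the
# trained CNN, which predicts far better intermediate frames than a blend.
import numpy as np

FACTOR = 8                       # 240 fps / 30 fps

def interpolate(frame_a, frame_b, t):
    """Placeholder for the CNN: blend linearly at time t in (0, 1)."""
    return (1 - t) * frame_a + t * frame_b

def slow_motion(frames):
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for k in range(1, FACTOR):                 # 7 in-between frames per pair
            out.append(interpolate(a, b, k / FACTOR))
    out.append(frames[-1])
    return out

# Two fake 4x4 grayscale "frames" just to exercise the loop.
clip = [np.zeros((4, 4)), np.ones((4, 4))]
print(len(slow_motion(clip)), "frames out for", len(clip), "frames in")
```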

The video after the break is seriously impressive, though if you look carefully you can see the odd imperfection, like the hockey player’s skate or dancer’s arm. Deep learning is as much an art as a science, and if you understood all of the research paper then you’re doing pretty darn well. For the rest of us, get up to speed by wrapping your head around neural networks, and trying out the simplest Tensorflow example.

Continue reading “Nvidia Transforms Standard Video Into Slow Motion Using AI”

Inverted Pendulum For The Control Enthusiast

Once you step into the world of controls, you quickly realize that controlling even simple systems isn’t as easy as applying voltage to a servo. Before you start working on your own bipedal robot or scratch-built drone, though, you might want to get some practice with this intricate field of engineering. A classic problem in this area is the inverted pendulum, and [Philip] has created a great model of this which helps illustrate the basics of controls, with some AI mixed in.

Called the ZIPY, the project is a “Cart Pole” design that uses a movable cart on a trolley to balance a pendulum above it. The pendulum is attached to the cart at a single point. By moving the cart back and forth, the pendulum can be kept in a vertical position. The control uses the OpenAI Gym toolkit, which makes it easy to apply reinforcement learning algorithms to your own projects. With some Python, some 3D printed parts, and the toolkit, [Philip] was able to get his project to successfully balance the pendulum on the cart.
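To see the toolkit in action without the hardware side, the sketch below runs Gym’s built-in simulated CartPole-v1 environment with a simple hand-written push-toward-the-fall policy. It is not [Philip]’s learned controller, and it assumes the classic pre-0.26 Gym API, where reset() returns just the observation and step() returns a four-tuple.

```python
# Minimal OpenAI Gym sketch using the built-in simulated CartPole-v1 environment
# (not the ZIPY hardware rig), with a hand-written balance policy rather than a
# learned one. Written against the classic pre-0.26 Gym API.
import gym

env = gym.make("CartPole-v1")

for episode in range(5):
    obs = env.reset()           # [cart pos, cart vel, pole angle, pole angular vel]
    total = 0.0
    done = False
    while not done:
        angle, angular_vel = obs[2], obs[3]
        # Push the cart toward the side the pole is falling.
        action = 1 if angle + angular_vel > 0 else 0
        obs, reward, done, info = env.step(action)
        total += reward
    print(f"episode {episode}: kept the pole up for {total:.0f} steps")

env.close()
```

Swapping the hand-written policy for a reinforcement learning agent is exactly the kind of exercise the Gym interface was designed for.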

Of course, the OpenAI Gym toolkit is useful for many more projects where you might want some sort of machine learning to help out. If you want to play around with machine learning without having to build anything, though, you can also explore it in your browser.

Continue reading “Inverted Pendulum For The Control Enthusiast”