Inverted Pendulum For The Control Enthusiast

Once you step into the world of controls, you quickly realize that controlling even simple systems isn’t as easy as applying voltage to a servo. Before you start working on your own bipedal robot or scratch-built drone, though, you might want to get some practice with this intricate field of engineering. A classic problem in this area is the inverted pendulum, and [Philip] has created a great model of this which helps illustrate the basics of controls, with some AI mixed in.

Called the ZIPY, the project is a “Cart Pole” design: a motorized cart rides back and forth on a trolley, balancing a pendulum that is hinged to the cart at a single point. By moving the cart back and forth, the controller can keep the pendulum upright. The control side uses the OpenAI Gym toolkit, which makes it easy to apply reinforcement learning algorithms to your own projects. With some Python, some 3D printed parts, and the toolkit, [Philip] was able to get his project to successfully balance the pendulum on the cart.
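ZIPY’s own training code isn’t reproduced here, but the simulated version of the same problem ships with OpenAI Gym as the CartPole environment. The snippet below is a minimal sketch of that control loop with a random agent standing in for a real policy; it assumes the classic Gym API where step() returns a four-element tuple.

```python
# Minimal sketch of the simulated cart-pole control loop in OpenAI Gym.
# A random agent stands in for ZIPY's actual reinforcement learning policy.
# Assumes the classic Gym API (pre-0.26) where step() returns a 4-tuple.
import gym

env = gym.make("CartPole-v0")

for episode in range(5):
    observation = env.reset()   # [cart position, cart velocity, pole angle, pole tip velocity]
    total_reward = 0.0
    done = False
    while not done:
        # A real controller would choose the action from the observation;
        # here we just push the cart left (0) or right (1) at random.
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print(f"episode {episode}: pole stayed up for {total_reward:.0f} steps")

env.close()
```

Swapping the random choice for a learned policy (Q-learning, policy gradients, and so on) is exactly the kind of experiment the toolkit is designed for.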

Of course, the OpenAI Gym toolkit is useful for many more projects where you might want some sort of machine learning to help out. If you want to play around with machine learning without having to build anything, though, you can also explore it in your browser.


Neural Networks Using Doom Level Creator Like It’s 1993

Readers of a certain vintage will remember the glee of building your own levels for DOOM. There was something magical about carefully crafting a level and then dialing up your friends for a deathmatch session on the new map. Now computer scientists are getting in on that fun in a new way. Researchers from Politecnico di Milano are using artificial intelligence to create new levels for the classic DOOM shooter (PDF whitepaper).

While procedural level generation has been around for decades, recent machine learning approaches to generating game content (usually levels) are different because they don’t use a human-defined algorithm. Instead, they generate new content by using existing, human-made levels as a model. In effect, they learn from what great game designers have already done and apply those lessons to new level generation. The screenshot shown above is an example of an AI-generated level, and the gameplay can be seen in the video below.

The idea of an AI generating levels is simple in concept but difficult in execution. The researchers used Generative Adversarial Networks (GANs) to analyze existing DOOM maps and then generate new maps similar to the originals. A GAN pits two neural networks against each other: a generator that produces new data and a discriminator that learns to tell the generated data from the real training data, each improving as the other does (a toy version of this training loop is sketched below). The researchers considered two flavors when generating new levels: one GAN that used only the appearance of the training maps, and another that used both the appearance and metrics such as the number of rooms, perimeter length, and so on. If you’d like a better understanding of GANs, [Steven Dufresne] covered them in his guide to the evolving world of neural networks.
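To make the adversarial setup concrete, here is a toy GAN training loop in PyTorch. The real work trained on images derived from DOOM maps; here random tensors stand in for the training data and the tiny 16×16 “maps” are purely illustrative, so treat this as a sketch of the technique rather than the researchers’ code.

```python
# Toy GAN training loop: a generator learns to produce fake "maps" that a
# discriminator can no longer tell apart from the (placeholder) real ones.
import torch
import torch.nn as nn

MAP_SIZE = 16 * 16   # flattened toy "map"
NOISE_DIM = 64

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, MAP_SIZE), nn.Sigmoid(),
)
discriminator = nn.Sequential(
    nn.Linear(MAP_SIZE, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_maps = torch.rand(256, MAP_SIZE)   # placeholder for real, human-made levels

for step in range(1000):
    batch = real_maps[torch.randint(0, len(real_maps), (32,))]
    noise = torch.randn(32, NOISE_DIM)
    fake = generator(noise)

    # Discriminator: label real maps 1 and generated maps 0.
    d_loss = loss_fn(discriminator(batch), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator call its output real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The generator never sees a real map directly; it only learns from how well its output fools the discriminator, which is what sets this approach apart from a hand-written procedural level generator.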

While both networks used in this project produce good levels, the one that included the extra metrics produced higher-quality results. However, while the AI-generated levels looked broadly similar to human-made ones, many of the little details that human designers tend to include were missing. This is partly due to a lack of good metrics for describing levels and evaluating AI-generated content.

Example DOOM maps generated by AI. Each row is one map, and each image is one aspect of the map (floor, height, things, and walls, from left to right)

We can only guess that these researchers’ next step is to use similar techniques to create an entire game (levels, characters, and music) via AI. After all, how hard can it be? Joking aside, we would love to see you take this concept and run with it. We’re dying to play through some gnarly levels whipped up by an AI trained by Hackaday readers!


Google Lowers The Artificial Intelligence Bar With Complete DIY Kits

Last year, Google released an artificial intelligence kit aimed at makers, in two different flavors: Vision, to recognize people and objects, and Voice, to create a smart speaker. Now, Google is back with a new version to make it even easier to get started.

The main difference in this year’s (v1.1) kits is that they include some basic hardware, such as a Raspberry Pi and an SD card. While this might not be very useful to most Hackaday readers, who probably have a spare Pi (or 5) lying around, this is invaluable for novice makers or the educational market. These audiences now have access to an all-in-one solution to build projects and learn more about artificial intelligence.

We’ve previously seen toys, phones, and intercoms get upgrades with an AIY kit, but would love to see more! [Mike Rigsby] has used one in his robot dog project to detect when people are smiling. These updated kits are available at Target (Voice, Vision). If the kit is too expensive, our own [Inderpreet Singh] can show you how to build your own.

Via [BGR].

Neural Network Names Nightshades

Neural networks are a core area of the artificial intelligence field. They can be trained on abstract data sets and be put to all manner of useful duties, like driving cars while ignoring road hazards or identifying cats in images. Recently, a biologist approached AI researcher [Janelle Shane] with a problem – could she help him name some tomatoes?

It’s a problem with a simple cause – like most people, [Darren] enjoys experimenting with tomato genetics, and thus requires a steady supply of names to designate the various varieties produced in this work. It can be taxing on the feeble human brain, so a silicon-based solution is ideal.

[Janelle] decided to use the char-rnn library built by [Andrej Karpathy] to do the heavy lifting. After training it on a list of over 11,000 existing tomato variety names, she asked the neural network to strike out on its own.
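char-rnn itself is a Torch/Lua project, but the idea behind it fits in a few dozen lines of PyTorch: learn the character-by-character statistics of a name list, then sample new names one character at a time. The snippet below is a minimal sketch of that idea using a tiny placeholder word list, not [Janelle]’s actual setup or the full 11,000-name corpus.

```python
# Minimal character-level LSTM: learn next-character statistics from a name
# list, then generate new names by feeding predictions back into the model.
import torch
import torch.nn as nn

names = ["brandywine", "green zebra", "cherokee purple", "golden jubilee"]  # stand-in corpus
text = "\n".join(names) + "\n"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
data = torch.tensor([stoi[c] for c in text])

for step in range(500):                      # train: predict the next character
    logits, _ = model(data[:-1].unsqueeze(0))
    loss = nn.functional.cross_entropy(logits.squeeze(0), data[1:])
    opt.zero_grad(); loss.backward(); opt.step()

def sample(n=40):                            # generate: feed predictions back in
    idx, state, out = torch.tensor([[stoi["\n"]]]), None, ""
    for _ in range(n):
        logits, state = model(idx, state)
        probs = torch.softmax(logits[0, -1], dim=0)
        idx = torch.multinomial(probs, 1).unsqueeze(0)
        out += chars[idx.item()]
    return out

print(sample())
```

With only a handful of training names the toy model will mostly parrot its inputs; the charm of the real results comes from the large, varied corpus the network was trained on.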

The results are truly fantastic – whether you’re partial to a Speckled Garfech or you prefer the smooth flavor of the Golden Pow, there’s a tomato to suit your tastes. When the network was retrained with additional content in the form of metal band names, the results got even better – it’s only a matter of time before Angels of Saucing reach a supermarket shelf near you.

On the surface, it’s a fun project with whimsical output, but fundamentally it highlights how much can be accomplished these days by standing on the shoulders of giants, so to speak. Now, if you need some assistance growing your tomatoes, the machines can help there, too.


AI Listens to Radio

We’ve seen plenty of examples of neural networks listening to speech, reading characters, or identifying images. KickView had a different idea: teach a neural network to recognize radio signals. Not just any radio signals, but Orthogonal Frequency Division Multiplexing (OFDM) waveforms.

OFDM is a modulation method used by WiFi, cable systems, and many others. In particular, KickView looked at an 802.11g signal with a bandwidth of 20 MHz. The question is: given a receiver built for 802.11g, how can you reliably detect that an 802.11ac signal — which can be up to 160 MHz wide — is using your channel? To demonstrate the technique, they decided to detect 20 MHz signals using only a 5 MHz receive bandwidth.
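If you haven’t worked with OFDM before, the NumPy sketch below shows where its distinctive structure comes from: data symbols are mapped onto many parallel subcarriers, an inverse FFT turns them into one time-domain symbol, and a cyclic prefix is copied onto the front. This is an illustration of the waveform itself, not KickView’s detection code.

```python
# Build one OFDM symbol from scratch: QPSK data on parallel subcarriers,
# an inverse FFT to get the time-domain waveform, then a cyclic prefix.
import numpy as np

n_subcarriers = 64   # 802.11a/g uses a 64-point FFT across its 20 MHz channel
cyclic_prefix = 16   # 0.8 us guard interval at 20 Msps

# Random QPSK symbols, one per subcarrier.
bits = np.random.randint(0, 2, (n_subcarriers, 2))
qpsk = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# One OFDM symbol: IFFT across the subcarriers, then copy the tail to the front.
time_domain = np.fft.ifft(qpsk)
ofdm_symbol = np.concatenate([time_domain[-cyclic_prefix:], time_domain])

print(len(ofdm_symbol), "samples per OFDM symbol")
```

Those repeating structures (the cyclic prefix and the regular subcarrier spacing) are exactly the kinds of patterns a neural network can latch onto, even when it only sees a narrow slice of the full signal.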


AI Prosthesis Is Music To Our Ears

Prostheses are a great help to those who have lost limbs, or who never had them in the first place. Over the past few decades there has been a great deal of research done to make these essential devices more useful, creating prostheses that are capable of movement and more accurately recreating the functions of human body parts. At Georgia Tech, they’re working on just that, with the help of AI.

[Jason Barnes] lost his arm in a work accident, which prevented him from playing the piano the way he used to. The researchers at Georgia Tech worked with him, eventually producing a prosthetic arm that, unlike most, offers individual finger control. This is achieved with an ultrasound probe that detects muscle movements elsewhere on his body in enough detail to distinguish the intent for each finger. A TensorFlow-based neural network analyzes the ultrasound data to determine which finger the user is trying to move. The use of ultrasound was the major breakthrough that made this possible; previous projects have often relied on electromyogram sensors to read muscle impulses, but these lack the resolution required.
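The article doesn’t publish Georgia Tech’s model, but the shape of the problem is a small supervised classifier: ultrasound-derived features in, one of five finger labels out. The Keras sketch below shows that pattern with made-up feature sizes and random stand-in data, just to illustrate the kind of network involved.

```python
# Sketch of an ultrasound-to-finger classifier. Feature size, layer sizes, and
# training data are all invented stand-ins, not the Georgia Tech model.
import numpy as np
import tensorflow as tf

N_FEATURES = 128   # e.g. a flattened patch of the ultrasound image
N_FINGERS = 5

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(N_FEATURES,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_FINGERS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy labelled data standing in for recorded ultrasound frames.
x = np.random.rand(1000, N_FEATURES).astype("float32")
y = np.random.randint(0, N_FINGERS, 1000)
model.fit(x, y, epochs=3, batch_size=32, verbose=0)

frame = np.random.rand(1, N_FEATURES).astype("float32")
print("predicted finger:", int(np.argmax(model.predict(frame))))
```

The hard part in the real system isn’t the classifier itself but collecting clean, labelled ultrasound data fast enough to drive the fingers in real time.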

The prosthesis is nicknamed the “Skywalker arm”, after its similarities to the prostheses seen in the Star Wars films. It’s not [Jason]’s first advanced prosthetic, either – Georgia Tech has also equipped him with an advanced drumming prosthesis. This allows him to use two sticks with a single arm, the second stick using advanced AI routines to drum along with the music in the room.

It’s great to see music being used as a driver to create high-performance prosthetics and push the state of the art forward. We’re sure [Jason] enjoys performing with the new hardware, too. But perhaps you’d like to try something similar, even though you’ve got two hands already? Try this on for size.


Google’s AIY Vision Kit Augments Pi With Vision Processor

Google has announced their soon-to-be-available Vision Kit, the next easy-to-assemble Artificial Intelligence Yourself (AIY) product. You’ll have to provide your own Raspberry Pi Zero W, but that’s okay, because what makes this special is the VisionBonnet board Google does provide: essentially a low-power neural network accelerator board running TensorFlow.

AIY VisionBonnet with Myriad 2 (MA2450) chip

The VisionBonnet is built around the Intel® Movidius™ Myriad 2 (aka MA2450) vision processing unit (VPU). See the video below for an overview of this chip, but what it allows is the rapid processing of compute-intensive neural networks. We don’t think you’d use it for training the neural nets, just for doing the inference, or in human terms, for making use of already-trained networks. It may be worth getting the kit for this board alone to use in your own hacks. An alternative is Movidius’s Neural Compute Stick, which has the same chip on a USB stick for around $80, not quite double the Vision Kit’s $45 price tag.

The Vision Kit isn’t out yet, so we can’t be certain of the details, but based on the hardware it looks like you’ll point the camera at something, press a button, and it will speak. We’ve seen this before with this talking object recognizer on a Pi 3 (full disclosure: it was made by yours truly), but without hardware acceleration a single object recognition took around 10 seconds; with the Vision Kit we expect recognition to happen in real time, making it much more dynamic. And in case it wasn’t clear, a key feature is that nothing is done in the cloud here: all processing is local.

The kit comes with three different applications: an object recognition one that can recognize up to 1000 different classes of objects, another that recognizes faces and their expressions, and a third that detects people, cats, and dogs. While you can get up to a lot of mischief with just that, you can run your own neural networks too. If you need a refresher on TensorFlow then check out our introduction. And be sure to check out the Myriad 2 VPU video below the break.
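The VisionBonnet runs its models on the Myriad 2, but you can get a feel for what a 1000-class object recognizer does by running a stock ImageNet-trained MobileNet on any PC with TensorFlow. This is a desktop stand-in rather than the AIY kit’s own pipeline, and “photo.jpg” is just a placeholder path.

```python
# Classify an image with an ImageNet-trained MobileNet (1000 classes) as a
# desktop stand-in for the kind of model the VisionBonnet runs on-device.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet import (
    MobileNet, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = MobileNet(weights="imagenet")

img = image.load_img("photo.jpg", target_size=(224, 224))  # placeholder path
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")
```

Compact, mobile-friendly networks like MobileNet are exactly what embedded accelerators such as the Myriad 2 are designed to run.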
