Modern Wizard Summons Familiar Spirit

In medieval European folklore, a practitioner of magic may call for assistance from a familiar spirit, which often takes the form of an animal. [Alex Glow] is our modern-day Merlin who invoked the magical incantations of 3D printing, Arduino, and Raspberry Pi to summon her familiar Archimedes: The AI Robot Owl.

The key attraction in this build is Google’s AIY Vision kit, specifically the vision processing unit that tremendously accelerates image classification tasks running on the attached Raspberry Pi Zero W. Instead of taking several seconds to analyze each image, classification can now run several times per second, all performed locally with no connection to Google’s cloud required. (See our earlier coverage for more technical details.) The default demo application of a Google AIY Vision kit is a “joy detector” that looks for faces and attempts to determine whether each face is happy or sad. We’ve previously seen this functionality mounted on a robot dog.
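To give a sense of the control flow, here’s a minimal sketch of a joy-detector-style loop in Python. The helper functions are hypothetical placeholders (this is not the actual AIY Vision API), but the shape is the point: grab a frame, find faces, score them, and react, several times a second with no cloud round trip.

```python
# Illustrative sketch only: get_frame(), detect_faces() and joy_score() are
# made-up stand-ins, not the real AIY Vision library calls.
import random
import time

def get_frame():
    # Placeholder: a real build grabs a frame from the Pi camera.
    return object()

def detect_faces(frame):
    # Placeholder: the VisionBonnet runs face detection locally.
    return [object()] if random.random() > 0.5 else []

def joy_score(face):
    # Placeholder: 0.0 (sad) .. 1.0 (happy), computed on-device.
    return random.random()

while True:
    frame = get_frame()
    for face in detect_faces(frame):
        if joy_score(face) > 0.8:
            print("Happy face! Light the LED, dispense a sticker.")
    time.sleep(0.1)  # several classifications per second, no cloud round trip
```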

[Alex] aimed to go beyond the default app (and default box) to create Archimedes, who was to reward happy people with a sticker. As a moving robotic owl, Archimedes had far more crowd appeal than the vision kit’s default cardboard box. All the kit components have been integrated into Archimedes’ head: one eye is the expected Pi camera, the other is actually the kit’s piezo buzzer, and the kit’s LED-illuminated button now tops the dapper owl’s hat.

Archimedes was created to join in Google’s promotion efforts. Their presence at this Maker Faire consisted of two tents: an introductory “Learn to Solder” tent where people could build a blinky LED badge, and a second tent focused on their line of AIY kits like this vision kit, filled with demos of what the kits can do aside from really cool robot owls.

Hopefully these promotional efforts helped many AIY kits find new homes in the hands of creative makers. It’s pretty exciting that such a powerful and inexpensive neural net processor is now widely available, and we look forward to many more AI-powered hacks to come.

Neural Networks Using Doom Level Creator Like It’s 1993

Readers of a certain vintage will remember the glee of building their own levels for DOOM. There was something magical about carefully crafting a level and then dialing up your friends for a deathmatch session on the new map. Now computer scientists are getting in on that fun in a new way. Researchers from Politecnico di Milano are using artificial intelligence to create new levels for the classic DOOM shooter (PDF whitepaper).

While procedural level generation has been around for decades, recent advances in using machine learning to generate game content (usually levels) are different because they don’t rely on a human-defined algorithm. Instead, they generate new content by using existing, human-generated levels as a model. In effect, they learn from what great game designers have already done and apply those lessons to new level generation. The screenshot shown above is an example of an AI-generated level, and the gameplay can be seen in the video below.

The idea of an AI generating levels is simple in concept but difficult in execution. The researchers used Generative Adversarial Networks (GANs) to analyze existing DOOM maps and then generate new maps similar to the originals. GANs are a type of neural network that learns from training data and then generates similar data. The researchers considered two types of GANs when generating new levels: one that used just the appearance of the training maps, and another that used both the appearance and metrics such as the number of rooms, perimeter length, etc. If you’d like a better understanding of GANs, [Steven Dufresne] covered them in his guide to the evolving world of neural networks.
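To make the adversarial idea concrete, here’s a minimal GAN sketch in PyTorch. The map representation (64×64 single-channel images standing in for level layouts) and the tiny network sizes are our own assumptions for illustration; they are not the architecture used in the paper.

```python
# Minimal GAN sketch: a generator learns to produce "maps" that a
# discriminator cannot tell apart from the training maps.
import torch
import torch.nn as nn

G = nn.Sequential(                              # generator: noise -> fake map
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh())
D = nn.Sequential(                              # discriminator: map -> real/fake
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_maps = torch.rand(128, 64 * 64) * 2 - 1    # stand-in for real DOOM maps

for step in range(1000):
    # Train the discriminator to separate real maps from generated ones.
    fake_maps = G(torch.randn(128, 100)).detach()
    d_loss = bce(D(real_maps), torch.ones(128, 1)) + \
             bce(D(fake_maps), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    g_loss = bce(D(G(torch.randn(128, 100))), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

That tug-of-war is the whole trick: as the discriminator gets better at spotting fakes, the generator is forced to produce maps that look more and more like the human-designed training set.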

While both networks used in this project produced good levels, the one that included the extra metrics resulted in higher-quality levels. However, while the AI-generated levels appeared broadly similar to human-generated levels, many of the little details that humans tend to include were omitted. This is partially due to a lack of good metrics for describing levels and AI-generated data.

Example DOOM maps generated by AI. Each row is one map, and each image is one aspect of the map (floor, height, things, and walls, from left to right)

We can only guess that these researchers’ next step is to use similar techniques to create an entire game (levels, characters, and music) via AI. After all, how hard can it be? Joking aside, we would love to see you take this concept and run with it. We’re dying to play through some gnarly AI-generated levels whipped up by Hackaday readers!

Google Lowers The Artificial Intelligence Bar With Complete DIY Kits

Last year, Google released an artificial intelligence kit aimed at makers, with two different flavors: Vision to recognize people and objects, and Voice to create a smart speaker. Now, Google is back with a new version to make it even easier to get started.

The main difference in this year’s (v1.1) kits is that they include some basic hardware, such as a Raspberry Pi and an SD card. While this might not be very useful to most Hackaday readers, who probably have a spare Pi (or 5) lying around, this is invaluable for novice makers or the educational market. These audiences now have access to an all-in-one solution to build projects and learn more about artificial intelligence.

We’ve previously seen toys, phones, and intercoms get upgrades with an AIY kit, but would love to see more! [Mike Rigsby] has used one in his robot dog project to detect when people are smiling. These updated kits are available at Target (Voice, Vision). If the kit is too expensive, our own [Inderpreet Singh] can show you how to build your own.

Via [BGR].

Google’s AIY Vision Kit Augments Pi With Vision Processor

Google has announced their soon-to-be-available Vision Kit, their next easy-to-assemble Artificial Intelligence Yourself (AIY) product. You’ll have to provide your own Raspberry Pi Zero W, but that’s okay, because what makes this kit special is the VisionBonnet board that Google does provide: basically a low-power neural network accelerator board running TensorFlow.

AIY VisionBonnet with Myriad 2 (MA2450) chip

The VisionBonnet is built around the Intel® Movidius™ Myriad 2 (aka MA2450) vision processing unit (VPU) chip. See the video below for an overview of this chip, but what it allows is the rapid processing of compute-intensive neural networks. We don’t think you’d use it for training the neural nets, just for doing the inference, or in human terms, for making use of already-trained neural nets. It may be worth getting the kit for this board alone to use in your own hacks. An alternative is to get Movidius’s Neural Compute Stick, which has the same chip on a USB stick for around $80, not quite double the Vision Kit’s $45 price tag.

The Vision Kit isn’t out yet so we can’t be certain of the details, but based on the hardware it looks like you’ll point the camera at something, press a button, and it will speak. We’ve seen this before with this talking object recognizer on a Pi 3 (full disclosure: it was made by yours truly), but without hardware acceleration, a single object recognition took around 10 seconds. With the Vision Kit we expect recognition to run in real time, so it should be far more dynamic. And in case it wasn’t clear, a key feature is that nothing is done in the cloud here; all processing is local.

The kit comes with three different applications: one that can recognize up to 1,000 different classes of objects, another that recognizes faces and their expressions, and a third that detects people, cats, and dogs. While you can get up to a lot of mischief with just those, you can run your own neural networks too. If you need a refresher on TensorFlow, check out our introduction. And be sure to check out the Myriad 2 VPU video below the break.
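For a rough feel of the kind of inference the VisionBonnet accelerates, here’s a sketch using stock TensorFlow/Keras. This is not the AIY kit’s own API; a pretrained MobileNetV2 (roughly 1,000 ImageNet classes) is simply a stand-in for the kit’s object recognizer, and “test.jpg” is whatever image you point it at.

```python
# Plain TensorFlow/Keras image classification, as an illustration of the kind
# of workload the Vision Kit runs on-device.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")       # stock ~1000-class classifier

img = tf.keras.preprocessing.image.load_img("test.jpg", target_size=(224, 224))
x = preprocess_input(
    np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")
```

On a bare Pi Zero this sort of model runs painfully slowly, which is exactly the gap the Myriad 2 is there to close.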

Wrap Your Mind Around Neural Networks

Artificial Intelligence is playing an ever-increasing role in the lives of civilized nations, though most citizens probably don’t realize it. It’s now commonplace to speak with a computer when calling a business. Facebook is becoming scarily accurate at recognizing faces in uploaded photos. Physical interaction with smartphones is becoming a thing of the past… with Apple’s Siri and Google Speech, it’s slowly but surely becoming easier to simply talk to your phone and tell it what to do than to type or touch an icon. Try this if you haven’t before: if you have an Android phone, say “OK Google”, followed by “Lumos”. It’s magic!

Advertisements for products we’re interested in pop up on our social media accounts as if something is reading our minds. Truth is, something is reading our minds… though it’s hard to pin down exactly what that something is. An advertisement might pop up for something we want, even though we never realized we wanted it until we saw it. This is no coincidence; it stems from an AI algorithm.

At the heart of many of these AI applications lies a process known as Deep Learning. There has been a lot of talk about Deep Learning lately, not only here on Hackaday, but all over the interwebs. And like most things related to AI, it can be a bit complicated and difficult to understand without a strong background in computer science.

If you’re familiar with my quantum theory articles, you’ll know that I like to take complicated subjects, strip away the complication as best I can, and explain them in a way that anyone can understand. The goal of this article is to apply a similar approach to this idea of Deep Learning. If neural networks make you cross-eyed and machine learning gives you nightmares, read on. You’ll see that “Deep Learning” sounds like a daunting subject, but is really just a $20 term used to describe something whose underpinnings are relatively simple.
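As a down payment on that promise, here is a toy neural network in plain NumPy that learns XOR. The constant third input column stands in for a bias term, and the whole thing is just the classic multiply, squash, measure-the-error, nudge-the-weights loop; a sketch of the moving parts, not production deep learning.

```python
# A two-layer network learning XOR with nothing but NumPy.
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Inputs (the constant third column acts as a bias) and XOR targets.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

np.random.seed(1)
W1 = 2 * np.random.random((3, 4)) - 1   # input -> hidden weights
W2 = 2 * np.random.random((4, 1)) - 1   # hidden -> output weights

for _ in range(20000):
    hidden = sigmoid(X @ W1)            # forward pass
    output = sigmoid(hidden @ W2)
    # Backward pass: how wrong were we, and which way should the weights move?
    output_delta = (y - output) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ output_delta
    W1 += X.T @ hidden_delta

print(output.round(2))   # should land close to [[0], [1], [1], [0]]
```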

Neural Networks Walk Better Than Humans for Game Animation

Modern-day video games have come a long way from Mario the plumber hopping across the screen. The incredibly intricate environments of today’s games are part of the lure for new gamers, and that experience is brought to life by the characters interacting with the scene. However, the illusion of the virtual world is disrupted by the unnatural movements of the figures when performing actions such as turning around suddenly or climbing a hill.

To remedy the abrupt movements, [Daniel Holden et al.] recently published a paper (PDF) and a video showing a method to greatly improve the real-time character control mechanism. The proposed system uses a neural network that has been trained on a large data set of walking, jumping, and other sequences on various terrains. The key is breaking down the process of bipedal movement and its cyclic behaviour into a series of sub-steps or phases. Each phase translates to a natural posture for the character while moving. The system precomputes the next phases offline to conserve computational resources at runtime. Then, given the user’s control input, the previous pose of the character (including joint positions), and the terrain geometry, the next frame of the animation is computed. The computation is done by a regression network that calculates the future positions of the joints, while a blending function is used for Motion Matching as described in a presentation (PDF) and video by [Simon Clavet].
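As a rough mental model of that regression step, here’s a toy sketch in PyTorch: a small network takes the phase, the previous pose, the user’s control input, and a few terrain height samples, and outputs the next pose. The sizes and layers are placeholders of our own; the phase-functioned network in the paper is considerably more sophisticated.

```python
# Toy pose-regression sketch: state in, next frame's joint positions out.
import torch
import torch.nn as nn

N_JOINTS = 31
POSE = N_JOINTS * 3                    # x, y, z per joint
CONTROL, TERRAIN = 4, 16               # desired direction/speed, height samples

net = nn.Sequential(                   # untrained stand-in for the regression net
    nn.Linear(1 + POSE + CONTROL + TERRAIN, 512), nn.ELU(),
    nn.Linear(512, 512), nn.ELU(),
    nn.Linear(512, POSE))

def next_pose(phase, prev_pose, user_control, terrain):
    """One animation step: regress the next pose from the current state."""
    x = torch.cat([torch.tensor([phase]), prev_pose, user_control, terrain])
    return net(x)

pose = torch.zeros(POSE)               # dummy starting pose
with torch.no_grad():                  # inference only, no training here
    for frame in range(3):
        phase = (frame * 0.1) % 1.0    # where we are in the gait cycle
        pose = next_pose(phase, pose, torch.zeros(CONTROL), torch.zeros(TERRAIN))
print(pose.shape)                      # torch.Size([93])
```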

Google AIY: Artificial Intelligence Yourself

When Amazon released the API to their voice service Alexa, they basically forced any serious players in this domain to bring their offerings out into the hacker/maker market as well. Now Google and Raspberry Pi have come together to bring us ‘Artificial Intelligence Yourself’ or AIY.

A free hardware kit made by Google was distributed with Issue 57 of The MagPi magazine, which is targeted at makers and hobbyists; you can see the kit in the video after the break. The kit contains a Raspberry Pi Voice HAT, a microphone board, a speaker, and a number of small bits to mount everything on a Raspberry Pi 3. Putting it all together and following the instructions on the official site gets you a Google voice-interaction kit with a bunch of I/Os just screaming to be put to good use.

The source code for the Python app can be downloaded from GitHub and consists of a loop that awaits a trigger. The trigger can be a press of a button or a clap near the microphones. When a trigger is detected, the recorder function takes over, sending the audio stream to the Google Cloud. Speech-to-text conversion happens there, and the result comes back and is spoken aloud by a text-to-speech engine. The repository suggests that the official Voice Kit SD image (893 MB download) is based on Raspbian, so don’t go reflashing a memory card right away; you should be able to add this to an existing install.
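The overall shape of that loop is easy to sketch in Python. The helper names below are made up for illustration (they are not the actual AIY voice library calls), but they mirror the flow described above: wait for a trigger, record, ship the audio off for speech-to-text, then speak the result.

```python
# Illustrative sketch only: these helpers are placeholders, not the real
# AIY voice kit API.
import time

def wait_for_trigger():
    # Placeholder: the real app waits for the arcade button or a clap.
    time.sleep(1)

def record_audio():
    # Placeholder: stream audio from the Voice HAT microphones.
    return b"fake-audio"

def cloud_speech_to_text(audio):
    # Placeholder: the real kit sends the stream to Google's Cloud Speech API.
    return "hello world"

def speak(text):
    # Placeholder: text-to-speech playback through the kit's speaker.
    print("Saying:", text)

while True:
    wait_for_trigger()
    text = cloud_speech_to_text(record_audio())
    speak(f"You said {text}")
```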
