AI At The Edge Hack Chat

Join us Wednesday at noon Pacific time for the AI at the Edge Hack Chat with John Welsh from NVIDIA!

Machine learning was once the business of big iron like IBM’s Watson or the nearly limitless computing power of the cloud. But the power of AI is moving away from data centers and out to the edge, where IoT devices are doing things once unheard of. Embedded systems capable of running modern AI workloads are now cheap enough for almost any hacker to afford, opening the door to applications and capabilities that were once only science fiction dreams.

John Welsh is a Developer Technology Engineer with NVIDIA, a leading company in the Edge computing space. He’ll be dropping by the Hack Chat to discuss NVIDIA’s Edge offerings, like the Jetson Nano we recently reviewed. Join us as we discuss NVIDIA’s complete Jetson embedded AI product lineup, getting started with Edge AI, and where Edge AI is headed.


Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, May 1 at noon Pacific time. If time zones have got you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.

Stethoscopes, Electronics, And Artificial Intelligence

For all the advances in medical diagnostics made over the last two centuries of modern medicine, from the ability to peer deep inside the body with the help of superconducting magnets to harnessing the power of molecular biology, it seems strange that the enduring symbol of the medical profession is something as simple as the stethoscope. Hardly a medical examination goes by without the frigid kiss of a stethoscope against one’s chest, while we search the practitioner’s face for a telltale frown revealing something wrong from deep inside us.

The stethoscope has changed little since its invention and yet remains a valuable, if problematic, diagnostic tool. Efforts have been made to address its shortcomings over the years, but only with relatively recent advances in digital signal processing (DSP), microelectromechanical systems (MEMS), and artificial intelligence has any real progress been made. This leaves so-called smart stethoscopes poised to make a real difference in diagnostics, especially in the developing world and in austere or emergency situations.

Continue reading “Stethoscopes, Electronics, And Artificial Intelligence”

Rise Of The Unionized Robots

For the first time, a robot has been unionized. This shouldn’t be too surprising, as a European Union resolution has already recommended creating a legal status for robots for purposes of liability, and a robot has already been made a citizen of one country. Naturally, these steps have been taken either to stimulate discussion before reality catches up or as publicity stunts.

What would reality have to look like before a robot should be given legal status similar to that of a human? For that, we can look to fiction.

Tony Stark, the fictional lead character in the Iron Man movies, has a robot called Dum-E which is little more than an industrial robot arm. However, Stark interacts with it using natural language, and it clearly has feelings, which it shows through its posture and sounds of sadness when Stark scolds it after it needlessly sprays him with a fire extinguisher. In one movie Dum-E saves Stark’s life while making sounds of compassion. And when Stark makes Dum-E wear a dunce cap for some unexplained transgression, Dum-E appears to get even by shooting something at Stark. So while Dum-E is a robot assistant capable of responding to natural language, something we’re sure Hackaday readers would love to have in their workshops, it also has emotions and acts of its own volition.

Here’s an exercise to try to find the boundary between a tool and a robot deserving of personhood.

Continue reading “Rise Of The Unionized Robots”

The Naughty AIs That Gamed The System

Artificial intelligence (AI) has undergone something of a renaissance in the last few years. There’s been plenty of research into neural networks and other technologies, often based around teaching an AI system to achieve certain goals or targets. However, this method of training is fraught with danger, because, just like in the movies, the computer doesn’t always play fair.

It’s often very much a case of the AI doing exactly what it’s told rather than exactly what you intended. Like a devious child who will gladly go to bed in the literal sense but will not actually sleep, this can produce unexpected and often quite hilarious results. [Victoria] has compiled a master list of scholarly references on exactly this phenomenon.

The list spans a wide range of cases. There’s the amusing evolutionary algorithm designed to create creatures capable of high-speed movement, which merely spawned very tall creatures that achieved those speeds by falling over. More worryingly, there’s the AI trained to distinguish toxic from edible mushrooms, which simply picked up on the fact that it was presented with the two types in alternating order, which made for an unreliable model in the real world. Similarly, a model designed to assess the malignancy of skin cancers determined that lesions photographed with rulers for scale were more likely to be cancerous.
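That falling-over trick is easy to reproduce in miniature. Here’s a little Python toy of our own devising (not from [Victoria]’s list, and with the physics simulator reduced to a single formula) that asks an evolutionary algorithm for the fastest-moving creature. Since the genome is just a body height and the fitness function rewards peak speed, evolution dutifully breeds skyscrapers instead of sprinters:

```python
import random

G = 9.81  # gravity, m/s^2

def peak_speed(height):
    """Speed the creature's head reaches if the body simply topples over.
    (Stand-in for a physics simulator: a fall from height h hits sqrt(2*g*h).)"""
    return (2 * G * height) ** 0.5

def evolve(generations=50, pop_size=20):
    # Genome is just a body height in meters; fitness is "fastest movement".
    population = [random.uniform(0.5, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=peak_speed, reverse=True)
        parents = population[: pop_size // 2]           # keep the fastest half
        children = [max(0.1, p + random.gauss(0, 0.2))  # mutate heights
                    for p in parents]
        population = parents + children
    return max(population, key=peak_speed)

best = evolve()
print(f"'Fastest' creature is {best:.1f} m tall, moving {peak_speed(best):.1f} m/s")
# The optimizer never learns to walk -- it just evolves ever-taller bodies
# whose fall registers as high-speed movement: specification gaming in 20 lines.
```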

[Victoria] refers to this as “specification gaming”. One can draw parallels to classic sci-fi stories around the “Laws of Robotics”, where robots take such laws to their literal extremes, often causing great harm in the process. It’s an interesting discussion of the difficulty in training artificially intelligent systems to achieve their set goals without undesirable side effects.

We’ve seen plenty of work in this area before – like this use of evolutionary algorithms in circuit design.

The Little Cat That Could

Most humans take about a year to learn their first steps, and they are notoriously clumsy. [Hartvik Line] taught a robotic cat to walk [YouTube link] in less time, but this cat had a couple of advantages over a pre-toddler. The first was that it had four legs; the second came from a machine learning technique called genetic algorithms, which surpassed human fine-tuning in two hours. That’s a pretty good benchmark.

The robot itself is an impressive piece of work inspired by robots from EPFL, a research institute in Switzerland. All that Swiss engineering is not easy for one person to program, much less a student, but that is exactly what happened. “Nixie,” as she is called, is part of a master’s thesis by [Hartvik] at the University of Stavanger in Norway. Machine learning efficiency outstripped human meddling very quickly, and it can even relearn to walk if the chassis is damaged.
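We don’t have [Hartvik]’s code in front of us, but the core loop of a gait-learning genetic algorithm fits on a page. Here’s a minimal sketch under some assumptions: the genome is a vector of gait parameters, and the `simulate_walk` fitness function is a placeholder standing in for a run on real hardware or in a physics simulator:

```python
import random

NUM_PARAMS = 8  # e.g. per-leg phase offsets, stride length, step height

def simulate_walk(genome):
    """Placeholder fitness: distance walked with these gait parameters.
    On a real robot this would run the gait in simulation or on hardware."""
    target = [0.5] * NUM_PARAMS           # pretend some unknown gait is best
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def crossover(a, b):
    cut = random.randrange(1, NUM_PARAMS)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.2, scale=0.1):
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

population = [[random.uniform(0, 1) for _ in range(NUM_PARAMS)]
              for _ in range(30)]
for generation in range(100):
    population.sort(key=simulate_walk, reverse=True)
    elite = population[:10]  # the ten best gaits survive unchanged
    population = elite + [mutate(crossover(random.choice(elite),
                                           random.choice(elite)))
                          for _ in range(20)]

best = max(population, key=simulate_walk)
print("Best gait parameters:", [round(g, 2) for g in best])
```

Swap the placeholder fitness for actual walking distance and the same loop keeps working, which is also why a damaged chassis isn’t fatal: evolution just optimizes against whatever body it is given.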

We have been watching genetic algorithm programming for more than half a decade, and while Skynet hasn’t popped forth, we do have a robot kitty taking its first steps.

Continue reading “The Little Cat That Could”

Modern Wizard Summons Familiar Spirit

In European medieval folklore, a practitioner of magic may call for assistance from a familiar spirit, who takes the disguised form of an animal. [Alex Glow] is our modern-day Merlin who invoked the magical incantations of 3D printing, Arduino, and Raspberry Pi to summon her familiar Archimedes: The AI Robot Owl.

The key attraction in this build is Google’s AIY Vision kit, specifically the vision processing unit that tremendously accelerates image classification tasks running on an attached Raspberry Pi Zero W. Instead of taking several seconds to analyze each image, classification can now run several times per second, all performed locally, with no connection to Google’s cloud required. (See our earlier coverage for more technical details.) The default demo application of a Google AIY Vision kit is a “joy detector” that looks for faces and attempts to determine if a face is happy or sad. We’ve previously seen this functionality mounted on a robot dog.
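For the curious, the joy-detection loop is remarkably small. This sketch follows the style of the face-detection examples bundled with the AIY Vision Python library; exact module paths and APIs may vary between releases of the kit’s software image:

```python
# Sketch in the style of the AIY Vision Kit's bundled examples; exact
# module paths may differ between releases of the AIY software image.
from picamera import PiCamera
from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

with PiCamera(sensor_mode=4, framerate=30) as camera:
    # Inference runs on the Vision Bonnet's VPU, not the Pi Zero's CPU,
    # so every frame is classified locally at several frames per second.
    with CameraInference(face_detection.model()) as inference:
        for result in inference.run():
            for face in face_detection.get_faces(result):
                if face.joy_score > 0.8:  # threshold is our arbitrary choice
                    print("Happy face! Dispense a sticker.")
```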

[Alex] aimed to go beyond the default app (and default box) to create Archimedes, who was to reward happy people with a sticker. As a moving robotic owl, Archimedes had far more crowd appeal than the vision kit’s default cardboard box. All the kit components have been integrated into Archimedes’ head: one eye is the expected Pi camera, while the other is actually the kit’s piezo buzzer. The vision kit’s LED-illuminated button now tops the dapper owl’s hat.

Archimedes was created to join in Google’s promotional efforts. Their presence at this Maker Faire consisted of two tents: an introductory “Learn to Solder” tent where people could create a blinky LED badge, and another focused on their line of AIY kits like this vision kit, filled with demos of what the kits can do aside from really cool robot owls.

Hopefully these promotional efforts helped many AIY kits find new homes in the hands of creative makers. It’s pretty exciting that such a powerful and inexpensive neural net processor is now widely available, and we look forward to many more AI-powered hacks to come.

Continue reading “Modern Wizard Summons Familiar Spirit”

Neural Networks Using Doom Level Creator Like It’s 1993

Readers of a certain vintage will remember the glee of building your own levels for DOOM. There was something magical about carefully crafting a level and then dialing up your friends for a deathmatch session on the new map. Now computer scientists are getting in on that fun in a new way. Researchers from Politecnico di Milano are using artificial intelligence to create new levels for the classic DOOM shooter (PDF whitepaper).

While procedural level generation has been around for decades, recent machine learning approaches to generating game content (usually levels) are different because they don’t use a human-defined algorithm. Instead, they generate new content using existing, human-designed levels as a model. In effect, they learn from what great game designers have already done and apply those lessons to new level generation. The screenshot shown above is an example of an AI-generated level, and the gameplay can be seen in the video below.

The idea of an AI generating levels is simple in concept but difficult in execution. The researchers used Generative Adversarial Networks (GANs) to analyze existing DOOM maps and then generate new maps similar to the originals. GANs are a type of neural network which learns from training data and then generates similar data. They considered two types of GANs when generating new levels: one that just used the appearance of the training maps, and another that used both the appearance and metrics such as the number of rooms, perimeter length, etc. If you’d like a better understanding of GANs, [Steven Dufresne] covered it in his guide to the evolving world of neural networks.
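To give a feel for the mechanics, here’s a heavily condensed GAN training loop in PyTorch. This is our illustrative sketch rather than the researchers’ code (theirs trains on images derived from real DOOM WADs, optionally conditioned on map metrics), but it shows the adversarial tug-of-war: the generator maps noise to fake “maps,” the discriminator learns to spot fakes, and each network’s update forces the other to improve:

```python
import torch
import torch.nn as nn

LATENT, MAP = 64, 32 * 32  # noise size; maps flattened to 32x32 "images"

G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                  nn.Linear(256, MAP), nn.Tanh())          # noise -> fake map
D = nn.Sequential(nn.Linear(MAP, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())         # map -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_maps):                # real_maps: (batch, MAP) in [-1, 1]
    batch = real_maps.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: score real maps as 1, generated maps as 0.
    fake = G(torch.randn(batch, LATENT))
    loss_d = bce(D(real_maps), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into scoring fakes as 1.
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Random stand-in training data; the paper uses real human-designed maps.
for epoch in range(100):
    train_step(torch.rand(16, MAP) * 2 - 1)
```

Swap the random tensors for flattened floor, height, and wall images of real levels and the same loop applies; the researchers’ second network simply feeds the extra map metrics in alongside the images.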

While both networks used in this project produce good levels, the one that included the extra metrics produced higher-quality results. However, while the AI-generated levels appeared broadly similar to human-designed ones, many of the little details that human designers tend to include were missing. This is partly due to a lack of good metrics for describing levels and AI-generated data.

Example DOOM maps generated by AI. Each row is one map, and each image is one aspect of the map (floor, height, things, and walls, from left to right)

We can only guess that these researchers’ next step is to use similar techniques to create an entire game (levels, characters, and music) via AI. After all, how hard can it be? Joking aside, we would love to see you take this concept and run with it. We’re dying to play through some gnarly levels whipped up by Hackaday readers’ AIs!

Continue reading “Neural Networks Using Doom Level Creator Like It’s 1993”