AI at the Edge Hack Chat

Join us Wednesday at noon Pacific time for the AI at the Edge Hack Chat with John Welsh from NVIDIA!

Machine learning was once the business of big iron like IBM’s Watson or the nearly limitless computing power of the cloud. But the power in AI is moving away from data centers to the edge, where IoT devices are doing things once unheard of. Embedded systems capable of running modern AI workloads are now cheap enough for almost any hacker to afford, opening the door to applications and capabilities that were once only science fiction dreams.

John Welsh is a Developer Technology Engineer with NVIDIA, a leading company in the Edge computing space. He’ll be dropping by the Hack Chat to talk about NVIDIA’s Edge offerings, like the Jetson Nano we recently reviewed. Join us as we discuss NVIDIA’s complete Jetson embedded AI product lineup, getting started with Edge AI, and where Edge AI is headed.


Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, May 1 at noon Pacific time. If time zones have got you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.

But Can Your AI Recognize Slugs?

The common garden slug is a mystery. Observing these creatures as they slowly emerge from their slimy lairs each evening, it’s hard to imagine how much damage they can do. With paradoxical speed, they can mow down row after row of tender seedlings, leaving nothing but misery in their mucusy wake.

To combat this slug menace, [Tegwyn☠Twmffat] (the [☠] is silent) is developing this AI-powered slug-busting system. The squeamish, or those challenged by the ethics of slug eradication, can relax: no slugs have been harmed yet. So far, [Tegwyn] has concentrated on detecting the slugs, a decidedly non-trivial problem since few existing AI models are trained to recognize them.

So far, [Tegwyn] has acquired 5,712 images of slugs in their natural environment – no mean feat, as they only come out at night, they blend into their background, and their slimy surface makes for challenging reflections. The video below shows moderate success of the trained model on a static image of a slug; it also gives a glimpse of the hardware used, which includes an NVIDIA Jetson TX2. [Tegwyn] plans to capture even more images to refine the model and boost it from the 50 to 60% confidence level to something that will allow for the remediation phase of the project, which apparently involves lasers. He’s willing to entertain other methods of disposal, though; perhaps a salt-shooting turret gun?
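
For the curious, here’s a minimal sketch of what that confidence filtering looks like in practice: load a trained detector, run it on a still image, and throw away anything below a threshold. This is not [Tegwyn]’s actual code; the model file, input size, and output layout are all hypothetical placeholders.

```python
# Hypothetical sketch of confidence-threshold filtering for a slug detector.
# The model file and its output layout are assumptions, not [Tegwyn]'s code.
import cv2

CONF_THRESHOLD = 0.6  # raise this as the model improves past the 50-60% range

net = cv2.dnn.readNetFromONNX("slug_detector.onnx")  # placeholder model file

def detect_slugs(frame):
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0,
                                 size=(416, 416), swapRB=True)
    net.setInput(blob)
    detections = net.forward()
    # Assume each row is [x, y, w, h, confidence, class]; real layouts vary by model.
    return [row for row in detections.reshape(-1, 6) if row[4] >= CONF_THRESHOLD]

frame = cv2.imread("slug_test_image.jpg")  # placeholder test image
print(f"{len(detect_slugs(frame))} detections above {CONF_THRESHOLD:.0%} confidence")
```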

This isn’t the first garden-tending project [Tegwyn] has tackled. You may recall The Weedinator, his 2018 Hackaday Prize entry. This slug buster is one of his entries for the 2019 Hackaday Prize, which was just announced. We’re looking forward to seeing the onslaught of cool new projects everyone will be coming up with.

Continue reading “But Can Your AI Recognize Slugs?”

Hackaday Links: March 17, 2019

There’s now an official Raspberry Pi keyboard and mouse. The mouse is just a mouse clad in pink and white plastic, but the Pi keyboard has some stuff going for it. It’s small, which is what you want for a Pi keyboard, and it has a built-in USB hub. Even Apple got that idea right with the first iMac keyboard. The keyboard and mouse combo are available for £22.00.

A new Raspberry Pi keyboard and a commemorative 50p coin from the Royal Mint featuring the works of Stephen Hawking? Wow, Britain is tearing up the headlines recently.

Just because, here’s a Power Wheels Barbie Jeep with a 55 HP motor. The interesting thing to note here is how simple this build actually is. If you look at some of the Power Wheels Racing cars, they have actual diffs on the rear axle. This build gets a ton of points for the suspension, though. Somewhere out there on the Internet, there is the concept of the perfect Power Wheels conversion. There might be a drive shaft instead of a drive chain, there might be an electrical system, and someone might have figured out how someone over the age of 12 can fit comfortably in a Power Wheels Jeep. No one has done it yet.

AI is taking away our free speech! Free speech, as you’re all aware, applies to all speech in all forms, in all venues. Except you specifically can’t yell fire in a movie theater; that’s the one exception. Now AI researchers are treading on your right to free speech, an affront to the Gadsden flag flying over our compound and the ‘no step on snek’ patch on our tactical balaclava, with a Chrome plugin. This plugin filters ‘toxic’ comments with AI, but there’s an unintended consequence: people need to read what I have to say, and this will filter it out! The good news is that it doesn’t work on Hackaday because our commenting system is terrible.

This week was the 30th anniversary of the World Wide Web, first proposed on March 11, 1989 by Tim Berners-Lee. The web, and to a greater extent the Internet, is the single most impactful invention of the last five hundred years; your overly simplistic view of world history can trace modern western hegemony and the Renaissance to Gutenberg’s invention of the printing press, and so it will be true with the Internet. Tim’s NeXT cube, in a case behind glass at CERN, will be viewed with the same reverence as Gutenberg’s first printing press (if it had survived, but you get where I’m going with this). Five hundred years from now, the major historical artifact from the 20th century will be a NeXT cube that was, coincidentally, made by Steve Jobs. If you want to get your hands on a NeXT cube, be prepared to pony up, but Adafruit has a great tutorial for running OpenStep on a virtual machine. If you want the real experience, you can pick up a NeXT keyboard and mouse relatively cheaply.

Sometimes you need an RCL box, so here’s one on Kickstarter. Yeah, it’s kind of expensive. Have you ever bought every value of inductor?

A Game Boy Supercomputer for AI Research

Reinforcement learning has been a hot area of artificial intelligence research. It’s a method in which software agents make decisions and refine them over time by analyzing the resulting outcomes. [Kamil Rocki] had been exploring this field, but needed some more powerful tools. As it turned out, a cluster of emulated Game Boys running at a billion FPS was just the ticket.
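
To make that decide-and-refine loop concrete, here’s a toy, self-contained Q-learning sketch that has nothing to do with [Kamil]’s Game Boy cluster: an agent learns by trial and error to walk down a short corridor to a goal. The environment, rewards, and hyperparameters are made up purely for illustration.

```python
# Toy Q-learning sketch of the reinforcement-learning loop described above.
# The corridor "game", rewards, and hyperparameters are illustrative assumptions.
import random

N_STATES, ACTIONS = 6, (-1, +1)          # positions 0..5, move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:         # the goal is the last cell
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=lambda a: q[(state, a)]))
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        # Q-learning update: refine the value estimate from the observed outcome.
        q[(state, action)] += alpha * (
            reward
            + gamma * max(q[(next_state, a)] for a in ACTIONS)
            - q[(state, action)]
        )
        state = next_state

# After training, the learned policy should be "move right" in every cell.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```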

The trick to efficient development of reinforcement learning systems is to be able to run things quickly. If it takes an AI one thousand attempts to clear level 1 of Super Mario Bros., you’d better hope you’re not running that in real time. [Kamil] started by coding a Game Boy emulator in C. He then implemented it in Verilog, creating a cluster of emulated Game Boys that ran games at breakneck speed and greatly accelerated the training and development process.

[Kamil] goes into detail about how the work came to revolve around the Game Boy platform. After initial work with the Atari 2600, which is something of a de facto standard in RL circles, [Kamil] began to explore further. The goal was an environment with a well-documented CPU, a simple display to cut down on the preprocessing required, and a wide selection of games.

The goal of the project is to allow [Kamil] to explore the transfer of knowledge from one game to another in RL systems. The aim is to determine whether, for an AI, skills learned in Metroid can help in Prince of Persia, for example. This is arguably true for human players, but it remains to be seen whether it carries over to RL systems.

It’s rather advanced work, on both a hardware emulation level and in terms of AI research. Similar work has been done, training a computer to play Super Mario through monitoring score and world values. We can’t wait to see where this research leads in years to come.

This Cardboard Box Can Tell You What It Sees

It wasn’t that long ago that talking to computers was the preserve of movies and science fiction. Slowly, voice recognition improved, and these days it’s getting to be pretty usable. The technology has moved beyond basic keywords, and can now parse sentences in natural language. [Liz Meyers] has been working with the technology, creating WhatIsThat – an AI that can tell you what it’s looking at.

Adding a camera to Google’s AIY Voice Kit makes for a versatile object identification system.

The device is built around Google’s AIY Voice Kit, which consists of a Raspberry Pi with some additional hardware and software to enable it to process voice queries. [Liz] combined this with a Raspberry Pi camera and the Google Cloud Vision API. This allows WhatIsThat to respond to users asking questions by taking a photo, and then identifying what it sees in the frame.
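
As a rough idea of how those pieces could fit together, the sketch below captures a photo with the Pi camera and hands it to the Cloud Vision API for label detection. It’s a guess at the overall flow rather than [Liz]’s actual code, and it assumes the picamera and google-cloud-vision Python packages along with valid Google Cloud credentials.

```python
# Hypothetical sketch of the capture-then-identify flow, not [Liz]'s actual code.
# Assumes the picamera and google-cloud-vision packages and configured credentials.
import time

from picamera import PiCamera
from google.cloud import vision

def capture_photo(path="snapshot.jpg"):
    with PiCamera() as camera:
        camera.start_preview()
        time.sleep(2)                      # let the exposure settle
        camera.capture(path)
    return path

def describe_image(path):
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        # On older library versions this is vision.types.Image instead.
        image = vision.Image(content=f.read())
    labels = client.label_detection(image=image).label_annotations
    # The same client also offers text_detection() and logo_detection(),
    # which is how reading text aloud and spotting logos becomes possible.
    return [(label.description, label.score) for label in labels]

if __name__ == "__main__":
    for description, score in describe_image(capture_photo()):
        print(f"I see {description} ({score:.0%} confidence)")
```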

It may seem like a frivolous project to those with working vision, but there is serious potential for this technology in the accessibility space. Not only can the device describe things like animals or other objects, it can also read text aloud and even identify logos. The ability of the software to go beyond simple object recognition is impressive – a video demonstration shows the AI correctly identifying a Boston Terrier and attributing a quote to Albert Einstein.

Artificial intelligence has made a huge difference to the viability of voice recognition – because it’s one thing to understand the words, and another to understand what they mean when strung together. Video after the break.

[Thanks to Baldpower for the tip!]

Continue reading “This Cardboard Box Can Tell You What It Sees”

Stethoscopes, Electronics, and Artificial Intelligence

For all the advances in medical diagnostics made over the last two centuries of modern medicine, from the ability to peer deep inside the body with the help of superconducting magnets to harnessing the power of molecular biology, it seems strange that the enduring symbol of the medical profession is something as simple as the stethoscope. Hardly a medical examination goes by without the frigid kiss of a stethoscope against one’s chest, while we search the practitioner’s face for a telltale frown revealing something wrong from deep inside us.

The stethoscope has changed little since its invention and yet remains a valuable if problematic diagnostic tool. Efforts have been made to address its shortcomings over the years, but only with relatively recent advances in digital signal processing (DSP), microelectromechanical systems (MEMS), and artificial intelligence has any real progress been made. That leaves so-called smart stethoscopes poised to make a real difference in diagnostics, especially in the developing world and in austere or emergency situations.

Continue reading “Stethoscopes, Electronics, and Artificial Intelligence”