AI Listens to Radio

We’ve seen plenty of examples of neural networks listening to speech, reading characters, or identifying images. KickView had a different idea: they wanted to teach a neural network to recognize radio signals. Not just any radio signals, but Orthogonal Frequency Division Multiplexing (OFDM) waveforms.

OFDM is a modulation method used by WiFi, cable TV, and many other systems. In particular, they look at an 802.11g signal with a bandwidth of 20 MHz. The question is: given a receiver for 802.11g, how can you reliably detect that an 802.11ac signal, which can be up to 160 MHz wide, is using your channel? To demonstrate the technique they decided to detect 20 MHz signals using a 5 MHz receiver bandwidth.
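The article doesn’t share KickView’s actual network, but to make the idea concrete, here’s a minimal sketch of the sort of classifier you might train: a small 1-D convolutional net that looks at a window of raw I/Q samples and decides whether an OFDM burst is present. The window length, layer sizes, and the synthetic stand-in data are all assumptions for illustration, not their design.

```python
# A minimal sketch, not KickView's actual network: a 1-D CNN that looks at a window
# of raw I/Q samples and decides whether an OFDM burst is present.
import numpy as np
import tensorflow as tf

WINDOW = 1024  # hypothetical number of complex samples per decision

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 2)),          # I and Q as two channels
    tf.keras.layers.Conv1D(32, 7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # OFDM present / not present
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stand-in training data: white noise vs. noise plus a crude multi-carrier comb
noise = np.random.randn(200, WINDOW, 2).astype("float32")
t = np.arange(WINDOW)
comb = sum(np.cos(2 * np.pi * k * t / 64.0) for k in range(1, 9))
signal = noise[:100] + np.stack([comb, np.zeros_like(comb)], axis=-1).astype("float32")
x = np.concatenate([signal, noise[100:]])
y = np.concatenate([np.ones(100), np.zeros(100)])
model.fit(x, y, epochs=3, batch_size=32)
```

In the real problem you’d feed windows captured by the narrowband receiver rather than synthetic noise, but the shape of the pipeline is the same.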

Continue reading “AI Listens to Radio”

AI Prosthesis Is Music To Our Ears

Prostheses are a great help to those who have lost limbs, or who never had them in the first place. Over the past few decades there has been a great deal of research done to make these essential devices more useful, creating prostheses that are capable of movement and more accurately recreating the functions of human body parts. At Georgia Tech, they’re working on just that, with the help of AI.

[Jason Barnes] lost his arm in a work accident, which prevented him from playing the piano the way he used to. The researchers at Georgia Tech worked with him, eventually producing a prosthetic arm that, unlike most, offers individual finger control. This is achieved with an ultrasound probe that detects muscle movements elsewhere on his body in enough detail to allow control of individual fingers. A TensorFlow-based neural network analyzes the ultrasound data to determine which finger the user is trying to move. The use of ultrasound was the major breakthrough that made this possible; previous projects have often relied on electromyogram sensors to read muscle impulses, but these lack the resolution required.
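Georgia Tech hasn’t published the model itself, but conceptually it’s a classifier over ultrasound data. Here’s a minimal Keras sketch of that kind of network; the input shape, layer sizes, and placeholder data are all assumptions made purely for illustration.

```python
# A minimal Keras sketch, not the Georgia Tech code: a small network that maps a patch
# of ultrasound data to one of five finger classes.
import numpy as np
import tensorflow as tf

NUM_FINGERS = 5

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),          # hypothetical ultrasound image patch
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_FINGERS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Random arrays standing in for labelled ultrasound frames
x = np.random.rand(100, 64, 64, 1).astype("float32")
y = np.random.randint(0, NUM_FINGERS, size=100)
model.fit(x, y, epochs=2, batch_size=16)
print("predicted finger:", int(model.predict(x[:1]).argmax()))
```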

The prosthesis is nicknamed the “Skywalker arm” for its similarity to the prostheses seen in the Star Wars films. It’s not [Jason]’s first advanced prosthetic, either: Georgia Tech has also equipped him with an advanced drumming prosthesis. This allows him to play with two sticks using a single arm, the second stick driven by AI routines that drum along with the music in the room.

It’s great to see music being used as a driver to create high-performance prosthetics and push the state of the art forward. We’re sure [Jason] enjoys performing with the new hardware, too. But perhaps you’d like to try something similar, even though you’ve got two hands already? Try this on for size.

Continue reading “AI Prosthesis Is Music To Our Ears”

Google’s AIY Vision Kit Augments Pi With Vision Processor

Google has announced their soon-to-be-available Vision Kit, the next easy-to-assemble Artificial Intelligence Yourself (AIY) product. You’ll have to provide your own Raspberry Pi Zero W, but that’s okay, since what makes this kit special is the VisionBonnet board that Google does provide: basically a low-power neural network accelerator board running TensorFlow.

AIY VisionBonnet with Myriad 2 (MA2450) chip

The VisionBonnet is built around the Intel® Movidius™ Myriad 2 (aka MA2450) vision processing unit (VPU). See the video below for an overview of this chip, but what it allows is the rapid processing of compute-intensive neural networks. We don’t think you’d use it for training the neural nets, just for doing the inference, or in human terms, for making use of the trained neural nets. It may be worth getting the kit for this board alone to use in your own hacks. An alternative is Movidius’s Neural Compute Stick, which has the same chip on a USB stick for around $80, not quite double the Vision Kit’s $45 price tag.

The Vision Kit isn’t out yet so we can’t be certain of the details, but based on the hardware it looks like you’ll point the camera at something, press a button, and it will speak. We’ve seen this before with this talking object recognizer on a Pi 3 (full disclosure, it was made by yours truly), but without hardware acceleration a single object recognition took around 10 seconds. With the Vision Kit we expect recognition to happen in real time, making it far more dynamic. And in case it wasn’t clear, a key feature is that nothing is done in the cloud here; all processing is local.

The kit comes with three different applications: an object recognition one that can recognize up to 1000 different classes of objects, another that recognizes faces and their expressions, and a third that detects people, cats, and dogs. While you can get up to a lot of mischief with just that, you can run your own neural networks too. If you need a refresher on TensorFlow then check out our introduction. And be sure to check out the Myriad 2 VPU video below the break.
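The kit’s own models run on the VisionBonnet and aren’t public yet, so as a stand-in, here’s the flavor of local, offline classification it promises: a stock pre-trained MobileNetV2 classifying a single snapshot on whatever machine you run it on. The filename and model choice are ours, not Google’s.

```python
# Stand-in sketch of local image classification, not the AIY Vision Kit's own code.
import numpy as np
from tensorflow.keras.applications import mobilenet_v2
from tensorflow.keras.preprocessing import image

model = mobilenet_v2.MobileNetV2(weights="imagenet")           # 1000 ImageNet classes

img = image.load_img("snapshot.jpg", target_size=(224, 224))   # hypothetical camera frame
x = mobilenet_v2.preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, score in mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")
```

On the kit, the equivalent step runs on the Myriad 2 instead of the Pi’s CPU, which is what makes real-time operation plausible.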

Continue reading “Google’s AIY Vision Kit Augments Pi With Vision Processor”

High-Speed Drones Use AI to Spoil the Fun

Some people look forward to the day when robots have taken over all our jobs and given us an economy where we can while our days away on leisure activities. But if your idea of play is drone racing, you may be out of luck if this AI pilot for high-speed racing drones has anything to say about it.

NASA’s Jet Propulsion Lab has been working for the past two years to develop the algorithms needed to let high-performance UAVs navigate typical drone racing obstacles, and from the look of the tests in the video below, they’ve made a lot of progress. The system is vision-based, with the AI drones equipped with wide-field cameras looking both forward and down. The indoor test course has seemingly random floor tiles scattered around, which we guess provide some kind of waypoints for the drones. A previous video details a little about the architecture, and it seems the drones are doing the computer vision on-board, which we find pretty impressive.

Despite the program being bankrolled by Google, we’re sure no evil will come of this, and that we’ll be in no danger of being chased down by swarms of high-speed flying killbots anytime soon. For now we can take solace in the fact that JPL’s algorithms still can’t beat an elite human pilot like [Ken Loo], who bested the bots overall. But alarmingly, the human did no better than the bots on his first lap, which suggests that once the AI gets a little creativity and intuition like that needed to best a Go champion, [Ken] might need to find another line of work.

Continue reading “High-Speed Drones Use AI to Spoil the Fun”

Prototyping, Making A Board For, And Coding An ARM Neural Net Robot

[Sean Hodgins] calls his three-part video series an Arduino Neural Network Robot, but we’d rather call it an enjoyable series on prototyping, designing a board with surface-mount parts, assembling it, and oh yeah, putting a neural network on it, all the while offering plenty of useful tips.

In part one, prototype and design, he starts us out with a prototype on a breadboard. The final robot isn’t built around an Arduino, but instead a custom board with an ARM Cortex-M0+ processor. For the prototype, however, he uses a SparkFun SAMD21 Arduino-sized board, a Pololu DRV8835 dual motor driver board, four photoresistors, two motors, a battery, and sundry other parts.

Once he’s proven the prototype works, he creates the schematic for his custom board. Rather than start from scratch, he goes to SparkFun’s and Pololu’s websites for the schematics of their boards and incorporates those into his design. From there he explains how and why he starts out in a CAD program before moving to KiCad, where he walks through his approach to layout.

Part two is about soldering and assembly, from how he sorts the components while they’re still in their shipping packages, to tips on doing the reflow in a toaster oven and fixing bridges and parts that didn’t land on all their pads, including the microprocessor.

In part three he writes the code. The robot’s objective is simple: run away from the light. He first tests the photoresistors without the motors, then writes a procedural program, this time with the motors, to make the robot afraid of the light. Finally, he writes the neural network code, but not before giving a decent explanation of how the neural network works. He admits that you don’t really need a neural network to make a robot run away from the light, but from his comparison of the robot running the procedural approach versus the neural network approach, we think the neural network responds better to the in-between cases the procedural version handles poorly. Admittedly, a better procedural version could be written, but the neural network saved him the trouble, and he’s shown us a lot that can be reused from the effort.
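To give a feel for how small such a network can be, here’s a Python sketch of the idea only; the real robot runs C on the Cortex-M0+, and the weights here are random stand-ins rather than [Sean]’s trained values.

```python
# A tiny fully-connected network mapping four photoresistor readings to two motor
# speeds, so the robot steers away from the brightest light. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)   # 4 light sensors -> 6 hidden units
W2, b2 = rng.normal(size=(6, 2)), np.zeros(2)   # 6 hidden units -> 2 motor outputs

def motor_speeds(sensors):
    """Forward pass: normalized light readings in, left/right motor commands out."""
    hidden = np.tanh(sensors @ W1 + b1)
    return np.tanh(hidden @ W2 + b2)            # -1 (full reverse) to +1 (full forward)

# Example: bright light on the left two sensors, dim on the right two
print(motor_speeds(np.array([0.9, 0.8, 0.1, 0.2])))
```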

In case you want to replicate this, [Sean]’s provided a GitHub page with BOM, code and so on. Check out all three parts below, or watch just the parts that interest you.

Continue reading “Prototyping, Making A Board For, And Coding An ARM Neural Net Robot”

Google’s Inception Sees This Turtle as a Gun; Image Recognition Camouflage

The good people at MIT’s Computer Science and Artificial Intelligence Laboratory [CSAIL] have found a way of tricking Google’s InceptionV3 image classifier into seeing a rifle where there is actually a turtle. This is achieved by presenting the classifier with what are called ‘adversarial examples’.

Adversarial examples are a proven concept for 2D stills. In 2014, [Goodfellow], [Shlens], and [Szegedy] added imperceptible noise to the image of a panda that was from then on classified as a gibbon. This method relies on the image being presented undisturbed and can be overcome by zooming, blurring, or rotating the image.

Until now, the applicability for real-world shenanigans has been seriously limited, but this changes everything. This weaponized turtle is a color 3D print that is reliably misclassified by the algorithm from any point of view. To achieve this, some knowledge about the classifier is required to generate misleading input. Image transformations such as rotation, scaling, and skewing, as well as color corrections and even print errors, are applied to the input during optimization, and the result is tuned to reliably mislead the algorithm no matter how it’s viewed. The whole process is documented in [CSAIL]’s paper on the method.
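The core trick is optimizing the texture over a whole distribution of viewing conditions rather than a single pristine image. The paper’s pipeline works on a rendered 3D model and a real printer; the snippet below is only a rough 2D illustration of the same idea in TensorFlow, with MobileNetV2 standing in for InceptionV3 and a handful of random image transformations standing in for viewpoint changes. The target class index, transformation set, and hyperparameters are all assumptions.

```python
# Rough 2D illustration of optimizing an image so it is misclassified as a chosen
# target class even after random transformations. Not the CSAIL pipeline.
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
TARGET_CLASS = 413        # hypothetical ImageNet index for the attacker's target label

def random_transform(img):
    # Crude stand-ins for viewpoint and lighting changes
    img = tf.image.random_brightness(img, 0.2)
    img = tf.image.resize(img, (250, 250))
    img = tf.image.random_crop(img, (224, 224, 3))
    return img

image = tf.Variable(tf.random.uniform((224, 224, 3)))   # stand-in for the object texture
opt = tf.keras.optimizers.Adam(0.01)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

for step in range(200):
    with tf.GradientTape() as tape:
        # Average the target-class loss over a small batch of random transformations
        batch = tf.stack([random_transform(image) for _ in range(8)])
        preds = model(tf.keras.applications.mobilenet_v2.preprocess_input(batch * 255.0))
        loss = loss_fn(tf.fill((8,), TARGET_CLASS), preds)
    grads = tape.gradient(loss, [image])
    opt.apply_gradients(zip(grads, [image]))
    image.assign(tf.clip_by_value(image, 0.0, 1.0))      # keep it a valid image
```

The real attack also constrains the adversarial texture to stay close to the original object so the result still looks like a turtle to a human; that constraint is omitted here for brevity.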

What this amounts to is camouflage from machine vision. Assuming that the method also works the other way around, the possibility of disguising guns (or anything else) as turtles has serious implications for automated security systems.

As this turtle targets the Inception algorithm, it should be able to fool the DIY image recognition talkbox that Hackaday’s own [Steven Dufresne] built.

Thanks to [Adam] for the tip.

Listen to the Netherworld with Artificial Intelligence

It’s that time of year again, and with Halloween arguably being the hacker’s perfect holiday, we’re starting to see an uptick in projects with a spooky theme. Most have to do with making otherwise tame Halloween decorations scarily awesome, but this one is different: it uses artificial intelligence to search for ghosts.

It seems like [Matt Reed]’s “DeepWhisper” project is meant as light-hearted fun for the spooky season, but there may be a touch of seriousness to his efforts to listen in on ghostly conversations. The principle behind it is electronic voice phenomena (EVP), whereby the metabolically and/or dimensionally challenged are purported to influence electronic systems, resulting in heavily processed audio clips that seem to contain a whispered endearment from the departed or a threat from a malevolent spirit. DeepWhisper takes this a step further by using a Raspberry Pi to feed audio into the Google Cloud Speech API for analysis. If anything is whispered in one of the 110 or so languages Google knows, it’ll get displayed on a screen. [Matt] plans to set DeepWhisper up in the aptly-named Butchertown section of Nashville and live-stream the results next week.
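The plumbing for this kind of project is straightforward. Here’s a minimal sketch, not DeepWhisper itself, of sending a clip recorded on the Pi to the Google Cloud Speech API with the Python client library; the filename, sample rate, and language code are our own placeholders.

```python
# A minimal sketch (not [Matt]'s DeepWhisper code) of sending a short clip recorded on
# the Pi, e.g. with arecord, to the Google Cloud Speech API. Assumes the
# google-cloud-speech client library and valid credentials are installed.
from google.cloud import speech

client = speech.SpeechClient()

with open("evp_clip.wav", "rb") as f:                   # hypothetical recording
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",                              # one of the ~110 supported languages
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Print whatever the network thinks it heard in the noise
    print(result.alternatives[0].transcript)
```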

It’ll be interesting to see what Google’s neural network makes of the random noise it will probably only ever hear. And [Matt] is planning on releasing his code for all to see, so there may be some valuable cloud techniques to learn from DeepWhisper. But in the unlikely event that he does discover ghosts, it’s nice to know he’ll have the tools and the talent to bust ’em.

Continue reading “Listen to the Netherworld with Artificial Intelligence”