Neural Networking: Robots Learning From Video

Humans are very good at watching others and imitating what they do. Show someone a video of flipping a switch to turn on a CNC machine and after a single viewing they’ll be able to do it themselves. But can a robot do the same?

Bear in mind that we want the demonstration video to be of a human arm and hand flipping the switch. When the robot does it, the camera that is its eye will be seeing its robot arm and gripper. So somehow it’ll have to know that its robot parts are equivalent to the human parts in the demonstration video. Oh, and the switch in the demonstration video may be a different model and make, and the CNC machine may be a different one, though we’ll at least put the robot within reach of its switch.

Sound difficult?

Researchers from Google Brain and the University of Southern California have done it. In their paper describing how, they talk about a few different experiments, but we’ll focus on just one: getting a robot to imitate pouring a liquid from a container into a cup.

Continue reading “Neural Networking: Robots Learning From Video”

AI Prosthesis Is Music To Our Ears

Prostheses are a great help to those who have lost limbs, or who never had them in the first place. Over the past few decades there has been a great deal of research done to make these essential devices more useful, creating prostheses that are capable of movement and more accurately recreating the functions of human body parts. At Georgia Tech, they’re working on just that, with the help of AI.

[Jason Barnes] lost his arm in a work accident, which prevented him from playing the piano the way he used to. The researchers at Georgia Tech worked with him, eventually producing a prosthetic arm that, unlike most, offers individual finger control. The trick is an ultrasound probe that detects muscle movements elsewhere on his body in enough detail to distinguish the intent for each finger. A TensorFlow-based neural network analyzes the ultrasound data to determine which finger the user is trying to move. The use of ultrasound was the major breakthrough that made this possible; previous projects have often relied on electromyogram sensors to read muscle impulses, but these lack the required resolution.
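
For a rough idea of the software side, here’s a hedged sketch in Python of what such a classifier could look like. This is not Georgia Tech’s actual model: the frame size, layer sizes, and training details are all assumptions for illustration.

```python
# A hypothetical sketch: classify which of five fingers is moving,
# given a flattened, preprocessed ultrasound frame. Not the real model.
import tensorflow as tf

NUM_FINGERS = 5
FRAME_FEATURES = 1024  # assumed size of a processed ultrasound frame

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(FRAME_FEATURES,)),
    tf.keras.layers.Dense(NUM_FINGERS, activation="softmax"),  # one output per finger
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(ultrasound_frames, finger_labels, epochs=...)  # trained per user
```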

The prosthesis is nicknamed the “Skywalker arm”, after its similarities to the prostheses seen in the Star Wars films. It’s not [Jason]’s first advanced prosthetic, either – Georgia Tech has also equipped him with an advanced drumming prosthesis. This allows him to use two sticks with a single arm, the second stick using advanced AI routines to drum along with the music in the room.

It’s great to see music being used as a driver to create high-performance prosthetics and push the state of the art forward. We’re sure [Jason] enjoys performing with the new hardware, too. But perhaps you’d like to try something similar, even though you’ve got two hands already? Try this on for size.

Continue reading “AI Prosthesis Is Music To Our Ears”

Tensorflow Tutorial Uses Python

Around the Hackaday secret bunker, we’ve been talking quite a bit about machine learning and neural networks. There’s been a lot of renewed interest in the topic recently because of the success of TensorFlow. If you are adept at Python and remember your high school algebra, you might enjoy [Oliver Holloway’s] tutorial on getting started with TensorFlow in Python.

[Oliver] gives links on how to do the setup with notes on Python versions. Then he shows some basic setup operations. From there, he has the software “learn” how to classify random points that either fall into a circle or don’t. Granted, this is easy enough to do with traditional programming, so it isn’t a great practical example, but it is illustrative for learning purposes.

Given that it is easy to algorithmically decide which points are in the circle and which are not, it is simple to develop training data. It is also easy to look at the result and see how close it is to the actual circle. You’ll see that it takes a lot of slow learning before the result space looks like a circle and not a triangle or some other odd shape.
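
To make that concrete, here’s a minimal sketch of the same exercise in Python using TensorFlow’s Keras API. This isn’t [Oliver]’s code (his tutorial targets an older TensorFlow interface), just the general shape of the exercise:

```python
# Label random 2D points by whether they fall inside the unit circle,
# then train a tiny network to learn that boundary from examples.
import numpy as np
import tensorflow as tf

points = np.random.uniform(-1.5, 1.5, size=(5000, 2)).astype("float32")
labels = (np.sum(points**2, axis=1) < 1.0).astype("float32")  # ground truth is free

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # inside-the-circle probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(points, labels, epochs=50, verbose=0)

# Plot the decision boundary as training progresses and you'll see it start
# out as a lopsided polygon before slowly rounding into a circle.
```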

Continue reading “Tensorflow Tutorial Uses Python”

Smarter Phones In Your Hacks With TensorFlow Lite

One way to run a compute-intensive neural network on a hack has been to put a decent laptop onboard. But wouldn’t it be great if you could go smaller and cheaper by using a phone instead? If your neural network was written using Google’s TensorFlow framework then you’ve had the option of using TensorFlow Mobile, but it doesn’t use any of the phone’s accelerated hardware, and so it might not have been fast enough.

TensorFlow Lite architecture

Google has just released a new solution, the developer preview of TensorFlow Lite for iOS and Android, and announced plans to support the Raspberry Pi 3. On Android, the bottom layer is the Android Neural Networks API, which makes use of the phone’s DSP, GPU and/or any other specialized hardware to speed up computations. Failing that, it falls back on the CPU.

Currently, fewer operators are supported than with TensorFlow Mobile, but more will be added. (Most of what you do in TensorFlow is done through operators, or ops. See our introduction to TensorFlow article if you need a refresher on how TensorFlow works.) The Lite version is intended to be the successor to Mobile. As with Mobile, you’d only do inference on the device. That means you’d train the neural network elsewhere, perhaps on a GPU-rich desktop or on a GPU farm over the network, and then make use of the trained network on your device.
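
The workflow looks roughly like this Python sketch. The model here is a throwaway stand-in, and the API shown is TensorFlow’s current converter and interpreter, which has evolved since the developer preview:

```python
# Train elsewhere, convert to a .tflite flatbuffer, infer on the device.
import numpy as np
import tensorflow as tf

# 1. Train on a beefy machine (trivial stand-in model for illustration).
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(100, 4), np.random.rand(100, 1), epochs=5, verbose=0)

# 2. Convert the trained model to TensorFlow Lite's compact format.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# 3. On the device, only the lightweight interpreter runs, doing inference.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(1, 4).astype("float32"))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```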

What are we envisioning here? How about replacing the MacBook Pro on the self-driving RC cars we’ve talked about with a much smaller, lighter and less power-hungry Android phone? The phone even has a camera and an IMU built-in, though you’d need a way to talk to the rest of the hardware in lieu of GPIO.

You can try out TensorFlow Lite fairly easily by going to their GitHub and downloading a pre-built binary. We suspect that’s what was done to produce the first of the demonstration videos below.

Continue reading “Smarter Phones In Your Hacks With TensorFlow Lite”

Prototyping, Making A Board For, And Coding An ARM Neural Net Robot

[Sean Hodgins] calls his three-part video series an Arduino Neural Network Robot, but we’d rather call it an enjoyable series on prototyping, designing a board with surface mount parts, assembling it, and oh yeah, putting a neural network on it, all the while offering plenty of useful tips.

In part one, prototype and design, he starts us out with a prototype on a breadboard. The final robot isn’t on an Arduino, but instead is on a custom-made board built around an ARM Cortex-M0+ processor. However, for the prototype, he uses a SparkFun SAMD21 Arduino-sized board, a Pololu DRV8835 dual motor driver board, four photoresistors, two motors, a battery, and sundry other parts.

Once he’s proven the prototype works, he creates the schematic for his custom board. Rather than start from scratch, he goes to SparkFun’s and Pololu’s websites for the schematics of their boards and incorporates those into his design. From there he talks about how and why he starts out in a CAD program, then moves on to KiCad where he talks about his approach to layout.

Part two is about soldering and assembly, from how he sorts the components while still in their shipping packages, to tips on doing the reflow in a toaster oven, and fixing bridges and parts that aren’t on all their pads, including the microprocessor.

In part three he writes the code. The robot’s objective is simple: run away from the light. He first tests the photoresistors without the motors, and then writes a procedural program to make the robot afraid of the light, this time with the motors. Finally, he writes the neural network code, but not before first giving a decent explanation of how the neural network works. He admits that you don’t really need a neural network to make the robot run away from the light. But from his comparisons of the robot running the procedural approach and then the neural network approach, we think the neural network one responds better to what would be the in-between cases for the procedural approach. Admittedly, a better procedural version could be written, but the neural network saved him the trouble, and he’s shown us a lot that can be reused from the effort.
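
To give a sense of how little code the inference side of such a network needs, here’s a minimal sketch in Python. [Sean]’s version runs in C on the Cortex-M0+, and real weights would come from training, not a random number generator:

```python
# Tiny feedforward network: 4 photoresistor readings in, 2 motor speeds out.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W1 = np.random.randn(4, 4) * 0.5  # placeholder weights; these would be learned
b1 = np.zeros(4)
W2 = np.random.randn(4, 2) * 0.5
b2 = np.zeros(2)

def motor_speeds(light_levels):
    hidden = sigmoid(light_levels @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)  # 0..1 values, scaled to PWM on the robot

print(motor_speeds(np.array([0.9, 0.1, 0.2, 0.8])))  # bright on one side
```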

In case you want to replicate this, [Sean]’s provided a GitHub page with BOM, code and so on. Check out all three parts below, or watch just the parts that interest you.

Continue reading “Prototyping, Making A Board For, And Coding An ARM Neural Net Robot”

Neural Network Really Ties The Room Together

If there’s one thing that Hollywood knows about hackers, it’s that they absolutely love data visualizations. Sometimes it’s projected on a big wall (Hackers, WarGames), other times it’s gibberish until the plot says otherwise (Sneakers, The Matrix). But no matter what, it has to look cool. No hacker worth his or her salt can possibly work unless they’ve got an evolving Venn diagram or spectral waterfall running somewhere in the background.

Inspired by Hollywood portrayals, specifically one featured in Avengers: Age of Ultron, [Zack Akil] decided it was time to secure his place in the pantheon of hacker wall visualizations. But not content to just show meaningless nonsense on his wall, he set out to create something that was at least showing actual data.

[Zack] created a neural network to work through multi-label classification data in Python using the scikit-learn machine learning suite. The code takes values from the neural network’s training algorithm and converts them to RGB colors by way of an Arduino. Each “node” in the neural network is 3D printed in translucent filament and fitted with an RGB LED module. These modules are then connected to each other via side-glow fiber optic tubes, so that the colors within the tubes are mixed depending on the colors of the nodes they are attached to. This allows for a very organic “growing” effect, as colors move through the network node by node.
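
As a rough sketch of that pipeline, with a made-up dataset and an assumed serial protocol (the real code is on [Zack]’s GitHub), the Python side might look something like this:

```python
# Train a multi-label classifier one pass at a time, streaming the evolving
# weights to an Arduino as RGB bytes. Port name and protocol are assumptions.
import numpy as np
import serial  # pyserial
from sklearn.datasets import make_multilabel_classification
from sklearn.neural_network import MLPClassifier

X, Y = make_multilabel_classification(n_samples=200, n_features=6, n_classes=3)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1, warm_start=True)

port = serial.Serial("/dev/ttyACM0", 115200)  # hypothetical port and baud rate
for epoch in range(100):
    net.fit(X, Y)  # warm_start makes each call one more training pass
    # Map the first layer's weights onto 0-255 color channels for the LEDs.
    rgb = np.interp(net.coefs_[0].ravel(), (-1, 1), (0, 255)).astype(np.uint8)
    port.write(rgb.tobytes())
```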

In the end this particular visualization doesn’t really mean anything; the data it’s working on only exists for the purposes of the visualization itself. But [Zack] succeeded in creating a practical visualization of machine learning, and if you’re the kind of person who needs to keep tabs on learning algorithms, some variation of this design may be just what you’re looking for.

If AI isn’t your thing but you still want a wall of RGB LEDs, maybe you can use this phased array antenna visualizer instead. If you’re really hip, maybe you’ll go the analog route and put a big gauge on the wall.

Continue reading “Neural Network Really Ties The Room Together”

Artificial Intelligence At The Top Of A Professional Sport

The lights dim and the music swells as an elite competitor in a silk robe passes through a cheering crowd to take the ring. It’s a blueprint familiar to boxing, only this pugilist won’t be throwing punches.

OpenAI created an AI bot that has beaten the best players in the world at this year’s International championship. The International is an esports competition held annually for Dota 2, one of the most competitive multiplayer online battle arena (MOBA) games.

Each match of the International pits two five-player teams against each other for 35-45 minutes. In layman’s terms, it is an online version of capture the flag. While the premise may sound simple, it is actually one of the most complicated and detailed competitive games out there. The top teams practice together daily, but this level of play is nothing new to them. To reach a professional level, individual players would practice until obscenely late, go to sleep, and then repeat the process. For years. So how long did the AI bot have to prepare for this competition compared to these seasoned pros? A couple of months.

Continue reading “Artificial Intelligence At The Top Of A Professional Sport”