Nvidia Teaching Robots To Master IKEA Kitchens

The current wave of excitement around machine learning kicked off when graphics processors were repurposed to make training deep neural networks practical. Nvidia found itself the engine of a new revolution and seized the opportunity to help push the frontiers of research. Its research lab in Seattle will focus on one such field: making robots smart enough to work alongside humans in an IKEA kitchen.

Today’s robots are mostly industrial machines that require workspaces designed around them. They run day and night, performing repetitive tasks, usually inside cages to keep squishy humans out of harm’s way. Robots will need to be a lot smarter about their surroundings before we can safely dismantle those cages. While some industrial robots are making a start in this arena, they have a hard time justifying their price premium. (See, for example, the financial difficulties of Rethink Robotics, maker of the Baxter and Sawyer robots.)

So there’s a lot of room for improvement in this field, and that evolution will need a training environment offering tasks at every level of difficulty, anywhere from the rigidly structured settings where robots work well today to the dynamic, unstructured settings where robots are hopelessly lost. Lab lead Dr. Dieter Fox explained why a kitchen is ideal. A meticulously cleaned and organized kitchen is very similar to an industrial setting. From there, we can gradually make the kitchen more challenging for a robot. For example, today’s robots can easily pick up a can, with its rigid, regular shape, but what about a half-full bag of flour? And from there, how about picking up a piece of fresh fruit without bruising it? These tasks share challenges with many other tasks outside of a kitchen.

This isn’t about building a must-have home cooking robot; it’s about working through the range of challenges that common kitchen tasks share with so many others. The lab has a lot of neat hardware, but its success will be measured by the software, and like all research, published results should be reproducible by other labs. You don’t have a high-end robotics lab in your house, but you do have a kitchen. That’s why it’s not just any kitchen, but an IKEA kitchen, chosen because they are standardized, affordable, and available around the world for other robotics researchers to benchmark against.

Most of us can experiment in a kitchen, IKEA or not. We have access to all the other tools we need: affordable AI hardware from Google, from BeagleBone, and from Nvidia. And we certainly have no shortage of robot arms and manipulators on these pages, ranging from a small laser-cut MeArm to our 2018 Hackaday Prize winner Dexter.

AI at the Edge Hack Chat

Join us Wednesday at noon Pacific time for the AI at the Edge Hack Chat with John Welsh from NVIDIA!

Machine learning was once the business of big iron like IBM’s Watson or the nearly limitless computing power of the cloud. But the power in AI is moving away from data centers to the edge, where IoT devices are doing things once unheard of. Embedded systems capable of running modern AI workloads are now cheap enough for almost any hacker to afford, opening the door to applications and capabilities that were once only science fiction dreams.

John Welsh is a Developer Technology Engineer with NVIDIA, a leading company in the Edge computing space. He’ll be dropping by the Hack Chat to discuss NVIDIA’s Edge offerings, like the Jetson Nano we recently reviewed. Join us as we discuss NVIDIA’s complete Jetson embedded AI product line up, getting started with Edge AI, and where Edge AI is headed.


Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, May 1 at noon Pacific time. If time zones have got you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.

Finding Plastic Spaghetti with Machine Learning

Among 3D printer owners, “spaghetti” is the common term for the tangled mess of stringy plastic that’s often the result of a failed print. Fear of their print bed turning into a hot plate of PLA spaghetti is enough to keep many users from leaving their machines operating overnight or while they’re out of the house. Accordingly, we’ve seen a number of methods that allow the human operator to watch their print remotely to make sure everything is progressing smoothly.

But unless you plan on keeping your eyes on your phone the entire time you’re out of the house, there’s still a chance some PETG pasta might sneak its way out. Enter the Spaghetti Detective, an open source project that lets machine learning take over when you can’t sit watching the printer all day. Their system plugs into Octoprint to monitor your print in real-time and pause it if it starts looking particularly stringy. The concept is still under development, but judging by the gallery of results submitted by users, the system seems to have a knack for identifying non-edible noodles.

Once the software comes out of beta, it looks like the team is going to try to monetize it by providing hosting and monitoring services for a monthly fee, but as it’s an open source project, you’re also able to run the software on your own machine. The documentation notes, though, that the lowly Raspberry Pi doesn’t have quite what it takes to handle the image recognition routines, so you’ll need a proper computer if you want to self-host the service. Could be a good use for that old laptop you’ve got kicking around the lab.

As demonstrated in the video after the break, the system’s “spaghetti confidence” is shown with a simple-to-understand gauge: green is a good-looking print, and red means the Detective is getting a sniff of the stringy stuff. If your print dips into the red too much, Octoprint is commanded to pause the print. The user can then look at the last image from the printer and decide either to cancel the print entirely or to resume if the Spaghetti Detective got a little overzealous.
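For the curious, here’s a minimal sketch of the general idea, not the Spaghetti Detective’s actual code: grab a webcam snapshot, score it with a failure-detection model, and ask OctoPrint over its REST API to pause the job when the score lands in the red. The snapshot URL, API key, threshold, and the predict_spaghetti_score() stub are all placeholders you’d swap for real values and a real model.

```python
import time
import requests

OCTOPRINT_URL = "http://octopi.local"                      # assumed OctoPrint address
API_KEY = "YOUR_OCTOPRINT_API_KEY"                         # placeholder
SNAPSHOT_URL = OCTOPRINT_URL + "/webcam/?action=snapshot"  # typical OctoPi webcam endpoint
RED_THRESHOLD = 0.8                                        # hypothetical "spaghetti confidence" cutoff

def predict_spaghetti_score(jpeg_bytes):
    """Stand-in for the trained model: in practice this would run the
    frame through a CNN and return a failure score between 0.0 and 1.0."""
    return 0.0  # replace with real inference

def pause_print():
    # OctoPrint's job API accepts a pause command, authenticated with the API key.
    requests.post(
        OCTOPRINT_URL + "/api/job",
        json={"command": "pause", "action": "pause"},
        headers={"X-Api-Key": API_KEY},
        timeout=10,
    )

while True:
    frame = requests.get(SNAPSHOT_URL, timeout=10).content
    if predict_spaghetti_score(frame) > RED_THRESHOLD:
        pause_print()   # too much red: stop extruding plastic noodles
        break
    time.sleep(10)      # check again in a few seconds
```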

Frankly, it’s a brilliant idea and we’re very interested to see where it goes from here. Assuming you’ve got Octoprint controlling your 3D printer there are some very clever monitoring systems out there currently, but since spaghetti isn’t the only thing a rogue 3D printer can cook up, having an extra line of defense sounds like a good idea to us.

Continue reading “Finding Plastic Spaghetti with Machine Learning”

A Game Boy Supercomputer for AI Research

Reinforcement learning has been a hot area of artificial intelligence research. It’s a method in which software agents make decisions and refine them over time based on an analysis of the resulting outcomes. [Kamil Rocki] had been exploring this field, but needed some more powerful tools. As it turned out, a cluster of emulated Game Boys running at a billion FPS was just the ticket.

The trick to efficient development of reinforcement learning systems is to be able to run things quickly. If it takes an AI one thousand attempts to clear level 1 of Super Mario Bros., you’d better hope you’re not running that in real time. [Kamil] started by coding a Game Boy emulator in C. By then implementing it in Verilog, [Kamil] was able to create a cluster of emulated Game Boys that enabled games to be run at breakneck speed, greatly speeding the training and development process.
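To put that speed argument in concrete terms, here’s a rough sketch of the loop an RL agent sits in, written against a hypothetical gym-style wrapper around a Game Boy emulator rather than the actual interface from this project. Every single learning step costs one call to env.step(), which is why an emulator running orders of magnitude faster than real time pays off so handsomely; the toy tabular Q-learning agent here stands in for the neural networks a real system would use.

```python
import random
from collections import defaultdict

# Assumed gym-style emulator wrapper (illustrative only):
#   env.reset() -> state, env.step(action) -> (next_state, reward, done, info)
# States must be hashable, e.g. a tuple summarizing the screen or RAM.

NUM_ACTIONS = 8                      # d-pad plus A/B/Start/Select, say
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1

q_table = defaultdict(lambda: [0.0] * NUM_ACTIONS)

def choose_action(state):
    if random.random() < EPSILON:                     # explore occasionally
        return random.randrange(NUM_ACTIONS)
    return max(range(NUM_ACTIONS), key=lambda a: q_table[state][a])  # otherwise exploit

def train(env, episodes=1000):
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = choose_action(state)
            # This line dominates wall-clock time: emulator speed is everything.
            next_state, reward, done, _ = env.step(action)
            best_next = max(q_table[next_state])
            q_table[state][action] += ALPHA * (reward + GAMMA * best_next
                                               - q_table[state][action])
            state = next_state
```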

[Kamil] goes into detail about how the work came to revolve around the Game Boy platform. After initial work with the Atari 2600, which is something of a de facto standard in RL circles, [Kamil] began to explore further. He wanted an environment with a well-documented CPU, a simple display to cut down on the preprocessing required, and a wide selection of games.

The goal of the project is to allow [Kamil] to explore the transfer of knowledge from one game to another in RL systems. The aim is to determine whether, for an AI, skills learned in Metroid can help in Prince of Persia, for example. This is arguably true for human players, but it remains to be seen whether the same carries over to RL systems.

It’s rather advanced work, on both a hardware emulation level and in terms of AI research. Similar work has been done, training a computer to play Super Mario through monitoring score and world values. We can’t wait to see where this research leads in years to come.

Leigh Johnson’s Guide To Machine Vision On Raspberry Pi

We salute hackers who make technology useful for people in emerging markets. Leigh Johnson joined that select group when she accepted the challenge to build portable machine vision units that work offline and can be deployed for under $100 each. For hardware, a Raspberry Pi with camera plus screen can fit under that cost ceiling, and the software to give it sight is the focus of her 2018 Hackaday Superconference presentation. (Video also embedded below.)

The talk is a very concise 13 minutes, so Leigh flies through definitions of basic terms before quickly naming TensorFlow and Keras as the tools she used. The time she saved here was spent on explaining what convolutional neural networks are and how they work, just enough to prepare the audience. But all of that is really just background; the meat of the talk is the self-contained examples that Leigh has put together and made available online. I love to see that, since it means you can go beyond just watching and try it out for yourself. Continue reading “Leigh Johnson’s Guide To Machine Vision On Raspberry Pi”
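Her examples are the place to go for the real thing, but as a taste of how little code TensorFlow and Keras demand, here’s a hedged sketch, not taken from the talk, that classifies a single photo with a pre-trained MobileNetV2, roughly the size of network that’s comfortable on a Raspberry Pi. The image filename is a placeholder for a frame grabbed from the Pi camera.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# Small ImageNet-trained network; the weights download on first run.
model = MobileNetV2(weights="imagenet")

# "photo.jpg" is a placeholder, e.g. a still captured from the Pi camera.
img = image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the top three ImageNet labels and their confidence scores.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {score:.2f}")
```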

Ludwig Promises Easy Machine Learning from Uber

Machine learning has brought an old idea — neural networks — to bear on a range of previously difficult problems such as handwriting and speech recognition. Better software and hardware have made it feasible to apply sophisticated machine learning algorithms that would previously have been possible only on giant supercomputers. However, there’s still a learning curve for developing both the models and the software that uses those trained models. Uber — you know, the guys that drive you home when you’ve had a bit too much — have what they are calling a “code-free deep learning toolbox” named Ludwig. The promise is that you can create, train, and use models to extract features from data without writing any code. You can find the project itself on GitHub.io.

The toolbox is built on top of TensorFlow, and they claim:

Ludwig is unique in its ability to help make deep learning easier to understand for non-experts and enable faster model improvement iteration cycles for experienced machine learning developers and researchers alike. By using Ludwig, experts and researchers can simplify the prototyping process and streamline data processing so that they can focus on developing deep learning architectures rather than data wrangling.
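In practice, “code-free” means a declarative model definition (plus a command line tool) rather than hand-written TensorFlow, and there’s a thin Python API on top. Here’s a hedged sketch of what training a simple text classifier might look like; the column names and CSV paths are made up, and the exact argument names have shifted between Ludwig releases, so treat this as an outline rather than copy-paste material.

```python
from ludwig.api import LudwigModel

# Declarative definition: name the inputs and outputs, Ludwig builds the network.
config = {
    "input_features": [{"name": "review_text", "type": "text"}],     # hypothetical column
    "output_features": [{"name": "sentiment", "type": "category"}],  # hypothetical column
}

model = LudwigModel(config)

# Train from a CSV whose columns match the definition; the path is a placeholder,
# and older releases spell this argument data_csv rather than dataset.
train_stats = model.train(dataset="reviews.csv")

# Score new rows that have the same input columns.
predictions = model.predict(dataset="new_reviews.csv")
```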

Continue reading “Ludwig Promises Easy Machine Learning from Uber”

Foundations For Machine Learning In English (Or Russian)

We are big fans of posts and videos that try to give you a gut-level intuition for technical topics. [vas3k’s] post “Machine Learning for Everyone” fits the bill, and we knew we’d like it from the opening sentences:

“Machine Learning is like sex in high school. Everyone is talking about it, a few know what to do, and only your teacher is doing it.”

That sets the tone. What follows is a very comprehensive exposition of machine learning fundamentals. There is no focus on a particular tool; instead, this is all about the underpinnings. The original post was in Russian, but the English version is easy to read and doesn’t come off as a poor machine translation.

Continue reading “Foundations For Machine Learning In English (Or Russian)”