A Car Phone — No, Not That Kind

Autonomous vehicle development is a field of technology that remains relatively elusive to the average hacker, what with needing a whole car and all. Instead of taking on such a large-scale challenge, [Piotr Sokólski] has turned to implementing the same principles on the scale of a small radio-controlled car.

Wanting to lower the barrier of entry for developing self-driving car software, he based his design on something you’re likely to have lying around already: a smartphone. He cites the Google Cardboard project as his inspiration, for the way it made VR accessible without expensive hardware. The phone controls the actuators and wheel motors through a custom board, which it talks to over a Bluetooth connection. And since the phone is mounted in the frame with its camera pointing up, [Piotr] came up with a really clever solution: a mirror used as a periscope, so the car can see in front of itself.

The software here has two parts, though the phone app does little more than serve as an interface, sending off a video feed to be processed. All of the computer vision processing happens on the desktop side, which lets [Piotr] do some fun things like use reinforcement learning to keep the car driving as long as possible without crashing. This is achieved by having the algorithm observe the images coming from the phone and receive a negative reward whenever the accelerometer detects a collision. In another experiment, he used a QR tag on top of the car, visible to a fixed overhead camera, to determine the car’s position in the room.
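As a rough illustration of that reward scheme (a sketch of the general idea, not [Piotr]’s actual code; the threshold value and the `env` and `agent` objects are all made-up placeholders), the training loop only needs the camera frame and an accelerometer reading at each step:

```python
COLLISION_THRESHOLD_G = 2.5  # hypothetical spike, in g, that counts as a crash

def compute_reward(accel_magnitude_g):
    """Small positive reward for every step survived, large penalty on impact."""
    if accel_magnitude_g > COLLISION_THRESHOLD_G:
        return -100.0  # accelerometer says we hit something: negative reward
    return 1.0         # still driving: reinforce survival

def run_episode(env, agent):
    """Generic RL rollout: observe the phone's camera frame, act, score the result."""
    frame = env.reset()
    done, total = False, 0.0
    while not done:
        action = agent.act(frame)             # steering/throttle chosen from the image
        frame, accel_g = env.step(action)     # next frame plus accelerometer magnitude
        reward = compute_reward(accel_g)
        agent.observe(frame, action, reward)  # the learning update happens here
        done = reward < 0                     # a collision ends the episode
        total += reward
    return total
```

Maximizing the total reward then amounts to surviving as many steps as possible, which is exactly the behavior described above.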

This might not be the first time someone’s made a scaled-down model of a self-driving vehicle, though it’s one of the most cleverly designed ones, and it’s certainly much simpler than trying to do it on a full-sized car in your garage.

Continue reading “A Car Phone — No, Not That Kind”

Little Lamp To Learn Longer Leaps

Reinforcement learning is a subset of machine learning where the machine is scored on its performance (by an “evaluation function”). Over the course of a training session, behavior that improves the final score is positively reinforced, gradually building towards an optimal solution. [Dheera Venkatraman] thought it would be fun to use reinforcement learning to make a little robot lamp move. But before that could happen, he had to build the hardware and prove its basic functionality with a manual test script.

The lamp takes its inspiration from the hopping logo of Pixar Animation Studios, and this particular form of locomotion has a few counterparts in the natural world. But hoppers of the natural world don’t take the shape of a Luxo lamp, making this project an interesting challenge. [Dheera] published all of his OpenSCAD files for the 3D-printed lamp so others can join in the fun. Inside the lamp head is an LED ring to illuminate where we expect a light bulb, while also leaving room in the center for a camera. The articulation servos are driven by a PCA9685 I2C PWM driver board, and he has written and released code to interface such boards with the Robot Operating System (ROS), which orchestrates the lamp’s features. Together these complete the underlying hardware components and software foundations for the robot lamp.
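As a hedged sketch of the kind of glue such a driver provides (not [Dheera]’s actual package; the topic name, message type, and channel count are all assumptions), a minimal ROS node can listen for joint angles and forward them to the PCA9685 using Adafruit’s CircuitPython libraries:

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32MultiArray
import board
import busio
from adafruit_pca9685 import PCA9685
from adafruit_motor import servo

i2c = busio.I2C(board.SCL, board.SDA)
pca = PCA9685(i2c)        # default I2C address 0x40
pca.frequency = 50        # standard 50 Hz servo PWM
joints = [servo.Servo(pca.channels[ch]) for ch in range(4)]  # four joints assumed

def on_command(msg):
    """Forward one angle (in degrees) per joint to the PWM driver."""
    for joint, angle in zip(joints, msg.data):
        joint.angle = max(0.0, min(180.0, angle))  # clamp to the servo's range

rospy.init_node("lamp_servo_driver")
rospy.Subscriber("joint_angles", Float32MultiArray, on_command)
rospy.spin()  # hand control to ROS; on_command runs per incoming message
```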

Once all the parts were printed, the electronics wired, and everything assembled, [Dheera] hacked together a simple “Hello World” script to verify that his mechanical design was good enough to get started. The video embedded after the break was taken at OSH Park’s Bring-A-Hack afterparty for Maker Faire Bay Area 2019. The motion sequence was frantically hand-coded in 15 minutes, but these tentative baby hops will serve as a great baseline. The future hopping performance of control algorithms trained by reinforcement learning will show how far the lamp has come from this humble “Hello World” hop.
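A “Hello World” in that setup can be as simple as publishing a crouch-then-extend sequence. Here’s an illustrative guess at such a script (again assuming the hypothetical `joint_angles` topic from the sketch above, with made-up poses rather than [Dheera]’s real numbers):

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32MultiArray

rospy.init_node("hello_hop")
pub = rospy.Publisher("joint_angles", Float32MultiArray, queue_size=1)
rospy.sleep(1.0)  # give the driver node time to subscribe

CROUCH = [90.0, 60.0, 120.0, 90.0]  # per-joint angles in degrees (made-up pose)
EXTEND = [90.0, 120.0, 60.0, 90.0]  # rapid extension launches the hop

for _ in range(3):  # three tentative baby hops
    pub.publish(Float32MultiArray(data=CROUCH))
    rospy.sleep(0.3)
    pub.publish(Float32MultiArray(data=EXTEND))
    rospy.sleep(0.5)  # let the lamp land and settle
```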

[Dheera] had previously created the shadow clock and is no stranger to ROS, having created the ROS topic text visualization tool for debugging. We will be watching to see how robot Luxo evolves; hopefully it doesn’t find a way to cheat! Want to play with reinforcement learning, but prefer wheeled robots? Here are a few options.

Continue reading “Little Lamp To Learn Longer Leaps”

A Game Boy Supercomputer For AI Research

Reinforcement learning has been a hot area of research in artificial intelligence. It’s a method where software agents make decisions and refine them over time based on analysis of the resulting outcomes. [Kamil Rocki] had been exploring this field, but needed some more powerful tools. As it turned out, a cluster of emulated Game Boys running at a billion FPS was just the ticket.

The trick to efficient development of reinforcement learning systems is being able to run things quickly. If it takes an AI one thousand attempts to clear level 1 of Super Mario Bros., you’d better hope you’re not running that in real time. [Kamil] started by coding a Game Boy emulator in C, then reimplemented it in Verilog to create a cluster of emulated Game Boys that runs games at breakneck speed, greatly accelerating the training and development process.
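The same speed argument applies at any scale. Even without custom Verilog, fanning rollouts out across many emulator processes is the usual software trick; here’s a minimal multiprocessing sketch (the `rollout` worker is a stand-in for whatever emulator core you wrap, not [Kamil]’s code):

```python
from multiprocessing import Pool
import random

def rollout(seed):
    """Stand-in worker: run one emulated episode and return its score.
    In a real setup this would drive an emulator instance frame by frame."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1000))  # placeholder for a game score

if __name__ == "__main__":
    with Pool(processes=16) as pool:             # one emulator per worker process
        scores = pool.map(rollout, range(1024))  # 1024 episodes run in parallel
    print(f"best score: {max(scores):.1f}")
```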

[Kamil] goes into detail about how the work came to revolve around the Game Boy platform. After initial work with the Atari 2600, which is something of a de facto standard in RL circles, [Kamil] began to explore further. He wanted an environment with a well-documented CPU, a simple display to cut down on the preprocessing required, and a wide selection of games.

The goal of the project is to let [Kamil] explore the transfer of knowledge from one game to another in RL systems. The aim is to determine whether, for an AI, skill at Metroid can help in Prince of Persia, for example. This is arguably true for human players, but it remains to be seen whether the same carries over to RL systems.
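In code, that kind of transfer experiment typically boils down to reusing network weights: train a policy on game A, then fine-tune it on game B with the early perceptual layers carried over or frozen. Here’s a hedged PyTorch sketch of the idea (the architecture and sizes are illustrative, not taken from [Kamil]’s work):

```python
import torch.nn as nn

class PolicyNet(nn.Module):
    """Shared visual encoder plus a per-game action head (84x84 grayscale input)."""
    def __init__(self, n_actions):
        super().__init__()
        self.encoder = nn.Sequential(                  # game-agnostic feature extractor
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * 9 * 9, n_actions)   # game-specific output layer

    def forward(self, x):
        return self.head(self.encoder(x))

source = PolicyNet(n_actions=8)   # ...train this on Metroid...
target = PolicyNet(n_actions=8)   # fresh policy for Prince of Persia
target.encoder.load_state_dict(source.encoder.state_dict())  # carry "skills" over
for p in target.encoder.parameters():
    p.requires_grad = False       # freeze the features; only the new head learns
```

Whether those carried-over features actually help the second game is exactly the question this research is asking.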

It’s rather advanced work, both at the hardware emulation level and in terms of AI research. Similar work has been done before, training a computer to play Super Mario by monitoring score and world values. We can’t wait to see where this research leads in the years to come.

Buy Or Build An Autonomous Race Car To Take The Checkered Flag

Putting autonomous vehicles on public roads takes major resources beyond most of our means. But we can explore all the same general concepts at a smaller scale by modifying remote-control toy cars, limited only by our individual budgets and skill levels. For those of us whose interest and expertise lie in software, Amazon Web Services just launched AWS DeepRacer: a complete package for exploring machine learning on autonomous vehicles.

At the hardware level, the spec sheet makes it sound like they’ve bolted their AWS DeepLens machine vision computer onto a 1/18th-scale monster truck chassis. But the hardware is only the tip of the iceberg. The software behind DeepRacer is AWS RoboMaker, a set of tools for applying AWS to robot development, covering everything from running digital simulations to training neural networks. Don’t know enough about machine learning? No problem! Amazon has also just opened up its internal training curriculum to the world. And to encourage participation, Amazon is running a DeepRacer League with races taking place both digitally online and physically at AWS Summit events around the world. They’ve certainly offered us a full plate at their re:Invent conference this week.

But maybe someone prefers not to use Amazon, or prefers to build their own hardware or run their own competitions. Fortunately, Amazon is not the only game in town, merely the latest entry in an existing field. The DeepRacer League’s predecessor was the Robocar Rally, and the DeepRacer itself follows the Donkey Car. A do-it-yourself autonomous racing platform we first saw at Bay Area Maker Faire 2017, Donkey Car has since built up its documentation and software tools, including a simulator. The default Donkey Car code is fairly specific to the car, but builders are certainly free to use something more general like the open source Robot Operating System and Gazebo robot simulator. (Which is what AWS RoboMaker builds on.)
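If you do go the ROS route, driving the car becomes a matter of publishing messages on a topic. A minimal rospy sketch (the `cmd_vel` topic and velocity values here are common ROS conventions, not Donkey Car’s own API):

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("rc_car_driver")
pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)

rate = rospy.Rate(20)  # 20 Hz control loop
while not rospy.is_shutdown():
    cmd = Twist()
    cmd.linear.x = 1.0   # forward speed in m/s
    cmd.angular.z = 0.2  # turn rate in rad/s (a gentle left here)
    pub.publish(cmd)     # a motor-driver node downstream turns this into PWM
    rate.sleep()
```

The same messages steer a virtual car in a Gazebo simulation, which is a big part of what makes the ROS approach so portable.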

So if the goal is to start racing little autonomous cars, we have the option to buy pre-built hardware or enjoy the flexibility of building our own. Either way, it’s just another example of why this is a great time to get into neural networks, with or without companies like Amazon devising ways to earn our money. Of course, this isn’t the only Amazon project trying to build a business around an idea explored by an existing open source project. We just talked about their AWS Ground Station offering, which covers similar ground (sky?) to our 2014 Hackaday Prize winner SatNOGS.

Neural Networking: Robots Learning From Video

Humans are very good at watching others and imitating what they do. Show someone a video of flipping a switch to turn on a CNC machine and after a single viewing they’ll be able to do it themselves. But can a robot do the same?

Bear in mind that we want the demonstration video to be of a human arm and hand flipping the switch. When the robot does it, the camera that is its eye will be seeing its robot arm and gripper. So somehow it’ll have to know that its robot parts are equivalent to the human parts in the demonstration video. Oh, and the switch in the demonstration video may be a different make and model, and the CNC machine may be a different one, though we’ll at least put the robot within reach of its switch.

Sound difficult?

Researchers from Google Brain and the University of Southern California have done it. In their paper describing how, they talk about a few different experiments, but we’ll focus on just one: getting a robot to imitate pouring a liquid from a container into a cup.
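One common way to frame such a task (a heavy simplification offered just to make the idea concrete, not the paper’s actual method; `encode` stands in for a learned visual embedding) is to reward the robot for how closely its camera frames track the demonstration video in feature space:

```python
import numpy as np

def encode(frame):
    """Placeholder embedding: in the real work this is a learned network that
    maps human and robot frames into a shared feature space."""
    return frame.astype(np.float32).ravel() / 255.0

def perceptual_reward(robot_frame, demo_frame):
    """Higher (less negative) reward the closer the robot's view is to the demo."""
    return -float(np.linalg.norm(encode(robot_frame) - encode(demo_frame)))
```

Summed over the pouring sequence and fed to an RL algorithm, a reward like this pushes the robot to reproduce the demonstrated progression of states, even though its own arm looks nothing like the human’s.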

Continue reading “Neural Networking: Robots Learning From Video”