DARPA Goes Underground For Next Challenge

We all love reading about the creative problem-solving done by competitors in past DARPA robotics challenges. Some of us even harbor ambitions of joining the fray and competing first-hand rather than just reading about the results after the fact. If this describes you, step on up to the DARPA Subterranean Challenge.

Following up on past challenges to build autonomous vehicles and humanoid robots, DARPA now wants to focus our collective brainpower on solving problems encountered by robots working underground. There will be two competition tracks. The Systems Track is what we’ve come to expect: teams build both the hardware and software of robots that tackle a physical competition course. But there will also be a Virtual Track, opening up the challenge to those without the resources to build big, expensive physical robots. Virtual Track competitors will run the competition course in the Gazebo robot simulation environment. This is similar to the NASA Space Robotics Challenge, where competing algorithms ran a virtual robot through tasks in a simulated Mars base. The virtual environment makes the competition accessible to people without machine shops or big budgets; the winner of the NASA SRC was, in fact, a one-person team.

Back on the topic of the upcoming DARPA challenge: each track will involve three sub-domains. Each of these has civilian applications in exploration, infrastructure maintenance, and disaster relief, as well as the obvious military applications.

  • Man-made tunnel systems
  • Urban underground
  • Natural cave networks

There will be a preliminary circuit competition for each, spaced roughly six months apart, to help teams get warmed up one environment at a time. For the final event in the fall of 2021, though, the challenge course will integrate all three types.

More details will be released on Competitor’s Day, taking place September 27th, 2018. Registration for the event just opened on August 15th. Best of luck to all the teams! And just as we did for past challenges, we will excitedly follow progress. (And have a good-natured laugh at the fails.)

Line Following Robot Without The Lines

Line-following robots are a great intro to robotics in general, since the materials and skills needed to build a good one aren’t too advanced. It turns out that line-following robots are more than just a learning tool, too. They’re pretty useful in industry, though most of them don’t follow visible marked lines. Some, like this inductively guided robot from [Randall], use wires to determine their paths.

Inductive guidance has a few benefits over physical lines. The wires can be buried in the floor, so if something like an automated forklift is using them in a warehouse, there’s less of a trip hazard and less maintenance of the guides. The wires also support multiple paths, so no complicated track switching has to take place. [Randall]’s robot is a small demonstration of a larger system he built as a technician for an automated guided vehicle system. His video goes into the details of how these systems work, more of their advantages and disadvantages, and a few other things.
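To get a feel for how the guidance works at the lowest level, here’s a minimal sketch of the steering loop, assuming two pickup coils mounted on either side of the chassis centerline. The buried wire carries an AC tone, the voltage each coil picks up falls off with distance from the wire, and the difference between the two rectified amplitudes tells the robot which way it has drifted. This is a generic illustration rather than [Randall]’s system, and the coil and motor functions are hypothetical stand-ins for real hardware.

```python
def read_left_coil():   # stand-in for an ADC read of the left pickup coil
    return 0.30         # rectified amplitude, normalized 0..1

def read_right_coil():  # stand-in for an ADC read of the right pickup coil
    return 0.45

def set_motor_speeds(left_wheel, right_wheel):  # stand-in for a motor driver
    print(f"wheels: L={left_wheel:.2f} R={right_wheel:.2f}")

KP = 0.8          # proportional steering gain
BASE_SPEED = 0.4  # forward speed, normalized 0..1

def steer_step():
    # A stronger right coil means the wire is off to the robot's right.
    error = read_right_coil() - read_left_coil()
    correction = KP * error
    # Speeding up the left wheel and slowing the right steers back over the wire.
    set_motor_speeds(BASE_SPEED + correction, BASE_SPEED - correction)

steer_step()  # prints "wheels: L=0.52 R=0.28"
```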

While inductively guided robots have been used for decades now, they’re starting to be replaced by robots with local positioning systems and computer vision. We’ve recently seen robots built to use these forms of navigation as well.

Continue reading “Line Following Robot Without The Lines”

Six Wheels (En)rolling: Mars Rovers Going To School

Few things build excitement like going to space. It captures the imagination of young and old alike. Teachers love to leverage the latest space news to raise interest in their students, and space agencies are happy to provide resources to help. The latest in a long line of educator resources released by NASA is an Open Source Rover designed at the Jet Propulsion Laboratory.

JPL is the birthplace of the Mars rovers Sojourner, Spirit, Opportunity, and Curiosity. They’ve been researching robotic explorers for decades, so it’s no surprise they have many rovers running around. The open source rover’s direct predecessor is ROV-E, whose construction closely followed the procedures for engineering space flight hardware. This gave a team of early-career engineers experience in the process before they built equipment destined for space. In addition to learning various roles within a team, they also learned to work with JPL resources like submitting orders to the machine shop to make ROV-E parts.

Once completed, ROV-E became a fixture at JPL public events and occasionally visits nearby schools as part of educational outreach programs. And inevitably a teacher at the school would ask, “The kids love ROV-E! Can we make our own rover?” Since most schools don’t have 5-axis CNC machines or autoclaves to cure carbon fiber composites, the answer used to be “No.”

Until now.

Continue reading “Six Wheels (En)rolling: Mars Rovers Going To School”

Robot Rovers Of The Early Space Race

In the early 1970s, the American space program was at a high point, having placed astronauts upon the surface of the Moon while its Soviet competitors had taken none beyond Earth orbit. It would be simplistic, however, to take this as meaning NASA had the lead in all aspects of space exploration, because while the Russians had not walked the surface of our satellite, they had achieved a less glamorous feat of lunar exploration that the Americans had not. The first Lunokhod wheeled rover had reached the lunar surface and explored it under the control of Earth-bound engineers in the closing months of 1970, and while the rovers driven by Apollo astronauts had placed American tread marks in the lunar soil and been reproduced on newspaper front pages and television screens worldwide, they had yet to match the Soviet achievements with respect to autonomy and remote control.

At NASA’s Jet Propulsion Laboratory there was a project to develop technology for future American rovers under the leadership of [Dr. Ewald Heer], and we have a fascinating insight into it thanks to the reminiscences of [Mike Blackstone], then a junior engineer.

The aim of the project was to demonstrate the feasibility of a rover exploring a planetary surface and picking up and examining rocks. Lest you imagine a billion-dollar budget for gleaming rover prototypes, it’s fair to say that this was to be achieved with considerably more modest means. The rover was a repurposed unit that had previously been used for remote handling of hazardous chemicals, and the project’s computer was an extremely obsolete DEC PDP-1.

We are treated to an in-depth description of the rover and its somewhat arcane control system. Sadly we have no pictures save for his sketches, as the whole piece rests upon his recollections, but it sounds like an interesting machine in its own right. The rover was heavily armoured against chemical explosions, and its two roughly humanoid arms were operated entirely by chains similar to bicycle chains, with all the motors resting in its shoulders. A vision system was added in the form of a pair of video cameras on motorised mounts. These could be aimed at an object using a set of crosshairs on each of their monitors, and their angles read off manually by the operator from the controls. The readings could then be entered into the PDP-1, upon which the software written by [Mike] could calculate the position of the object, work out the required arm positions to retrieve it, and command the rover to perform the required actions.
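The geometry behind that position calculation is classic triangulation: two bearings taken from known camera positions intersect at the target. [Mike]’s PDP-1 code is lost to us, so the following is only a minimal sketch of the two-dimensional version, with our own angles-measured-from-the-x-axis convention.

```python
import math

def triangulate(cam1, theta1, cam2, theta2):
    """Intersect two bearing rays cast from known camera positions.
    Angles are in radians, measured from the x axis; returns (x, y)."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve cam1 + t1*d1 == cam2 + t2*d2 for t1 via Cramer's rule.
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-9:
        raise ValueError("rays are parallel; no unique fix")
    bx, by = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (bx * (-d2[1]) - by * (-d2[0])) / det
    return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])

# Two cameras a metre apart, both sighting the same rock ahead of the rover:
print(triangulate((0.0, 0.0), math.radians(60),
                  (1.0, 0.0), math.radians(120)))  # ~(0.5, 0.866)
```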

The program was a success, producing a film for evaluation by the NASA bigwigs. If it still exists, it would be fascinating to see; perhaps our commenters know where it might be found. Meanwhile, if the current JPL research on rovers interests you, you might find this 2017 Hackaday Superconference talk to be of interest.

Thanks [JRD] for the tip.

Emotional Hazards That Lurk Far From The Uncanny Valley

A web search for “Uncanny Valley” will retrieve a lot of information about that discomfort we feel when an artificial creation is eerily lifelike. The syndrome tells us a lot about both human psychology and the design challenges ahead. But what about the opposite, when machines are clearly machines? Are we in the clear then? It turns out the answer is “No,” as [Christine Sunu] explained at a Hackaday Los Angeles meetup. (Video also embedded below.)

When we build a robot, we know what’s inside the enclosure. But people who don’t know tend to extrapolate too much from the simple behavior they can see. As [Christine] says, people “anthropomorphize at the drop of a hat,” projecting emotions onto machines and feeling emotions in return. This happens even when machines are deliberately designed to be utilitarian. iRobot was surprised by how many Roomba owners gave their robot vacuums names and treated them as family members. A similar eruption of human empathy greeted Boston Dynamics’ video footage demonstrating their robot staying upright despite being pushed around.

In the case of a Roomba, this kind of emotional power is relatively harmless. In the case of robots doing dangerous work in place of human beings, such attachment may hinder the robots from doing the job they were designed for. Even more worrisome, where there is power there is potential for abuse. To illustrate one such potential, [Christine] brought up the Amazon Echo. The cylindrical puck is clearly a machine and serves as a point-of-sale terminal, yet people have started treating Alexa as their trusted home advisor. If Amazon were to start monetizing this trust, would users realize what’s happening? Would they care?

Continue reading “Emotional Hazards That Lurk Far From The Uncanny Valley”

Robot Maps Rooms With Help From iPhone

The Unity engine has been around since Apple started using Intel chips, and it has made quite a splash in the gaming world. Unity allows developers to create 2D and 3D games, but the engine has some other interesting applications as well. For example, [matthewhallberg] used it to build a robot that can map rooms in 3D.

The impetus for this project was a robotics company that used a fleet of robots around its business. The robots navigate using computer vision but can’t map rooms from scratch. They hired [matthewhallberg] to tackle this problem, and this robot is a preliminary result. Using the Unity engine and an iPhone, the robot can operate in one of three modes. The first is a user-controlled mode, the second is object following, and the third is 3D mapping.

The robot seems fairly easy to construct and only carries an iPhone, a NodeMCU, some motors, and a battery. Most of the computational work is done remotely, with the robot simply receiving its movement commands from another computer. There’s a lot going on here, software-wise, with a number of toolkits and software packages to install and get communicating with one another, but the video below does a good job of showing what you’ll need and how it all works together. If that’s all too much, there are other robots that can get you started in the world of computer vision and mapping.
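That “remote brain” arrangement is easy to prototype yourself. Below is a minimal sketch of the PC side, assuming the NodeMCU listens for UDP packets carrying left/right motor duty cycles. The message format, IP address, and port here are our own assumptions for illustration, not [matthewhallberg]’s actual protocol.

```python
import socket
import time

ROBOT_ADDR = ("192.168.1.42", 4210)  # hypothetical NodeMCU address and UDP port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def drive(left, right, duration):
    """Stream left/right motor duty cycles (-255..255) for `duration` seconds."""
    end = time.time() + duration
    while time.time() < end:
        sock.sendto(f"{left},{right}".encode(), ROBOT_ADDR)
        time.sleep(0.05)             # resend at 20 Hz so a lost packet is harmless
    sock.sendto(b"0,0", ROBOT_ADDR)  # stop when the move is done

drive(180, 180, 2.0)   # forward for two seconds
drive(150, -150, 0.5)  # spin in place briefly
```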

Continue reading “Robot Maps Rooms With Help From iPhone”

Cheetah 3 Is Learning To Move Blindly Before Learning To See

Stand up right now and walk around for a minute. We’re pretty sure you didn’t look at every spot you stepped, nor did you plan each step meticulously according to visual input. So why should robots have to? Wouldn’t your robot be more versatile if it could use its vision to plan a path, but leave most of the walking to the legs with the help of various sensors and knowledge of joint positions?

That’s the approach [Sangbae Kim] and a team of researchers at MIT are taking with their Cheetah 3. They’ve given it cameras but aren’t using them yet. Instead, they’re making sure it can move around blind first. So far they have it walking, running, jumping and even going up stairs cluttered with loose blocks and rolls of tape.

[Image: Cheetah 3 jumping 30 inches onto a desk]

Two algorithms are at the heart of its ability to move around blind.

The first is a contact detection algorithm, which decides whether a leg should transition between swing and stance based on knowledge of the joint positions and data from the gyroscopes and accelerometers. If the robot tilts unexpectedly after stepping on a loose block, this is the algorithm that decides what the legs should do.
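As a rough illustration of the idea (MIT’s actual algorithm is considerably more sophisticated, and every number below is our own invention), a swing/stance switcher can fuse the gait clock’s expectation with sensed evidence of contact:

```python
def update_leg_state(state, gait_phase, joint_load, imu_jolt):
    """Hypothetical swing/stance switcher, not MIT's actual algorithm.
    gait_phase in [0, 1): where the gait clock thinks the leg is, with
    stance scheduled for the first half of the cycle.
    joint_load, imu_jolt: normalized 0..1 evidence of ground contact,
    derived from joint positions and gyro/accelerometer data."""
    expects_stance = gait_phase < 0.5
    contact = 0.7 * joint_load + 0.3 * imu_jolt   # fused contact estimate
    if state == "swing" and contact > 0.6:
        return "stance"   # early touchdown, e.g. the foot hit a loose block
    if state == "stance" and expects_stance and contact < 0.2:
        return "swing"    # expected ground is missing, so retract and re-step
    if state == "stance" and not expects_stance:
        return "swing"    # normal lift-off on the gait clock's schedule
    return state

# A leg clips a loose block mid-swing and switches to stance early:
print(update_leg_state("swing", 0.7, 0.8, 0.5))  # -> "stance"
```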

The second is a model-predictive algorithm. This predicts what force a leg should apply once the decision has been made to take a step. It does so by predicting the positions of the robot’s body and legs a half second into the future, redoing the calculation 20 times a second. These predictions are what help it handle situations such as someone shoving it or tugging it on a leash, enabling it to regain its balance and continue in the direction it was headed.
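The predict-then-pick structure of model-predictive control can be shown with a toy example: roll a simple model forward over the half-second horizon at 20 Hz for each candidate leg force, and keep the force whose predicted trajectory stays closest to the target. This is only a sketch of the structure; the point-mass model, the mass figure, and the brute-force search are our assumptions, while the real controller optimizes full 3D body and leg states.

```python
MASS, G = 40.0, 9.81    # illustrative body mass (kg) and gravity (m/s^2)
DT, HORIZON = 0.05, 10  # 20 Hz steps over a 0.5 s prediction horizon

def predict_heights(z, vz, force):
    """Roll a point-mass model of the body forward over the horizon."""
    heights = []
    for _ in range(HORIZON):
        vz += (force / MASS - G) * DT  # vertical acceleration from leg thrust
        z += vz * DT
        heights.append(z)
    return heights

def best_force(z, vz, z_target, candidates):
    """Pick the constant leg force whose prediction tracks z_target best."""
    cost = lambda f: sum((zk - z_target) ** 2 for zk in predict_heights(z, vz, f))
    return min(candidates, key=cost)

# The body has sagged to 0.25 m and is still sinking; find a corrective force.
print(best_force(0.25, -0.3, 0.30, [25.0 * k for k in range(33)]))
```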

This quadruped has a number of other awesome features we haven’t seen in robots such as Boston Dynamics’ SpotMini, like invertible knee joints and walking on three legs. Check out those features and more in the video below.

Of course, SpotMini has a whole set of neat features of its own. Let’s just say that while they look very similar, they’re on two different evolutionary paths. And the Cheetah certainly has evolved since we last looked at it a few years ago.

Continue reading “Cheetah 3 Is Learning To Move Blindly Before Learning To See”