The Predictability Problem with Self-Driving Cars

A law professor and an engineering professor walk into a bar. What comes out is a nuanced article on a downside of autonomous cars, and how to deal with it. The short version of their paper: self-driving cars need to be more predictable to humans in order to coexist.

We share living space with a lot of machines. A good number of them are mobile and dangerous but under complete human control: the car, for instance. When we want to know what another car at an intersection is going to do, we think about the driver of the car, and maybe even make eye contact to see that they see us. We then think about what we’d do in their place, and the traffic situation gets negotiated accordingly.

When its self-driving car got into an accident in February, Google replied that “our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.” Apparently, so did the car, right before it drove out in front of an oncoming bus. The bus driver didn’t expect the car to pull (slowly) into its lane, either.

All of the other self-driving car accidents to date have been the fault of other drivers, and the authors think this is telling. If you unexpectedly brake all the time, you can probably expect to eventually get hit from behind. If people can’t read your car’s AI’s mind, you’re gonna get your fender bent.

The paper’s solution is to make autonomous vehicles more predictable, and the authors mention a number of obvious approaches, from “I-sense-you” lights to inter-car communication. But then there are aspects we hadn’t thought about: specific markings that indicate the AI’s capabilities, for instance. A cyclist signaling a left turn would really like to know whether the car behind has the new bicyclist-handsignal-recognition upgrade before entering the lane. The ability to put your mind into the mind of the other car is crucial, and requires tons of information about the driver.
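To make that concrete, here’s a purely hypothetical sketch (in Python) of what such a capability broadcast might look like. None of these fields come from the paper or from any real standard; they’re just the sort of thing another road user would want to know:

```python
# Hypothetical capability broadcast. Every field name here is invented
# for illustration and comes from no paper or standard.
import json
from dataclasses import dataclass, asdict

@dataclass
class CapabilityBeacon:
    vehicle_id: str             # anonymized identifier for this vehicle
    automation_level: int       # e.g. an SAE-style level, 0 through 5
    detects_hand_signals: bool  # can the AI recognize cyclist hand signals?
    detects_pedestrians: bool   # pedestrian detection on board?
    firmware_version: str       # hints at which behavior model to expect

    def to_wire(self) -> bytes:
        """Serialize for periodic broadcast over some inter-car link."""
        return json.dumps(asdict(self)).encode()

beacon = CapabilityBeacon("veh-1234", 4, True, True, "2.7.1")
print(beacon.to_wire())
```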

All of this may require legislation. Courts apportion blame using not just the black-and-white of the law, but also intent and what all parties to an accident “should have known.” When one of the parties is an AI, this gets murkier: how is anyone supposed to know what the algorithm should have been “thinking”? This is far from a solved problem, and it’s becoming more relevant.

We’ve written on the ethics of self-driving cars before, but simply in terms of their decision-making ability. This paper brings home the idea that we also need to be able to understand what they’re thinking, which is as much a human-interaction and legal problem as it is technological.

[Headline image: Google Self-Driving Car Project]

CES: Self-Flying Drone Cars

CES, the Consumer Electronics Show, is in full swing. Just for a second, let’s take a step back and assess the zeitgeist of the tech literati. Drones – or quadcopters, or UAVs, or UASes, whatever you call them – are huge. Self-driving cars are the next big thing. Flying cars have always been popular. On the technical side of things, batteries are getting really good, and China is slowly figuring out aerospace technologies. What could this possibly mean for CES? Self-flying drone cars.

The Ehang 184 is billed as the first autonomous drone that can carry a human. The idea is a flying version of the self-driving cars that are just over the horizon: hop in a whirring deathtrap, set your destination, and soar through the air above the plebs that just aren’t as special as you.

While the Ehang 184 sounds like a horrendously ill-conceived Indiegogo campaign, the company has released some specs for their self-flying drone car. It’s an octocopter, with eight brushless motors putting out a combined 106 kW. Flight time is about 23 minutes, with a range of about 10 miles. The empty weight of the aircraft is 200 kg (440 lbs), with a maximum payload of 100 kg (220 lbs). This puts the maximum takeoff weight (MTOW) of the Ehang 184 at 660 lbs, far below the 1,320 lbs cutoff for light sport aircraft as defined by the FAA, but far more than the 254 lbs empty-weight limit that defines an ultralight.
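If you want to double-check that math, it’s simple enough to script. A quick sanity check in Python, using only the figures quoted above:

```python
# Re-running the weight math against the FAA figures cited in the text:
# 1,320 lb MTOW cutoff for light sport aircraft, 254 lb empty-weight
# limit for a Part 103 ultralight.
KG_TO_LB = 2.20462

empty_kg, payload_kg = 200, 100
empty_lb = empty_kg * KG_TO_LB                 # ~441 lb
mtow_lb = (empty_kg + payload_kg) * KG_TO_LB   # ~661 lb

print(f"empty weight: {empty_lb:.0f} lb")
print(f"MTOW:         {mtow_lb:.0f} lb")
print(f"under the 1,320 lb LSA cutoff?    {mtow_lb < 1320}")
print(f"under the 254 lb ultralight cap?  {empty_lb < 254}")
```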

In any event, it’s a purely academic matter to consider how such a vehicle would be licensed by the FAA or any other civil aviation administration. It’s already illegal to test in the US, authorities haven’t really caught up to the idea of fixed-wing aircraft powered by batteries, and the idea of a legal autonomous aircraft carrying a passenger is ludicrous.

Is the Ehang 184 a real product? There is no price, and no conceivable way any government would allow an autonomous aircraft to fly with someone inside it. It is, however, a perfect embodiment of the insanity of CES.

V2V: A Safer Commute with Cars Sharing Status Updates

Every year, more than 30,000 people are killed in motor vehicle accidents in the US, and many more are injured. Humans, in general, aren’t great drivers. Until dependable self-driving cars make their way into garages and driveways across the country, there is still a great deal of work that can be done to improve the safety of automobiles, and the best hope on the horizon is vehicle-to-vehicle communication (V2V). We keep hearing V2V mentioned in news stories, but the underlying technology is almost never discussed. So I decided to take a look at the hardware we can expect in early V2V, and the features you can expect to see when your car begins to build a social network with the others around it.
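For a taste of what those “status updates” contain: the messages being standardized for V2V (the SAE J2735 Basic Safety Message, broadcast over DSRC radios several times per second) carry the sender’s position, speed, heading, and brake status. Here’s a loose Python stand-in; the field layout is our own simplification, not the actual J2735 ASN.1 encoding:

```python
# Simplified stand-in for a V2V "status update". Real Basic Safety
# Messages are ASN.1-encoded and sent several times a second; this only
# mirrors the kind of state they carry, not the real wire format.
import struct
import time

def pack_bsm(temp_id: int, lat: float, lon: float,
             speed_mps: float, heading_deg: float, braking: bool) -> bytes:
    """Pack a toy safety message into a fixed-size binary frame."""
    # little-endian: id, timestamp, lat, lon, speed, heading, brake flag
    return struct.pack("<IdddffB", temp_id, time.time(),
                       lat, lon, speed_mps, heading_deg, int(braking))

frame = pack_bsm(0x1A2B3C4D, 42.3314, -83.0458, 13.4, 271.5, False)
print(len(frame), "bytes per broadcast")
```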


Autonomous Vehicle-Following Vehicle

Humanity has taken one step closer to Skynet becoming fully aware. [Ahmed], [Muhammad], [Salman], and [Suleman] have created a vehicle that can “chase” another vehicle as part of their senior design project. Now it’s just a matter of time before the machines take over.

The project itself is based on a gasoline-powered quad bike that the students first converted to electric drive. It uses a single webcam to sense its surroundings, which frees the robot from needing a stereoscopic camera or more complicated gear like radar or a laser rangefinder. From that one video feed, it can follow a lead vehicle without receiving any telemetry from it.
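The write-up doesn’t spell out the vision pipeline, so treat this as an educated guess at how a single-webcam follower is often built: track a distinctive marker on the lead vehicle, steer from the marker’s horizontal offset, and throttle from its apparent size, which stands in for distance. A minimal OpenCV sketch under those assumptions (the HSV range, gains, and target area are invented and would need tuning on real hardware):

```python
# Single-camera follow-the-leader sketch: track a bright green marker on
# the lead vehicle. Horizontal offset drives steering; apparent blob
# area drives throttle. All constants are illustrative guesses, not
# taken from the students' build.
import cv2
import numpy as np

LOWER_HSV = np.array([40, 80, 80])     # assumed green marker, lower bound
UPPER_HSV = np.array([80, 255, 255])   # upper bound
TARGET_AREA = 5000.0                   # marker area (px) at the desired gap
K_STEER, K_THROTTLE = 0.002, 0.0001    # proportional gains, tune on hardware

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    m = cv2.moments(mask)
    if m["m00"] > 0:                                  # marker visible?
        cx = m["m10"] / m["m00"]                      # centroid x (pixels)
        area = m["m00"] / 255                         # blob pixel count
        steer = K_STEER * (cx - frame.shape[1] / 2)   # positive = steer right
        throttle = K_THROTTLE * (TARGET_AREA - area)  # positive = speed up
        print(f"steer={steer:+.2f}  throttle={throttle:+.2f}")
    cv2.imshow("marker mask", mask)
    if cv2.waitKey(1) == 27:                          # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```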

This project is interesting because it could allow large convoys with only one human operator at the front. Once self-driving cars become more mainstream, it could also cut costs: only the vehicle in front would need the full self-driving sensor suite, while the vehicles behind could get by with much less hardware. Either way, we love seeing senior design projects with great real-world applications!


Build Your Own Self-Driving Car

If you’ve ever wanted your own self-driving car, this is your chance. [Sebastian Thrun], co-lecturer (along with the great [Peter Norvig]) of the Stanford AI class, is opening up a new class that will teach everyone who enrolls how to program a self-driving car in seven weeks.

The robotic car class is being taught alongside a CS 101 “intro to programming” course. If you don’t know the difference between an interpreter and a compiler, that one is the class for you: you’ll learn how to build a search engine from scratch in seven weeks. The “Building a Search Engine” class is taught by [Thrun] and [David Evans], a professor from the University of Virginia. The driverless car course is taught solely by [Thrun], who helped win the 2005 DARPA Grand Challenge with his robot car.

In case you’re wondering whether this is going to be another one-time deal like the online AI class, don’t worry. [Thrun] gave up his tenure at Stanford to concentrate on teaching over the Internet; he’s staying on as a research professor, but now he’s spending his time on his online university, Udacity. It looks like he might have his hands full with his new project; so far, classes on the theory of computation, operating systems, distributed systems, and computer security are all planned for 2012.