[Geohot]’s Self-Driving Car Cancelled

George [Geohot] Hotz has thrown in the towel on his “comma one” self-driving car project. According to [Geohot]’s Twitter stream, the reason is a letter from the US National Highway Traffic Safety Administration (NHTSA) that basically amounts to a warning not to release self-driving software that might endanger people’s lives.

This comes a week after a post on comma.ai’s blog changed focus from a “self-driving car” to an “advanced driver assistance system”, presumably to get around legal requirements. Apparently, that wasn’t good enough for the NHTSA.

When Robot Cars Kill, Who Gets Sued?

On one hand, we’re sorry to see the system go out like that. The idea of a quick-and-dirty, affordable, crowdsourced driving aid speaks to our hacker heart. But on the other, especially in light of the recent Tesla crash, we’re probably a little bit glad to not have these things on the road. They were not (yet) rigorously tested, and were originally oversold in their capabilities, as last week’s change of focus demonstrated.

Comma.ai’s downgrade to a driver-assistance system raises the Tesla question. Tesla’s Autopilot is also just an “assistance” system, and the driver is supposed to retain full control of the car at all times. But we all know that it’s good enough that people, famously, let the car take over. And in one case, this has led to death.

Right now, Tesla is hiding behind the same fiction that the NHTSA didn’t buy with comma.ai: that an autopilot add-on won’t lull the driver into overconfidence. The deadly Tesla accident proved just how flimsy that fiction is. So far, only one person has been killed by Tesla’s tech, and his family hasn’t sued. But we wouldn’t be willing to place bets against a jury concluding that Tesla’s marketing of the “autopilot” contributed to the accident. (We’re hackers, not lawyers.)

Should We Take a Step Back? Or a Leap Forward?

Stepping away from the law, is making people inattentive at the wheel, with a legal wink-and-a-nod that you’re not doing so, morally acceptable? When many states and countries ban even talking on a cell phone in the car, how is it legal to market a device that facilitates taking your hands off the steering wheel entirely? Or is this not all that much different from cruise control?

What Tesla is doing, and [Geohot] was proposing, puts a beta version of a driverless car on the road. On one hand, that’s absolutely what’s needed to push the technology forward. If you’re trying to train a neural network to drive, more data, under all sorts of conditions, is exactly what you need. Tesla uses this data to assess and improve its system all the time. Shutting them down would certainly set back progress toward actually driverless cars. But is it fair to use the general public as opt-in guinea pigs for this testing? And how fair is it for the NHTSA to discourage other companies from entering the field?

We’re at a very awkward adolescence of driverless car technology. And like our own adolescence, when we’re through it, it’s going to appear a miracle that we survived some of the stunts we pulled. But the metaphor breaks down with driverless cars — we can also simply wait until the systems are proven safe enough to take full control before we allow them on the streets. The current halfway state, where an autopilot system may lull the driver into a false sense of security, strikes us as particularly dangerous.

So how do we go forward? Do we let every small startup that wants to build a driverless car participate, in the hope that it gets us through the adolescent phase faster? Or do we clamp down on innovation, only letting the technology on the road once it’s proven to be safe? We’d love to hear your arguments in the comment section.

Self-Driving R/C Car Uses An Intel NUC

Self-driving cars are something we are continually told will be the Next Big Thing. It’s nothing new; we’ve seen several decades of periodic demonstrations of the technology as it has evolved. Now we have real prototype cars on real roads rather than test tracks, and though they are billion-dollar research vehicles from organisations with deep pockets and a long view, it is starting to seem that this is a technology we have a real chance of seeing at a consumer level.

A self-driving car may seem as though it is beyond the abilities of a Hackaday reader, but while it might be difficult to produce safe collision avoidance for a full-sized car on public roads, it’s certainly not impossible to produce something with rather more modest capabilities. [Jaimyn Mayer] and [Kendrick Tan] have done just that, creating a self-driving R/C car that can follow a complex road pattern without human intervention.

The NUC’s-eye view. The green line is a human’s steering, the blue line the computed steering.

Unexpectedly, they have eschewed the many ARM-based boards in favour of an Intel NUC mini-PC powered by a Core i5 as the brains of the unit. It’s powered by a laptop battery bank, and takes input from a webcam. Direction and throttle are computed by the NUC and sent to an Arduino, which handles the car control. There is also a radio control channel allowing the car to be switched between autonomous, human-controlled, and emergency stop modes.
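The write-up is the place to go for specifics, but the NUC-to-Arduino link is the kind of thing that’s easy to sketch. Here’s a minimal Python example using pyserial; the port name, baud rate, and the “steering,throttle” message format are our own assumptions for illustration, not the project’s actual protocol.

```python
# Minimal sketch of a NUC -> Arduino control link.
# Assumptions: pyserial installed, Arduino on /dev/ttyUSB0, and a
# made-up "steering,throttle\n" text frame; the real project's
# protocol may differ.
import serial

arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

def send_command(steering_deg, throttle_pct):
    """Send one steering/throttle frame to the Arduino.

    steering_deg: -30 (full left) to +30 (full right)
    throttle_pct: 0 (stopped) to 100 (full throttle)
    """
    steering_deg = max(-30.0, min(30.0, steering_deg))
    throttle_pct = max(0, min(100, throttle_pct))
    arduino.write(f"{steering_deg:.1f},{throttle_pct:.0f}\n".encode())

# e.g. a gentle right turn at quarter throttle
send_command(8.0, 25)
```

A short loop on the Arduino side, parsing those frames and writing the values out to the steering servo and ESC, would complete the chain.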

They go into detail on the polarizing and neutral density filters they used with their webcam, which should make interesting reading for anyone working on machine vision. All their code is open source, and can be found linked from their write-up. Meanwhile, the video below the break shows their machine on their test circuit, completing it with varying levels of success.
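Their repository is the authoritative reference, but the core trick of turning a webcam frame into a steering command can be sketched in a few lines of OpenCV. The toy version below thresholds the lower part of each frame for a bright track line and steers toward its centroid; it’s our illustration of the general idea, not the project’s actual algorithm.

```python
# Toy camera-based steering: threshold the lower part of the frame,
# find the track line's centroid, and steer toward it. Illustrative
# only; the real project's pipeline is more sophisticated.
import cv2

cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    roi = frame[int(0.6 * h):, :]          # only the road just ahead
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]           # line centroid, in pixels
        error = (cx - w / 2) / (w / 2)     # -1 (far left) to +1 (far right)
        steering_deg = 30 * error          # simple proportional steering
        print(f"steer {steering_deg:+.1f} deg")
```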

Continue reading “Self-Driving R/C Car Uses An Intel NUC”

The Predictability Problem With Self-Driving Cars

A law professor and an engineering professor walk into a bar. What comes out is a nuanced article on a downside of autonomous cars, and how to deal with it. The short version of their paper: self-driving cars need to be more predictable to humans in order to coexist.

We share living space with a lot of machines. A good number of them are mobile and dangerous but under complete human control: the car, for instance. When we want to know what another car at an intersection is going to do, we think about the driver of the car, and maybe even make eye contact to see that they see us. We then think about what we’d do in their place, and the traffic situation gets negotiated accordingly.

When its self-driving car got into an accident in February, Google replied that “our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.” Apparently, so did the car, right before it drove out in front of an oncoming bus. The bus driver didn’t expect the car to pull (slowly) into its lane, either.

All of the other self-driving car accidents to date have been the fault of other drivers, and the authors think this is telling. If you unexpectedly brake all the time, you can probably expect to eventually get hit from behind. If people can’t read your car’s AI’s mind, you’re gonna get your fender bent.

The paper’s solution is to make autonomous vehicles more predictable, and the authors mention a number of obvious fixes, from “I-sense-you” lights to inter-car communication. But then there are aspects we hadn’t thought about: specific markings that indicate the AI’s capabilities, for instance. A cyclist signalling a left turn would really like to know if the car behind has the new bicyclist-handsignal-recognition upgrade before entering the lane. The ability to put your mind into the mind of the other car is crucial, and requires tons of information about the driver.

All of this may require legislation. Intent, and what all parties to an accident “should have known”, are used in court to apportion blame in addition to the black-and-white of the law. When one of the parties is an AI, this gets murkier. How should you know what the algorithm should have been thinking? This is far from a solved problem, and it’s becoming more relevant.

We’ve written on the ethics of self-driving cars before, but simply in terms of their decision-making ability. This paper brings home the idea that we also need to be able to understand what they’re thinking, which is as much a human-interaction and legal problem as it is technological.

[Headline image: Google Self-Driving Car Project]

CES: Self-Flying Drone Cars

CES, the Consumer Electronics Show, is in full swing. Just for a second, let’s take a step back and assess the zeitgeist of the tech literati. Drones – or quadcopters, or UAVs, or UASes, whatever you call them – are huge. Self-driving cars are the next big thing. Flying cars have always been popular. On the technical side of things, batteries are getting really good, and China is slowly figuring out aerospace technologies. What could this possibly mean for CES? Self-flying drone cars.

The Ehang 184 is billed as the first autonomous drone that can carry a human. The idea is a flying version of the self-driving cars that are just over the horizon: hop in a whirring deathtrap, set your destination, and soar through the air above the plebs that just aren’t as special as you.

While the Ehang 184 sounds like a horrendously ill-conceived Indiegogo campaign, the company has released some specs for their self-flying drone car. It’s an octocopter, powered by eight brushless motors producing a combined 106 kW. Flight time is about 23 minutes, with a range of about 10 miles. The empty weight of the aircraft is 200 kg (440 lbs), with a maximum payload of 100 kg (220 lbs). This puts the MTOW of the Ehang 184 at 660 lbs, far below the 1,320 lbs cutoff for light sport aircraft as defined by the FAA, but its 440 lbs empty weight is far more than the ultralight limit of 254 lbs.
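If you want to check that arithmetic against the FAA thresholds, it only takes a few lines (our own rounding, using the usual 2.20462 lbs per kg):

```python
# Sanity check on the Ehang 184's published weights (our arithmetic).
KG_TO_LB = 2.20462
empty_kg, payload_kg = 200, 100

mtow_lb = (empty_kg + payload_kg) * KG_TO_LB
print(f"MTOW: {mtow_lb:.0f} lbs")    # ~661 lbs
print(mtow_lb < 1320)                # under the light-sport ceiling: True
print(empty_kg * KG_TO_LB <= 254)    # ultralight empty-weight limit: False
```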

In any event, it’s a purely academic matter to consider how such a vehicle would be licensed by the FAA or any other civil aviation administration. It’s already illegal to test in the US; authorities haven’t really caught up to the idea of fixed-wing aircraft powered by batteries, and the idea of a legal autonomous aircraft carrying a passenger is ludicrous.

Is the Ehang 184 a real product? There is no price, and no conceivable way any government would allow an autonomous aircraft to fly with someone inside it. It is, however, a perfect embodiment of the insanity of CES.

V2V: A Safer Commute With Cars Sharing Status Updates

Every year, more than 30,000 people are killed in motor vehicle accidents in the US, and many, many more are injured. Humans, in general, aren’t great drivers. Until dependable self-driving cars make their way into garages and driveways across the country, there is still a great deal of work that can be done to improve the safety of automobiles, and the best hope on the horizon is Vehicle-to-Vehicle communication (V2V). We keep hearing this technology mentioned in news stories, but the underlying technology is almost never discussed. So I decided to take a look at what hardware we can expect in early V2V, and the features you can expect to see when your car begins to build a social network with the others around it.
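To give a flavor of those status updates before you dive in: the V2V schemes on the table have each car broadcast a “basic safety message” carrying its position, speed, and heading roughly ten times a second. The sketch below simulates that idea over plain UDP broadcast; real V2V rides on dedicated short-range radios with signed SAE J2735 messages, so the transport and message format here are stand-ins for illustration.

```python
# Toy simulation of a V2V basic safety message (BSM) broadcast.
# Real DSRC uses 5.9 GHz 802.11p radios and signed SAE J2735
# messages; this just mimics the idea on a local network.
import json
import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

def broadcast_bsm(car_id, lat, lon, speed_mps, heading_deg):
    msg = json.dumps({
        "id": car_id,
        "lat": lat,
        "lon": lon,
        "speed": speed_mps,      # metres per second
        "heading": heading_deg,  # degrees from north
        "time": time.time(),
    })
    sock.sendto(msg.encode(), ("255.255.255.255", 5005))

# BSMs go out about ten times a second
while True:
    broadcast_bsm("car-42", 40.4406, -79.9959, 13.4, 90.0)
    time.sleep(0.1)
```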

Continue reading “V2V: A Safer Commute With Cars Sharing Status Updates”

Autonomous Vehicle-Following Vehicle

Humanity has taken one step closer to Skynet becoming fully aware. [Ahmed], [Muhammad], [Salman], and [Suleman] have created a vehicle that can “chase” another vehicle as part of their senior design project. Now it’s just a matter of time before the machines take over.

The project itself is based on a gasoline-powered quad bike that the students first converted to electric for the sake of their project. It uses a single webcam to get information about its surroundings. This is a plus because it frees the robot from needing a stereoscopic camera or other complicated equipment like radar or a laser rangefinder. With this information, it can follow a lead vehicle without any other telemetry.
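Monocular following is commonly done by tracking something of known size on the lead vehicle: its apparent width in the image gives you distance, and its horizontal offset gives you steering. Here’s a rough sketch of that approach; the marker, constants, and control gains are our placeholders, not necessarily how this project does it.

```python
# Rough sketch of monocular vehicle following. Distance comes from
# the apparent size of a known-width marker on the lead vehicle,
# steering from its horizontal offset. Placeholder constants; not
# the students' actual implementation.
MARKER_WIDTH_M = 0.30      # real width of the marker on the lead vehicle
FOCAL_LENGTH_PX = 700.0    # from a one-off camera calibration
TARGET_DISTANCE_M = 5.0    # following distance we want to hold

def follow(marker_px_width, marker_px_center_x, frame_width):
    # Pinhole-camera distance estimate: d = f * W / w
    distance_m = FOCAL_LENGTH_PX * MARKER_WIDTH_M / marker_px_width

    # Steer proportionally toward the marker
    error = (marker_px_center_x - frame_width / 2) / (frame_width / 2)
    steering_deg = 30 * error

    # Throttle proportional to the distance error, clamped to [0, 100]
    throttle_pct = max(0, min(100, 20 * (distance_m - TARGET_DISTANCE_M)))
    return steering_deg, throttle_pct

# e.g. marker seen 60 px wide, slightly right of centre, 640 px frame:
# it's ~3.5 m away, so steer right a touch and back off the throttle
print(follow(60, 360, 640))
```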

This project is interesting because it could allow large convoys with only one human operator at the front. Once self-driving cars become more mainstream, it could also cut costs considerably: only the vehicle in front would need the full self-driving equipment, while the vehicles behind could operate with much less hardware. Either way, we love seeing senior design projects with great real-world applications!

Continue reading “Autonomous Vehicle-Following Vehicle”

Build Your Own Self-Driving Car

If you’ve ever wanted your own self-driving car, this is your chance. [Sebastian Thrun], co-lecturer (along with the great [Peter Norvig]) of the Stanford AI class, is opening up a new class that will teach everyone who enrolls how to program a self-driving car in seven weeks.

The robotic car class is being taught alongside a CS 101 “intro to programming” course. If you don’t know the difference between an interpreter and a compiler, that’s the class for you: you’ll learn how to build a search engine from scratch in seven weeks. The “Building a Search Engine” class is taught by [Thrun] and [David Evans], a professor from the University of Virginia. The driverless car course is taught solely by [Thrun], who helped win the 2005 DARPA Grand Challenge with his robot car.

In case you’re wondering if this is going to be another one-time deal like the online AI class, don’t worry. [Thrun] resigned as a tenured professor at Stanford to concentrate on teaching over the Internet. He’s staying on at Stanford as an associate professor, but now he’s spending his time on his online university, Udacity. It looks like he might have his hands full with his new project; so far, classes on the theory of computation, operating systems, distributed systems, and computer security are all planned for 2012.