Geohot’s Comma.ai Self-Driving Code On GitHub

First there was [Geohot]’s lofty goal to build a hacker’s version of the self-driving car. Then came comma.ai and a whole bunch of venture capital. After that, a letter from the Feds and a hasty retreat from the business end of things. The latest development? comma.ai’s openpilot project shows up on GitHub!

If you’ve got either an Acura ILX or a 2016 Honda Civic Touring edition, you can start playing around with this technology on your own. Is this a good idea? Are you willing to buy some time on a closed track?

A quick browse through the code gives some clues as to what’s going on here. The board files show just how easy it is to interface with these cars’ driving controls: there’s a bunch of CAN commands and that’s it. There’s some unintentional black comedy, like a (software) crash-handler routine named crash.py.
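The CAN traffic itself is just fixed-format frames shoved onto the car’s bus. As a rough illustration (this is not openpilot’s actual code), here’s how a classic CAN 2.0 frame can be packed into Linux’s 16-byte SocketCAN wire format; the message ID and payload below are hypothetical, not verified Honda commands:

```python
import struct

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN 2.0 frame into Linux SocketCAN's 16-byte
    `struct can_frame` layout: 32-bit CAN ID, 1-byte data length
    code (DLC), 3 padding bytes, then up to 8 data bytes."""
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    # "<IB3x8s": little-endian u32 ID, u8 DLC, 3 pad bytes, 8-byte payload
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

# Hypothetical example: a 4-byte control command on an arbitrary ID.
frame = pack_can_frame(0xE4, b"\x00\x80\x00\x00")
assert len(frame) == 16
```

A frame like this would then be written to a `socket(AF_CAN, SOCK_RAW, CAN_RAW)` socket bound to the car’s CAN interface; openpilot’s board files wrap the same idea in its own helpers.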

What’s shocking is that there’s nothing shocking going on. It’s all pretty much straightforward Python with sprinklings of C. Honestly, it looks like something you could get into and start hacking away at pretty quickly. Anyone want to send us an Acura ILX for testing purposes? No promises you’ll get it back in one piece.

If you missed it, read up on our coverage of the rapid rise and faster retreat of comma.ai. But we don’t think the game is over yet: comma.ai is still hiring. Are open source self-driving cars in our future? That would be fantastic!

Via Engadget. Thanks for the tip, [FaultyWarrior]!

[Geohot]’s Self-Driving Car Cancelled

George [Geohot] Hotz has thrown in the towel on his “comma one” self-driving car project. According to [Geohot]’s Twitter stream, the reason is a letter from the US National Highway Traffic Safety Administration (NHTSA), which sent him what basically amounts to a warning to not release self-driving software that might endanger people’s lives.

This comes a week after a post on comma.ai’s blog changed focus from a “self-driving car” to an “advanced driver assistance system”, presumably to get around legal requirements. Apparently, that wasn’t good enough for the NHTSA.

When Robot Cars Kill, Who Gets Sued?

On one hand, we’re sorry to see the system go out like that. The idea of a quick-and-dirty, affordable, crowdsourced driving aid speaks to our hacker heart. But on the other, especially in light of the recent Tesla crash, we’re probably a little bit glad to not have these things on the road. They were not (yet) rigorously tested, and were originally oversold in their capabilities, as last week’s change of focus demonstrated.

Comma.ai’s downgrade to driver-assistance system raises the Tesla question. Tesla’s autopilot is also just an “assistance” system, and the driver is supposed to retain full control of the car at all times. But we all know that it’s good enough that people, famously, let the car take over. And in one case, this has led to death.

Right now, Tesla is hiding behind the same fiction that the NHTSA didn’t buy with comma.ai: that an autopilot add-on won’t lull the driver into overconfidence. The deadly Tesla accident showed how flimsy that fiction is. So far, only one person has been killed by Tesla’s tech, and his family hasn’t sued. But we wouldn’t bet against a jury concluding that Tesla’s marketing of the “autopilot” contributed to the accident. (We’re hackers, not lawyers.)

Should We Take a Step Back? Or a Leap Forward?

Stepping away from the law, is making people inattentive at the wheel, with a legal wink-and-a-nod that you’re not doing so, morally acceptable? When many states and countries ban talking on a cell phone in the car, how is it legal to market a device that facilitates taking your hands off the steering wheel entirely? Or is this not all that much different from cruise control?

What Tesla is doing, and [Geohot] was proposing, puts a beta version of a driverless car on the road. On one hand, that’s absolutely what’s needed to push the technology forward. If you’re trying to train a neural network to drive, more data, under all sorts of conditions, is exactly what you need. Tesla uses this data to assess and improve its system all the time. Shutting them down would certainly set back the progress toward actually driverless cars. But is it fair to use the general public as opt-in guinea pigs for their testing? And how fair is it for the NHTSA to discourage other companies from entering the field?

We’re at a very awkward adolescence of driverless car technology. And like our own adolescence, when we’re through it, it’s going to appear a miracle that we survived some of the stunts we pulled. But the metaphor breaks down with driverless cars — we can also simply wait until the systems are proven safe enough to take full control before we allow them on the streets. The current halfway state, where an autopilot system may lull the driver into a false sense of security, strikes us as particularly dangerous.

So how do we go forward? Do we let every small startup that wants to build a driverless car participate, in the hope that it gets us through the adolescent phase faster? Or do we clamp down on innovation, only letting the technology on the road once it’s proven to be safe? We’d love to hear your arguments in the comment section.

[Geohot] Selling His “Self-Driving” Car Tech For $1k By New Year

This week [Geohot] announced the launch of his self-driving car hardware. This is the natural extension of his proof-of-concept shown off in December which he parlayed into a Silicon Valley startup called comma.ai. [Geohot], whose real name is [George Hotz], is well known for jailbreaking the iPhone and making Sony look like idiots when they retroactively crippled Linux support on PS3. He has hardware chops.

The initial self-driving add-on hardware only works with Honda and Acura models that already have lane-keeping assist features, because those vehicles already have built-in front radar. The package, which replaces the rearview mirror, adds a front-facing camera. Those lucky (or brave, foolish, daring?) beta users can trade $999 and $24/month for what is currently a green 3D-printed enclosure with some smartphone-like hardware inside.

The company has taken an interesting approach to acquiring data needed for this particular flavor of self-driving. [Hotz] is teasing a chance at beta test invites to those who contribute driving data to the company. This is as simple as downloading an app to your phone and letting it roll from your windshield as you go bumper to bumper from Mountain View to San Francisco. That’s right, the plan is to support just that stretch of the nation’s highway system — although [Hotz] did make a brazen estimate of 90% of commutes for 90% of users within a year. Hey, it’s a startup so it’s either that, selling to a bigger fish, or closing their doors.

That narrow route support is actually an interesting constraint. In fact, the company is most interesting because of its chosen constraints: a small subset of cars, a chosen stretch of highway, and dare we say sanity when it comes to self-driving expectations. Grandiose claims have the general public thinking a vehicle with no human driver will slide up to your stoop and take you anywhere you want to go. That is a dauntingly difficult engineering challenge (dare we say impossible). What [Hotz] is selling is a more stress-free commute, not a nap in the back seat. You still need to be paying attention at all times.

Will this system work? Undoubtedly the engineering is possible (Tesla is already doing it). The biggest question mark that remains is human nature. This system demands your attention even though you’re doing nothing. That seems unrealistic — users are bound to lapse in attention much more frequently than if they were the primary driver. The question then becomes, will people pay attention at the very rare yet very crucial moments, and can a system like this prevent more fatal accidents than it causes?

[via Engadget]