Self-Driving RC Cars with TensorFlow; Raspberry Pi or MacBook Onboard

You might think that you don’t have what it takes to build a self-driving car, but you’re wrong. The mistake you’ve made is assuming that you’d be controlling a two-ton death machine. Instead, you can give it a shot without the danger and on a relatively light budget. [Otavio] and [Will] got into self-driving vehicles using radio-controlled (RC) cars.

[Otavio] slapped a MacBook Pro on an RC car to do the heavy lifting and called it carputer. The computer reads Hall effect sensor data from the motor to establish distance traveled (which can also be used to calculate speed) and watches the stream from a webcam perched on the chassis. These two sources are fed into a neural network using TensorFlow. You train the system by driving the vehicle manually through the course a few times, then let it drive itself.
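
If you want a feel for what that training loop looks like, here’s a minimal behavioral-cloning sketch in Python. To be clear, this is our illustration, not [Otavio]’s actual code: the frame size, layer sizes, and the Hall-sensor speed helper are all assumptions made for the example.

```python
# Hypothetical sketch in the spirit of carputer, not the real thing.
import numpy as np
import tensorflow as tf

def speed_from_hall(pulse_count, pulses_per_rev, wheel_circumference_m, dt_s):
    """Distance traveled from Hall pulses; divide by elapsed time for speed."""
    revolutions = pulse_count / pulses_per_rev
    return revolutions * wheel_circumference_m / dt_s  # meters per second

def build_model():
    # Tiny convolutional regressor: webcam frame in, steering angle out.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(120, 160, 3)),       # frame size assumed
        tf.keras.layers.Rescaling(1.0 / 255.0),           # normalize pixels
        tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(48, 3, strides=2, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(1, activation="tanh"),      # steering in [-1, 1]
    ])

# Stand-in data: in practice these are the webcam frames and the steering
# commands you recorded while driving the course by hand.
frames = np.random.randint(0, 256, size=(256, 120, 160, 3)).astype("float32")
steering = np.random.uniform(-1, 1, size=(256, 1)).astype("float32")

model = build_model()
model.compile(optimizer="adam", loss="mse")
model.fit(frames, steering, epochs=5, batch_size=32)
```

The whole trick is in that pairing: record frames alongside your own steering commands, fit a regression from pixels to steering, then play the model back in place of the human.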

In the video interview below, you get a look at the car and [Otavio] gives commentary on how the system works as we see playback of a few races, including the Sparkfun 2016 Autonomous Vehicle Competition. I apologize for the poor audio; they lost the booth lottery and were next door to an incredibly noisy robot band (video proof), so we were basically shouting at each other. But I think you’ll agree it’s worth it to get a look at the races. Continue reading “Self-Driving RC Cars with TensorFlow; Raspberry Pi or MacBook Onboard”

Autonomous Delivery and the Last 100 Feet

You’ve no doubt by now seen Boston Dynamics’ latest “we’re living in the future” robotic creation, dubbed Handle. [Mike Szczys] recently covered the more-or-less-official company unveiling of Handle, the hybrid bipedal-wheeled robot that can handle smooth or rugged terrain and can even jump when it has to, all while remaining balanced and apparently handling up to 100 pounds of cargo with its arms. It’s absolutely sci-fi.

[Mike] closed his post with a quip about seeing “Handle wheeling down the street placing smile-adorned boxes on each stoop.” I’ve recently written about autonomous delivery, covering both autonomous freight as the ‘killer app’ for self-driving vehicles and the security issues that autonomous delivery poses. Now I want to look at where anthropoid robots might fit in the supply chain, and how likely it is that we’ll see something like Handle take over the last hundred feet from the delivery truck to your door.

Continue reading “Autonomous Delivery and the Last 100 Feet”

Autopilots Don’t Kill Drivers, Humans Do

The US National Highway Traffic Safety Administration (NHTSA) report on the May 2016 fatal accident in Florida involving a Tesla Model S in Autopilot mode just came out (PDF). The verdict? “the Automatic Emergency Braking (AEB) system did not provide any warning or automated braking for the collision event, and the driver took no braking, steering, or other actions to avoid the collision.” The accident was a result of the driver’s misuse of the technology.

This places no blame on Tesla because the system was simply not designed to handle obstacles traveling at 90 degrees to the car. Because the truck that the Tesla plowed into was sideways to the car, “the target image (side of a tractor trailer) … would not be a “true” target in the EyeQ3 vision system dataset.” Other situations that are outside the scope of the current state of the technology include cut-ins, cut-outs, and crossing-path collisions. In short, the Tesla helps prevent rear-end collisions with the car in front of it, but has limited side vision. The driver should have known this.

The NHTSA report concludes that “Advanced Driver Assistance Systems … require the continual and full attention of the driver to monitor the traffic environment and be prepared to take action to avoid crashes.” The report also mentions the recent (post-Florida) additions to Tesla’s Autopilot that help make sure that the driver is in the loop.

The takeaway is that humans are still responsible for their own safety, and that “Autopilot” is more like anti-lock brakes than it is like Skynet. Our favorite footnote, in carefully couched legalese: “NHTSA recognizes that other jurisdictions have raised concerns about Tesla’s use of the name “Autopilot”. This issue is outside the scope of this investigation.” (The banner image is from this German YouTube video where a Tesla rep in the back seat tells the reporter that he can take his hands off the wheel. There may be mixed signals here.)

There are other details that make the report worth reading if, like us, you would like to see some more data about how self-driving cars actually perform on the road. On one hand, Tesla’s Autosteer function seems to have reduced the rate at which their cars got into crashes. On the other, increasing use of the driving assistance functions comes with an increase in driver inattention for durations of three seconds or longer.

People simply think that the Autopilot should do more than it actually does. Per the report, this problem of “driver misuse in the context of semi-autonomous vehicles is an emerging issue.” Whether technology will improve fast enough to protect us from ourselves is an open question.

[via Popular Science].

Self-Driving Cars Are Not (Yet) Safe

Three things have happened in the last month that have made me think a lot more about the safety of self-driving cars. The US Department of Transportation (DOT) has issued its guidance on the safety of semi-autonomous and autonomous cars. At the same time, [Geohot]’s hacker self-driving car company bailed out of the business, citing regulatory hassles. And finally, Tesla’s Autopilot has killed its second driver, this time in China.

At a time when [Elon Musk], [President Obama], and Google are all touting self-driving cars as the solution to human error behind the wheel, it’s more than a little bold to argue the opposite case in public, but the numbers just don’t add up. Self-driving cars are probably not yet as safe as a good sober driver, and there isn’t enough data available to say this with much confidence. However, one certainly cannot say that they’re demonstrably safer.

Continue reading “Self-Driving Cars Are Not (Yet) Safe”

Geohot’s comma.ai Self-Driving Code On GitHub

First there was [Geohot]’s lofty goal to build a hacker’s version of the self-driving car. Then came comma.ai and a whole bunch of venture capital. After that, a letter from the Feds and a hasty retreat from the business end of things. The latest development? comma.ai’s openpilot project shows up on GitHub!

If you’ve got either an Acura ILX or a 2016 Honda Civic Touring edition, you can start to play around with this technology on your own. Is this a good idea? Are you willing to buy some time on a closed track?

A quick browse through the code gives some clues as to what’s going on here. The board files show just how easy it is to interface with these cars’ driving controls: there’s a bunch of CAN commands and that’s it. There’s some unintentional black comedy, like a (software) crash-handler routine named crash.py.
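
For the curious, here’s roughly what “a bunch of CAN commands” looks like in practice, sketched with the python-can library. The arbitration ID, payload layout, and clamping below are invented for illustration; the real IDs, scaling factors, and checksums live in openpilot’s per-car board files, and openpilot talks through its own interface board rather than plain SocketCAN.

```python
# Hypothetical CAN steering command, for illustration only.
import can

def send_steer_torque(bus, torque):
    """Pack a made-up steering-torque request into a CAN frame and send it."""
    raw = max(-1024, min(1023, int(torque)))   # clamp to a signed 11-bit range
    data = [(raw >> 8) & 0xFF, raw & 0xFF, 0x00, 0x00]
    msg = can.Message(arbitration_id=0x1AB,    # hypothetical ID, not a real one
                      data=data,
                      is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    # SocketCAN interface on Linux; adjust the channel for your adapter.
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    send_steer_torque(bus, 150)
```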

What’s shocking is that there’s nothing shocking going on. It’s all pretty much straightforward Python with sprinklings of C. Honestly, it looks like something you could get into and start hacking away at pretty quickly. Anyone want to send us an Acura ILX for testing purposes? No promises you’ll get it back in one piece.

If you missed it, read up on our coverage of the rapid rise and faster retreat of comma.ai. But we don’t think the game is over yet: comma.ai is still hiring. Are open source self-driving cars in our future? That would be fantastic!

Via Engadget. Thanks for the tip, [FaultyWarrior]!

Think Your Way to Work in a Mind-Controlled Tesla

When you own an $80,000 car, a normal person might be inclined to never take it out of the garage. But normal often isn’t what we do around here, so seeing a Tesla S driven by mind control is only slightly shocking.

[Casey_S] appears to be the owner of the Tesla S in question, but if he’s not he’ll have some ‘splaining to do. He took the gigantic battery and computer in a car-shaped case, er, luxury car, to a hackathon in Berkeley last week and promptly fitted it with the gear needed to drive the car remotely. Yes, the Model S has steering motors built in, but Tesla hasn’t been forthcoming with an API to access such functions. So [Casey_S] and his team had to cobble together a steering servo from a windshield wiper motor and a potentiometer mounted to a frame made of 2x4s. Linear actuators attach to the brake and accelerator pedals, and everything talks to an Arduino.

The really interesting part is that the whole thing is controlled by an electroencephalography helmet and a machine learning algorithm that detects when the driver thinks “forward” or “turn right.” It translates those thoughts to variables that drive the actuators. Unfortunately, space constraints kept [Casey_S] from really putting the rig through its paces, but the video after the break shows that the system worked well enough to move the car forward and steer a little.
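
The pipeline is simple enough to sketch. Below is a hypothetical Python version of the thought-to-actuator loop: classify a window of EEG features, then hand the label to the Arduino over serial. The classifier, feature count, and one-byte serial protocol are all our assumptions, not [Casey_S]’s actual stack.

```python
# Hypothetical thought-to-actuator loop, for illustration only.
import numpy as np
import serial
from sklearn.linear_model import LogisticRegression

LABELS = {0: b"F", 1: b"R"}  # made-up one-byte commands: forward, turn right

# Train on labeled EEG feature windows recorded while the driver
# deliberately thinks "forward" or "turn right".
X_train = np.random.rand(200, 64)             # stand-in: 64 features per window
y_train = np.random.randint(0, 2, size=200)   # stand-in labels
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The Arduino driving the steering servo and pedal actuators.
arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

def on_eeg_window(features):
    """Classify one EEG window and forward the command to the Arduino."""
    label = int(clf.predict(features.reshape(1, -1))[0])
    arduino.write(LABELS[label])

on_eeg_window(np.random.rand(64))
```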

There haven’t been too many thought-controlled cars featured here before, but we have covered a wheelchair with an EEG interface.

Continue reading “Think Your Way to Work in a Mind-Controlled Tesla”

[Geohot]’s Self-Driving Car Cancelled

George [Geohot] Hotz has thrown in the towel on his “comma one” self-driving car project. According to [Geohot]’s Twitter stream, the reason is a letter from the US National Highway Traffic Safety Administration (NHTSA), which sent him what basically amounts to a warning to not release self-driving software that might endanger people’s lives.

This comes a week after a post on comma.ai’s blog changed focus from a “self-driving car” to an “advanced driver assistance system”, presumably to get around legal requirements. Apparently, that wasn’t good enough for the NHTSA.

When Robot Cars Kill, Who Gets Sued?

On one hand, we’re sorry to see the system go out like that. The idea of a quick-and-dirty, affordable, crowdsourced driving aid speaks to our hacker heart. But on the other, especially in light of the recent Tesla crash, we’re probably a little bit glad to not have these things on the road. They were not (yet) rigorously tested, and were originally oversold in their capabilities, as last week’s change of focus demonstrated.

Comma.ai’s downgrade to a driver-assistance system really raises the Tesla question. Tesla’s Autopilot is also just an “assistance” system, and the driver is supposed to retain full control of the car at all times. But we all know that it’s good enough that people, famously, let the car take over. And in one case, this has led to death.

Right now, Tesla is hiding behind the same fiction that the NHTSA didn’t buy with comma.ai: that an autopilot add-on won’t lull the driver into overconfidence. The deadly Tesla accident proved how flimsy that fiction is. And so far, only one person has been killed by Tesla’s tech, and his family hasn’t sued. But we wouldn’t bet against a jury concluding that Tesla’s marketing of the “Autopilot” contributed to the accident. (We’re hackers, not lawyers.)

Should We Take a Step Back? Or a Leap Forward?

Stepping away from the law, is making people inattentive at the wheel, with a legal wink-and-a-nod that you’re not doing so, morally acceptable? When many states and countries ban talking on a cell phone in the car, how is it legal to market a device that facilitates taking your hands off the steering wheel entirely? Or is this not all that much different from cruise control?

What Tesla is doing, and [Geohot] was proposing, puts a beta version of a driverless car on the road. On one hand, that’s absolutely what’s needed to push the technology forward. If you’re trying to train a neural network to drive, more data, under all sorts of conditions, is exactly what you need. Tesla uses this data to assess and improve its system all the time. Shutting them down would certainly set back the progress toward truly driverless cars. But is it fair to use the general public as opt-in guinea pigs for their testing? And how fair is it for the NHTSA to discourage other companies from entering the field?

We’re at a very awkward adolescence of driverless car technology. And like our own adolescence, when we’re through it, it’s going to appear a miracle that we survived some of the stunts we pulled. But the metaphor breaks down with driverless cars — we can also simply wait until the systems are proven safe enough to take full control before we allow them on the streets. The current halfway state, where an autopilot system may lull the driver into a false sense of security, strikes me as particularly dangerous.

So how do we go forward? Do we let every small startup that wants to build a driverless car participate, in the hope that it gets us through the adolescent phase faster? Or do we clamp down on innovation, only letting the technology on the road once it’s proven to be safe? We’d love to hear your arguments in the comment section.