The Problem With Self-Driving Cars: The Name

In 1899, you might have been forgiven for thinking the automobile was only a rich man’s toy. A horseless carriage was for flat garden pathways. The auto was far less reliable than a horse. It was new technology, and rich people are always into their gadgets, but the automobile was a technology that wasn’t going anywhere: the roads were too terrible, it didn’t have the range of a horse, and the world just wasn’t set up for motorized machines rolling everywhere.

This changed. It changed very quickly. By 1920, cars had taken over. Industrialized cities were no longer in the shadow of a mountain of horse manure. A highway, built specifically for automobiles, stretched from New York City to San Francisco. The age of the automobile had come.

And here we are today, in the same situation, with a technology as revolutionary as the automobile. People say self-driving cars are toys for rich people. Teslas on the road aren’t for the common man because the economy model costs fifty thousand dollars. They only work on highways anyway. The reliability just isn’t there for Level 5 automation. You’ll never have a self-driving car that can handle mountain roads in the snow, or react to a ball bouncing into a residential street with a child chasing after it. But history proves time and time again that the naysayers are wrong. Self-driving cars are the future, and the world will be unrecognizable in thirty years. There’s only one problem: we’re not calling them the right thing. Self-driving cars should be called ‘cryptocybers’.

Continue reading “The Problem With Self-Driving Cars: The Name”

Fatalities Vs False Positives: The Lessons From The Tesla And Uber Crashes

In one bad week in March, two people were indirectly killed by automated driving systems. A Tesla vehicle drove into a barrier, killing its driver, and an Uber vehicle hit and killed a pedestrian crossing the street. The National Transportation Safety Board’s preliminary reports on both accidents came out recently, and these bring us as close as we’re going to get to a definitive view of what actually happened. What can we learn from these two crashes?

There is one outstanding factor that makes these two crashes look different on the surface: Tesla’s algorithm misidentified a lane split and actively accelerated into the barrier, while the Uber system eventually correctly identified the pedestrian crossing the street and probably had time to stop, but its emergency braking had been disabled. You might say that if the Tesla driver died from trusting the system too much, the Uber fatality arose from trusting the system too little.

But you’d be wrong. The forward-facing radar in the Tesla should have prevented the accident by seeing the barrier and slamming on the brakes, but the Tesla algorithm places more weight on the cameras than on the radar. Why? For exactly the same reason that the Uber emergency-braking system was turned off: there are “too many” false positives, and the result is cars that brake needlessly under normal driving circumstances.
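
To make that weighting concrete, here’s a minimal sketch of a fused braking decision in Python. Everything in it (the weights, the scores, the threshold) is invented for illustration; neither company has published its actual fusion logic, and real pipelines are vastly more complex than a weighted average.

```python
# Toy sketch of weighted sensor fusion for a braking decision.
# All weights, scores, and thresholds here are invented for
# illustration; this is not Tesla's or Uber's actual logic.

BRAKE_THRESHOLD = 0.6  # fused confidence above this triggers braking

def fused_confidence(camera_score: float, radar_score: float,
                     camera_weight: float = 0.8) -> float:
    """Weighted blend of per-sensor obstacle confidences (0.0 to 1.0)."""
    return camera_weight * camera_score + (1.0 - camera_weight) * radar_score

def should_brake(camera_score: float, radar_score: float) -> bool:
    return fused_confidence(camera_score, radar_score) >= BRAKE_THRESHOLD

# Radar is sure there's an obstacle; the camera is not. With the camera
# weighted heavily, the fused score never clears the braking threshold:
print(should_brake(camera_score=0.2, radar_score=0.9))                   # False
# Weighting the radar more heavily instead would flip the decision:
print(fused_confidence(0.2, 0.9, camera_weight=0.3) >= BRAKE_THRESHOLD)  # True
```

The design choice is the whole story: trust the radar and the car phantom-brakes under overpasses; trust the cameras and it can sail into a barrier it never recognized.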

The crux of self-driving at the moment is precisely figuring out when to slam on the brakes and when not to. Brake too often, and the passengers are annoyed or the car gets rear-ended. Brake too infrequently, and the consequences can be worse. Indeed, this is the central problem of autonomous vehicle safety, and neither Tesla nor Uber has it figured out yet.
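
A toy simulation shows why there’s no comfortable setting for that knob. Assuming a made-up noisy detector (the 5% obstacle rate and the Gaussian noise below are arbitrary choices, not anyone’s real numbers), sweeping the braking threshold just trades one failure mode for the other:

```python
# Toy simulation of the brake-threshold tradeoff: a noisy detector
# scores each scene, and the threshold trades needless braking
# (false positives) against missed obstacles (false negatives).
# All distributions are arbitrary, chosen only for illustration.
import random

random.seed(42)

def noisy_score(obstacle_present: bool) -> float:
    # Clear scenes sometimes score high (clutter, shadows, overpasses);
    # real obstacles sometimes score low (odd shapes, bad lighting).
    mean = 0.7 if obstacle_present else 0.3
    return min(1.0, max(0.0, random.gauss(mean, 0.15)))

# 10,000 scenes, 5% of which contain a real obstacle
scenes = [random.random() < 0.05 for _ in range(10_000)]
scored = [(present, noisy_score(present)) for present in scenes]

for threshold in (0.4, 0.5, 0.6, 0.7):
    false_pos = sum(1 for present, s in scored if not present and s >= threshold)
    false_neg = sum(1 for present, s in scored if present and s < threshold)
    print(f"threshold {threshold:.1f}: {false_pos:5d} needless brakes, "
          f"{false_neg:3d} missed obstacles")
```

Raise the threshold and the needless brakes vanish while the missed obstacles climb; lower it and the reverse happens. The detector’s noise, not the threshold, is what ultimately has to improve.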

Continue reading “Fatalities Vs False Positives: The Lessons From The Tesla And Uber Crashes”

The Ethics Of Self-Driving Cars Making Deadly Decisions

Self-driving cars are starting to pop up everywhere as companies slowly begin to test and improve them for the commercial market. Heck, Google’s self-driving car actually has its very own driver’s license in Nevada! There have been few accidents, and most of the time, they say, it wasn’t the autonomous car’s fault. But when autonomous cars are widespread, there will still be accidents; that’s inevitable. And what will happen when your car has to decide whether to save you or a crowd of people? Ever think about that before?

It’s an extremely valid concern, and it raises a huge ethical issue. In the rare circumstance that the car has to choose the “best” outcome, what will determine that? Reducing the loss of life? Even if it means crashing into a wall and mortally injuring you, the driver? Maybe car manufacturers will finally have to make ejection seats a standard feature!
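
Phrased as engineering, “reduce the loss of life” becomes an objective function, and somebody has to pick its weights. Here’s a deliberately crude sketch, with hypothetical maneuvers and casualty estimates; no manufacturer has published a policy like this, and that’s rather the point:

```python
# Deliberately crude sketch of "minimize expected casualties" as a
# maneuver-selection policy. Maneuvers and casualty estimates are
# hypothetical; the uncomfortable part is that someone must choose
# how much the car's own occupants count in the sum.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    bystander_casualties: float  # expected value, 0.0 and up
    occupant_casualties: float

def expected_harm(m: Maneuver, occupant_weight: float = 1.0) -> float:
    # occupant_weight < 1.0 would mean the car values its own
    # passengers less than bystanders. Who gets to set that number?
    return m.bystander_casualties + occupant_weight * m.occupant_casualties

options = [
    Maneuver("continue straight", bystander_casualties=2.0,
             occupant_casualties=0.1),
    Maneuver("swerve into wall", bystander_casualties=0.0,
             occupant_casualties=0.9),
]

choice = min(options, key=expected_harm)
print(choice.name)  # "swerve into wall": the car sacrifices its driver
```

Change `occupant_weight` and the car’s loyalty changes with it. That a one-line constant encodes the whole moral dilemma is exactly why this is an ethics problem and not just a software one.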

Continue reading “The Ethics Of Self-Driving Cars Making Deadly Decisions”