Self-Driving Cars Are Not (Yet) Safe

Three things have happened in the last month that have made me think about the safety of self-driving cars a lot more. The US Department of Transportation (DOT) has issued its guidance on the safety of semi-autonomous and autonomous cars. At the same time, [Geohot]’s hacker self-driving car company bailed out of the business, citing regulatory hassles. And finally, Tesla’s Autopilot has killed its second passenger, this time in China.

At a time when [Elon Musk], [President Obama], and Google are all touting self-driving cars as the solution to human error behind the wheel, it’s more than a little bold to argue the opposite case in public, but the numbers just don’t add up. Self-driving cars are probably not yet as safe as a good sober driver, but there simply isn’t enough data to say so with much confidence. One certainly cannot say, however, that they’re demonstrably safer.

Myth: Self-Driving Cars Are Safer

First, let’s get this out of the way: Tesla’s Autopilot is not meant to be a self-driving technology. It’s a “driver assist” function only, and the driver is intended to be in control of the car at all times, white-knuckling the steering wheel and continually second-guessing the machine despite its apparently flawless driving ability.

And that’s where it goes wrong. The human brain is pretty quick to draw conclusions, and very bad at estimating low-probability events. If you drove on Autopilot five hundred times over a year, and nothing bad happened, you’d be excused for thinking the system was safe. You’d get complacent and take your hands off the wheel to text someone. You’d post videos on YouTube.

Bad Statistics

Human instincts turn out to be very bad at statistics, especially the statistics of infrequent events. Fatal car crashes are remarkably infrequent occurrences, per-person or per-mile, and that’s a good thing. But in a world with seven billion people, infrequent events happen all the time. You can’t trust your instincts, so let’s do some math.

Tesla’s Autopilot, according to this Wired article, racked up 140 million miles this August. That seems like a lot. Indeed, when grilled about the fatality in June, [Musk] replied that the average US driver gets 95 million miles per fatality, and since the Tesla Autopilot had driven over 130 million miles at that time, it’s “morally reprehensible” to keep Autopilot off the streets. Let’s see.

First of all, drawing statistical conclusions from one event is a fool’s game. It just can’t be done. With one data point, you can estimate an average, but you can’t estimate a standard deviation — the measure of certainty in that average. So if you asked me a month ago how many miles, on average, a Tesla drives without killing someone, I’d say 130 million. If you then asked me how confident I was, I’d say “Not at all — could be zero miles, could be infinity — but that’s my best guess”.

But let’s take the numbers at one death per 130 million miles as of August. The US average is 1.08 fatalities per 100 million miles, or 92.6 million miles per fatality. 95% of these deaths are attributed to driver error, and only 5% to vehicle failure. So far, this seems to be making [Elon]’s point: self-driving could be safer. But with only one data point, it’s impossible to have any confidence in this conclusion.
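
To put a number on how little one data point pins down, here’s a minimal sketch in Python, assuming fatal crashes follow a Poisson process (my modeling assumption, not anything from the official statistics), that computes an exact 95% confidence interval from one fatality in 130 million miles:

from scipy.stats import chi2

miles = 130e6   # Autopilot miles as of the June fatality (figure from above)
deaths = 1      # observed fatalities

# Exact (Garwood) 95% confidence interval on the expected number of deaths
lo = 0.5 * chi2.ppf(0.025, 2 * deaths)
hi = 0.5 * chi2.ppf(0.975, 2 * (deaths + 1))

print(f"Expected deaths in those miles: {lo:.3f} to {hi:.2f}")
print(f"Miles per fatality: {miles / hi / 1e6:.0f}M to {miles / lo / 1e6:.0f}M")

That works out to somewhere between roughly 23 million and 5 billion miles per fatality, a spread of more than two hundred to one, which is the quantitative version of “could be zero miles, could be infinity”.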

Human Variation

But humans aren’t just humans, either. The variation across the US states is dramatic. In Massachusetts, the average is 175 million miles per fatality, while in South Carolina they only get 61 million. Cultural differences in the acceptability of drinking, the percentage of highway driving, and distance to hospitals all play a role here.

In Tesla’s home state of California, which is slightly better than average, they get 109 million miles per fatality. Again, this is comparable with Tesla’s performance. But I’d rather it drove as safely as a Bostonian.

Selection Bias

There’s good reason to believe that the drivers of Tesla’s Autopilot are picking opportune times to hand over control: straight highway with non-challenging visibility, or maybe slow stop-and-go traffic. I don’t think that people are going hands-off, fast, in the pouring rain. Yet weather is a factor in something like 20% of fatalities. To be totally fair, then, you’d have to adjust the human benchmark upward to account for the fact that people are probably (smartly) only going hands-off with the Autopilot when it’s nice out, or under other unchallenging conditions.
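
As a back-of-the-envelope illustration of that adjustment (the 20% weather figure is the one above; the assumption that nearly all miles are driven in decent conditions is mine), the fair-weather human benchmark looks something like this:

human_miles_per_fatality = 92.6e6   # US average, from above
weather_share = 0.20                # rough share of fatalities involving bad weather

# If Autopilot never drives in bad weather, compare it against humans with the
# weather-involved deaths taken out, crudely keeping the total mileage the same.
fair_weather_benchmark = human_miles_per_fatality / (1 - weather_share)
print(f"{fair_weather_benchmark / 1e6:.0f} million miles per fatality")   # ~116M

In other words, the bar Autopilot should be clearing on its hand-picked easy miles is closer to 116 million miles per fatality than 93 million.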

Human Override

And if people are obeying the Tesla terms of service and only using the Autopilot under strict human supervision, the single fatality would certainly be an underestimate of what would happen if everyone were driving hands-free. This video demonstrates the Tesla freaking out when it can’t see the lane markings. Everything turns out fine because the human is in control, but imagine he weren’t. Relative to what you’d see if Autopilot were running in fully self-driving mode, the observed accidents and fatalities are certainly (thank goodness) too low.

But given this selection bias — the Autopilot only gets to drive the good roads, and with some human supervision — the claim that it’s superior or even equal to human drivers is significantly weakened. We don’t know how often people override the Autopilot to save their lives, but if people prevent the car from doing crazy things half of the time, then Tesla’s track record is twice as bad as it looks.
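
Here is a toy version of that argument; the “save rates” are pure assumptions for illustration, since nobody outside Tesla knows the real number:

observed_fatalities = 1
autopilot_miles = 130e6

# If supervising drivers catch a fraction save_rate of would-be fatal mistakes,
# the unsupervised rate would be observed / (1 - save_rate).
for save_rate in (0.0, 0.5, 0.9):
    unsupervised = observed_fatalities / (1 - save_rate)
    print(f"save rate {save_rate:.0%}: "
          f"{autopilot_miles / unsupervised / 1e6:.0f}M miles per fatality")

At a 50% save rate you get the 65 million miles per fatality implied above; at 90% you’re down to 13 million, worse than any of the state averages quoted here.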

Sadly, Two Data Points

All of this is arguing around the margins. We’re still pretending that it’s August 2016, before the second fatal autopilot accident in China. The cruel nature of small numbers means that with two accidents under its belt, and 150 million miles or so, the Tesla Autopilot appears to drive like the worst of sober humans.

But as I said above, these estimates for the Tesla are terribly imprecise. With only 150 million miles driven, you’d expect to see only one fatality if it drove on par with humans, but you wouldn’t be surprised to see two. With small numbers like this, it’s impossible to draw any firm conclusions. However, you certainly cannot say that it’s “morally reprehensible” to keep Autopilot’s metaphorical hands off the wheel. If anything, you’d say the opposite.
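
For the curious, here’s the arithmetic behind “you wouldn’t be surprised to see two”, again under my assumption that fatal crashes are Poisson-distributed:

from math import exp

miles = 150e6
human_rate = 1 / 92.6e6      # fatalities per mile, US average from above
lam = miles * human_rate     # expected fatalities for a human-grade driver: ~1.6

p0 = exp(-lam)               # probability of zero deaths
p1 = lam * exp(-lam)         # probability of exactly one death
print(f"Expected deaths: {lam:.2f}")
print(f"P(two or more): {1 - p0 - p1:.2f}")   # ~0.48

Two fatalities in 150 million miles is roughly a coin flip even for a driver exactly as good as the US average, so two data points can neither condemn Autopilot nor vindicate it.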

More Data

What we need is more data, but we won’t ever get it. Any responsible autonomous vehicle company will improve the software whenever possible. Indeed, just before the Chinese accident, Tesla announced a software update that might have prevented the US fatality. But every time the software is redone, the earlier data we have about the vehicle’s safety becomes moot. The only hope of getting enough data to say anything definite about the safety of these things is to leave the design alone, but that’s not ethical if it has known flaws.

So assessing self-driving cars by their mileage is a statistical experiment that will never have a negative conclusion. After every hundred million miles of safe driving, the proponents of AI will declare victory, and after every fatal accident, they’ll push another firmware upgrade and call for a fresh start. If self-driving cars are indeed unsafe, irrespective of firmware updates, what will it take to convince us of this fact?

Lost in the Fog

Until there’s a car with two or three billion miles on its autopilot system, there’s no way to draw any meaningful statistical comparison between human drivers and autonomous mode. And that’s assuming that people are violating the Tesla terms of service and driving without intervention; if you allow for people saving their own lives, the self-driving feature is certainly much less safe than it appears from our numbers. And that’s not too good. No wonder Tesla wants you to keep your hands on the wheel.

Even if self-driving technology were comparable to human drivers, that would mean twenty or thirty more deaths before we even know it with any reasonable certainty. Is it ethical to carry out this statistical experiment on normal drivers? What would we say about it if we knew that self-driving were unsafe? Or is this just the price to pay to get to an inevitable state where self-driving cars do drive provably better than humans? I honestly don’t know.
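
Those “twenty or thirty more deaths” and “two or three billion miles” figures are consistent with basic small-number statistics. A quick sketch, using the standard rule of thumb that the relative error on a rate estimated from k rare events is about 1/sqrt(k):

from math import ceil

human_miles_per_fatality = 92.6e6
target_relative_error = 0.20   # pin the fatality rate down to roughly +/-20%

# Rule of thumb: you need k ~ 1/error^2 observed fatalities.
k = ceil(1 / target_relative_error**2)
miles_needed = k * human_miles_per_fatality
print(f"~{k} fatalities, ~{miles_needed / 1e9:.1f} billion miles at human-level safety")

That’s about 25 deaths and 2.3 billion miles just to measure the rate to within 20%, which is why this comparison won’t be settled any time soon.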

Automotive safety wasn’t invented yesterday. There are protocols and standards for every fastener on the car’s frame, and every line of firmware in its (distributed) brain. These are based on meeting established reliability and safety measures up-front rather than once they’re on the road. Whether current practices are even applicable to self-driving cars is an interesting question, and one that the industry hasn’t tackled yet.

So in the meantime, we’ve got a muddle. Tesla’s Autopilot is good enough to seduce you into trusting it, but it’s still essentially statistically untested, and at first glance it drives much worse than I’d like. On one hand, Tesla is doing the best they can — collecting real-world data on how the system responds while warning drivers that it’s still in beta. On the other hand, if people were dying behind the wheel due to a “beta” brake-disc design, there would be a recall.

Just to be clear, I have no grudge against Tesla. [Elon Musk] said his tech was safer than human drivers, and the statistics in no way bear this out. Tesla is way out ahead of the other auto makers at the moment, and they’re making bold claims that I totally hope will someday be (proven to be) true. Time, and miles on the odometer, will tell.

123 thoughts on “Self-Driving Cars Are Not (Yet) Safe”

  1. Realistically, the only time ‘self-driving cars’ will be safe is if/when they are the only cars on the road, and even then, there is some room for error. If certain roadways and expressways had isolated lanes for ‘self-driving vehicles’ that are NOT ACCESSIBLE to manually driven vehicles, the results would likely be astounding. Mixing ‘man-driven’ vehicles with ‘self-driving’ vehicles will not be good.

    A train is in the category of ‘self-driving’ and even they have accidents. They also have humans setting controls and monitoring other factors which affect safety of the vehicle and passengers. Plus, their route is pretty well defined, with few chances of ‘side collisions’ as occur on our highways.

    I’m 77, and I can wait for the technology to catch up to human frailties. Once the human gets hands on the controls, all of the rules change!


  2. We already have “self-driving cars” over here in Europe. They are called “public transportation” and they use a professional driver in place of the controlling computer. They work very well, you should try them some time over there on the other side of the pond.

      1. You missed the sleeping train driver we had not too long ago.

        Then in Coventry (where I am) a bus went into a shop (in fact, on this road it’s happened something like 3 times, not really sure why) – sadly this one appears to be due to the driver having a heart attack.

        Still, if we had reliable, on-time, non-striking robo drivers for public transport, it may be a way to make things work (buses generally have set routes, dedicated lanes, etc). What they don’t normally have is seat belts though. I can’t decide if I’d trust one personally.

    1. Dear Sheep:
      We like our freedom too much. We can come and go where we please WHEN we please. We are not tied to a bus schedule and don’t have to put up with other irritating passengers.

      1. Right on! We only have to put up with other irritating drivers who pull out in front of us, tailgate, run red lights, run stop signs and pass just 12 inches away from the bumper and generally behave like asses. Freedom!

        1. Then there are speeding tickets, seatbelt tickets, non-moving violations, parking tickets, red light cameras, speed cameras, gas bills, maintenance bills, car payments, car insurance, tolls, parking passes, parking garages, etc. Financial freedom!

        1. The irony is that extensive public transportation does not behave any more efficiently.

          Either the service is poor – too few lines and stops, too far away from the people – or an unnecessary number of buses and taxis criss-cross the city empty to reach customers and destinations. Even schemes that use self-driving cars in this manner are inefficient, because the fewer cars you have, the more miles they need to drive per passenger mile, and since everyone’s going to work at 7 am anyway, you need nearly as many cars as there are people anyhow.

          Public transportation serves well on fixed routes between population concentrations where they won’t need to take you from door to door. Expecting society to re-structure around the weaknesses of public transportation simply causes property prices to rise around the transportation hubs, poverty to set in outside of the hubs, and living and doing business to become more expensive everywhere else.

    1. The google cars are restricted to a limited number of routes that are already mapped and scanned. They aren’t even trying to do what Tesla is doing – they get their bearings by comparing the surroundings to the 3D scanned map instead of following lane markers or other cars, which is why they can’t travel anywhere they haven’t been driven before.

      And they also can’t deal with it if the landscape changes too much, which is why they’re only driving them in places that don’t get snow.

      1. Elon Musk’s whole business model is a hype mill to gain more investment capital. If they don’t announce new groundbreaking stuff at a steady rate – working/feasible or not – the investments stop and the whole bubble bursts.

        Hence why the hyperloop, the powerwall, the solar roof tiles, the electric pickup truck… etc. etc. products that aren’t actually very viable but they sound great in the business portfolio you sell to new investors. Autopilot was one of them – Musk saw that they had a basic system in place, so he told everyone “hey our cars are self-driving, gibe moni plox”.

  3. Unfortunately, with the accident in China the family has refused to supply any accident data, so there isn’t even a way of knowing if autopilot was enabled at the time of the accident.

    1. Surely that indicates that it wasn’t?

      I’ve not lost someone in this way, but it feels dodgy right off the bat. If the autopilot was to blame, surely you’d want that to be known. If it was someone driving like an idiot, maybe not?

      Or has someone paid them to not share the data as it’s incriminating in a way that wouldn’t look good, statistically speaking?

    2. I mention this below, but the family in China is probably going to bring a wrongful death suit against Tesla, and the remains of the car are going to be Exhibit A. Nobody in their right mind lets the opposition have a preview of their best evidence, let alone potentially tamper with it.

      That they are not turning the car over to Tesla doesn’t add anything. It’s the expected behavior.

  4. Another problem with self-driving cars is that they exacerbate the bad-maintenance problem: having more and more points of failure could cause a malfunction and therefore a possible accident. A dead battery when overtaking is a failure mode that is dangerous now, but with a self-driving system one can’t even try to coast to safety…

    1. Why would you be on the highway with a dead battery, least of all trying to overtake someone? Also, the wheels won’t suddenly lock up when the battery dies; if anything, coasting will begin to charge it back up.

      1. A failure mode of car batteries is that an element breaks and shorts or remains open, and a bump on the road can jolt the plates hard enough to break them. Happened to me once; luckily I was in a parking lot and the cause was a “silent policeman”. I know some people that it happened to on a motorway, and they had to coast and call roadside assistance.

        1. Then have two; it is a trivial issue, and with electric cars on the horizon a bit of a moot point. The batteries in most of those are highly parallel; you would have a higher chance of getting hit by lightning than of every single cell giving up the ghost at once.

          1. Yeah, however, when a Tesla battery breaks a tab weld from vibration, the whole car catches fire because the breaking tab arcs over and overheats the adjacent cell. The particular chemistry used in the cells has a property of emitting oxygen when it goes under thermal runaway, so it self-ignites the electrolyte.

            That has already happened in France with the Tesla that caught flame on a test drive.

      2. If you have a low battery you can jump or snatch start a car just to get it home, and use your indicators/lights sparingly to preserve what’s left. I can do this with my diesel Land Rover in relative safety, as you can take the battery off it completely and it will continue to run until you turn it off, but with newer fly-by-wire throttle vehicles like my diesel Sprinter van, the ECM shuts down when battery voltage is depleted, and you get the fun situation where using the brakes cuts the engine, because the brake lamps steal the last of the system power…
        Interesting, because I’ve had 2 alternator failures on the Sprinter (at $700 apiece…) which produced this effect. Fortunately I’m aware of the vehicle systems starting to suffer the ill effects before it dies, because it does things like the windscreen wipers going slower and the indicators flashing erratically. It’s an edge case, but it happens out there on the road every day to someone, and if your vehicle doesn’t have a battery sensor and a watchdog monitoring it that alerts all the other systems and the driver well in advance of it cutting the vehicle, it’s still possible that you will be sat there with a dead engine and no power steering, though hopefully with braking still, as there are some laws that mandate that brakes be mechanical and distinct from the other vehicle systems for this very case…

      3. Happened to me. Doing 40mph in the right lane (UK, so that’s the overtaking lane) on a ring road. The alternator had packed up, so the battery had stopped charging. Another fault masked that the battery was running flat. Coasted to a stop in the left lane under a bypass (so no chance to push the car, too steep), right between 2 exits. Thankfully, dead opposite the police station, so they spotted us and towed us to safety within minutes. (UK, so the police helped us, not shot us.)

    1. We are comparing the self-driving car against human drivers, not against perfection.
      Since humans are not 100% error free, I’m unsure why anyone would mention comparing to that value.

      Unfortunately, while human driver accident rates are generally public information, total human driver trips always seem to be estimated guesses, so it is difficult to get an exact percentage value on human driver errors resulting in accidents.

      That makes a percentage based comparison difficult if not impossible.
      Which is one of the problems. Most people try to compare the accident rates, but since so few self driving cars are on the roads compared to human drivers that comparison isn’t all that useful.

      But for example (and I am pulling these numbers from my nether) if humans have an accident rate of 5%, then self driving cars only need to be better than that to justify claiming they are safer. They don’t need to be 0%, just 4.9% or lower.

      1. You forget that the human accident rate includes self-inflicted accidents and bad drivers, who are responsible for most accidents by getting into them more frequently. One guy may drive all his life without an incident, while another one has wrecked ten cars. Accidents aren’t evenly distributed in the population.

        As a result, aiming for the average is still worse than most drivers.

        1. Sure, aiming for the average might give that result, or it could do the exact opposite.

          If the average here is the bad drivers and the good human drivers are the exception, then the data would skew just as much in the opposite direction.
          Considering how humans in most cases rate themselves above average, even though that is mathematically impossible, I see this as much more likely.

          1. ” or it could do the exact opposite.”

            No. It really wouldn’t.

            Driver skill falls on a normal (Gaussian) distribution with most people in the middle. Going above a certain threshold in skill won’t improve your safety record, because you can’t perform any better than not having any accidents. Going below a certain threshold will start to increase your accident rate dramatically.

            In other words, the best drivers are not much better than the average drivers when it comes to accident rates, while the worst drivers are much much worse than the average, and therefore the minority of bad drivers are driving the statistics. While the distribution of driver skill is symmetrical, the effect it has on the roads is not.

            So, if you step into a self-driving vehicle that drives like the “average driver”, the actual average driver is at greater risk than if they drove themselves.

      2. You can’t really ever make such a simple safety comparison until self-driving cars are able to tackle ANY road, anywhere, in all sorts of conditions. I know a few roads that would confound today’s self-driving cars…

      3. Stats that didn’t make the article:

        1.5% of US annual deaths are traffic accidents. Around 1/3 of them are drunk drivers. When you’re at the legal limit, you’re something like 380 times more likely to get in an accident.

        Other factors that really matter in traffic fatalities are urban/rural location (like, factor of two). If you crash far away from a hospital, things look worse.

        But your point is right — self-drivers just have to improve on humans. We’re pretty surprisingly good, actually. 100 million miles per fatality is nothing to scoff at! Go meatsacks!

      4. Computers are very bad at dealing with unexpected things. Unfortunately, you only have to meet an unexpected thing every 10 years or so and a self-driving car has killed you.
        Humans are very good at dealing with unexpected things.
        They’ve also got a much higher threshold of ‘unexpected’ – things like sun glare on the screen, a car pulling out in front of you, or a road sweeper half over the lane aren’t even things we mention in conversation. But they flummox self-driving cars.
        We often make intelligent decisions when presented with roadworks etc. where there’s no clear ‘right’ path and signage is insufficient. We communicate with other drivers by lights, waving, small movements to say where we’re aiming for and who should move where to get round obstacles or resolve gridlock. Don’t see computers anywhere near that.

  5. First of all, this is a nascent technology and broad predictions on just about any factor concerning it are highly premature. However, having said that, the failure rate of autonomous and semi-autonomous systems operating vehicles in other domains such as rail and aviation strongly suggests that when mature, major accidents causing fatalities will be lower. The real issue is that for a very large segment of the population self-driving vehicles are seen as threats to their employment at one end of the scale, or their manhood at the other, and resistance to the broad adoption of this technology will be strong. As usual in these cases, the truth will take a back seat to opinion and appeals to emotion, and again as usual, statistics will be desperately misinterpreted to support whatever stand is being taken.

    The technology in this matter is secondary – the real battles over this are going to be in the social and political domains where truth has little sway.

    1. “in other domains”

      A ship doesn’t sail without a captain even though it may navigate autonomously by GPS, and on the high sea or 50,000 feet up in the sky a 30 meter statistical uncertainty in the GPS doesn’t matter. The margins of error in all the other domains are vast compared to automobiles, which would drive straight off a cliff if they were operating at the same level of “maturity”.

        1. First, A.L.S. requires far finer tolerances than what you are implying and at far higher speeds; in the case of aviation the margins for error are at least comparable. I did not mention marine transport at all, but high-speed rail and many light rail systems work automatically, again with higher speeds, greater mass, and higher risks given the number of passengers. While comparing modes can only be stretched so far, the fact is that lives have been in the hands of automated controllers for some time, and this will become both possible and practical for road vehicles sooner rather than later. Like I wrote above: widespread adoption of this will have very little to do with limits of the tech and everything to do with public attitudes, and those that are opposed will mount campaigns based on mendacity, as they have in every other instance where someone wants to block a technology.

        1. I disagree. Airplanes don’t fly as close as cars drive, their flight corridors are miles and miles wide, and a slight steering error simply won’t crash one in an invisible ditch, and trains are already on tracks without steering and can’t go anywhere else.

          It’s simply a whole different game, because when something goes wrong in an airplane the pilots usually have minutes to react and figure things out. Often the problem develops over hours. When something goes wrong in a train, the problem is usually long coming and you’re just a dead man walking till shit hits the fan.

          In a car, there’s a whole heap of extra variables – like the other article points out the google cars sometimes brake when they see a shadow, because they mistake it for a pothole. Airplane autopilots aren’t scanning for potholes in the sky and trying to steer around them.

          1. Automated landing systems for aircraft do not have tolerances “miles and miles wide” and very small errors can cause a controlled flight into terrain. Not only are these systems widespread, their use is mandatory under certain conditions. Nor do loss of separation incidents during a flight “develop over hours” but indeed over intervals and at speeds that make a human’s reaction time utterly inadequate. No one (certainly not me) is suggesting that self-driving cars have reached the stage of development where they are ready to be deployed in large numbers without humans ready to take the wheel. However one can extrapolate from where they are now, the degree they have developed since inception, AND the history of automated control systems in other modes of transportation, particularly aviation, and come to the conclusion that there will be no technical reason autonomous road vehicles will not be ready for common adoption in the very near future. The question of if they should will be a sociopolitical one only.

          2. “the pilots usually have minutes to react”

            I would like to see some data on that; if rapid decompression occurs and they don’t get their masks on, they aren’t even conscious for half a minute.

            Couple that with a plane being a much more advanced machine, with the troubleshooting issues that brings with it.

          3. “Automated landing systems for aircraft do not have tolerances “miles and miles wide””

            Hence why they have special transponders to locate the craft relative to the field. Something which self-driving cars are struggling with.

            “if rapid decompression occurs”

            That’s like if a wheel suddenly snapped off a car. There are some freak accidents where neither computer nor man can react adequately.

          4. “Hence why they have special transponders to locate the craft relative to the field. Something which self-driving cars are struggling with.”

            Your point?

            I am not arguing that self-driving cars are currently a mature technology only that automated landing systems are, and operate well within comparable tolerances.

          5. “google cars sometimes brake when they see a shadow, because they mistake it for a pothole”

            Yeah, but sometimes I do that too. Depends on the road.

            I’ll do it more on the sealed rural roads where the rutting is so deep it casts a shadow across the other half of the (single-lane, two-direction) road.

      2. Also, planes have 3 dimensions to move in, so conflicts should be rare and easy to resolve. But if autopilot was easy, there’d be no hooha about drones; planes would just fly round them.

        1. Conflicts are rare because airspaces are tightly controlled, but even then there are the automated Traffic Collision Avoidance System (TCAS), the Airborne Collision Avoidance System (ACAS) and the Airborne Separation Assurance System (ASAS). Plus several others like the Obstacle Collision Avoidance System (OCAS) and the Ground collision warning system (GCWS) that work with the on-board flight director system to keep incidents to a minimum. In heavy traffic areas, and at the speeds aircraft are moving, autopilot/ILS is neither simple or trivial.

  6. Human beings are not intrinsically good at driving cars. Aside from training, roads need stripes, cat’s eyes, cars need lights, bends need warning signs, crash barriers, bicycles need lights, reflectors. If you are on the road you are telling everyone else where you are and what you are doing visually, the road is telling you where everything is and what the dangers are. But it’s all human adapted.

    When cars and bikes are required to carry transponders telling everyone local where they are, what they are doing – and by extension the path they are keeping and where they will be in 1, 2, 3, seconds all operating on roads that are marked autopilot safe – the barrier to the technology will drop and the safety will improve hugely.

      1. Most of the technology needed, I think we already have. Take any car with an engine management unit and a sat nav, add a suitable radio, and for a manually driven car that’s all you’d need to interact. A self-driving car would still need camera technology to avoid accidents, but its usage is now limited to detecting things that aren’t supposed to be there. Sat navs now use proprietary maps, but here at least the local area authority maintains accurate online maps of each region that even name/number every house. They are not as friendly as streetmap, but they blow it out of the water if you are trying to find a house. The council has internal maps and systems to keep track of roadworks. If that was all opened up it wouldn’t need any more effort to maintain; it’s just a question of who gets to access it.

        I see this step as a legislative, social and logistical issue, not technological. I don’t relish the public argument “When I hit the brake you want my car to tell everyone wirelessly?” – “It already does this, you have a brake light, but we want to add another frequency to the EM” “And when I hit the accelerator, that too” “Your car emits a puff of smoke and lurches forward, so we can guess, but everything you tell the car to do we want your car to explicitly tell all the cars around you”. It’s the logical extension to what we have now and it’s computationally friendly.

    1. Yes, but

      a) humans work adequately well without all those things. Cats eyes, buzz stripes, reflectors etc. are attempts at shaving the last 0.000…1 cases off the fatality record.

      b) you don’t want all cars to have tracking transponders because of human reasons. Namely, the human in charge of the system reasons.

        1. That’s only because there are so many different ways to die by accident that none of them are individually very large. The probability of death by car accident over a lifetime is roughly 1 in 50.

          If there weren’t all those extra safety markers and signage and people were left to self organize, what do you think the probability would become? 1 in 49?

        2. Eating too much fatty food is the biggest one in the west, actually. It’s just that people think the benefits outweigh the costs.
          And for the average American, they certainly do outweigh… ;P

    2. And besides bicycles, will you install these transponders on every human? What about dogs and cats? Or wild animals (which account for around TWO MILLION crashes annually in North America alone, with well over a hundred fatalities per year,) are you planning to round up every deer and moose in the wild and install a transponder? And then do that again 2-3 times per year to tag the ones born in the interval?!!

  7. Reasonable reply. I am sure that autopilot can drive better and with quicker reaction times than humans. A lot of accidents are the result of unexpected events with too little time to react correctly.
    Secondly, the human impulse to respond emotionally to frustrating drivers and events is often the root cause of most accidents other than weather-related ones.

    1. Worse! I’ve _crossed streets as a pedestrian_ in Boston!

      Big city driving is significantly less fatal because of the low speeds. This is why Uber’s self-drivers in Philly make sense. Who knows how many fender-benders they’re going to have to endure? But as long as it’s material damage only, it’s in their R&D budget.

      When a robocar runs over its first pedestrian…

      BTW: Breaking news. In Germany, carmakers were just ruled to be responsible for accidents caused by self-drivers. Don’t look to Mercedes or BMW to be taking any risks on this tech.

  8. “Until there’s a car with a two or three billion miles on its autopilot system, there’s no way to draw any meaningful statistical comparison between human drivers and autonomous mode.”

    And that needs to be with a fixed software build. As soon as you make any changes, you reset your mileage count back to zero (unless you can prove that none of the changes can possibly have made things worse – but how are you going to do that, in general?)

    Personally I’m convinced that self-driving cars can be way safer than humans, but I can only see that happening once the software gets standardised and locked down, with all manufacturers running the same code. Unless that happens, code will be constantly changed, and new bugs will constantly be introduced. Yes, this means that development will slow to a crawl, but rapid development always means bugs.

    1. Tesla actually has a semi-clever answer to that. They’re running the Autopilot in all cars at all times, but not letting it drive. They’re simply comparing the driver’s behaviour with the computer’s and looking at where they differ, to find out where the computer would have made an error.

      That way they get to rack up many many more miles virtually.
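
      In rough sketch form, that “shadow mode” comparison amounts to something like the following (the field names and thresholds are made up for illustration, not anything Tesla has published):

from dataclasses import dataclass

@dataclass
class Frame:
    human_steering: float    # degrees: what the driver actually did
    model_steering: float    # degrees: what the software would have done
    speed_mph: float

def find_disagreements(log, threshold_deg=10.0, min_speed=20.0):
    """Flag frames where the software and the human diverge sharply at speed."""
    return [i for i, f in enumerate(log)
            if f.speed_mph > min_speed
            and abs(f.model_steering - f.human_steering) > threshold_deg]

# One benign frame, one where the software would have swerved hard.
log = [Frame(0.5, 1.0, 65.0), Frame(0.0, 25.0, 60.0)]
print(find_disagreements(log))    # -> [1], a case worth engineering review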

      I suspect it will reveal that the system is fundamentally inadequate and they’ll shitcan the whole show quietly.

          1. In large masses, yes.

            Think about it. The group of humans will behave in predictable ways, whereas the group of computers all running on the same software will not, because the same bug causes all of them to behave erratically.

            If one person on the road drives a bit wonky, the traffic continues to flow. If all of them suddenly start acting crazy, you get a big traffic jam through the whole highway.

          2. It isn’t big groups actually crashing though, but individuals.
            It only takes a single person to actually cause a very large accident.

            And to be fair, in large groups most social sciences sort out outliers, so you don’t necessarily know how reliable big, truly representative groups are.

            You are right that humans are more adaptable, but that can be a source of problems as well; the way they adapt to those small errors will be different.
            I have seen enough issues on roads around the world to know that there is a very real proportion of drivers that are, as you say, acting crazy.

          3. “there are a very real proportion of drivers that are as you say acting crazy,”

            What I’m talking about is a situation where a single bug affects all cars. For example, imagine there’s a fog and the low hanging sun causes a particular kind of halo, which confuses the car software and causes it to veer hard left. Suddenly -ALL- the cars on the road take a dip in the ditch, because they’re all running on the same algorithm and make the same error.

            Or a magnetic storm which causes all the cars to drop GPS lock at the same time. Suddenly they don’t know where they are anymore, which map to use to go around. Whoops.

  9. Given an adequate number of data points for every mile driven thus far, some knowledge about the 2 existing crashes, other “near miss” scenarios, and general expected scenarios, it should be completely possible to use a bit of data science/machine learning/predictive analytics to improve on this two-data-point situation considerably, to the point of being able to tell the real story and fully understand whether self-driving vehicles are or are not in fact safer.

  10. Good article. Nice to see someone with a basic grasp of statistics discussing these things.

    Personally I’m willing to accept companies like Tesla doing their development work on public roads because the value to society of functioning self-driving cars will be huge, if they pull it off. If I was going to make an argument about it being morally reprehensible to keep self-driving cars off the road, I’d make it on that basis rather than their current state. Of course that’s a lot easier for me to say when I’m not going to be attacked by a pack of lawyers for my car design.

    It’s also worth noting that we take a similar approach to the development of new medicines. Yes, we do clinical trials before release, but we also monitor for problems once the drugs are in production because we can never be sure of safety using only small trial populations.

      1. When you get a prescription from your doctor, do you consider yourself to be a willing subject in a drug test? I doubt most people think of it that way. Drugs that make it to market are presumed safe, but we still monitor them just in case.

      1. Lol, “notarealproblem”, +1
        That is a good point, though I’ve no idea what the world looks like at that end of the spectrum.

        I will be very cautious about this kind of technology, I do hope I’m wrong in my pessimism. A lot of people would/will benefit from assisted driving I just can’t trust it myself till I’ve seen one put through its paces during a thunderstorm or snowstorm. What happens if you are in an area with no mobile data connection, no GPS signal, middle of nowhere?
        Most of us remember Windows’s BSODs.

        I have that fear that when a vehicle’s software has to choose between hitting either a parked car or a person, it will hit the person. :( I wish the best to Musk, Google, Tesla, DARPA, etc…

        I hope I’m proven wrong, but will be a late adopter.

        1. I don’t think anyone finds it unreasonable to be wary; it’s just that a lot of the arguments being presented are against the concept itself, as if it weren’t possible at all, or even worse, as if humans always do better simply because they’re humans.

          We are amazing creatures, but we have our faults. I will bet that, looking throughout industry around the world, more life-critical “decisions” are made by automation than by people, and it doesn’t seem to be a big issue there; my guess is that people just don’t realize it.

        2. For some reason that I can’t figure out, no one ever considers that the car would just tell you to take control if it finds it can’t drive safely because of cameras being obscured or GPS loss, etc.

          1. That’s the difference between Level 4 and Level 5 Autonomy. And the manufacturers are ALREADY working on vehicles that don’t even have a steering wheel or brake pedal, so a human couldn’t take control even if they wanted to.

  11. “In Massachusetts, the average is 175 million miles per fatality, while they only get 61 million miles in South Carolina.”
    Could be the differences in seat belt laws and the average traveling speed. Ever driven through Boston on a Friday?

    1. No one drives through Boston on a Friday, as it will be Saturday by the time you get to the other side.

      As for the rest of Massachusetts, they’re hardly a bunch of slow-pokes. On I-93 on the NH/MA border coming into MA, the speed limit drops to 55 and the traffic goes up to 85.

  12. epautos.com has more coverage but you miss one of the worst problems.

    You don’t react if you are not engaged. Someone driving triple digit speeds and processing everything on a freeway in the sparsely populated west is safer than a driver that might have to engage in a fraction of a second driving 55 on a crowded freeway.

    You can have EITHER human driven cars where they have to pay attention 100% of the time – and that is dropping, since ABS and stability controls mean you don’t have to worry about skidding, you probably don’t know how to drive with a manual transmission – or 0%. Humans are bad at anything in between.

    During the period that Montana had NO speed limits, accidents went down (“reasonable and prudent” was declared by a court to be too vague and subjective) because drivers had to decide and pay attention. What is on the road, the conditions, how well does your car brake or corner? Oh, 100% engagement.

    Now we have driver-assisted everything – until it stops and demands the driver go from 0% to 100% in a few milliseconds.

    1. ABS is really useful in the winter. You won’t fail to notice when it kicks in, because it kicks back at your foot when the wheels start slipping. That in itself is worrying and scary enough that you don’t want to rely on it stopping you.

      The problem is that people who aren’t used to driving in the winter freak out when the rattling starts and lift their foot off the pedal, and then crash or drive someone over.

      1. ABS is rad. I never used to trust it, but I rode a motorcycle for the decade during which ABS got good. It’s great now.

        As part of driver training in Germany, they have you take it up to 40-60 km/h and slam on the brakes. (It’s actually a two-foot maneuver — brake and clutch — just stomp both as hard as you can.) It took me three tries to do something so stupid. I _knew_ the car would skid out.

        I’m certain that the ABS outperformed what I could have done, without a _lot_ of practice. And all I was doing was heavy-footing the pedals.

        ABS is the bomb. Try it in a parking lot. The preparation will make you a safer driver to boot.

  13. We are now conditioned to technology’s headlong rush to monetise anything as quickly as possible. So most of the headlines out there, and the discussion here, follow a ‘person versus machine’ theme about the end play. There is massive benefit available in the grey area in between; let’s call it the “person WITH machine” area. E.g., we might not be ready for completely autonomous, but something that takes over in specific circumstances where there is a clear advantage makes sense.

    But isn’t that Elliot’s point? We need specifics and real data to advance with real benefit, which is not exactly something Elon Musk is known for, although I marginally prefer his headlong rushing to other alternatives…

    Imagine a world where our Facebook autonomous car assists us to a bad decision to smash into a brick wall. A result we are actually happy with because our heads up display was feeding us agreeable, self-confirming pseudo-data throughout the incident.

    1. I relate the self-driving car conundrum to the issues I face every day at work – I work on a range of medical equipment from monitoring systems to life support, and by far the greatest faults lie with the operator/machine interface, whether it’s a power cord ripped out of a socket, fluid spilt on a keypad, or a tube incorrectly attached. Which in my mind gives support to the proposition that self-driving cars are safer. But the machines themselves are not infallible either. Power supplies fail, software crashes, batteries fail, and an automatic analysis of an ECG trace needs to be confirmed by a trained physician. The variables where the real world meets the computer world are too great to trust the computer’s response.

      1. Yet we already do in many domains – and live with the understanding that these systems are not infallible simply because we also know in the long run they are less fallible than humans.

    2. I agree about man+machine. I love ABS, for instance, because it can micro-manage traction in accident situations where I’d be concentrated on not hitting the deer, or whatever.

      The problem with self-drivers is that I don’t see a good path to man+machine there. If the autopilot does well enough, you become complacent and take yourself out of the loop.

      Tesla’s current strategy — making the driver take control every so often for no good reason — would piss me off after a few iterations. You know how you click the “OK” popup boxes almost by instinct? It’ll be like that. Just more cognitive white noise.

      Airplanes’ autopilots are a good example of man+machine as well. Maybe there is a lesson there, but it’s a totally different problem because airplanes are almost never operating in congested spaces, and almost never hit each other, or phone poles or guard rails.

      I just don’t see it yet in cars. Where is the synergy?

      1. A couple of years ago, I came face to face with a car that had gone careening down the wrong side of the road to bypass a traffic jam and turn into the side street I was crossing on foot. In that instant, time slowed down and I was aware of the age of the driver, the make of the car, my best survival option was over the bonnet, and recognising the engine actually accelerating.

        My brain had thrown the IDE out the window, instantly loaded an optimised binary, immediately reconfigured every i/o pin and peripheral, over-clocked etc etc. The synthesis of capabilities that the human brain can put into action is truly incomprehensible. But it’s not what saved me. As the advanced age driver panicked and actually accelerated, the brand new S-Class Benz saw me, ignored the driver’s instructions and, even as the engine raced, deftly took control of the brakes to manage to stop close enough for me to put hands on the bonnet ready for my vault attempt.

        Human capacity for conscious thought sets us apart from other animals. We are ever so proud of it, and it is freakishly incomprehensible in its own right. But it really is a minuscule part of the brain. Why, after countless man-years of conscious thought from brilliant minds, is robotic walking barely comparable with what a baby’s brain achieves unconsciously in about 12 months?

        In my incident, conscious thought played no role in either the driver’s or my own actions. Humans’ pride in our conscious thought misleads us to believe that it plays a much greater role in our behaviour than it actually does. In driving, as in most other things, the conscious choices are only a tiny part of the brain activity needed. The brain has already used previous conscious thought, practice, observations etc. to write subroutines for most of it. Most importantly, the brain handles interrupts, variously referred to as intuition, instincts, trust etc. (Warning: be very careful who you show your interrupt vector table to. Politicians, advertisers, scammers etc. will use it to hack your brain and manipulate your trust).

        One of the awesome outcomes of the tiny piece of brain power allocated to conscious thought is creating tools. A tool can do something a human physically can’t do. How about that ladder thing, eh? Brilliant invention, but it’s not “smart”, it’s not “AI”. Not in any way does it emulate, imitate, replace or otherwise compare to the human brain. And it would be ridiculous and dangerous to think it did. Same goes for the radar and ABS in the Benz, Watson, the Large Hadron Collider, you name it.

        Elon Musk likes messing with our interrupt vectors. He doesn’t like AI, remember, but doesn’t balk at portraying self-driving cars as taking over from humans rather than as tools. See what he did with our trust? Almost always, when one of those trust hackers is doing it, it’s for their own benefit (of course they have a self-justification subroutine in a constant loop that handles any morality stack overflow).

        Elon Musk wants to drive tech forward – a good thing we hope he keeps doing. He doesn’t want to do the painstaking, time-consuming, expensive testing and development that the Benz folks have done. His (not inconsiderable) brain has a proven hack subroutine stored away to influence people into believing that using the public as Guinea pigs is reasonable and that grand sounding statistics are facts – NOT a good thing.

        Not sure how Dave Jones has hardwired his tech-bullshit-spotting interrupt vector right after reset, but I believe it involves an inverting Schmitt trigger…

  14. In 2013, 28.7 million people admitted to driving under the influence of alcohol. In 2015, 10,265 people died in drunk-driving crashes – one every 51 minutes – and 290,000 were injured in drunk-driving crashes.

    http://www.madd.org/drunk-driving/about/drunk-driving-statistics.html

    If automation had those stats, we wouldn’t even think about letting it on the street.

    Clearly we will all be better off if the cars did not rely on the human. That’s the goal, and if we can get there while not being more dangerous, then all the better; but even if it were more dangerous, it still needs to move ahead. The end goal is what counts – we all win.

        1. I believe that’s been generally disproven. Drunk people tend to run into things head-on, which is the safest way to hit something in a car. Their victims are a mix of head-on, side-on, rear-ended and not in a car, which are less safe in aggregate.

    1. The end goal is to get our fat arses out of cars and take some transit and maybe walk a bit.

      I’m happy to see technology make cars safer. A city with fewer cars on the road, that only make car-appropriate trips, would be even safer.

    2. If all drivers were drunk all the time, I’d totally agree. At 0.08 BAC, you’re something like 380 times more likely to be in an accident. But people spend so little time behind the wheel in this state that drunks only account for 1/3 of all traffic fatalities.

      Nobody knows how many miles are driven drunk — the NHTSA’s risk values are very shaky estimates. Surveys won’t give accurate results either: half of the people will forget that one night when they had two too many, and nobody will have a good estimate on mileage from memory.

      Even after the China incident, Autopilot is many times better than a drunk, as far as we can tell, modulo selection bias and human override. That’s just not the baseline that’s relevant to me, as a sober and reasonably careful driver.

      I’d bet I drive a factor of four safer than Autopilot. (Of course, with small-number statistics like this, I can say anything and nobody can disprove it.)

      1. “Autopilot is many times better than a drunk,”

        That’s not really a valid comparison. A drunk person is not impaired in the same way as a computer is. They may well be worse in some respects and better in others.

        A Polish man was once caught driving with 0.55 BAC, which is deadly to most people. He had stuck a leaf in his glasses to stop seeing double. Nobody questions whether the computer is seeing double – whether its visual algorithms are reliable at all – because we don’t care as long as it appears to be driving well.

  15. First, that crash in China may or may not have actually had AP on – last I heard the family refused to let Tesla look at the car and it was too destroyed to phone home the crash data. From the dash cam footage, there was plenty of time and space for a responsible human to change lanes, so AP cannot be blamed in any case – it was human error to blindly trust it. It may have also been human error in poorly communicating the limitations of the system however (there have been issues with potentially misleading translations in the past for Tesla). Regardless, you can’t simply throw that at AP’s feet just yet, and perhaps never, if the car is never examined by Tesla and the log extracted. Even if AP was on, this exact thing would happen, because AP 1.0 systems of the time were unable to detect slow/stationary vehicles in that exact type of situation (there’s been a number of cases of people rear ending stopped vehicles on the side of the road because they just assumed AP would handle it). Again, poor communication of limitations, rather than AP failing to work as designed (in that it was a known limitation, not expected to be handled). So we still only have one confirmed AP fatality, not two.

    Also, you don’t necessarily have to restart the clock every time you update the software. If you’re gathering all the driving data and storing it like Tesla is, you can potentially run your software against all that stored data to test it before sending it into the wild to ensure there are no regressions for the scenarios you have recorded.
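
    In sketch form, that replay-style regression test might look like the following (hypothetical interfaces; a real pipeline replays raw sensor logs, not little labels like these):

def replay(build, scenarios):
    """Count how many recorded scenarios a candidate software build handles correctly."""
    return sum(1 for s in scenarios if build(s) == s["expected_action"])

# Each scenario pairs a logged situation with the action that kept it safe.
scenarios = [
    {"situation": "stationary truck ahead", "expected_action": "brake"},
    {"situation": "faded lane markings", "expected_action": "hand_over"},
]

def old_build(s):
    return "brake" if "truck" in s["situation"] else "follow_lane"

def new_build(s):
    return "brake" if "truck" in s["situation"] else "hand_over"

# Only ship the new build if it handles at least everything the old one did.
print(replay(old_build, scenarios), replay(new_build, scenarios))   # prints: 1 2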

    (All, including non AP-equipped) Tesla cars have already passed 3 billion miles driven. Trying to figure out how many were driven with AP hardware of some kind is not so simple, though. I wouldn’t be surprised if it was in the neighborhood of a billion miles, because apparently the last billion only took 5 months. That’s not a billion miles with Autopilot at the wheel, exactly, but it is always evaluating, even if not in control, and that evaluation is sent to Tesla so they can learn where the car’s judgment differed from what the human actually did.

    So I think Tesla has more data than the HaD article assumes. To be fair, Elon said they probably will/should have 6B such miles on just AP 2.0 hardware before regulators will approve their software to be fully autonomous.

    Having AP hardware is safer than not having it, whether or not you’re actively letting the car take the wheel, because it provides an additional safety net (other makes are implementing similar features using similar technology but typically less integrated). Various active safety features like collision avoidance / warning / reduction systems are good things to have.

    As for the “convenience” parts such as Traffic Aware Cruise Control and Auto Steer, these are certainly things that you must keep an eye on, but they reduce driver fatigue and thus when used properly (i.e., not ignoring the road) actually improve overall safety as it allows the driver to be more alert for things that may be an issue. There is a learning curve involved though, as the things you must watch out for may vary from what you’re used to (such as particular combinations of weird traffic scenarios that the sensors may not pick up on and you’ll have to hit the brakes or steer manually). To be fair though, for decades plain old cruise control required you pay attention so you didn’t rear end whatever was in front of you, so the problem isn’t new, only evolved.

    Just because there’s not enough data to prove it’s safer, doesn’t prove it isn’t safer either. You simply must understand the limitations and remain aware. The marketing around Autopilot hasn’t helped this situation of course.

    1. Great comment!

      a) The family in China is going to be part in a lawsuit against Tesla. The car in question is going to be the central piece of evidence. Can you imagine them letting Tesla folks “examine” the evidence beforehand? Anyway, the video looks a lot like the Florida accident — driving full-speed into a large obstacle in a way that no human driver would. I’m not _sure_ it was Autopilot, but that’s the way I’d bet. The reluctance to let their adversary pre-examine the evidence doesn’t say much to me.

      b) Autopilot was marketed as self-driving in China. (And hyped as such here in the US, even though Tesla has very careful lawyers.) In the US case, the person trusted the machine, treating it as if it were self-driving.

      c) Tesla’s data collection is exactly what’s needed for them to improve the system, and that’s awesome. I’d bet on the other car makers going _years_ in observational / experimental mode before risking human lives or legal liability. Tesla’s particular financial position encourages them to take more risks. I believe they see this tech as make or break for the company.

      d) Tesla knows _a lot more_ than any of us. And I’ll take your figure of billions of miles “driven”, in the spirit of “with-logging”, as well. But we don’t know anything about these miles. We have no idea how many near-fatalities were prevented by human intervention, aside from what we see on YouTube.

      Given the incentives, I assume that we _do_ know about all of the phenomenal successes: accidents avoided and people driven to hospitals. But Tesla’s going to be very selective in releasing the data, both because it’s their competitive edge and because they don’t want to create their own bad PR. You’ll note that Musk didn’t come out bringing up their driving record after the second accident…

      I based the article on all that we do know: a couple of crashes that ended in fatalities in something like 150 million miles of driving with Autopilot on. If it doesn’t look good based on the publicly available info, it’s not going to look better based on the data that isn’t currently being released. (Otherwise, they’d release it. QED?)

      I’m not sure if something like self-driving tech should be open-sourced. Tesla and Google and Uber are spending lots of money on R&D. Would they continue to do so if they were forced (by DOT? NHTSA?) to operate in the open? Or would that just kill work on self-drivers entirely? Is there some middle ground?

    1. No. I was working in a photo department of a college newspaper in 1994. I remember when digital photography replaced film overnight. After the sensors got above a few hundred pixels across, it was pretty clear what was going to happen in the industry.

      I also remember the first time we sent digital photos over phone lines using a modem. This was in the time of satellite uplinks and waiting minutes for photos to come scrolling out of the AP feeds. It was a shocking empowerment to be able to compete with the pros for only the price of a 96k modem. Beforehand, you had to drive like mad to get back in time from away games to get the film developed and printed in time to get to the presses.

      There was no question about whether digital was going to take over.

      (Film is still legit in many niche/art applications.)

  16. AND, once this becomes widespread, it adds just one more vulnerability to our infrastructure to hacking. It will be used first in trucking which will result early on in the ability to shut down a large percentage of commercial transport.

  17. There is also what I call the “I, Robot” problem… where children jump out in front of the car. Now, if you veer left, you hit oncoming traffic. If you veer right, you go off the edge of a cliff. A human would commit suicide to save the children’s lives. A computer will be concerned with the safety of its passenger and would kill the kids.

    1. Even if my self-driving car decides to hit the oncoming traffic and kills me, I’m sure the psychological damage for my family is huge compared with me making that decision myself.

  18. We have lots of assist technologies in cars – ABS, parking sensors, cruise control. But they’re all very clear what they do and don’t do.
    It’s hard to see how I can know if my AP has seen and will avoid that parked car / kid running into the road or not, without waiting to see if it reacts. That means I have to always react to those, which largely kills the point of AP.

  19. I truly appreciate what you have written. Tesla’s Autopilot functions well, but only in some areas. Since humans have a natural tendency to take more risks when they feel safe, the probability of accidents increases. Technologies are created for us; they have merits and demerits. Example: people were happy with their conventional phones, and when smartphones were introduced, many of them were opposed. They refused. Cited a bunch of demerits. But now we all have at least one smartphone in our pockets. Why?
    Basically, give it some time. Humans are working to make lives more comfortable. I really enjoyed your views. Thanks.

  20. An easy solution would be to have first responders send out notifications to all cars in the network to let them know where they are via GPS, so the car wouldn’t have to identify where the problem is at the last second. How hard is it to send out notifications when there is an accident? Eventually there is going to have to be more automated safety in vehicles that aren’t electric cars.
