Tesla Automatic Driving Under Scrutiny By US Regulators

The US National Highway Traffic Safety Administration (NHTSA) has opened a formal investigation into Tesla’s automatic driving features (PDF), claiming to have identified 11 accidents that are of concern. In particular, they are looking at how the feature Tesla calls “Autopilot” or “traffic-aware cruise control” behaves while approaching stopped first-responder vehicles like fire trucks or ambulances. According to the statement from NHTSA, most of the cases were at night and also involved warning devices such as cones, flashing lights, or a sign with an arrow that, you would presume, would have made a human driver cautious.

Quote from Tesla support page: “The currently enabled Autopilot and Full Self-Driving features require active driver supervision and do not make the vehicle autonomous.”

There are no details about the severity of those accidents. In the events being studied, the NHTSA reports that vehicles using the traffic-aware cruise control “encountered first responder scenes and subsequently struck one or more vehicles involved with those scenes.”

Despite how they have marketed the features, Tesla will tell you that none of their vehicles are truly self-driving and that the driver must maintain control. That’s assuming a lot, even if you ignore the fact that some Tesla owners have gone to great lengths to bypass the need to have a driver in control. Tesla has promised full automation for driving and is testing that feature, but as of the time of writing the company still indicates active driver supervision is necessary when using existing “Full Self-Driving” features.

We’ve talked a lot about self-driving car safety in the past. We’ve also covered some of the more public accidents we’ve heard about. What do you think? Are self-driving cars as close to reality as they’d like you to believe? Let us know what you think in the comments.

66 thoughts on “Tesla Automatic Driving Under Scrutiny By US Regulators”

  1. So Tesla says the cars aren’t designed to operate without human supervision, yet people still act like they don’t need to pay attention while “driving”, and they are hitting fire trucks and ambulances. Why is this Tesla’s fault/failure?

    Beyond jacking up the collision avoidance algorithm to handle roadside/crash scenarios, what is Tesla to do?

    I am confused why the car’s collision avoidance system didn’t prevent the accidents…

    1. The problem is that human psychology doesn’t play well with supervising something that works 99% of the time. Not for hours on end, anyway. That’s ignoring the dips who are jumping into the passenger seat, of course.

      1. There are plenty of trials showing that even a person who tries to remain concentrated will typically fail within 5 minutes unless they have an active role in the thing needing supervision. (I.e., they go from having a fast reaction time to a fairly slow one – from well under 1 second to the typical second or two.)

        A person who doesn’t even try to concentrate, or outright uses the opportunity to do something else, will likely fare far worse.

        One can make a very simple test of this oneself if one likes to write some code (a rough sketch follows below).
        The test would have two lights that blink fairly often and one that blinks fairly infrequently, plus three buttons, one for each light. Measure the reaction time from a light turning on to the participant pushing the corresponding button. The person who has to deal with all three lights will typically react faster than the person who only has to deal with the infrequent one (the infrequent one being a good minute or more between blinks in both cases).
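        A minimal sketch of such a tester, assuming Python and its standard tkinter library (the key bindings, colours, and blink intervals here are my own made-up choices):

```python
# A quick-and-dirty reaction-time tester: two "lights" blink often, one
# blinks rarely. Press the matching key as soon as a light comes on and
# the script logs how long you took.
import random
import time
import tkinter as tk

# light index -> (key to press, (min, max) seconds between blinks)
LIGHTS = {
    0: ("a", (2, 5)),     # frequent light
    1: ("s", (2, 5)),     # frequent light
    2: ("d", (60, 120)),  # infrequent light -- the interesting one
}

class ReactionTest:
    def __init__(self, root):
        self.root = root
        self.lamps = []
        self.lit_since = {}                  # light index -> time it lit up
        self.times = {i: [] for i in LIGHTS}  # history per light, for averaging later
        for i, (key, _) in LIGHTS.items():
            lamp = tk.Label(root, text=f"[{key}]", width=10, height=3,
                            bg="grey", font=("Arial", 24))
            lamp.grid(row=0, column=i, padx=10, pady=10)
            self.lamps.append(lamp)
            self.schedule(i)
        root.bind("<Key>", self.on_key)

    def schedule(self, i):
        lo, hi = LIGHTS[i][1]
        delay_ms = int(random.uniform(lo, hi) * 1000)
        self.root.after(delay_ms, lambda: self.light_on(i))

    def light_on(self, i):
        self.lamps[i].configure(bg="red")
        self.lit_since[i] = time.monotonic()

    def on_key(self, event):
        for i, (key, _) in LIGHTS.items():
            if event.char == key and i in self.lit_since:
                rt = time.monotonic() - self.lit_since.pop(i)
                self.times[i].append(rt)
                self.lamps[i].configure(bg="grey")
                print(f"light {i} ({key}): {rt*1000:.0f} ms")
                self.schedule(i)

root = tk.Tk()
root.title("Supervision reaction test")
ReactionTest(root)
root.mainloop()
```

        To compare the two conditions, run it once with all three lights enabled and once with only the infrequent light, and compare the logged reaction times for that light.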

        So yes, human concentration is fairly lackluster if one measures it. The time from the car saying “Take over” to the driver actually reacting and registering the scene can in a lot of cases be longer than the time it takes for the accident itself to happen.

        Not to mention that as the autopilot system gets better, more and more people will say, “Nah, it can handle this,” even if they see the situation as iffy. They might even be startled by the alert itself (given how infrequently it happens) and be momentarily confused about what it even means, adding even more delay to the change of command.

      2. Absolutely this – I’ve got Pilot Assist 2 in a Volvo, which is nowhere near as comprehensive as the Autopilot in a Tesla, but I’m definitely nowhere near as alert and engaged as I am when the system is off. It’s an inevitability of not being so involved in the process.

    2. Numerous states here in Australia legislated 40 km/h speed limits around “first responder” incidents due to the number of accidents that were occurring at the scene while earlier accidents were being attended to. This was before Musk even released FSD as an option, so it occurs often enough with human drivers that it drove concern. It’s possible the Tesla FSD outperforms the majority of panicky and incompetent drivers.

      They have since repealed the restrictions because the number of accidents went up rather than down, due to people panicking…

      1. People do the same – just because it should be easy to detect doesn’t mean everyone does spot it in time, or react correctly when they do.

        It can be as simple as they see it, know it’s a car/van/etc., but don’t notice it’s static until too late to avoid the impact. Or they can fail to notice it at all because they are focused on the slight curve in the road, some other road user acting oddly, or maybe they get dazzled by reflected sun or headlight beams at just the wrong moment – so something that isn’t ‘moving’ seems to blend into the background. If you play with how human sight works you can change nearly everything in the field of view over a pretty short time period and the human won’t see it – they might know the image has been changed when they really think about it, but they didn’t see the change.

        Whether these autonomous cars are actually as good as they need to be at self-driving – or at dealing with these odd situations – is definitely an open question; there hasn’t really been enough real-world (or simulated) testing done. I have to say I’m not a fan, but I can’t rule out that they are actually better drivers than the average human – in fact I suspect they have to be nearly as good, as there are some really, really poor human drivers out there dragging the average down. I seriously doubt the AI is close to as good as a good human driver, maybe not even better than the modal-quality human, but almost certainly better than the mean average…

        1. Right now, there are no autopilot systems in cars that are publicly known to be safer than humans under similar conditions — Waymo and Cruise may have something going on, but they’re not releasing their data. Tesla’s “autopilot” miles are under best conditions, so comparing that to aggregate human accident rates is disingenuous.

          I’d be stoked to see data that proves otherwise. Hell, I’d be stoked to see any data at all, as a co-participant in road traffic. There’s a tremendous lack of transparency in this field.

          Tesla’s “autopilot” and “fully-self-driving” modes are neither, and they’re just a few lawsuits away from having to change those names. Volvos do as much “autonomous” driving, and you don’t see them calling them anything other than “driver assistance” features. Tesla’s choice is irresponsible, IMO.

          The issue with the lane-holding mechanism being fooled by parked fire trucks and ambulances certainly stacks up anecdotally, but we’ll see what the NHTSA comes up with when they get the real data. Other manufacturers’ cars don’t seem to be hitting firetrucks, so it does make you wonder.

          1. Indeed it does make you wonder, but lacking even remotely enough data to speculate ourselves…

            Calling something “autopilot” or “fully-self-driving” really ought to be considered false advertising – maybe they are close enough under the right conditions to claim such a thing (though that still seems dubious), but as long as it is a legal requirement that the driver be alert and “in control”, even if the system were 100% perfect (which we know it’s not), calling it that really is pushing it…

          2. > Tesla’s choice is irresponsible, IMO.

            It’s downright fraudulent. Elon Musk has publicly claimed on several occasions that the cars are already safer at driving than people – and a lot of people believe that.

    3. Hi,

      In functional safety, as required and defined in various standards depending on the industry – like ISO 26262 for automotive – we call this foreseeable misuse or foreseeable improper operation. As a manufacturer you are also responsible for such situations.

    4. There are a few reasons why Tesla is responsible. First, it has their name on it. Second, human nature is such that, if a device is “smart”, the human will turn off their brain and abdicate their need to think over to the machine. I’ve seen this for years in the HVAC industry with “smart” controls. Technicians will replace the same part 3 or 4 times, “because the machine told me to” – even though all of the clues would have led them to a different part/cause. They simply put blind trust in the machine and stop thinking about it.
      Same thing with cars. For that reason, I do not advocate for any type of self-driving features. Too dangerous, in my book.
      Another reason is that it is the job of the engineer, at least in the Engineer’s Creed, to Protect The Public. Part of carrying out this responsibility is knowing how people respond and designing around this.

      With all of their “smart” features, why not ensure that a person is sitting in the driver’s seat, hands on the wheel, head up so they can see the road? Allowing a car to operate with no one in the driver’s seat is a recipe for disaster.

    5. Firstly, stop calling the system “autopilot”… they are CLEARLY marketing it as autopilot with small print saying “not really”. 100% Tesla’s fault. And any jury will likely find negligence there.

  2. Is there a term for the situation where reducing the driver input also reduces their attention? I.e. it’s harder to maintain focus when you are just sitting there and the car is doing everything, so the driver (or supervisor) will naturally become inattentive?

    1. That’s spot on – the less attention required, the less attention will be given.

      Even just something as simple as blind spot warning – you begin very quickly to rely on it and if it doesn’t pick up that one bike in your blind spot….

      There’s no way for the driver to know when the vehicle has lost sight of the lane, or hasn’t seen the car coming out of the side street, or has gotten “afraid of shadows” and takes evasive action.

      What we can see and respond to when paying attention still covers a much wider environment than what the car can.

      1. I drive old junk cars, but I recently rented a car that had the new adaptive cruise control, and I was on a major highway in a lot of traffic when the sky cracked open: rain, and when the rain hit the blacktop, fog. I am not a fan of tech driving me around, so at first I turned the adaptive cruise off and worked the pedals myself, but I was surprised at how often I had to react quickly when the tail lights on the car in front of me broke out of the fog. It turned out the radar in the adaptive cruise can see through that with no issues at all. I found that I had my foot at the ready near the brake, but the system really did work more smoothly than I did.

        This car had the blind spot monitoring as well, but I assumed that it really was for the blind spots – not something to be trusted by just checking whether the light was lit and jumping into another lane.

        Both are nice features, and I can see how, if you have a car with them, after a while you would get lazy about driving and just trust the systems.

  3. Tesla’s self-driving algorithm works by learning from the combined trips of everyone who has a Tesla. Since serious incident scenes aren’t an everyday occurrence, it’s a given that the system isn’t that great at dealing with them.

    Now, ultimately, any accident currently is the fault of the human in the driver’s seat, because they know perfectly well the system needs to be monitored at all times.

    The way to fix the problem is to give the vehicles more practice with these out-of-the-ordinary situations through simulated incident sites on test tracks designed to replicate the real world as closely as possible. With enough time they will get there because there are only a finite number of situations that could be encountered on the road.

    1. Seems to me the collective intelligence of humans is rather limited as well. I’m reminded of how motorcycles were once attempted to be made safe: unrideable. People are being dumbed down while cars are still not being made smart. Until the positronic brain actually becomes a reality, drivers need to do the driving, not a so-called autopilot system that can’t operate on reason and logic along with an education. Unfortunately there are people out there who still don’t meet those qualifications.

      1. I’m also reminded of the old retired guy whose wife was cooking breakfast in the motorhome while they were driving down the road. She informed him his breakfast was ready. He had the cruise control on, got up from the driver’s seat, and went back to the table to sit down and eat, when suddenly they found themselves off the road and in a really bad situation.

    2. The problem with extremely rare edge cases is that they happen all the time, because there are infinitely many variations.

      When you throw a dart at a board, the probability of hitting that particular spot is zero, but you just did it.
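      To put toy numbers on that intuition (every figure below is invented purely for illustration), a quick back-of-the-envelope calculation in Python:

```python
# Toy numbers, purely for illustration: each individual edge case is
# vanishingly rare, yet "some edge case" turns up constantly across a fleet.
n_edge_cases = 1_000_000    # distinct rare situations (made-up figure)
p_each = 1e-7               # chance any single one occurs on a given trip
trips_per_day = 3_000_000   # fleet-wide trips per day (made-up figure)

# Probability that a single trip hits at least one edge case.
p_some = 1 - (1 - p_each) ** n_edge_cases
print(f"per-trip chance of hitting some edge case: {p_some:.1%}")      # ~9.5%
print(f"expected edge-case encounters per day: {p_some * trips_per_day:,.0f}")
```

      Each individual case stays effectively at probability zero, just like the dart hitting one exact spot, yet the fleet meets edge cases every single day.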

    3. >Tesla’s self-driving algorithm works by learning based on the combined trips of everyone

      There’s no online learning in Tesla cars. They merely use select cases to pick new training data, and use the recorded driver inputs as a baseline for imitation learning. It takes the averaged driver input as the “correct” answer to each situation.

      Even if you had lots of data for accident scenarios, the way they’re training the AI means it would simply replicate the same errors as people do – slightly worse than people do – because it doesn’t understand what it’s doing anyhow.
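      This isn’t Tesla’s actual pipeline (which isn’t public), but the kind of imitation learning described above boils down to plain supervised regression onto logged human inputs. A bare-bones sketch with synthetic data:

```python
# Behavioral cloning in miniature: treat the recorded human steering command
# as the "correct" label for each frame and fit a model to reproduce it.
# Everything here is synthetic -- it only illustrates the training setup.
import numpy as np

rng = np.random.default_rng(0)

# Fake per-frame "sensor" features (lane offset, curvature, speed, ...).
X = rng.normal(size=(10_000, 8))

# Logged human steering, including the humans' own noise and mistakes.
human_policy = rng.normal(size=8)
y = X @ human_policy + rng.normal(scale=0.1, size=10_000)

# Imitation learning here is just least-squares regression onto those labels:
# the model can at best reproduce what the average driver did, errors and all.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

new_frame = rng.normal(size=8)
print("cloned steering command:", new_frame @ weights)
```

      The comment’s point falls straight out of that setup: the labels are the drivers themselves, so the cloned policy inherits their average behaviour, mistakes included.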

    1. In a large part, it is.

      I’m a Tesla stockholder, and I’ve been following this deeply for the past 5 years or so.

      The vast, vast majority of negative press about Tesla is short sellers trying to drive the stock price down so that they can profit from their demise.

      I have personally looked into about 25 negative stories about Tesla, only to find, when you get to the ground truth, that the issue is not so bad. I’ve stopped tracking these down because it’s not worth my time any more.

      As to autopilot, there may be some issues that should be dealt with, but it’s a new technology and we’re bound to have some unexpected bumps until we straighten things out.

      The real answer is statistics: Teslas are about the safest cars there are, and autopilot has likely saved many more lives than the 11 deaths reported, so the question isn’t whether autopilot is killing people, it’s whether autopilot is better than human driving. Statistically speaking, it is.

      (And compare with 124 deaths from a single GM ignition problem, which they fixed and didn’t tell anyone about, and backdated the records to show that it was fixed before they did it – to avoid liability claims. And that was a single issue with a single manufacturer.)

      I suspect what’ll happen is that there will be some back-and-forth between NTSB and Tesla, and they’ll come up with commonsense rules or design changes that will make autopilot even safer.

      It’s new technology. We’re working out the bugs. The news is only lurid and overreported because of short sellers.

      (I remember a specific example where a man had a heart attack and drove to the hospital with assist from autopilot, and stated clearly that he wouldn’t have made it but for the autopilot.)

        1. > TL;DR: if one wants to confirm their bias about a popular subject they just have to dig through enough articles

          Isn’t that pretty much the exact opposite of what he said?

          Rather than just believing the headline, he actually read the article. Rather than just believing the article, he actually went to the effort of tracking down primary sources and verifying whether or not the article was an accurate summary of that information.

      1. You’re ignoring the fact that there are almost no Tesla cars on the road. GM probably makes more cars every day than Tesla ever has.
        And Tesla drivers are above average – it’s self-selecting by the price.
        This problem isn’t unique to Tesla – until we have real, fully autonomous driving we need to stop adding features that look like it. Humans are not good at remaining alert to something they’re not controlling.

      2. >whether autopilot is better than human driving. Statistically speaking, it is.

        No such statistics exist, and they wouldn’t prove the point anyway: it’s a false comparison, and the amount of data relative to the claim would be insufficient.

  4. It should be treated the way they treat autopilot in aviation. If the vehicle (plane or car) has an incident while under autopilot, it’s the driver/pilot’s fault. “Plane” and simple.

    1. It’s not simple at all, because assigning fault does nothing to protect other people on the road. In this case, it is emergency vehicles. If the vehicle is assisting the driver, that assistance needs to be legal. That seems simple and obvious.

      If you build a device that harms people, you can be held responsible. If you build a device and it is simply misused, you have no fault. But if you say the device is suitable for a particular purpose, and it isn’t, that’s on you, not the user.

      The real problem with this idea is that an airplane autopilot has to prove itself to the government before being authorized. Here, the manufacturer just pushes an OTA update whenever they want, with only their own private testing. Completely different situation.

      1. See Boeing’s 737 MAX.

        Aviation is _very_ tightly regulated, with professional pilots and maintenance schedules. Pilots receive training on how the, comparatively very simple, autopilot systems work, so they know what they’re doing almost all the time — 737 MAX as cautionary counterexample.

        Car drivers are barely trained, the machines are infrequently inspected, if ever, and nobody understands what neural networks are really doing.

        I’m not sure that these two situations are even comparable, but in the case of allocating liability, I’d say that more of it would fall on the system/manufacturer in the automobile case.

  5. “What do you think? Are self-driving cars as close to reality as they’d like you to believe?”
    Yes, but not Tesla’s. Tesla is playing catch-up in this area, but are marketing themselves as already-there. Meanwhile, the companies that really are close to road-ready are quietly racking up driverless miles in the regions that have authorized their testing. Waymo has logged over ten million autonomous vehicle miles.

    That’s for level 4 autonomy, by the way, where the driver is still needed for some tasks like parking. I think level 5, full autonomy, is still quite a ways away.

    1. How can anyone truly think that Tesla is playing “catch up” when it comes to autonomous driving? Haven’t even finished my coffee and already got a good laugh today, thanks for that.

    2. This level convention on autonomous vehicles has very little to do with technology. It has far more to do with where the accountability lies when accidents occur. With level 1, the responsibility lies in the hands of the driver and with level 5 all the responsibility lies with the company.

      Whether a company labels their autonomous system anywhere from 1 to 5 is only an indication of how much trust they have in their product in a certain scenario. A geo-locked, low-speed urban taxi system could be classified as a level 5 system simply because the company knows there is a very small chance of something going wrong.

      I suspect FSD will remain at level 2 for the foreseeable future, even if they manage to get it an order of magnitude safer than the average human driver.

  6. With regard to the “superior” track record in accident rate:
    I don’t think this tells us much about the safety of the current “self-driving capabilities”, as sane human beings with just a little drop of responsibility towards others simply don’t let their electric beasts roll on their own. It’s common sense. Apart from some reckless idiots who like living on the edge and do stuff they’re not supposed to while driving, sane and conscious human beings _drive_ the car.

    So what we see here is how the various assistive measures enhance the behaviour of the average driver.

    It is definitely something, but not the self-driving future displayed in the movies. And yes, no one should pretend or claim so – that would be putting the lives of innocents at risk.

    Musk did not even have the EVs operating in fully autonomous mode in his glorified disco tunnels, the Las Vegas “Loop”, even though the level of control over the entire operating environment there is almost complete: single lane, unidirectional, speed limited to 30 mph – how far is that from any real-life scenario?

  7. I think this problem has a couple of interesting layers :-)

    (a) there is this “active driver supervision”, aka “you can let the car do it, but in a pinch, you gotta step in”. Now everyone knows humans suck at exactly this kind of task (train drivers and pilots have similar tasks, but they get quite a bit of training for it)

    (b) there is Tesla, who perfectly knows (a), but is in desperate need of data & field experience to improve their “autonomous driving” market position.

    (c) there are the drivers, who are unconsciously aware of — at least — (a), if not (b), but try to forcefully ignore it

    (d) there is the regulator, who (up to now) colludes with Tesla by acknowledging, in the case of an accident “ah, but the driver didn’t read the instructions, so…”

    Now Elon’s solution to this equation is genius, but it’s at the same time distressingly cynical and sociopathic.

    Me? The engineer in me would propose to introduce a bit of negative feedback to stabilise the system: forbid the drivers of such “driver assisted autonomous vehicles” from having a seat belt or any other driver protection measure. Perhaps they’d be more careful (not buying the vehicle, not using the feature, or letting Darwin take care of it) when their skin’s in the game.

    1. Long before Tesla we had cruise control, automatic braking to maintain distance, lane following, GPS navigation – it’s not really a new thing for driver aids that folks misuse to cause problems. New technologies that, you could argue, should never have existed.

      Your negative feedback idea is all well and good, but it doesn’t really work – just because you are in a more vulnerable position doesn’t mean everyone else on the road is. You’d be less likely to be at fault, but perhaps actually more likely to have accidents, because you’d be going at the speed limit and stopping at every stop sign, while the muppet behind you in their safe box assumes you would act like a human and not follow the rules 100% when you can see it’s not required right now.

      To make that work you’d have to make it so the driver of every vehicle in an accident is automatically executed by the state if they survive – no appeals process, and if the driver is for some reason in doubt, every passenger instead, etc. – very deliberately creating an environment where everyone on the road is playing the game by the same rules, as otherwise things will actually get worse accident-wise – too many different “rule books” in play…

  8. Definitely nothing to do with them selling a feature called “Full self driving” or “Autopilot” that people keep using to kill themselves and others on the public roads… yeah… definitely just jealous folks shorting stocks…

  9. Perhaps some of this problem could be at least partly solved by V2X?

    If it’s mandated that all self-driving cars (not just Tesla’s) have V2X fitted, and you also add it to all emergency vehicles such that it sends out a warning (at a high RF power, too) when sirens and/or emergency lights are activated…

    Then (as part of the type approval) any autonomous vehicles in range of a V2X emergency alert at least slow down, and also alert the driver to take over with an alarm.
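    Very roughly, the receiving side of that idea could look like the sketch below. Everything in it is hypothetical – the port, message format, and thresholds are invented, and real V2X runs over DSRC/C-V2X radios with standardized message sets (e.g. SAE J2735), not UDP – but it shows the intended behaviour: hear an emergency-scene alert nearby, cap the speed, and demand a driver takeover.

```python
# Hypothetical sketch only: on receiving a nearby emergency-scene alert,
# cap the target speed and tell the driver to take over.
import json
import math
import socket

ALERT_PORT = 47000          # made-up port
SLOWDOWN_RADIUS_M = 500     # made-up "in range" threshold
CAPPED_SPEED_KPH = 40       # e.g. the Australian first-responder limit

def distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance, good enough for a few hundred metres."""
    k = 111_320  # metres per degree of latitude
    dx = (lon2 - lon1) * k * math.cos(math.radians(lat1))
    dy = (lat2 - lat1) * k
    return math.hypot(dx, dy)

def listen_for_alerts(own_lat, own_lon):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", ALERT_PORT))
    while True:
        payload, _ = sock.recvfrom(1024)
        # Expected (invented) format: {"type": "EMERGENCY_SCENE", "lat": ..., "lon": ...}
        msg = json.loads(payload)
        if msg.get("type") != "EMERGENCY_SCENE":
            continue
        if distance_m(own_lat, own_lon, msg["lat"], msg["lon"]) <= SLOWDOWN_RADIUS_M:
            # In a real vehicle these would go to the drive controller / HMI.
            print(f"V2X alert nearby: capping speed at {CAPPED_SPEED_KPH} km/h")
            print("DRIVER TAKEOVER REQUESTED")

if __name__ == "__main__":
    listen_for_alerts(own_lat=51.5074, own_lon=-0.1278)  # placeholder position
```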

  10. @Foldi-One [just in case threading explodes in my face]

    My negative feedback proposal was of course just a cynical joke. Just a sad reminder that, after introduction of all those protection measures, fatalities dwindled dramatically… among those in the car. The ones outside… tough luck.

    Back to Tesla, I’m still convinced that they deliberately send that “double message”, to get into the action and at the same time cover their corporate asses. Smart, but evil.

    1. Indeed… cynical it is, and I quite like a nice bit of cynicism – but taking the idea further does bring a certain element of satisfaction to a frequent cyclist and pedestrian: you hit me and you die too; folks in their nice armoured boxes might actually pay attention… Even if it will never happen.

      To me the Tesla naming is stupid, but no more out of character than many other things – it doesn’t matter what you call something these days as long as you have the legal disclaimer underneath saying don’t take the name (or half the stylised adverts you will see) as the real truth – it’s actually something else… Which is unfortunately not a Tesla-specific problem, just modern(ish) marketing exaggeration (at best) bollocks…

  11. The solution is simple… Get rid of all that self-driving junk… errr, technology, and let the driver pay 100% attention to the driving. Also, there is no longer a ‘blame’ game, as we know where the problem lies in an accident. Bonus: less expensive vehicles, and more reliable too. Win win.

    1. The car manufacturers would lose out, as they wouldn’t be able to sell cars to people who are currently unable to drive. True self-driving makes *everyone* a potential customer.

        1. Artenz says: “No current system is good enough that it can be used by someone not able to drive themselves.” (quoted in case the reply goes wrong again…)

          Agreed, no *current* system can do this. My point was that ditching the current “””self driving””” tech would block the path to future systems that can.

          1. Companies can still develop the tech with proper safety protocols, such as a trained test driver, and measures to make sure they stay focused on the job.

    2. “Get rid of all that self driving junk… errr technology”

      Automatic transmission : gone.
      Automatic choke : gone.
      ABS, traction control : gone.
      Immobilizer : gone.
      Digital radio tuner : gone.
      Brake booster? Pollution control? Too high tech?

      Almost describes my 30+ year old Mercedes.

  12. NTSB incident investigations have made airplanes as safe as they are today. They did not cause the demise of (commercial) jet airplanes after a few early ones crashed and killed all passengers. So, too, can these investigations result in better driving software.

    Besides, people working near – or on – the side of the road are continuously wary of ‘real meat’ drivers, as they often are not paying attention or are driving at ludicrous speeds. The 11 incidents may in fact be better than the average. It is hard to tell given the lack of numbers.

  13. Maybe unrelated… but I find the new high-intensity LED warning lights on emergency vehicles blinding at night! You can’t see the situation (literally) because all your night vision is gone and you’re blinded by light in your eyes. They need to either diffuse the light or dim it down at night, or a little of both. I can’t believe it has gone this long without a correction.

    1. Yes, I find that too; it’s gotten absolutely frigging ridiculous. Bear in mind that the human eye has a far greater dynamic range than the average camera, so those 8 cameras on the Tesla are probably blind.
