Uber Has An Autonomous Fatality

You have doubtlessly heard the news. A robotic Uber car in Arizona struck and killed [Elaine Herzberg] as she crossed the street. Details are sketchy, but preliminary reports indicate that the accident was unavoidable as the woman crossed the street suddenly from the shadows at night.

If and when more technical details emerge, we’ll cover them. But you can bet this is going to spark a lot of conversation about autonomous vehicles. Given that Hackaday readers are at the top of the technical ladder, it is likely that your thoughts on the matter will influence your friends, coworkers, and even your politicians. So what do you think?

The Technology Problem?

Uber, Waymo, and other companies developing self-driving cars have a lot of technology. There have been a few hiccups. An Uber car ran a red light in California. Another — also in Arizona — was struck by another vehicle and rolled over, although police blamed the other (human) driver. Now we have the [Herzberg] case, and we don’t know for sure if there was any technology component to the tragedy. But if there is, we are certain it is solvable. The technology to drive doesn’t seem like it should be so difficult.

That doesn’t mean machines will drive in the same way as a human. Humans have intuition and some pretty awesome pattern matching capability. On the other hand, they also have limited attention spans and don’t always react as fast as they would like. Nor is the machine able to perform miracles. No matter if the driver uses silicon or protoplasm, it takes a certain amount of distance to stop a vehicle moving at a given speed. Vehicles will kill people no matter how smart the driving computers get.

It Isn’t Technology

If the technology is there, why aren’t the highways saturated with robotic limos? Like most real-world problems, the technology is only one part of it. We can make plenty of electricity using nuclear power plants, but risk aversion, regulation, and tax structures make it infeasible. I’m not saying whether that’s appropriate or not; I’m just saying that we know how to build atomic power plants, and we decided to stop. We have had the technology to go to the moon for a while. We’d have to develop some new ways if we wanted to go back, but we could do it. We just don’t today. Making a self-driving car isn’t a problem like sending a live human to the Andromeda galaxy.

Data from 2016 shows that just under 40,000 people a year die in traffic accidents in the United States — an average of 102 people a day. We don’t have much data for robot vehicles, but intuitively you have to guess that well-designed autonomous vehicles ought to be able to do better. If the majority of cars were under computer control, you’d think much better. But it isn’t going to be zero fatalities.

Some of the resistance is probably an oddity of human behavior. In 2016, there were 325 deaths worldwide due to commercial air travel. Yet very few people are afraid to drive, while many people are afraid to fly. Behind the wheel, there is an illusion that we are in control. Maybe we think a car wreck is somehow “your fault” even if it isn’t. But on a commercial airplane, you feel like you are at the mercy of the pilot and — to some degree — what many people consider mysterious technology.

That doesn’t bode well for public sentiment for self-driving cars. We are willing to dismiss 102 deaths a day much more readily than less than a death a day caused by air travel. The other issue is how companies are going to survive the onslaught of lawsuits and legal challenges that are inevitable. Don’t get me wrong; autonomous cars shouldn’t get a free pass. But they probably shouldn’t be held far more accountable than a human driver, either.

Hacker Activism

As a technically savvy community, we should influence people to have sensible positions about this and other technology policy issues. What should you say? That’s not really for me to tell you. Maybe you are against self-driving cars. Or maybe you are for them. Justify your position and carry it forward.

Me? I think the cars are coming. I think they will make us safer and have other benefits to the environment and even the economy. But I would like to see more regulations — something that I usually reject. However, as more companies enter the fray, there will eventually have to be safety standards just like there are on human-driven cars, airplanes, and other dangerous things we all come into contact with often. [Adam Fabio] suggested that companies should be required to share crash data with the industry, and I thought that was a good idea, too.

Countries that allow unrealistic lawsuits against these robotic cars are going to fall behind those who both reasonably handle liability and establish reasonable guidelines for operation. Your opinion may differ. That’s OK. And it should make for lively comments.

Technology plays a bigger role in everyone’s lives every year. We’ve gone from a bizarre priesthood of nerds to the people who understand how the world works. Computers, cell phones, home assistants, and self-driving cars were the stuff of science fiction not long ago. Now they are advertised on prime-time television. If we expect ordinary people, community and business leaders, courts, and politicians to make rational decisions, we should be vocal and active sharing what we know in a way that helps people see what we do.

We’ve been wrestling with the ethics of self-driving cars for years here at Hackaday. It isn’t always as clearcut as you might think.

266 thoughts on “Uber Has An Autonomous Fatality”

  1. Guess the process of driving was successfully emulated.
    Who do you claim from? Do software developers now need car insurance? Maybe it’s the hardware designers at fault.
    I say put it down to Darwinism.

    1. That is the problem that we are going to watch play out now. Who is at fault? I don’t think anyone is going to argue that a machine driving a car is going to equal 0 deaths on the road, but who is responsible and who pays? Vehicle owner? Last person who instructed the vehicle to perform an action? Software stack owner? What if there is no one in the vehicle? What if video footage shows that it would be reasonable to conclude that a human could have avoided the incident, then who is at fault? If an autonomous car hits a bear in the woods and no one sees, does it make a noise?

      1. The person who owned that killing machine must pay for vehicular manslaughter, and he/she needs to pay real big to discourage this stupid idea in the future. Like this: you can have this new technology, sure, but if anything happens, you go to jail for a very long time.

        1. The number of people who die on the roads in the US is equivalent to 6 fully loaded 747s crashing every month. Would you ever step on a plane if they crashed that often? Humans suck at driving; you suck at driving; I suck at driving. If autonomous driving reduces traffic deaths by any amount, I don’t care about accidents like this.

          1. Sure, you don’t care about accidents like this, until it is you that is killed. Self-driving cars are NOT ready for prime time and probably never will be, and anyone that puts one on the road should be sued into bankruptcy when (not if) it injures someone. The only folks that think these things cannot fail are those who have never built anything with electronics. These will not be around 2 years from now.

          2. The comparison is meaningless, because the probability of accidents isn’t evenly distributed among the driving population.

            Are you driving drunk? Are you speeding and overtaking? Texting behind the wheel? No? Then your probability of getting into a fatal crash just dropped dramatically.

          3. Did you ever step into an autonomous plane?
            How many people do you think would get into a plane knowing that there is nobody risking his/her life (the pilots) to make sure that the plane flies safely?
            The “plane” vs “car” comparison is not a good one.

          4. >>really? says:
            >>March 20, 2018 at 2:20 pm
            >>Did you ever step into an autonomous plane?

            Yes. Any airliner from the last several decades employs several flight controllers to either fully or partially automate parts of the flight.
            As stated elsewhere here in the comments, landing is the most dangerous part of flying. Guess which part of flying has been fully automated in order to reduce the risk?

            https://en.wikipedia.org/wiki/Autoland

            These systems are so good that they’ve had to put random X-Y offsets into the target landing spot in order to avoid increased wear on the tarmac from all the planes constantly hitting the same spot.
            The system allows airports to operate in low visibility conditions, where otherwise the airport would be closed. Some airlines require the pilots to use the system in anything other than ideal conditions. This is fully to protect the passengers from pilots who would rather land by hand.
            The only reason not to use it, when available, is to keep the pilots trained for when it is not available. Think about that for a second.

          5. >”Luke, I’m not driving drunk, but that guy that runs into me was…”

            The question of getting into an accident involving a drunk driver is a matter of the density of drunk drivers on the road where you happen to be travelling. On a scale of 0 to 1, the normal density is very close to 0, but if you’re the one driving drunk, it’s exactly 1.

        2. I cannot help but see a new trend in “Suicide by Autonomous Car”.
          Already, cops and train drivers are subject to that quite a bit.
          Even the occasional truck driver who hits someone walking down the
          middle of the highway.

          There comes a point where the person doing something mind-bogglingly
          stupid must be held accountable for their own actions as well.
          We will have to wait for the video to possibly be made public before knowing
          for sure, but I will bet there was no “looking both ways before entering
          the roadway”.

          “How not to drive your car on Russian roads”. Youtube search it.
          I am not saying that autonomous cars cannot be made safer,
          but humans will always find a way of getting themselves injured/killed,
          and their lawyers will always try to find a way to blame it on the other guy/computer.

          1. Article says: “the woman crossed the street suddenly from the shadows at night.”

            Yes, we have some information. And I really doubt the majority of human drivers, driving at 40 MPH, would have avoided hitting a woman suddenly crossing the road, outside of a crosswalk, at night, coming from the shadows.

          2. Russia is the exception; I would rather trust a one-year-old than a Russian driver. You cannot defy the laws of physics: if someone were to walk out in front of an automated vehicle or a driver-controlled vehicle, the result would be the same. Autonomous vehicles are here and here to stay, just as auto-piloted aircraft are here. We trust computers for everything else in life, so why not to drive us around? They can respond faster than we can. Just think about how we would survive without computers these days.

          3. Russian dash cams do reflect poorly on those humans. Also enjoy watching mind-boggling drivers giving their truck a cut on a low-clearance bridge… with its own web site: http://www.11foot8.com They pass three signs warning of the low bridge and finally an electronic sign that states: “Overheight Turn Left” and a red light to give them some time to think about it. Nope… full speed, Captain… I’ll give ’er all she’s got and meet you on the other side! And these people vote too!

        3. Since it is extremely unlikely that a human would have fared any better than the machine in avoiding this accident, I cannot help but wonder if you have some hidden agenda behind your technophobic view?

        4. On stupid ideas, respectfully, you put one forth. Based on the information provided here, unless criminal negligence were proven I wouldn’t convict a human driver of manslaughter in a similar accident. My guess is in the past you would be that person against the wheel or improved roads, because they allow carts or whatever to travel faster. Steam or electrical power, no? Motor vehicles and aircraft, no again?

        5. “The person who owned that killing machine must pay for vehicular manslaughter, and he/she needs to pay real big to discourage this stupid idea in the future. Like this: you can have this new technology, sure, but if anything happens, you go to jail for a very long time.”

          There is no “the person.” This is Uber’s technology; this is Uber’s test. See, we’re so used to blaming ourselves, suing ourselves, that we don’t see the real enemy right under our noses. The “he/she” is not a he or she; it’s a company, or multiple companies, all assuring themselves immunity, knowing we will blame ourselves rather than fight for safe technology that can reasonably assume pedestrians are not a set of parameters that follow what you think they should do.

          The police say she crossed in a dimly lit area and seemed confused or dazed, and that the collision seemed unavoidable once she entered the road (though the car’s computer system should have seen her on the side before she even became something to avoid). As a human with a real human brain, when I see someone even remotely close to the side of a road, I will conclude, as anyone would: shit, this person is gonna ruin my day and cross in front of me. Probably only 1% of the time they do, and I’m prepared, because we know our own minds and how free will works. So I slow down before anything crosses, even though 99% of the time nothing crosses.

          Now, back on the whole “it was unavoidable” thing the police are talking about: a deputy also states he saw the car’s video evidence and says it was unavoidable once she entered the road, but prior to that moment on the video, he says, you see her approaching. IF the video shows her approaching, then the robot saw her as well. It comes down to human consciousness, and how that defines the conclusions we make every day. And no, the person sitting in the car should not take the blame. He’s just there because they have not allowed this thing to pick up drunk people on its own yet. But that’s the goal.

          1. Might be handled by a trajectory/distance algorithm: honk the horn, do a perimeter check, and potentially swerve if it can’t brake in time. Not sure what the code syntax is or even how the code is organized.
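The decision structure the comment imagines is an ordinary priority cascade: estimate stopping distance, then pick the least drastic maneuver that still works. A minimal, hypothetical sketch; the friction coefficient, reaction latency, and action names are illustrative assumptions, not anyone’s production logic:

```python
# Hypothetical collision-avoidance decision sketch; not any vendor's actual code.
G = 9.81          # gravity, m/s^2
MU = 0.7          # assumed tire-road friction coefficient (dry asphalt)
REACTION_S = 0.2  # assumed sensing-to-actuation latency, seconds

def stopping_distance(speed_ms: float) -> float:
    """Distance covered during the reaction delay plus full braking to a stop."""
    return speed_ms * REACTION_S + speed_ms ** 2 / (2 * MU * G)

def choose_action(speed_ms: float, obstacle_m: float, lane_clear: bool) -> str:
    """Pick the least drastic maneuver that still avoids the obstacle."""
    if obstacle_m > stopping_distance(speed_ms):
        return "brake"           # enough room to stop in-lane
    if lane_clear:
        return "swerve"          # can't stop in time, but the adjacent lane is free
    return "brake_and_honk"      # no good option left: brake hard and warn

# At ~40 MPH (about 17.9 m/s) the stopping distance here works out to roughly 27 m.
```

Real systems add sensor fusion, uncertainty estimates, and many more cases; the point is only that “honk, brake, or swerve” is a plain if-then-else priority ladder, not magic.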

      2. There would have to be an investigation to find out whether it was a design or workmanship defect that caused the accident, in which case the manufacturer is at fault, or whether it was a maintenance issue or poor operation, in which case the owner would be at fault.
        If there were also a smart road and it was deemed the cause, then the operator of the road would be at fault.
        Some things, such as software defects and the systems being hacked, can be difficult to prove out, though manufacturers could reduce their liability by having manual controls that override the automatic systems on a hardware level.
        I.e., no matter how badly compromised the self-driving systems are, you could still take over manually as long as the powertrain controllers have not been compromised as well.
        It would be a good argument to keep a physical connection between the controls such as steering, the transmission, and the brakes.

        1. Exactly. There are at least two reasons that this death occurred, and one is strongly related to the fact that a pedestrian was near a moving vehicle. The second was that a moving vehicle near people was allowed to drive at a rate that didn’t allow it to protect that person when they moved in front of the vehicle. If these cars were required to slow down more and more as pedestrians get closer and closer, it would be infinitely harder for this to have happened. The designers of the cars, the road systems, and the regulations are at fault for not rethinking how self-driving cars are used, since they may not require human input. Who cares if I take a few minutes longer to get somewhere if I’m able to work or entertain myself while I’m in transit?

          I think it’s far too early to allow self-driving vehicles on the roads using regular auto traffic rules. There are going to be many stupid deaths while people beta-test bumpers and driving scenarios.

          Automakers and algorithm creators should be 100% liable for ANY self driving vehicle crashes and deaths. That alone might change the scenarios they allow their cars to drive in and the rules they design their cars to follow. Someone buying a self-driving car that purports to be able to drive itself likely doesn’t have any qualifications to judge that claim and can’t be penalized for being a passenger.

          Also, any decision involving safety and risk of any sort IS the trolley problem, it’s just at a scope that most people don’t recognize. Details on bumper construction for a car? How long a traffic light takes to change to green after the other direction turns red? Walking in hurry in NYC and accidentally bumping someone towards the road? All are weird variations on the trolley problem, just with less strictly definable amounts of risk.

    2. I don’t know about requiring auto insurance for programmers, but maybe it’s finally the time for programmers to be licensed (like Engineers are licensed) to deal with these kinds of liabilities.

    3. If they are truly the saviour of Mankind, then the lower accident rates of autonomous cars should make their insurance cheaper. But I sure wouldn’t want to be on the hook personally if my self-driving car glitched out and T-boned a school bus.

      1. I don’t know about other states, but in Kansas we can’t avoid a speeding ticket by claiming the cruise control was on and is at fault. No doubt autonomous-capable automobiles will be required to have a capable operator at the controls. A pilot doesn’t engage the autopilot so they can nap. It’s going to be a long time before autonomous motor vehicles amass a safety record that will lead to lower insurance rates, or higher rates if the record indicates the need.

    4. Everyone involved in putting this thing on the road should be, and will be, sued. It is a farce, and dangerous, to even think that electronics can replace a driver. Hopefully, this fad will disappear soon, before too many more folks get killed.

        1. No, they do not. A real live pilot has to set the auto pilot and maintain radio contact with the various FAA facilities along his/her route as well as monitor all engine gauges and aircraft functions. They also monitor the autopilot to make sure they remain on course, many times, corrections must be entered into the auto pilot to maintain proper course and altitude. So, autopilots do not control aircraft, the Captain does.

          1. Uhm, the autopilot does take control, though, for periods of time when the pilot is not performing the detailed control operations of flight. The pilot “mans” the controls… though isn’t performing all the flight control operations. There are even aircraft that require somewhat advanced autopilot avionics just to fly stably, and don’t fly well by hydraulic, mechanical, and electro-mechanical controls alone, if at all in some cases, if I understand correctly. I thought that was the case in particular with the F-117.

          2. That was also the case with the F-16 and the F/A-18, and I am sure the F-35, but all of those aircraft still require a pilot. It would be insane to have 747s flying around with no crew, but the parameters for those flights have thousands fewer variables than are required for driving a car. A self-flying plane would be safer than a self-driving car. Trains would be the easiest to do, but the electronics are not trusted to even do that, so why are they trusting them in cars with our lives?

          3. Job security and trends. I mean, some had pet rocks and lucky charms of all sorts as the trend.

            Strange, weird stuff… though, like technology imagined and dreamed about in the 1950s or earlier, what was once thought impossible (usually due to size and volume constraints) can be made possible by advances in materials science, electronics, computer hardware, and computer software, and by their ability to record our memories and associations and to better process inputs into desired outputs that are too challenging to compute on paper. I wish we would apply that logically, to the best of our ability, and wisely with our knowledge… though wise, logical application can be more challenging in the face of the panicked masses.

        2. Also, the Pilot has to respond to ATC orders to change altitude/heading in response to changing weather conditions and/or in flight emergencies concerning another aircraft. Autopilots don’t do that very well.

          1. I love that line from “Pushing Tin” where the ATC on the plane tells the flight attendant “This is an emergency! I need to talk to the person controlling this plane.” She says, “Sir, the Captain is very busy.” He replies, “The Captain? That would really scare me.”

      1. It would have been difficult but not impossible even for a human driver, as the only way to avoid hitting her would be by rapidly steering out of the way in the classic moose-avoidance maneuver, since she was not visible until the last second.
        But it also shows why you should have a light and reflectors on your bike and only cross the street where it’s well lit, if possible.
        Though I’m not sure if the car’s software would recognize a couple of points of light moving around as an obstacle vs. a couple of fireflies and some distant street lamps.
        I think Uber’s cars only use camera and lidar technology and don’t use any radar, which would help in poor lighting conditions.

        1. I’ve had that situation, and the only thing I saw was the guy’s white sneaker, and that’s all it took. It was a near miss.

          The thing I don’t trust the computer to do is to take such a small clue and correctly assess that there’s someone on the road. They’re just not that good, especially in the dark where the video is grainy and the computer vision algorithm is already struggling to tell anything from anything.

          1. I mean, I avoided the collision.

            Another problem for the computer is that cameras have such poor dynamic range that they get blinded by their own headlights. Anything in the dark gets washed out into black.

            And the difficulty of computer vision is that if it’s trained under visible light, whoops, things look completely different in IR. That greenish-blue light looks completely different to an algorithm because it IS different, because the algorithm doesn’t understand what it’s looking at – it’s not taking in the context, but simply shifting bytes of data.

  2. One big upside to self-driving cars that I see is the possibility that all of them can learn from one accident. If the investigation of this death turns up useful data for avoiding such a tragedy in the future, it should be possible to share that as an update to the technology across the board.

    I go back and forth on whether we’re going to see widespread self-driving in the near future. On the one hand, humans are horrible at paying attention while driving and machines are great at this. But it’s much harder for me to think the same about reacting to edge cases. I think the proof is in how self-driving platforms can handle the unknown.

      1. Solution to the Trolley Problem (from the car makers’ perspective):

        When a consumer purchases their new Tesla or whatever and gets in for the first time, they input their details on the onboard computer: name, Bluetooth phone connection, etc…

        They are then asked to configure autonomous driving mode: “in the event of an unavoidable accident, the AI should: A – act to prioritise protection of vehicle occupants, B – act to prioritise protection of pedestrians and 3rd parties,” with a slider between the two options. The driver is forced to select a weighted value between the two extremes, and as such ultimately takes on the responsibility and liability for decisions made by the AI in the event of a fatal accident. This solves liability for car makers, who can focus on making cars as safe as possible (100% safety is unrealistic), and insurance companies’ business model remains virtually unchanged. Overall, accidents are reduced as self-driving cars improve compared to human drivers.
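The slider described above boils down to a single weight that biases how candidate maneuvers are scored. A toy sketch of the idea; the maneuver names and risk numbers are invented for illustration, and no production system works this way today:

```python
# Toy model of the proposed occupant-vs-third-party priority slider.
# All names and numbers are hypothetical.

def score(maneuver: dict, occupant_weight: float) -> float:
    """Lower is better. occupant_weight = 1.0 fully protects occupants,
    0.0 fully protects pedestrians and third parties."""
    return (occupant_weight * maneuver["occupant_risk"]
            + (1.0 - occupant_weight) * maneuver["third_party_risk"])

def choose(maneuvers: list, occupant_weight: float) -> dict:
    """Pick the candidate maneuver with the lowest weighted risk."""
    return min(maneuvers, key=lambda m: score(m, occupant_weight))

maneuvers = [
    {"name": "brake_in_lane",      "occupant_risk": 0.6, "third_party_risk": 0.2},
    {"name": "swerve_to_shoulder", "occupant_risk": 0.2, "third_party_risk": 0.7},
]
```

With the slider at 0.0 this picks braking in-lane (protecting the third party); at 1.0 it picks the swerve. Whether shifting that moral choice onto the buyer would actually resolve the liability question is, of course, the debate.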

    1. Autonomous cars have one useful property, privacy concerns aside, and that’s all those sensors and the capability to record everything, not to mention the networking. Any investigator’s dream.

      1. Even current generation cars have a lot of information usable in case of accident: speed, throttle and brake usage or not, angle of steering wheel, indicators etc…
        All are logged and kept in event of a collision.

      2. I read that the police say a human driver could not have avoided the woman either; she came from a shadow and suddenly stepped onto the road.
        The odd thing, though, is that I’m told she was holding a bicycle with bags hanging from the handlebars, so I wonder how fast she could really have been going. And on the subject of your sensor comment: I’m not sure it’s realistic to say that a human eye would see the same as, say, a camera sensor. I think their sensitivity curves to things in shadows can be quite divergent, sometimes a lot in favor of the camera sensor and sometimes a lot in favor of the eye. So I’d be careful comparing what a human driver could have done based on watching a recording, if I were the cops.

        The real problem is evaluating whether it’s a software issue, though. Who is going to check the software for faults? I don’t think the police employ skilled coders who could analyze that so easily, and it is important, because there are usually more sensors than visual ones, and you have to know the software and the system to know whether the accident could have been prevented by better software. And then you need philosophers to determine whether it’s fair to ask more robustness from an autonomous system than you would from a human driver, and how much more, and whether you have to be realistic about the limitations of software engineers too, them being human after all.

        1. I now saw the released video, and I can’t help but notice that the car could have tried to brake or swerve but in the short time the person was visible did exactly nothing from the looks of it.
          And I think the video also strengthens my view that you can’t compare what a car camera sees with what a human would see or not see.

          1. I too watched the video, I think a human would have seen earlier, and I’m going to assume our eyes and the camera eyes are equal, but I’m going to add one more feature, how many times have you seen something because it momentarily obscured a light behind it?
            Our eyes are faster and superior to video cameras fitted to cars at spotting moving targets, and we comprehend and avoid death, the machines tend not to worry so much.

    2. This is possibly the best-documented case (ever?) of a vehicle hitting a pedestrian. I’m sure the logs and data dumps from the LIDAR and other sensors will prove enlightening (and, hopefully, educational).

    3. You’ve hit the nail on the head. Some folks think that the only driving data self-driving cars have is what their sensors can provide. They forget that self-driving cars are also networked cars, and they have access to cloud data that can tell them things like “pedestrians often cross here without looking”. They can make use of this data to drive more carefully (slowly) where accidents (or even just observed irregular behavior) have occurred before.

      1. >”They can make use of this data ”

        You’re overloading the cognitive capabilities of the AI. Currently it’s not strong enough to do that.

        If a cyclist with a red helmet disappears behind a bus stop, and a cyclist with a blue helmet appears on the other side, how many cyclists are there? That sort of problem is already completely beyond the scope of self-driving AIs.

        1. After some of the videos I have seen of people reacting to strange things around them, I would say that problem is also too hard for most humans. It requires noting the color of something that is not important to your situation twice and noticing it has changed.
          Source: shows like Brain Games, most magic performances, observations from social engineering.

          1. People may be change-blind, but the AI struggles with object permanence in the first place because it doesn’t actually have a mental model of its surroundings. It is simply reacting to the immediate sensor data.

            When the cyclist disappears behind the billboard, it’s as good as if it had never existed for the car’s computer. The reason is that the computer doesn’t recognize things reliably enough to remember what things exist – otherwise it would start hallucinating pedestrians and cyclists it had erroneously detected. It errs on the side of false negatives rather than false positives, because the false negatives have less catastrophic consequences than slamming your brakes in the middle of the highway for absolutely no reason.

          2. Also, the color of the helmet was just a simple example. The AI wouldn’t recognize if a grown man went behind the billboard and a small girl on a tricycle came out the other side. It would still be surprised to find a second cyclist appear behind the billboard a moment later.

        2. The idea I proposed has multiple parts:
          1) It requires a way to recognize “incidents” that require special attention. This doesn’t have to be gathered by the car’s AI, but it can be gathered by a person or a cloud process that looks at what’s happening with many cars’ driving patterns.
          2) It requires a way to report “trouble areas” back to the cars in a way that they can process. This can be done by creating virtual road-hazard signs, the same as if DOT planted a new triangular warning sign.
          I don’t think either of these are beyond current technology.
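Part 2 of the proposal amounts to geofenced advisories. A rough sketch of how a car might consume such “virtual road-hazard signs”; the data layout, coordinates, radius, and message are invented for illustration:

```python
# Hypothetical "virtual road-hazard sign" lookup; format invented for illustration.
import math

# Cloud-distributed advisories: (latitude, longitude, radius in meters, message)
HAZARDS = [
    (33.4366, -111.9430, 150.0, "pedestrians often cross mid-block; slow down"),
]

def distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance; accurate enough at city scale."""
    meters_per_deg = 111_320.0  # meters per degree of latitude
    dx = (lon2 - lon1) * meters_per_deg * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * meters_per_deg
    return math.hypot(dx, dy)

def active_advisories(lat, lon):
    """Return the advisories whose radius covers the car's current position."""
    return [msg for (hlat, hlon, radius, msg) in HAZARDS
            if distance_m(lat, lon, hlat, hlon) <= radius]
```

The car’s planner would then treat an active advisory like a posted warning sign: lower the target speed and raise the caution margin. The security objection raised below still applies, which is one argument for letting advisories only ever make the car more cautious, never less.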

      2. No, they don’t. You do not want your car networked. Why? How do you know that information it gets from other cars and the cloud is correct? Your car can be reached from the net? Great… instant hacker target with, once hacked, built in literal ‘kill switch’.

        You can use external information, but never rely on it, in case of doubt, the sensor data needs to ‘win’.

        Oh… and someone who bought such a car will return it if it tends to slow down without observable reason in certain areas. No one wants to drive a road rage magnet.

        1. You bring in the side issue of security. Of course it is an important issue, but I don’t think that you should always ignore available data because it may possibly have been hacked. Do you ignore reports of traffic and drive into congestion because you can’t be sure the data is valid? If the external info says “be more careful here”, you’d be silly not to take it into consideration (in most cases). That doesn’t necessarily mean “go 5 mph here” either.

          1. Information rivalry is already a problem in self-driving cars. The Tesla Decapitation Accident was a case where the car had all the necessary information to avoid the collision, but it was programmed to prioritize data in a way that caused a false interpretation that the road is clear.

            Adding external data just adds more conflict to a situation that cannot be adequately resolved by the AI.

        2. Networking is one of the ways Nokia is proposing to keep maps current, although that’s through smartphones rather than smart cars. And good maps really are the heart of the autonomous movement.

        3. …but all autonomous cars are going to be such road-rage magnets! One of the big reasons they should be safer is that they won’t be making stupid human errors, like speeding, following too closely, dangerous passing, etc.
          Your autonomous car is going to drive like a nervous high-school drivers-ed instructor, not some digital Stirling Moss. Get used to it.

          1. Instead, they’ll be making stupid computer errors, like stuttering and refusing to go, braking randomly, getting confused over a leaf flying into the lidar and driving off the road.

        1. Which raises the question, does the AI know there are areas where there is a lower expected probability of collision, and then use that in its braking voting procedures?

    4. Mike, they will not even allow trains to be automated, and all trains do is stop, go, and pull onto sidings to let other trains by. There is an inherent and probably unsolvable paradox at the most basic decision-making lines of the programming code: do I kill the driver of this vehicle, or do I kill the “other guy”? Remember that sometimes the other guy can be a woman walking with her baby, a guy on a bike, or a school bus full of little kids. Someone has to write that code, someone has already made that decision, it exists deep down in the programming, and no one will talk about it. Who is going to buy a car that is programmed to kill them under certain conditions?

       They also do not want to talk about the failure of electronic components, especially some of those made in China that are being used in these vehicles. Failures happen, and they are going to happen no matter how much people hope otherwise. I love electronics and I love innovation, but this is why I hate driverless cars: I know components fail, and so do you and most folks reading HAD.

  3. There was a human behind the wheel. I don’t know how much said human trusted the car and, thus, how much attention was being paid to the road. It seems like, if it was possible to stop in time, between the two drivers, one of them would have stopped. It’s hard for people to blame the dead cyclist, but at least some of it must rest on her.

      1. “Details are sketchy, but preliminary reports indicate that the accident was unavoidable as the woman crossed the street suddenly from the shadows at night.”

        Seems to me the pedestrian would be at fault for blindly running out into the street in the dark.

        1. I think there’s the implication that autonomous vehicles have greater capabilities than humans, so something like “dark” wouldn’t be the handicap it would be for people. Reaction time would be faster, not to mention the car is more likely to be obeying the speed laws for a given area. In short, we’re less forgiving of machine mistakes.

          1. That is something which no-one so far seems to have cottoned on to… the fact that it is dark should have nothing to do with the performance of self driving cars. Nearly every sensor on a self-driving car can operate in complete blackness (to a human) – be that LIDAR, ultrasonic or infrared.
            Claiming that it was the fault of the pedestrian, or the fault of the car, because it was dark misses the point: darkness should not be a factor at all.

        2. “Seems to me the pedestrian would be at fault for blindly running out into the street in the dark.”
          According to the Reuters article (linked above), she was walking a bicycle at the time.

        3. There have been some indications that this area is a somewhat notorious pedestrian hazard, with a comment from someone indicating that near where this accident occurred (possibly right where it occurred) there are structures on the median that look like really nice pedestrian walkways… With signs at the end of each of them saying “Don’t walk here”.

          Look on Google Street View at the signs at the X-shaped “walkway” structure in the median south of the intersection of North Mill Avenue and East Curry Road in Tempe…

          That’s an accident waiting to happen, whether human or automatic driver.

        1. Here’s the thing, though. We can’t upgrade our sensors. But autonomous vehicles have greater capability, as well as more sensors than we do, so their detection capability should be greater, allowing more time to take proper action, even allowing for physics.

          1. It depends; it needs to be physically possible to stop the car before the accident. I guess this is the most documented car crash ever, so probably a lot will be learnt. If the system applied the brakes to the maximum and still hit the person, there is little to nothing anyone could have done.

          2. “even allowing for physics” so are you going to simulate the physics of the brains of the pedestrians and the bicycle riders to predict when they are going to swerve in front of you?

        2. I’m amazed that we’ve already had multiple self-driving car deaths where the car could see no more than the human. Even if self-driving cars are idiots, being able to see more than humans should help quite a bit, and somehow it’s not helping.

          Once the cars can actually see the pedestrians near them, the problem isn’t the laws of physics, it’s the rules built into the car that allow the car to still be driving at a speed where the car can’t act on that data fast enough to stop or slow down before killing people.

          1. The thing is – a human driver has to assume that a pedestrian who is NOT in the road and is not heading towards the road is not going to suddenly change course and step/bike right into the road. An autonomous system has to make the same assumption, otherwise it becomes a traffic hazard itself.

            There simply is no way to handle “object entered vehicle path after unanticipated course correction” scenarios whether you’re human or computer. People take advantage of this frequently in Russia to commit insurance fraud apparently, which is why dashcams are so popular there.

    1. During this testing there is plenty of time for the employee to get used to the technology and cease paying attention to the road, perhaps much more so than a non-autonomous vehicle. I’m sure each driver has logged hundreds of hours and thousands of miles being passengered around Tempe and the environs. (I bike by the building where these Uber vehicles are stored every weekday)

      1. Boeing has many millions of hours of experience with fatigue and distraction in airplane pilots, and the very first thing they will tell you is that a pilot who is not actively engaged in the act of propelling the vehicle is NOT going to be in any sort of state of mind to take corrective action in an emergency.

        1. Yes.
          A self-driving vehicle that suddenly pops out with a message “Driver, please take over now” when the *driver* (read, occupant) has just started reading that novel they have so much wanted to for weeks is not going to respond in any sort of hurry, let alone be able to handle an emergency situation.

  4. Apples to oranges.

    Going from your text, you are comparing the death toll for car travel in the US with the death toll for air travel worldwide.

    Minor detail, but engineers are fastidious, you know.

    1. I made it clear the numbers were worldwide for air. To me that’s an even bigger disparity since you know worldwide auto deaths are even more — possibly much more. So yeah the scales are different but that just makes it even safer to fly and more dangerous to drive.

    2. Mmm. If every home in the world owned two airplanes and used them for two trips a day while only giving them yearly maintenance checks, air travel would be a lot more dangerous than it is today. With self-driving cars we could start to fix this though, maybe your car drives off to a garage every 1000 hours of driving for a check-up.

      1. Problem with that argument is that airplanes are built to a different standard and, yes, taken better care of. So even if there were more of them, the accidents wouldn’t scale quite as dramatically.

        1. Well, presumably the flight environment is not as congested either. I haven’t looked lately, but historically most crashes occur when taking off or landing. Level flight is extremely safe.

          1. Here in Texas you do have to have a (more or less) annual inspection. However, it is clearly easy to game the system and get one even if you shouldn’t.

  5. For now, Insurance companies are holding to the standard that the car’s owner and/or operator are responsible for the proper functioning of their vehicles, just like human driven cars. This is one of the reasons that most states are requiring an alert driver in the driver’s seat to take over if the car appears to choose incorrectly.

    Insurance policies already have the rules and procedures to go back against a manufacturer if a defect in manufacture or design leads to the accident.

    1. Liability is pretty much established when it is a vehicle used for business, as the ‘deep pocket theory’ applies. Juries are a bit more sympathetic to owners of personal vehicles. But even in that case, it’s pretty much “sue everyone” nominally involved.

    2. I think everyone is aware that if an insurance company can get out of paying, they will.

      And why would you buy/operate a self-driving car if you had to be an ‘alert driver’ at all times? Why not drive it yourself then? Is not the whole idea to allow the driver (occupant) to do alternative things… read a book, check email, etc?

      In many situations, even an alert driver will need some time to take over from a “malfunctioning” autonomous driver, and that can result in fatal consequences. Just synthesising “Driver, take over now” a few tenths of a second before a crash should not suddenly shift the blame to the driver. But, that’s what manufacturers have succeeded in doing.

        1. Makes me think about the situation railroad engineers face. They have maybe 200 tons or more of vehicle in motion going maybe 45 MPH, and somebody decides to run the crossing and gets stuck while the train is only 50 feet away. Physics says there is NOTHING the engineer can do to avoid a collision, even with full brakes applied. There is just way too much momentum in play. Is it the engineer’s fault that they hit the car and killed the driver? A non-physics expert might think, “Hey, they had a full 50 feet to stop! It’s the train’s fault.” They would be wrong. I see this Uber thing the same way. What says a HUMAN driver could have reacted fast enough to avoid hitting the pedestrian? Hours and hours of Russian dashcam videos on YouTube prove humans are generally horrible at situational awareness in cars.
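The momentum argument can be sanity-checked with the constant-deceleration stopping formula d = v²/(2a). The deceleration figures below are rough assumptions (a heavy freight train in emergency braking versus a car on dry pavement), not measured values:

```python
# Back-of-envelope stopping distances from d = v^2 / (2a).
# Assumed decelerations: ~0.5 m/s^2 for a freight train in emergency
# braking, ~7 m/s^2 for a car with good brakes on dry pavement.
def stopping_distance_m(speed_mph, decel_mps2):
    v = speed_mph * 0.44704          # mph -> m/s
    return v * v / (2 * decel_mps2)

train_ft = stopping_distance_m(45, 0.5) * 3.28084
car_ft = stopping_distance_m(45, 7.0) * 3.28084
print(round(train_ft))  # roughly 1,300 ft -- 50 ft is hopeless
print(round(car_ft))    # under 100 ft, still nowhere near 50
```

Under these assumptions even the car needs nearly twice the 50 feet in the anecdote, before any reaction time is added.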

          1. When I was younger, a pickup tried to beat a coal train to the crossing.
            The train came to a full stop 1/4 mile later, by then, what was left of the pickup was under the 3rd locomotive.

          2. Well, hopefully, a human driver would know that, hey, along here is where some stupid folks come out without warning, so I will slow down just in case. A computer can never replace a human brain. This is not to say that some drivers are not total idiots… they are, and probably would have run this person over too… we will never know. But maybe not all of them would have. This technology is not ready.

  6. Ironically, while cycling home last night I got tagged by a novice driver in a residential area during broad daylight. His passenger mirror bumped my left handle bar – very fortunate that it didn’t hit harder! I didn’t lose control, survived, caught up with the driver who (despite not stopping) apologized and I think learned a valuable lesson.

    I’m a regular commuter with lights, reflectors, etc. While the road was narrow and winding, there was no reason not to see me and move over.

    Personally, watching folks like Boston Dynamics, quad-copters playing catch, SpaceX landing 2 boosters at once, and tracking US drones just south of our border, I’d rather have that Uber car come up behind me than a human driven one….

    1. I had somebody turn in front of me in broad daylight after passing me in the marked bike lane. I even had one of those flashing red lights. She said she never saw me. I’ll bet she was fooling with her phone. The cars with the sensors can’t get here soon enough.

      1. Safe bet.

        I regularly see people completely engrossed with their phones while driving. To date, I’ve seen no response/recognition when I use my horn, but nearly every one ends up drifting over into the rumble strips or 3/4 of the way into another lane before they make a sudden correction.

        Granted, the above only applies to those traveling the same direction as me, as their wandering between lanes catches my eye. There might be others that can at least maintain enough attention to drive in a mostly straight line.
        I do notice more than a few in the oncoming lanes of smaller roads, though. :/

    2. Yesterday I had a bicyclist out of the bike lane & in the road. I’m always impressed by those who put their faith in humans doing the right thing & the law rather than physics.
      If all the cars were computer driven, I’d guess that bicyclist would have one less thing to be concerned about.

    3. It does make you wonder what will happen in some oddball situations. I was driving around Place de l’Étoile in Paris (well, I guess Place Charles de Gaulle, now) and right in front of me a tour bus and a normal car made contact, ripping off the car’s rear view mirror. The drivers yelled at each other and kept going. No one stopped. No insurance was exchanged.

          1. No, they must give way; France is “priority to the right”, so when you are on an unmarked roundabout, you must give right of way to entering traffic (implicitly; nothing tells you to do this).
            But most roundabouts are what we call “English roundabouts”: you must yield when entering (and it’s explicit).
            I always found the English way of driving to be inherently less dangerous, even if left-hand shifting is awkward.

          2. Traffic circles here in the States are a relatively new thing. Many municipalities are replacing 4-way (or more) intersections with traffic circles because “they are safer”. But even though “yield when entering” is implicit, most people don’t know that, or that they should signal their exit from the circle…
            (sigh!)

          3. When I first got my license in Massachusetts, the “old way” was the rule, for exactly the same reason: yield to traffic on your right. (This was early 1970s)

            Apparently, soon afterwards, they hired an engineer who had taken a basic course in queueing theory, and changed the rule so that cars in the rotary [roundabout/traffic circle] have the right of way, and cars entering have to yield.

            I remember distinctly, sitting, stopped, in bumper-to-bumper full rotaries, waiting for rational thought to break out. This being Massachusetts, that took some time.

          4. Roundabouts are pretty new where I live. I hear plenty of complaints about how much of a mess they are when other drivers get confused…completely forgetting how awful people could be at understanding the etiquette of a 4-way stop.

            It seems to me a similar wrong thinking is happening with autonomous cars. Don’t let perfect become the enemy of the good (or the better).

      1. In this roundabout, in case of an accident, drivers are automatically assumed to share 50% of the fault, regardless of who caused it. The reasoning is that you must take care not to cause an accident yourself AND do your best to avoid another driver’s mistakes.
        That’s why, for a minor crash, drivers don’t stop. Another thing: you are not permitted to get out of your car there. In case of an accident, you are supposed to get to a safe place outside of the roundabout.
        The fact is this roundabout’s scheme is right priority, one of the worst schemes, but the only one that seems able to handle the flow.
        I would love to see the mix of autonomous cars and human drivers there…

    4. Late one night (years ago) I was walking to work on the left side of the street, (on the sidewalk).
      As I crossed at light controlled intersection, a car coming up the road to my left, making a left turn, made a “California Stop” and I found myself rolling off its hood (bonnet -Jenny) to the driver’s side. Fortunately, I was not injured.
      Maybe I was partially at fault because of the parka I was wearing (very, very, very, dark blue -Thanks Honey for the Christmas present, you want me to be warm while I wear it to work!), but the driver probably assumed no one would be out walking at that time of night and no one would notice their California Stop.

      P.S. I thought your comment needed another response from a R** person! B^)

    5. Fellow bike commuter here, glad you are ok. I got t-boned by a drunk driver in a big truck, this time last year, and I’m still mildly confused how I made it. Really changed my views on self-driving cars. It’s not a silver bullet solution, but I’m very optimistic that some day they will make the world a little safer for everyone.

    6. Back when I used to commute by bike I had few problems in the daytime, but at night.. I wound up building a box that would clip on the back of my bike that had 2 big gel cells in it and I modified an old motorcycle headlight with the brightest lamp I could find, and I also got an aftermarket big round red tail light that I put a very bright lamp in. These were both brighter than car headlights. From a distance cars did not know what I was but they gave me a much wider berth as they closed in. I would bring the battery box in with me and charge it on one of the lab power supplies. I got a kick out of that. Taking home little buckets of electricity every night..

  7. I thought this was what IR was for.

    Interestingly a stock Volvo has a “large animal detection system” designed I guess for moose and the like but presumably it would be tuned for humans as well. Is it possible that they disabled it? It would be awfully embarrassing to have to admit that your self driving car is actually worse at detecting people than the same car off the factory line.

    Also, this brings up a whole slew of questions about fully autonomous cars and unanticipated events. Like if your car gets lightly rear-ended does it stop and pull over? If you see an accident or event can you stop the car at any point in its travels? Does it detect flashing blue lights and pull over correctly and safely? What about water over the roadway or a great big pothole?

  8. Pedestrian Errors will cause fatalities. A 40 MPH road? A cyclist NOT at a crosswalk? At NIGHT? Darwin award winner indeed.

    Wonder if putting in an IR camera and using predictive motion algorithms will fix this? Animals don’t know where the crosswalks are. Apparently some cyclists don’t as we’ve seen here.

    I commute to DC for work. The number of people (and it spans ethnic, racial, and every other background you can think of) who decide “hey, I can cross this 6-lane road when the traffic is flowing on it” is insane. Do police actively ticket them? They should. The mass of a car against the frame of a human is damaging at nearly any speed. When they do this they risk their lives and cause issues for the drivers out there who are attempting to safely go their way. If I have to choose between hitting a pedestrian and hitting a car, I have to hit the car to save the life of that pedestrian. How the law will see it is that I hit that car. The pedestrian, and the situation they created by going against common sense, will never be called into question. Pedestrians and cyclists in this town are a menace to traffic flow. Cyclists take to the streets and think red lights and 4-way stop signs don’t apply to them, and the law does nothing to dissuade them.

    Time we started not looking to sue the driver or the software designer, but look to punishing the populace for doing stupid crap in the first place.

    My 2 cents.

      1. You should call your local department of motor vehicles and turn in your driver’s license because you fail to understand that bicycles and pedestrians are traffic, just as much as the motor vehicles. In fact pedestrians and bicycles pre-date motor vehicles so they have more of a right to the road than motor vehicles, and our laws reflect this, but you are ignorant of them.

        1. The problem here is you don’t need a license to be a pedestrian or cyclist. And they often break the rules of the road. See: walking out from between two cars not at a crosswalk.
          Right of way is a curious thing. Nominally, in places I’ve lived, pedestrians always have the right of way. As do sailboats. But no one really thinks the supertanker is at fault when the 16′ sailboat sails too close and gets run over.

          1. Sailboats don’t have, and never had, more right of way than other boats. First, because there is no absolute right-of-way position; a boat only has more or fewer privileges than another. Second, it’s a matter of manoeuvring capabilities.

          2. Most laws are common sense, and all research shows that all types of traffic bend the laws at about the same rate. So we could demand that drivers renew their license every two years; it still would not help with their dangerous driving.

            The US is particularly bad at this, with five times as many traffic deaths as other similar places.

        2. Excuse me, but I’ve been driving for 35-plus years, no accidents, and I don’t aim for anyone. There are laws meant to govern pedestrians and bicycles. They need to adhere to them or deal with putting their lives at risk.

    1. “Animals don’t know where the crosswalks are.”

      You clearly don’t live in the Charleswood neighbourhood of Winnipeg, MB. I have watched urban deer, while roaming from one hooman garden to another, actually stop at the street, look both ways, wait for traffic, and then proceed to cross with caution.

      (ok, to be fair it wasn’t at a crosswalk, but if more people had that sense of awareness and self-responsibility we wouldn’t need crosswalks…)

      1. I’ve been using one for over a year. It only cost $25 (plus a 32GB microSD).
        I haven’t reviewed any videos, like the time I “hit” a deer. But its clock does not keep time well, and I wonder if a “false” timestamp would disallow its use in a courtroom.
        I’m not sure how well it works at night, though, but if its LCD is any indication, it doesn’t work well.

    2. I’ve seen dogs stop and wait their turn to cross the street. And people that just walk blindly in front of your car (and call you rude when you complain).

      Having a human driver inside will not help. The idea is for the human to not drive, so they will not be paying attention to the road, or they would be driving in full themselves.

      Unfortunately, I believe this will create a group of worse drivers, who will not have the necessary experience when they need to drive without the computer. Same as some years ago people would know the streets’ names and where places were, and now they depend on their GPS being up to date to find even common, well-known places.

      One idea would be to have some kind of autonomous public transportation system, in a reserved, pedestrian-proof lane.

      1. Do you mean that weird X-shaped structure of “walkways” in the median south of the intersection I’ve seen claimed as the location of the hit? (just north of the underpass)

        The one that has signs at all entrances saying “Don’t cross here, use the crosswalk”?

        What the hell is that structure for? It’s epic urban planning fail to have what looks like a crosswalk and signs saying “don’t walk here”.

      1. Yeah, aren’t you in violation of the law if you cross anywhere but where the jagged lines are in London? That way drivers know where you should be crossing. We call those crosswalks on this side of the pond.

    3. Any place where pedestrians or bikes are hit by cars more often is probably not pedestrian-bike friendly enough. This is where I get irritated by all the hype around self-driving cars. Their proponents put them forward as the only thing necessary to make the world better, but often the problem isn’t drivers, it’s design. If your downtown is dangerous for pedestrians and bikes, it’s too vehicle-oriented. More effort needs to go into separating vehicles from other traffic, and frankly it should be easier to walk/bike and take transit downtown than it is to drive downtown. Moar carz is not the answer.

  9. As the percentage of robotic vehicles rises, vehicle on vehicle collisions should become less common, especially once the automotive industry establishes a standard for vehicle to vehicle communication for collision avoidance.

    None of that will, however, prevent children from diving after balls rolling into the street or suicidal people diving in front of buses.

    And as a new terrible thought, how long is it before autonomous cars are used as bomb delivery vehicles…

      1. Same here. It probably will become a big issue. Autonomous trucks are already capable of doing a lot of damage, and most barriers meant to stop low-speed collisions, like those pipes you see in front of a big-box store, tend to fold over like a cardboard tube when hit by an 18-wheeler at speed. It’s just physics: it’s a lot of energy to try to dissipate.

    1. Vehicle-to-vehicle communication is a hacker’s wet dream… With the current state of computer security, you want each car to send messages to all its neighbours on the road and receive data from them?

      What could possibly go wrong…?

      1. No, what you want though is vehicle-to-city communication. Why rely on optical detection of a red light when the intersection can simply tell the car that it’s coming up to a red light? Or to warn the car that there’s an accident 3 blocks ahead and there’s a detour required? This is an area that needs more attention.
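A toy sketch of that vehicle-to-city idea, assuming an invented message format. Real V2X work uses standardized message sets such as SAE J2735; nothing here attempts to reproduce them, and all field names are made up for illustration:

```python
# Hypothetical vehicle-to-city broadcast: the intersection publishes its
# state, the car decides whether to start braking. Invented format only.
import json

def make_intersection_broadcast(intersection_id, light, seconds_to_change,
                                advisories=()):
    return json.dumps({
        "intersection": intersection_id,
        "light": light,                  # "red", "yellow", or "green"
        "seconds_to_change": seconds_to_change,
        "advisories": list(advisories),  # e.g. "accident 3 blocks ahead"
    })

def should_start_braking(broadcast, seconds_to_arrival):
    """Brake early if we'd arrive while the light is still red."""
    msg = json.loads(broadcast)
    return msg["light"] == "red" and seconds_to_arrival < msg["seconds_to_change"]

b = make_intersection_broadcast("mill-and-curry", "red", 12,
                                ["accident 3 blocks ahead, detour required"])
print(should_start_braking(b, 8))   # arriving in 8 s, red for 12 more -> True
```

The open problems the thread raises (authenticating the sender, deciding when sensors override the broadcast) are exactly what the sketch leaves out.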

  10. “The technology to drive doesn’t seem like it should be so difficult.” Reminds me of the MIT professor who assigned one of his students to work out a computer vision system as a summer project back in the 60s or 70s.

    1. Perhaps. But there has been enough success that I don’t think the goal is unreachable. What was funny to me back in the 60s was the “common wisdom” that computer language translation was right around the corner. Very important during the cold war when we needed to translate all that Russian. It took a long time for people to understand that computers were very bad at translating idiomatic language use. The old story was “the spirit was willing but the flesh was weak” to “the liquor was good but the meat was spoiled.” That may not be a true story. Although there was the famous British headline: “British Left Waffles on Falklands.” That’s easy for someone in the UK to understand but very few Americans will get the meaning on a first reading and a significant number won’t ever get the meaning unless someone explains it to them.

      But yeah, maybe “so difficult” isn’t the best word choice there. “should be unattainable with current tech” is probably better.

        1. You see [Bill] in the US, the “Left” is the Democratic party, so we don’t have the immediate interpretation as the Left being a political group, so we’d assume left was a verb and waffles was a noun.

      1. I was going to comment on that sentence (“The technology to drive doesn’t seem like it should be so difficult.”). You have no idea how difficult this actually is! Imagine building a (quite advanced) system that needs to work correctly all the time. The scale when it comes to automotive systems is enormous. We’re talking about a domain where 99.9% safe isn’t enough. Not even 99.99% is enough (if you have 100,000 cars that each make a mistake 0.01% of the time, you can expect about 10 cars to have some error). That’s the big challenge. In principle, making a self-driving car is not hard. The challenge is making it robust enough not to make any mistakes when you scale it up, and at the same time affordable enough that people can actually buy it.
        Take it from someone who has actually worked at Volvo with self-driving cars. We used to joke that rocket scientists probably say “Well, at least it’s not self-driving cars”, the same way common people say “It’s not rocket science”.

        Finally, I also think that “should be unattainable with current tech” is a better wording, even though that sentence still carries a lot of weight.
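The scale arithmetic in that parenthetical, spelled out (the fleet size and rate are the commenter’s hypothetical numbers, not real data):

```python
# "99.99% safe" across a large fleet still means a steady stream of errors.
fleet = 100_000
success_rate = 0.9999          # the comment's hypothetical "99.99% safe"
failure_rate = 1 - success_rate
expected_failures = round(fleet * failure_rate)
print(expected_failures)  # -> 10
```

Ten misbehaving cars out of a hundred thousand sounds tiny as a percentage, which is exactly why per-car reliability targets have to be so extreme.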

  11. I wonder, if there were not a human in the car, whether the self-driving car would have even known that it hit somebody. I could see it continuing on its route with entrails hanging off of it.

    We can’t get trains from one place to another 100% safely with technology. Why people think we can come closer with cars is just beyond me.

      1. I’d think, like smart meters, there could be wireless audits of the black boxes’ audit trails, if the software, hardware, and procedures are validated. Maybe 21 CFR Part 11? Or is this some fringe HIPAA/AMA/APA/Bar third-party-regulated operation? Maybe even DOT regulated, or maybe other existing infrastructure systems with audit trails added, if not already there, to determine that correlated operations are valid.
        I’d bet there is an ambulance-chasing type operation going on to endanger for that criminal enterprise. Liberal attorneys are the worst for trying to set case precedent that is unconstitutional, violates statutes and administrative law, and for some reason is considered the standard. That operation is invalid also.

        1. If you mean put black boxes in all the cars that’s another issue I have with most self driving car concepts is my movements are logged
          and I would not feel comfortable with that.
          Such data could be abused and used to silence human rights activists or political rivals etc.

          1. I’d assume, Uber being supported by GM, that there are black boxes in the autonomous driving vehicles; I’m not sure in this case. For self-driving cars… yeah, I’d want a black box for evidence, since it seems I’m always targeted, and I’d like the ability to better substantiate valid details if something happens. There can definitely be a more secure “black box” system that is literally the property of the owner or owners and is in fact private property. Now, with corrupt lawyers muddling up the system, and defense forces being allowed to not perform their duties and uphold the U.S. jurisdiction Constitution and statutes, for some foreign interest’s executive privilege that violates our U.S. citizens’ rights and is for something invalid or mentally ill that threatens our health, safety, welfare and well-being… then that is the real issue to me. Partisan terrorism is not what a first-world government needs to be about; it seems to me more a bait-and-trap scheme, actually, to see who will compromise our rights and really assault and/or invade us and our property.

    1. And that’s why the machines are dangerous.

      Consider that most people on the road are better drivers than the average driver.

      How is this possible? Because the probability of accidents isn’t evenly distributed among the driving population. If you measure “betterness” by how many accidents you’ll cause in your lifetime, the number comes out lower than the average as calculated by the total number of accidents divided by the total number of drivers.

      In other words, the bad drivers are so bad that they cause multiple accidents each. They’re repeat offenders. They drag the average score down for the rest of the people. It’s the same thing as how you’d be mistaken to believe that in a bad neighborhood anyone you meet might mug you, when in reality there’s only a handful of individual criminals responsible for most of the crime.

      So when the corporate bean counters argue in court that their cars are safer than the average driver, what they mean is, they’re only slightly worse than most drivers on the road.
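
The distribution argument above can be demonstrated with a quick simulation. The accident rates below are invented purely for illustration (a 90/10 split between careful and bad drivers); the point is only that with a heavily skewed distribution, most drivers really do beat the mean:

```python
import random

random.seed(42)

# Hypothetical population: 90% careful drivers averaging 0.5 lifetime
# accidents, 10% bad drivers averaging 6.0. These numbers are invented
# to illustrate the skew, not real crash statistics.
drivers = []
for _ in range(100_000):
    avg = 6.0 if random.random() < 0.10 else 0.5
    drivers.append(random.expovariate(1.0 / avg))

mean = sum(drivers) / len(drivers)
median = sorted(drivers)[len(drivers) // 2]
share_below_mean = sum(1 for d in drivers if d < mean) / len(drivers)

print(f"mean accidents per driver:    {mean:.2f}")
print(f"median accidents per driver:  {median:.2f}")
print(f"drivers better than the mean: {share_below_mean:.0%}")
```

Under these made-up numbers, roughly four out of five drivers cause fewer accidents than the mean, which is exactly the "most people are better than the average driver" effect described in the comment above.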

  12. The human eye is far better than any man-made camera, and as such, a diligent driver would have seen someone moving up ahead in the shadows and slowed down.

    I have no idea why people are so quick to embrace this obviously flawed technology. Sure, there’s the exciting realization of a fantasy, but dreams rarely, if ever, mirror reality. There is a huge amount of literature and cinema focused on the perils of embracing technology so completely that other ways of living were forgotten generations ago. (I, Robot, Robot Dreams, Robot Visions, The Caves of Steel, The Naked Sun, The Robots of Dawn by Isaac Asimov, and 2001: A Space Odyssey by Arthur C. Clarke, to name a few.) The problem is that nobody actually took the time to reflect on these, dismissing them as fictional entertainment.

      1. So much of this is relative. I went a long time without colliding with a deer until I had such a collision. I didn’t see its head until I was able to switch from low beam to high beam; no chance to react at all, other than the thought “that deer is dead” entering my mind as I hit the brakes in case there were others on the roadway.

      1. This. I recently found out a friend has all sorts of vision defects, from colour blindness to blind spots to close objects not combining properly (1 metre instead of nose-on-your-face distance). Sure, their eyes are defective, but they thought their vision was normal.

        The only safe statement is that human eyes have better handling of extreme light differences: they work in moonlight or direct sun.

        1. The brain is pretty clever at compensating for poor vision. Colour blindness itself isn’t a critical flaw, and neither is the lack of stereoscopic vision; the car doesn’t have it either, because it’s more difficult to implement reliably than radar. Blind spots are handled by the brain by constantly moving the eyes around and stitching the resulting image together, because our sharp vision actually covers only the size of a dinner plate at a couple of meters’ distance.

          Meanwhile, human eyes have motion detection built in to the retina, so the brain doesn’t have to compare multiple snapshots over time to figure out what’s moving. The eye does it continuously, and effectively overlays the information into the image that is being sent to the brain. That’s another advantage over the digital camera that has to do things the hard way.

          https://www.sciencedaily.com/releases/2015/06/150616190723.htm

      2. As relevant to the case, the human eye has far wider dynamic range, being able to see in contrasting light without losing detail in shadows. We see in moonlight, where digital cameras turn practically blind, and to add insult to injury, the AI fails because of the massive amount of noise in the image, which prevents any sort of image recognition.

        1. Is this some sort of joke? Humans have terrible night vision, especially humans behind the wheel of vehicles, who have lighted instrument panels pointed at their faces. Humans are also distracted by passengers and such, and their image sensors may not be pointed at the road, but rather at the baby screaming in the back seat, or the spouse making unauthorized adjustments to the radio.

        2. You never heard of LIDAR and thermal cameras, I guess. Also: explain to me why so many cars have accidents with deer during the night, despite the spectacular human vision. Those drivers clearly weren’t diligent enough!…

          1. Because the headlight beams shining in the deer’s eyes cause the animal to freeze up instinctively.
            Ever heard of “lamping”? That’s the practice of hunting in the dark, using a powerful lamp shone at the animal to freeze its position while the hunter gets a steady shot in (please, let’s not discuss ethics here).
            I also saw the deer I hit; it stopped at the roadside and then crossed suddenly anyway as I was parallel with it, so it didn’t freeze up from the headlights. I slow down even when I know the animal has seen me and I have accounted for the possibility of running it over, but that day I couldn’t quite get rid of all of my remaining speed in the distance because the road was wet, and what I didn’t want to do was lose control completely and slide off into the massive roadside ditches we have, with my son in his seat behind me.

            For the autonomous car thing, if we really cared about safety enough to completely phase out human drivers, we could achieve much higher gains by stopping people talking on mobile phones while driving.
            Every day I see multiple instances of terrible driving, and each time the driver has a phone glued to their ear. I get in a car or on a bike, I’m out of comms, yet my wife would happily take a call about stupid stuff or ring people to pass the time on a long journey. On a motorcycle I don’t even listen to music, I just make the journey and have no real sensation of the duration of the journey as I’m focusing on everything around me that might kill me. When I see people getting off and seeing earbud leads out their helmets, I wonder if they realize the level of concentration not being killed by someone else might take up that day…

    1. The human eye is not better at detecting stuff than a high-end camera. We can make (and are making) things that are much better at “seeing” than a human eye.
      What makes a human a “better” driver in some cases is our very excellent, hardware-accelerated image processing and classification system. That’s something we are still struggling to solve using computers. We’re getting there, but not quite yet.

      1. Well, there are cameras better at some things, but there are no cameras that can deal with contrast differences as well as the human eye; at least not ones that don’t cost more than the car.

  13. O/T a bit, but my fear of self-driving vehicles is the potential for them to be hacked and weaponized. The FBI is clearly afraid of this too, as news outlets report they’ve been looking into this possibility. Imagine the damage of one well-timed “go crash” attack on a hundred thousand vehicles across the country.

    1. A nation-state-backed group or a terrorist organization with deep pockets may be able to pull that off, and they could also spoof GPS signals; even the Iranians have done this well enough to capture a UAV.
      Another danger: having all transport automated would make society more fragile, as manually driven vehicles will still work if communications infrastructure is knocked out by a natural disaster, but automated ones may not.

  14. Tragic event, for sure. The thing is that not only do the cars have to be changed to secure autonomous vehicle movement; the roads and surrounding areas should also get an overhaul.

      1. Reminds me of a demotivational poster that had a picture of the rear end of a horse sticking out of a car’s windshield. Both totaled, as it were. I’m thinking just how bad a driver one has to be for that to happen.

    1. My cousin was hit by a drunk driver and survived in AZ. My grandfather did not in CO. Both incidents happened in mountain time zone. OK, probably not the same issues based on what I just wrote… I guess? I want to say another cousin’s husband was hit in Virginia and a friend down the road downtown too. Happens. Look both ways before crossing the street for sure and like some say… stay off the streets just to be safe… unfortunately.

  15. No surprise that it’s Uber involved here. You can see the design choices the different companies have made in their research. Google follow the rules absolutely, pootle around passively, and get regularly rear-ended. Uber, however, are far more aggressive. They’ve already had one crash running a red light, and it was reported that this car was going marginally over the speed limit. Because they reckon everyone does those things, I guess.

    Why do people still reckon that if we ever get to a stage where all cars are self-driving, they will all communicate among themselves for the best mutual outcome? Isn’t it more likely that cars running Uber software will lie, cheat, and regularly zoom to the front of the queue and barge in? And given that, who is going to want to buy the car with the Google software, which will be having sand kicked in its face every mile of the journey?

  16. Autonomous car crash: software bugs can be found and hopefully eliminated, decreasing risk of such accidents in the future. Human-operated car crash: how many accidents do we need before people learn to drive more carefully?

    1. Just one. Anecdotally, if you have a crash, you drive more carefully in future. The same will happen here: the self-driving software will be updated to be more careful. The problem in meatspace is that we have millions of parallel operating systems learning to drive. The next generation of meat-puppet software doesn’t automatically get the lessons learned. It has to start from scratch.

    2. @SQ2KFN: “software bugs can be found and hopefully eliminated”.
      No way… As a person working at an automotive Tier 1 company, I see my colleagues working on these features. One thing is that there’s no such thing as software without bugs; the second is the diligence of the engineers working on it, and the pressure from management/marketing/accounting to get it done…

      @Conrad: No. The next gen of the software also does not automatically get “lessons learned” implemented…

  17. It is interesting to compare the response to accidents from the early days of trains and cars with those of autonomous cars. People were saying that the technology should be banned. The evidence is that autonomous cars and semi-autonomous cars will save many, many lives. The one shortfall of the current technology, when compared to the best human drivers, is that humans are very good at reading small signs of intent. They can judge that a stationary person might step off a sidewalk by body language and other small signs. They can then take avoidance action just to minimize the possibility of an accident. Autonomous technology is not there yet, so Uber will not be able to get rid of the need for those pesky drivers just yet.

  18. “Details are sketchy, but preliminary reports indicate that the accident was unavoidable as the woman crossed the street suddenly from the shadows at night.”

    Wow! I don’t know what usually happens in court, but at least in the local news reports where I live, they NEVER admit that it can be the pedestrian’s fault. The pedestrian could be standing in a shadow at night wearing dark clothes and jump right into the oncoming vehicle with perfect timing (and they seem to like to do that here), and it’s still a cry for the driver’s head on a pike!

    1. the dashcam video is now out, and no, she didn’t just ‘jump’ out into a vehicle’s path; she had WALKED across one lane and was almost across the other before she was hit. I think a human would have done better in this case.

  19. I do agree that autonomous cars are coming, but as mentioned implicitly, there is a big gap between technology logic and human logic/perception/intuition.
    I would think that if you allow only autonomous cars on the street, then you could expect MUCH fewer accidents, assuming all of them use the same logic based on strict, non-breakable rules. On the other hand, humans are humans, with intuition, emotions and character; there are people who stick to the rules and people who just feel nothing when breaking one (red lights, parking zones, etc.). At least that is what I see here in Argentina, where I currently live. (I have been in other places with much calmer societies, and it can be a little different, but still the same to some degree.)
    My opinion is that the problem is the mix of both. Of course, if you leave only humans, you can’t expect things to change (as Einstein said, if you want different results you should do something different). It doesn’t mean that including autonomous vehicles won’t reduce accidents, but unless a machine can predict the future with certainty, mixing autonomous and human-driven vehicles could reduce accidents but will not stop them from happening, and will not reduce them as much as if all the cars were autonomous. (Said by a guy who loves driving manual-transmission cars.)

    1. The actual words I used were: “held far more accountable” — I agree I can hold the machine to a higher standard. But it shouldn’t be a standard of defying physics or that anyone who gets hit by one gets a million dollars at a minimum.

  20. “That doesn’t mean machines will drive in the same way as a human.”

    There’s also something not being said: the environment adapting to the autonomous vehicle. For example, signs coming with RF tags, where the what and the where are addressed. Or those reflectors embedded in the road, changed enough to let vehicles know where they are even under snow and ice. Adaptation is nothing new. Just look at the signs in Florida. The “senior” state.

    1. You can’t rely on RF tags; they can fail, get stolen or sabotaged. You will need to be able to read the sign and understand it. As long as the sign is there and readable by a human (even if shot up, rusted, bent or partially covered with graffiti), your image recognition software needs to be able to handle it.

      1. There should be an in-road transponder or network talking to the cars, so that the signs are just a fallback. You can’t put 100% of the improvements in just the vehicle. The road, and the city need to change as well.

        1. Country roads, unchanged since the dawn of recorded history, are accessible to pedestrians, horses, bicycles, motorcycles, and farm equipment, but not the vehicles in your universe.

          1. The city is, and always has been, where most vehicles are, and where the VAST majority of vehicle interactions are. Country driving is likely to be the last place taken over by self-driving vehicles.

      2. Right there is the Argentinosaurus on the front lawn: with a smart highway and networked self-driving cars, they will get hacked.
        Terrorist groups and nation-state-backed entities could make a worm that spreads through such a network, either disabling the vehicles or ordering them to crash into things like power substations.
        If the local police can order one to stop with a device in their cruiser, then doing at least that will quickly become something in the realm of script kiddies.
        Though there is one solution to the tag problem: you can encode information into a road with simple magnets. These would be harder to tamper with, since tampering would require physically digging them out or having a device capable of making extremely powerful magnetic pulses; a device capable of the latter could just crash the self-driving car’s on-board computers anyway.

  21. Had a near miss a while back, for some reason (ESP?) I slowed down just enough on a road coming up to a junction to avoid a motorcyclist going round a corner on the wrong side of the road. A robot car would have done the speed limit and splattered them.

  22. “Don’t get me wrong; autonomous cars shouldn’t get a free pass. But they probably shouldn’t be held far more accountable than a human driver, either.”
    Wrong! Wrong! Wrong! One of the key promises of autonomous cars is that they will be far safer than human drivers and should be held to that claim. If they are not held to a higher safety standard, then autonomous cars will just be another obstacle for human drivers to avoid and ultimately, will make the roads more dangerous.

    1. Well, as I mentioned above, if you read your quote, it says “far more” — I am not saying we shouldn’t expect more, but reasonably more. So if every time an auto-car has a wreck the company involved gets held up for a million dollars no matter what, that’s far more. The key word there is “far”.

  23. “Data from 2016 shows that just under 40,000 people a year die in United States traffic fatalities ”
    “In 2016, there were 325 deaths worldwide due to commercial air travel.”

    People should stop using this erroneous rhetoric comparing apples to oranges. No one cares how many deaths there were in a year. What everyone cares about is “what are the chances of me getting into an accident on my next trip, and what can I do to avoid it?” If you start comparing the number of deaths per trip/per hour of travel/per person, the numbers will be quite different. Basically, if you assume that car travel is, say, only 100 times more common than air travel, you’re already on the same order of magnitude.

    1. I don’t agree. It is like optimizing software: no matter how inefficient something is, if it takes 2% of your compute time, you can only recover 2% out of it. So if I could eliminate all airline deaths worldwide, I save 325 people. If I could eliminate all traffic fatalities in the US alone, I save almost 40,000 people. So it is still a valid comparison when the scale is human lives. Sure, I can cook the scale to show what I like. But comparing miles traveled, for example, is definitely apples to oranges, because planes don’t fly 1 mile to the store every 15 minutes. Now if you want to bust my chops, I didn’t look up the worldwide traffic fatalities, but that just further makes my case, since the real number, scaled properly, is even worse. According to WHO, in 2013 there were 1.25 million traffic fatalities worldwide.

      I think human life is the right scale. Regardless of how you want to project it with statistics the fact remains that you have 1.25 million to save on one hand and 325 on the other. Which would you rather save 1% of?

      1. “planes don’t fly 1 mile to the store every 15 minutes” This is exactly the point. If you’re trying to answer the question of “where to invest” or “how can I save the most lives”, then yeah, just look at the absolute numbers and this is a totally valid use of this data.
        But these metrics are often (as in this article) used in a context of “what is a safer method of transportation for an individual”, and this is where it breaks apart.
        Taking your software analogy, it would be like Windows OS vs my homebrew note taking app that I use once a week. Windows finds and fixes 1000 bugs a year, while I fix 10 bugs in my app. Is my app 100 times more stable than Windows? Sure, 1000 bugs in Windows impact more aspects of life for humanity, so it’s probably more beneficial to solve those. But my app, turns out, errors out pretty much every 4-5 times I use it. So I would conclude my app is much crappier (less safe to use) than Windows.

        1. You just made my exact point. Sure, your homebrew app is less safe, but the benefit of fixing your note-taking app compared to fixing even 1 serious bug in software used by millions is marginal at best. Again, it depends on your scale. Is it quality or is it quantity? It could be that exploding hoverboards are more dangerous than driving a car (I don’t know — just a made-up example). But fixing that saves only a handful of people, so is it worth prioritizing the “more dangerous” thing that kills dozens over the thing that is killing thousands? The needs of the many… (with apologies to CPT Spock).
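
The two framings being argued in this thread (absolute death counts vs. per-trip risk) can be put side by side with a few lines of arithmetic. The death counts are the figures quoted in the thread itself; the trip counts are rough placeholders made up purely to show how the choice of denominator changes the answer, not real statistics:

```python
# Death counts quoted in the thread above.
road_deaths = 1_250_000   # WHO estimate, worldwide, 2013
air_deaths = 325          # commercial aviation, worldwide, 2016

# Denominators below are invented placeholders for illustration only.
road_trips = 1_000_000_000_000   # assume ~1 trillion car trips per year
air_trips = 4_000_000_000        # assume ~4 billion air passenger journeys

# Absolute framing: which pool of deaths is bigger?
print(f"roads kill {road_deaths / air_deaths:.0f}x more people in total")

# Per-trip framing: which individual trip is riskier?
road_risk = road_deaths / road_trips
air_risk = air_deaths / air_trips
print(f"one car trip is {road_risk / air_risk:.0f}x riskier than one flight")
```

Both statements come out true at once under these assumptions; which one matters depends on whether you are allocating safety investment (absolute numbers) or choosing how to travel (per-trip risk).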

    2. The NHTSA and Consumer Reports both publish lists of safe and unsafe cars, and many of the unsafe cars are quite popular, so I don’t think you can say that “everyone” cares about getting into an accident on their next trip and what they can do to avoid it, because people go out of their way to buy cars that are well known to be unsafe.

      1. But these rankings are relative. A genuinely good driver in one of the rated “unsafe” cars will have a better safety record than the average mediocre driver in the safest of vehicles.

  24. I think there needs to be, first and foremost, more MASINT/RADINT/RINT-type ES/TS EW weapons systems exposure, so the public knows what to measure and can install RDF and potentially active, and not just passive, shielding to defend from such potential hacks.

    I don’t think the autonomous cars are a bad idea… I think the safety needs to be upgraded in general for the public’s health, safety, welfare and well-being, against a range of remote sensing and transmission assaults with intent to maim, murder and mass murder.

    Reminds me, what you’re thinking is kind of like this worst-case scenario: https://www.youtube.com/watch?v=qF3rz6MG7Sg

  25. We will see more and more of these types of incidents, most likely not fatalities, as the technology is further introduced, and they will abate as it is refined. People regularly do things to put themselves in danger, often self-justified as “I’m late,” “I’m tired,” “I’m in the right,” etc., or just not paying attention. YouTube is awash with videos of people demonstrating this very fact, and developers of this technology need to keep that in mind, which I’m sure they do.
    It is entirely possible that the accident would have happened anyway, whether the car was driverless or not.

  26. You know who else killed people during experiments without asking the people if they wanted to participate? The Nazis. How is Uber any better? They even have a name that could have been taken from Nazi philosophy.

  28. I know! People die with seat belts on – so the obvious answer is that we should remove them!

    The trouble with this is that auto cars aren’t a technology problem (now); they are a society and regulation problem. And history shows those are much harder problems than technology ones…

    1. Well said Ian – you hit the nail right on the head. It’s more about the ‘Greater Good’ for society, not engineering minutiae. If you want a more sensible conversation ask a lawyer / politician to join and not so many hackers.

  29. I would rather kill 5 people than myself while driving a car, and I am sure that if all cars were computer-controlled, the number of people who die would be the deciding factor. Disgusting isn’t it?

  30. If there was working technology for self-driving cars, most of you folks would be unemployed.
    So no, there is no such technology, and there will not be for a very long time.
    Computers cannot reliably recognize things, their function and behaviour.
    And Google, with its massive resources that for sure won’t fit into your car trunk, still can’t do a translation worth beans. If you don’t believe it, check any Chinese equipment instruction manual.
    I work on large SW and HW projects, and what I see is a race to the bottom: trying to find the cheapest and crappiest place to outsource production to, with no one giving a flip about quality. Testing? Let the customers do it for you.
    And you want to hand over your and your family’s lives to such an industry? Think again. I’d rather have a chimp driving my car. Way smarter than a computer.

  31. Until aircraft are fully autonomous, comparing autonomous cars to autopilot is invalid. The comparison to modern fighter jets that are so complicated they need computers to activate the actual controls is also invalid, because the computers are responding to a human’s commands. No doubt, if personal transportation survives, fully autonomous passenger vehicles will exist, but it’s going to be an evolutionary process. Along the way, the new tech will first supplement the human driver. In this example, perhaps thermal imaging could have begun the avoidance process before a human driver could have seen the danger. The question is: can we afford the technology, and can we put a price on human life?

  32. Now that the dashcam video is out, this is clearly a self-driving FAIL. A walking pedestrian was almost across two lanes of traffic when she was struck and killed. That should have been more than enough time for a credible self-driving car to detect her – by LIDAR, radar, infrared, whatever.

    Even assuming (I don’t) that a human wouldn’t have detected her sooner, I believe a human would have swerved and we’d have a trashed bike and an injured pedestrian instead of a fatality.

    If a self-driving car can’t detect a walking pedestrian on a clear road, regardless of light conditions, it shouldn’t be out on public roads yet. They HAVE to be better than humans, especially on a simple, straightforward situation like this, or there’s no point.

  33. Just saw the video. Got to say, that was almost certainly the worst spot I have ever seen someone try to walk across. Why do that?! What was she thinking?! After seeing that, even as a driver of 15 years, I found it just insane that a person would cross in such a horrible spot. In my other post I talked about the 1% I worry about, and boy did she show what I mean (so sad). But her actions don’t remove all fault from the robot, which is what’s driving, after all.

    From the time she becomes visible at 6 seconds to impact at 8, the car made no emergency lane change. I have had to do this before at highway speeds, and it’s not as hard as you think. At those speeds, very little wheel movement is needed. Overt movement of the wheel is the #1 reason emergency lane changes end in rollovers and death. But doing nothing? That’s a guarantee of death. A “few” things went wrong: 1. The radar does not appear to have picked her up; shadow or not, she should have been picked up on radar long before she appears on video. 2. When she’s finally on video for two whole seconds, the car continues to move forward and does not change lanes. Which brings me to my conclusion: the robot got confused by the way she held her bike and walked it (because it’s not human and can’t truly comprehend). It appears to have concluded the person was actually a car or motorcycle that would speed up, and, from the way she was holding her bike, thought she was merging traffic destined for the third merge lane up ahead. What a disaster.

    https://youtu.be/RASBcc4yOOo?t=4
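
The timing argument above can be sanity-checked with a back-of-the-envelope stopping-distance calculation. The speed, friction coefficient, and reaction time below are assumptions for illustration, not figures from the actual incident report:

```python
# Rough braking sketch; all inputs are assumed values, not case facts.
V_MPH = 40.0        # assumed vehicle speed
MU = 0.7            # assumed tire-road friction (dry asphalt)
G = 9.81            # gravitational acceleration, m/s^2
REACTION_S = 0.5    # assumed detection-to-braking delay

v = V_MPH * 0.44704                  # convert mph to m/s
braking_dist = v ** 2 / (2 * MU * G) # kinematics: d = v^2 / (2*mu*g)
total_dist = v * REACTION_S + braking_dist
time_to_stop = REACTION_S + v / (MU * G)

print(f"speed:                   {v:.1f} m/s")
print(f"braking distance:        {braking_dist:.1f} m")
print(f"total stopping distance: {total_dist:.1f} m")
print(f"time to full stop:       {time_to_stop:.1f} s")
```

Under these assumptions a full stop takes roughly three seconds, more than the two seconds of camera visibility discussed above; which is why shedding speed early and a small lane change matter more than expecting a complete stop.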

    1. Thermal cameras need to be implemented on these things. A thermal image would have seen her and noted her direction of travel and possibly prevented the impact.

      But boy, you ain’t kidding. She could have crossed at any well lit area, but didn’t. Plenty to choose from on the road.

    2. I agree it looks like the car didn’t figure it out. Did you notice that there was a moving vehicle in the distance in a line between this dash cam and pedestrian? I wonder if that factored into it? The lidar picked up the object but optical said it was moving faster because of the background object? Just wild speculation on my part, of course.

  34. OK, there is a lot of focus on what “the car” should have done, but what about the pedestrian? She obviously didn’t see the car coming and clearly did no more than look at it.

    A car driving down the road with its lights on is infinitely more visible than a pedestrian in the dark.

    There were two entities involved here, and neither appeared to take any evasive action. The car was where it was supposed to be; the pedestrian wasn’t.
    No matter how clever a system is, you can’t account for someone else’s stupidity.

    1. I think the gal, potentially being homeless, was remote-controlled to cause the incident, similar to the Uber driver in Kalamazoo, as he clearly noted, though in a potentially delusional way, since he wasn’t trained on the exact procedure used by his handlers, either HUMINT or HUMINT-like remote operatives.

      Reference to CNN Kalamazoo Uber Driver Shooting Suspect: https://www.cnn.com/2016/03/14/us/kalamazoo-shooting-suspect/index.html

      Methods to inhibit healthy, safe, welfare and well being technological advances by primate predatory mass murderers that want people to believe they are telling the truth, cause fake news and incidents to compel the Conspiracy and Deprivations of Rights Under Color of Law by Continuing Criminal Enterprises:
      https://www.youtube.com/watch?v=4jJebz5dsA8&lc=

      This device in particular, like V2K Silent Sound Subliminal Methods, is worth looking into as well as the rest of John Williams site before pan troglodytes brute force destroy the U.S. Jurisdiction et.al.: http://www.lonestarconsultinginc.com/mindcontrol1.htm#SCANNERSDEVICE

  35. From the death-incident video: if my headlights were that dim, I’d be driving slower, with my spidey senses on. Perhaps the computer prioritized the comfort of the hair-job driver over a “homeless unit”. What a douchebag future.
