The Predictability Problem With Self-Driving Cars

A law professor and an engineering professor walk into a bar. What comes out is a nuanced article on a downside of autonomous cars, and how to deal with it. The short version of their paper: self-driving cars need to be more predictable to humans in order to coexist.

We share living space with a lot of machines. A good number of them are mobile and dangerous but under complete human control: the car, for instance. When we want to know what another car at an intersection is going to do, we think about the driver of the car, and maybe even make eye contact to see that they see us. We then think about what we’d do in their place, and the traffic situation gets negotiated accordingly.

When its self-driving car got into an accident in February, Google replied that “our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.” Apparently, so did the car, right before it drove out in front of an oncoming bus. The bus driver didn’t expect the car to pull (slowly) into its lane, either.

All of the other self-driving car accidents to date have been the fault of other drivers, and the authors think this is telling. If you unexpectedly brake all the time, you can probably expect to eventually get hit from behind. If people can’t read your car’s AI’s mind, you’re gonna get your fender bent.

The paper’s solution is to make autonomous vehicles more predictable, and the authors mention a number of obvious fixes, from “I-sense-you” lights to inter-car communication. But then there are aspects we hadn’t thought about: specific markings that indicate the AI’s capabilities, for instance. A cyclist signalling a left turn would really like to know whether the car behind has the new bicyclist-hand-signal-recognition upgrade before entering the lane. The ability to put your mind into the mind of the other car is crucial, and requires tons of information about the driver.

All of this may require and involve legislation. Intent and what all parties to an accident “should have known” are used in court to apportion blame, in addition to the black-and-white of the law. When one of the parties is an AI, this gets murkier. How is anyone to know what the algorithm should have been “thinking”? This is far from a solved problem, and it’s becoming more relevant.

We’ve written on the ethics of self-driving cars before, but simply in terms of their decision-making ability. This paper brings home the idea that we also need to be able to understand what they’re thinking, which is as much a human-interaction and legal problem as it is technological.

[Headline image: Google Self-Driving Car Project]

93 thoughts on “The Predictability Problem With Self-Driving Cars”

      1. On my last trip out to the Bay Area, I watched in awe as Google cars prowling around the area nearly caused several accidents. They often did exactly as you said – slammed on the brakes for no apparent reason, leaving the human behind them scrambling to stop in time. The cars moved *very* slowly, so there was often a backup of cars behind them. The first car behind could see there was a Google car, so they kept a wide berth most of the time. I know I certainly did. The cars behind that one, though, only knew there was a slowpoke up ahead, and in normal human fashion tailgated to express their “why the hell are we going 15 in a 30 zone?”

        The Google car hits the brakes, and a wise human behind them has plenty of time to stop slowly and avoid a chain reaction behind them. If the nearest human is stupid, though, they have to slam on their own brakes (spilling their double mocha latte in the process), and it’s a wonder that more of those situations didn’t result in a pileup. Of course, the Google car could easily have escaped being part of the melee even though it was usually the primary cause.

        I think that, practically speaking, we’re going to have to have ‘autonomous vehicle’ lanes on the highways so the cars can go about their business without humans around. Once they can talk to each other via radio, they can better keep a reasonable traffic speed and not have to worry about humans doing anything unpredictable. On the surface streets, though, expect to see more accidents. People can be awful drivers, but the system we’ve put in place suits us better than it does machines.

        1. So basically, Google has unleashed a horde of robotic elderly drivers. Maybe installing them exclusively in old Cadillacs would successfully signal to other drivers what to expect. (I’m only half kidding.)

        2. I live in the world’s autonomous hot-zone. Autonomous vehicles (95% of which are Google cars) are everywhere, often with more than one waiting at the same intersection. Not once have I seen the type of driving behavior that you describe. If anything the most recent tweak of the software has made the cars too aggressive.

          Having driven around them I have had lots of time to play games with the algorithms. Case in point: on a four lane road, the G cars will actively drive defensively by changing speed to avoid being trapped next to another car.

          As a frequent cyclist, I am *much* more comfortable with the autonomous vehicles than the human piloted vehicles.

        3. I’ve been living in Mountain View, CA and I see the Google cars all the time. However, I have never seen the behavior you are describing. I’m not going to go into more detail, but basically, I feel like maybe you got a small (and bad) sample of the pie. See the other comment about these cars from another native above.

        4. Where slowpokes cause everything from irritation to outright havoc is on two-lane highways with plenty of curves and too much traffic to make passing easy and safe.

          They poke along at 10 to 15 MPH under the limit with a long line of vehicles backed up behind them. Then, when someone who wants to drive at the speed limit gets impatient and tries to pass… *POW* into an oncoming vehicle, while the slowpoke merrily drives away without a ticket or a huge fine. Even worse than an ordinary slowpoke is one who mashes their foot down when anyone attempts to pass. Then there are the pulse-and-glide hypermilers who refuse to drive at a steady speed, making it impossible for anyone behind them to set their cruise control on the speed limit.

          Most states have traffic laws about impeding the flow of traffic or failing to keep up with traffic. In Idaho, if you have three or more vehicles backed up behind you on a two-lane road, you are supposed to pull over at the first safe place and allow them all to pass. The police almost never enforce the slowpoke laws in any state that has them. They focus on writing speeding tickets, which often get handed out to motorists attempting to pass the “You’ll never pass ME!” accelerator-stomping slowpokes.

          There needs to be a “driving like an a-hole” ticket with a fine double the highest speeding fine. This should also include tailgaters who will not pass vehicles going slower than the speed limit. There have been many times when I’ve been going the limit and some jackhole in a lifted pickup has zipped right up on my rear bumper, and I’ve had to pull over and stop before he’d pass. It doesn’t matter if it’s a dead-straight two-lane highway with no other vehicles clear to the horizon or a multi-lane freeway. That sort of jackassery needs to result in a license suspension until the driver takes a full driver training course with special emphasis on “how not to drive like a jerk, which may get you shot in the face by someone you’ve pissed off by driving like a jerk.”

      2. But is it any less predictable than a human-controlled car that could brake unexpectedly? No matter what or who is in control of the car that brakes unexpectedly, if an accident results, it’s the fault of whoever is in control of the car that hits the braking car. Unless it can be proven the lead car braked with the intent to cause an accident, and good luck with that.

        1. That is what following distance is for.

          The way my dad put it when teaching me how to drive: you should always expect that any car in front of you, regardless of speed, may become stationary very rapidly. Therefore:

          -Know the stopping distance of your vehicle (and how this changes with heavier loads or wet conditions).

          -Accurately estimate the distance to the car in front of you (a three-second count against a stationary object is a rule of thumb, but it needs to be increased in a heavy van, for example).

          -Be wary of changing lanes into other people’s stopping distance. That large space in front of a semi truck isn’t open; that is his stopping distance. (As my uncle, a truck driver, says: it will not stop on a dime, but it will stop on a hatchback.)
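
          The physics behind those rules of thumb is easy to sketch. Below is a minimal back-of-the-envelope calculation; the reaction time and friction coefficients are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope stopping distance: reaction distance plus braking
# distance. Reaction time and friction coefficients are illustrative guesses.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_kmh, reaction_time_s=1.5, friction=0.7):
    """Total distance to stop from speed_kmh on a surface with the given
    tire-road friction coefficient (roughly 0.7 dry, 0.4 wet)."""
    v = speed_kmh / 3.6                    # convert to m/s
    reaction = v * reaction_time_s         # distance covered before braking
    braking = v ** 2 / (2 * friction * G)  # kinetic energy vs. friction work
    return reaction + braking

for kmh in (50, 100):
    print(f"{kmh} km/h -> dry: {stopping_distance_m(kmh):.0f} m, "
          f"wet: {stopping_distance_m(kmh, friction=0.4):.0f} m")
```

          At 100 km/h on a dry road that works out to roughly 100 m, which is why a time-based gap scales so much better than a fixed distance.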

  1. “A cyclist signalling a left turn would really like to know if the car behind has the new bicyclist-handsignal-recognition upgrade before entering the lane.”

    At least in Europe, this would be the wrong approach. The cyclist wouldn’t have to think about it, because without the recognition there’d be no way to get your rotten autonomous SUV homologated.

        1. From my experience as a road cyclist, I’d sooner be on the road than the pavement. Car drivers are much more predictable than pedestrians. When you’re doing 20mph+, a meandering pedestrian is a nightmare, and you’ll cause serious injury to yourself and them if you hit them. Cars can’t suddenly move at right angles, and very rarely stop abruptly, and never without good cause, which as a cyclist with your head a bit higher up (UK – most cars aren’t hummers) you can usually see before they can.

          1. Well, you don’t train for the Tour de France on a footpath, if that’s your thing.

            You got other cyclists to worry about as well, because not everybody wants or is able to go 20+ mph. Trouble is, if you put everyone on the road with the cars, you have to be an athlete just to get along. That means the kids, the elderly and those in less than excellent shape can’t join in. I don’t want to own a 32+ speed special alloy bike that costs hundreds of pounds and gets all mangled up from the slightest bump, and pedal myself sweaty with it every time I go downtown. I got a 7 speed hub and all-year tires so I can ride in the winter without slipping, and 10 mph is plenty enough.

            As a corollary, from what I’ve experienced of the UK, the cyclists are quite mad as well.

    1. That would be my expectation, that self driving cars would have a problem with human drivers. I’d not be surprised if these incidents of a self driving car being unpredictable are the car reacting to something it didn’t allow for from another driver. Sudden changes in course or speed are still asking for trouble.

    2. “people are less predictable than machines.”

      Nevertheless, people have become experts in predicting other people, and from the chaos arises order. The problem isn’t unpredictability, but the unexpected and different behaviour of the robot cars, which causes them to break the overall pattern, and that causes mayhem.

      Even if you can predict how the AI will drive, unless it drives like a person it’s going to cause trouble simply by being different.

      1. Sure, something new will cause problems. But it won’t be new for that long. Aside from the fact that things will definitely improve as the vehicles are further developed, the current vehicles are set to extremely cautious parameters for obvious reasons. Production vehicles should integrate much better, and in the ways that they don’t, humans will quickly adapt.

  2. Why not just program the AI to act like an asshole to bikers and pedestrians? That would be just normal and totally predictable for anyone riding a bike in city traffic…

  3. The more complex the signaling system, the more time it takes for a human driver to evaluate it, which in many instances would defeat the purpose. Especially if a driver is seeing it for the first time.

    It seems like the simplest method already exists, yet has been neglected. Emergency vehicles use flashing lights to let other drivers know they might perform movements outside of the norm. Some bicyclists also use them to draw additional awareness. It works. Why isn’t this used on self-driving cars? Designate a color not already used, perhaps purple, and put them on there already.

    1. Really, the point of a flashing light is to get drivers to pull over, either for a tug or just to get out of the way.
      You don’t have to put up with them for long, unlike bicycles lit up like a Christmas tree.
      Flashing amber on slow vehicles is nowhere near as distracting, but it’s pretty pathetic: if you can’t react and stop within the distance you can see in front of you, then you are going too fast.
      So the fewer flashing lights the better, in my opinion.

      Self-driving cars are all wrong anyway. Industrial robots follow buried wires; why not do the same thing with roads? We already waste enough money on ‘smart motorways’ that do nothing.
      If robot cars can’t safely drive on roads they shouldn’t be allowed to. It’s the same reason we don’t let trains use the motorway: it wouldn’t be safe. Same with pedestrians: if you can’t cross a road without getting run over, then it’s your own fault.

      1. “if robot cars can’t safely drive on roads they shouldn’t be allowed to.”

        I don’t think anybody disagrees with that statement. The disagreement comes from the belief that autonomous vehicles CAN and will drive safely on the roads, more safely than human drivers.

        Adding sensors to the roadways has been looked at for half a century, and dismissed as impractical for a number of reasons.

  4. Given it pulled in front of a bus that was in the lane, it seems they mostly failed to give it an understanding of right of way and to yield to the bigger vehicle. In a car? Yield to the semi. In the semi? Yield to the train. Instead it simply and slowly pulled in front of someone who had right of way given they were in a lane with traffic.

    Actually, scratch that: it drives perfectly acceptably given what I see with Bay Area drivers, SoCal drivers, and to a lesser extent CA drivers.

    1. Sounds like it expected to be able to pull into the lane, then immediately accelerate, which wouldn’t be so bad. Instead it saw something unexpected: sandbags. It couldn’t tell if the sandbags were a passable obstacle, so it continued into the lane to get a better view of them, at a speed (2 MPH) allowing it plenty of time to stop if they weren’t passable. Maybe. The descriptions of the accident I’ve seen are vague.

      Ideally it could have, through inference, realized the sandbags were passable by observing prior cars passing in that lane. Or done the same by realizing the bus driver, who had full view of the sandbags, wasn’t slowing. And therefore fully committed to its intended action, of accelerating after its lane change.

      But the more complex the inference, the more likely it will fail in weird ways.
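
      That kind of inference is at least easy to frame. Here is a toy Bayesian update for the situation described above; every probability in it is a made-up illustration, not anything from an actual autonomous-driving stack.

```python
# Toy Bayesian update: is an obstacle (the sandbags) passable, given that
# some number of prior cars were observed driving through that lane?
# All probabilities are invented for illustration.

def p_passable(prior, n_cars_passed, p_pass_if_passable=0.9, p_pass_if_not=0.2):
    """Posterior P(passable) after seeing n cars use the lane."""
    p, q = prior, 1 - prior
    for _ in range(n_cars_passed):
        p *= p_pass_if_passable  # likelihood of the observation if passable
        q *= p_pass_if_not       # ...and if not passable
    return p / (p + q)

for n in range(4):
    print(f"after {n} cars seen passing: P(passable) = {p_passable(0.5, n):.2f}")
# 0.50, 0.82, 0.95, 0.99 -- confidence grows quickly, but only if the car
# can reliably recognize "a car just drove through there" in the first place.
```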

    2. “Give way if you don’t have right of way” seems a pretty basic part of the algorithm to miss out when it comes to driving anything.

      Google reckon the bus was to blame for not slowing down. When I learned to drive I was told that any manoeuvre should not involve any other vehicle having to change what it was doing unless I had right of way.

      1. Great advice under most circumstances. However, even in the small-ish city I live in, under occasional heavy traffic conditions you often have exactly two choices: either wait, blocked in an uncontrolled intersection (with a rapidly growing queue of cars behind you, very soon trying to go around you and do what you’re not willing to), until traffic subsides and you get an honest chance, or force someone’s hand slightly by cutting in and expecting them to slow down and let you in, even if not by much. I think it’s obvious what happens, yet amazingly weeks can pass without any incident…

        Not to say how much of that applies to the Google crash; all descriptions of it I’ve seen were really unclear…

  5. AI cars will be a hazard due to human error.
    Humans can’t drive!
    We have all these rules about how to drive, yet accidents still happen because humans can’t be bothered to follow the rules, and this means that accidents with self-driving cars will increase.

      1. No, the plan is for the programmers to sanitize the environment so that their machines will not be exposed as the glorified video games that they are. And I’m a programmer!

      2. Nope. Increase the penalties to force drivers to follow driving standards and not kill and/or maim people.
        If drivers drove properly, accidents would be reduced, and less need for bloodsucking lawyers and insurers would result in money saved.

        1. Increasing penalties almost never increases rule-following.
          The one thing it has been proven many times over to increase is corruption, since it becomes cheaper to bribe (and the bribe becomes enough to motivate law enforcement to risk taking it) than to pay the penalty.

          Murder is also illegal, yet it still happens even in countries which use the death penalty.

  6. An AI can execute a perfect riffle merge, where cars from the two lanes alternate; a trivial sketch of the pattern is below. Humans wake up as assholes, get worse with traffic, and quite often refuse to merge fairly.
    The AI car can video these encounters, and in case of an accident the blame can rest where it should.
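
    For what it’s worth, the “perfect riffle merge” is exactly an interleave of two queues. A trivial sketch, with hypothetical car labels:

```python
from itertools import chain, zip_longest

def riffle_merge(lane_a, lane_b):
    """Interleave two lanes one-for-one; the longer lane's tail
    follows once the shorter lane is exhausted."""
    merged = chain.from_iterable(zip_longest(lane_a, lane_b))
    return [car for car in merged if car is not None]

print(riffle_merge(["A1", "A2", "A3"], ["B1", "B2", "B3", "B4"]))
# ['A1', 'B1', 'A2', 'B2', 'A3', 'B3', 'B4']
```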

    I wonder if you could hack an AI car and make it act badly, be a bully, speed etc, hang around with loose women/men…etc.

  7. Problems won’t go away until almost all cars are self-driving and in constant communication with each other, at least those nearby. The ironic thing about this technology is that it is going to need to be far more sophisticated at the beginning, in mixed situations, than later on when it is common. In the end, I suspect we are just going to have to accept that there will be accidents that wouldn’t occur between two human drivers, but far fewer than if we continue with manual controls.

    1. Until navigation gets better I will stick to driving my own car. More than once Google Maps could not get me to my location and I had to figure it out; it is usually a problem with finding the entrance to the parking lot.
      Now, I am all for driving assistance: lane keeping, anti-collision braking, blind-spot monitoring, and adaptive cruise control all sound great to me.

      1. Anti-collision braking has the same issue. It produces cars that behave in non-human ways, braking hard suddenly due to a false alarm or over-cautious algorithms, and *causing* accidents because of it.

        And auto-brake systems can also make things worse by trying to brake where steering and avoiding would be more appropriate, such as trying to avoid a pedestrian in slippery conditions. In such conditions you have to compromise between steering and braking even if you have ABS, because steering AND slowing down takes more traction from the wheels than just steering, so if the car is trying to slow down at the maximum rate, chances are you’ll still plow into the pedestrian.

        1. ABS is a problem in cases with extremely low traction. Except for quite recent vehicles, what happens is once all four wheels stop rotating, the ABS assumes the vehicle has stopped and quits pulsing the brakes.

          Then you’re in a very heavy hockey puck. The proper thing for the driver to do then is *get off the brakes* so the wheels can rotate and hopefully get some grip to control the vehicle.

          What has finally started to be implemented are sensors to detect if the vehicle is still moving after all the brakes are locked. If motion is detected, the ABS resumes pulsing the brakes. It’s safer, yay!

          But there’s still 25 or so years worth of vehicles on the road with ‘dumb’ ABS that assumes lack of wheel motion = lack of vehicle motion.

          There’s a hack opportunity! Add an accelerometer to an old car. Have the addition monitor all the wheel speed sensors and allow their signals to pass through without alteration. Then when the brake pedal is pushed, the accelerometer goes active, watching for the wheel speed signals to cease. When they do, it should check to see if the vehicle is still in motion. If so, then it should fake a wheel speed signal condition that will cause the ABS to pulse the brakes again.

          In other words, have it take the place of a driver smart enough to let off the brakes and then mash the pedal again. The logic is sketched below.
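
          A rough sketch of that retrofit logic, written as Python pseudocode; the sensor and output interfaces (brake_pedal_pressed, wheel_speeds, read_accel, inject_fake_wheel_pulses) are hypothetical stand-ins for whatever hardware an actual build would use.

```python
MOVING_THRESHOLD = 0.05  # g; accel noise floor for "still moving" (a guess)

def abs_watchdog(brake_pedal_pressed, wheel_speeds, read_accel,
                 inject_fake_wheel_pulses):
    """If the brakes are applied, all wheels read zero speed, but the
    accelerometer says the car is still moving, fake a wheel-speed
    signal so the old ABS controller resumes pulsing the brakes."""
    if not brake_pedal_pressed():
        return
    if any(speed > 0 for speed in wheel_speeds()):
        return  # wheels still turning; pass sensor signals through untouched
    if abs(read_accel()) > MOVING_THRESHOLD:
        # Wheels locked but the vehicle is still sliding: pretend a wheel
        # is turning so the ABS releases and re-applies the brakes.
        inject_fake_wheel_pulses()

# Stub demo: pedal down, all wheels locked, car still decelerating hard.
abs_watchdog(lambda: True, lambda: [0, 0, 0, 0], lambda: -0.4,
             lambda: print("faking wheel pulses -> ABS pulses brakes"))
```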

      2. The speed at which errors get corrected in Google Maps is already far better than it used to be. With the rise of autonomous vehicles I would expect that to dramatically improve. There will be lots more people using maps to get around, and mistakes will be more problematic and likely to be fixed. Not to mention the highly sophisticated and accurate mapping Google is having to do just to enable its self driving technology.

      1. While I know your statement was tongue-in-cheek, there have been ideas for so-called ‘people movers’ based on pods running automatically on light rail networks since the 1939 World’s Fair, if not before. Even though there have been several demonstration projects, this idea never caught on, which is one of the reasons I’m surprised that self-driving cars are being seen by many as inevitable.

        1. Self driving car functionality is rolling out as an add-on to existing vehicles, just like the electric starter. Eventually the technology will get to the point manually driving will be a bit like crank starting your car. Doable, but why?

        2. Such systems require a ton of new infrastructure and don’t offer nearly the flexibility of cars, nor do they offer backwards compatibility.

          Self-driving cars suffer from none of these drawbacks. The question is, why WOULDN’T they catch on?

    2. The interoperability across different makes and models of self-driving cars is going to be a major headache. We all know how well vendors do at closely implementing standards, right?
      I hope the programming from driving games is not involved in the software. :P

    1. I’ve driven there only once. When I was leaving the airport in a rental and looking for a place to merge, the taxi behind me laid on the horn. Realizing that a crash would prevent him from going anywhere, I figured that just gunning it and swerving was the way in. Sure enough, traffic parted like the Red Sea for Moses. After that, a piece of cake. Tailgate, close every gap, and swerve without signalling, and everyone was fine with that.

  8. Self-driving cars will clearly become the norm at some point but, right now, the sweet spots seem to be long-distance highway travel (no loss of attention over long periods) and start-stop driving (fewer rear-end accidents due to lack of attention). For the reasons stated earlier, it will be hard for an autonomous vehicle to be anywhere near as good as a good human driver at picking up small cues as to what other drivers can be expected to do. Apart from such things as making eye contact, I also watch for where cars are in their lane (right/left wandering). When another car is stopped at a junction, I look for small front-wheel movements to see if the car is about to start out. Making such predictions requires that the software include a human behavior model, and the current state of the art is quite crude in that respect. I look at AI as being, for the most part, in its second generation. The first was rule-based; the second is probabilistic behavior modeling; the third is true behavior prediction based on “real world” modeling: the software builds up a world model that understands the relationships between “objects” in the world and uses that with statistical analysis to predict behavior. There is quite a way to go on this approach.

    1. >every discussion about autonomous cars takes away from the discussion of alternate energy cars.

      How on earth are you suggesting they do that? In fact just about all the people I know that believe in an autonomous driving future are also huge supporters of electric cars and believe there is a significant amount of synergy between the two.

    1. Anywhere it snows is basically a no-go area, because the lidars get confused and the location of the lanes shift based on where the cars actually drive.

      All the self-driving cars are basically running on virtual rails, where they try to align an internal virtual 3D map to the reality around them to decide where they actually are. Since they don’t understand or perceive the surrounding reality to any reasonable degree, they rely almost entirely on the built-in map where everything is explained already, so they actually behave like a person who is sleepwalking and interacting with a fantasy world where real-world objects and events are only vaguely present, if at all.

      The AI understands so little of its surroundings that if you were the AI, sitting at your desk, and someone moved your pen from where you knew you had last placed it, the pen would effectively disappear for you. Even if you saw the pen somewhere else, you would not identify it as your pen anymore, because that’s not where your pen is in your internal model of reality. You wouldn’t even see it as a pen at all; it’s just a meaningless shape!

      1. That may very well have been one of the least informed posts ever made on the internet.
        Congratulations.

        I can’t find a single provable fact anywhere in it, and furthermore would estimate your understanding of the technology involved to be less than 0.

        1. Read just about any in-depth article about the self driving cars and they all state that the cars work as follows:

          1) a human driver drives a route while the car scans the environment with the LIDAR and records the path
          2) a human operator reviews and cleans up the recording, marking traffic signs and signals, alternate lanes, stops, etc.
          3) the car then repeats the previously recorded path by taking a GPS reading to see where in the world it should be, and then matching the LIDAR data with the previously recorded 3D map of the route to find out where precisely it is.
          4) the car then attempts to steer itself onto the previously recorded “track” that was marked down.
          5) anything that’s not recorded in the map is considered an anomaly and the AI then tries to determine whether it’s something you need to avoid, stop, or just ignore.

          Anything that is not in the pre-recorded and pre-reviewed map of what is supposed to be there is an anomaly, and the AI is not intelligent enough to perceive it to any other degree than “blob”, “another blob”, “a third blob that’s moving”.
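
          The core of step 3, matching the live LIDAR data against the recorded map, can be illustrated with a toy scan-matching loop. This 1-D correlation is a deliberate oversimplification for illustration, not Google’s actual algorithm:

```python
import numpy as np

def localize(prior_map, live_scan):
    """Slide the live scan over the prior map and return the offset
    (in cells) where it matches best (minimum squared difference)."""
    best_offset, best_score = 0, -np.inf
    for offset in range(len(prior_map) - len(live_scan) + 1):
        window = prior_map[offset:offset + len(live_scan)]
        score = -np.sum((window - live_scan) ** 2)  # negative SSD
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

# Hypothetical "recorded route" profile and a noisy live scan of a landmark.
prior_map = np.array([0, 0, 1, 3, 5, 3, 1, 0, 0, 2, 4, 2], dtype=float)
live_scan = np.array([3, 5, 3]) + np.random.normal(0, 0.1, 3)
print("vehicle is near map cell", localize(prior_map, live_scan))  # -> 3
```

          The failure mode described above falls straight out of this scheme: if the world no longer resembles the recorded map, the best match is simply wrong, and everything keyed to that position goes wrong with it.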

          1. http://www.makeuseof.com/tag/how-self-driving-cars-work-the-nuts-and-bolts-behind-googles-autonomous-car-program/

            “Currently, Google has mapped approximately 2,000 miles of road for the driverless car to operate on. To give you an idea of scale, there are more than 170,000 miles of road in California alone, and over 4-million miles of public road in the United States.”

            “The reason the cars have performed so well in their initial 700,000 mile test is largely due to the fact that the cars get to “cheat” in the way in which they respond to their environment. That is to say, each car isn’t making decisions in real-time on how to respond to external stimuli, and Google hasn’t tested the car’s ability to respond to situations outside of these mapped environments.”

            Of course they haven’t, because the cars can’t drive anywhere that isn’t scanned and mapped. If no Google Car has made the journey from A to B, no other Google Car can go there, because GPS is not precise enough and the cars don’t actually understand what a road is well enough to deduce where they should be driving.

            The point of Google’s approach is that each and every Google Car is continuously charting the environment and sending data back to home base, where supercomputers can crunch the numbers and then upload relevant changes to all the Google Cars, keeping them updated on what they should be seeing. The cars themselves don’t make many decisions; they’re literally running on virtual rails, and deviate from those lines only in order to avoid accidents.

        2. http://www.theatlantic.com/technology/archive/2014/05/all-the-world-a-track-the-trick-that-makes-googles-self-driving-cars-work/370871/

          “The Trick That Makes Google’s Self-Driving Cars Work”

          “Today, you could not take a Google car, set it down in Akron or Orlando or Oakland and expect it to perform as well as it does in Silicon Valley. Here’s why: Google has created a virtual track out of Mountain View.”

          “The key to Google’s success has been that these cars aren’t forced to process an entire scene from scratch. Instead, their teams travel and map each road that the car will travel.”

          “”Rather than having to figure out what the world looks like and what it means from scratch every time we turn on the software, we tell it what the world is expected to look like when it is empty,” Chatham continued. “And then the job of the software is to figure out how the world is different from that expectation. This makes the problem a lot simpler.””

          “What was a nearly intractable “machine vision” problem, one that would require close to human-level comprehension of streets, has become a much, much easier machine vision problem thanks to a massive, unprecedented, unthinkable amount of data collection.”

          1. Here’s the point I was making:

            “We can align what we’re seeing to what’s stored on the map. That allows us to very accurately—within a few centimeters—position ourselves on the map,” said Dmitri Dolgov, ​the self-driving car team’s software lead. “Once we know where we are, all that wonderful information encoded in our maps about the geometry and semantics of the roads becomes available to the car.”

            What that means is, if the car should ever lose or mistake its precise position on the road, it will lose all the semantics – or what everything means. It may see the curb, but it doesn’t understand it’s a curb anymore. It may see the road ahead, but it doesn’t look like a road to the car, because the car is not relying on what it sees, but what is encoded in its virtual map. All the actual intelligence is outsourced to the people who draw the map!

            That presents very obvious problems. For example, if the map includes a building, and that building got knocked down since the mapping team was last there, the car will lose a point of reference and probably mistake its position by a few inches as it tries to align its map to the environment as best it can. So far no great harm done, but once you include situations where things *really* change, like snow banks piling up and lanes shifting based on how the cars and plows actually run, it’s going to be hopelessly lost because its internal map simply does not match the environment.

          2. Of course, there’s a very good argument to be made that real people live in the same kind of fantasy/dream world, because we too operate on internal models of the external world. The difference, of course, is that we’re continuously and seamlessly updating our models and identifying/labeling the things we see as we see them. We’re not relying on outside actors to tell us what we’re seeing, because we’re actually intelligent enough to do that independently.

            In the DARPA autonomous vehicle challenges, the cars actually do that. They take lidar and sonar images, camera images, etc., and actually try to guess what is road and what is not road. The task is easier there because they’re running in the desert, so “road” can simply mean “smooth enough to drive on”, and the car follows the road simply by avoiding the ditch.

            That, however, is not good enough for driving in the urban environment. You can’t clip through someone’s lawn to cut a corner, so you have to be almost perfectly faultless in identifying where you can drive, and that’s why the self-driving cars don’t even try: they have people to tell them exactly where they should be driving and how.

    1. Good question… we can have the cars plead guilty by default, though, so they can be jailed or forced to do community service (public transportation?)… this symbolic gesture might appease any victims (humans or other self-driving cars) of the accident… such a policy would hasten social acceptance of self-driving cars on the roads, guaranteeing enough accidents to learn from before potentially being banned for some decades…

  9. Autonomous vehicles have no business being on the same roads as human beings. Until computers can process the same amount of information as a human brain, at the same speed, with the same ability to deal with abstract information, and with the same reliability (ever see a computer run for 85 years without a glitch? Me neither), they should not share the roads with humans. Actually, they should never share the road with us. Call me a

    Another point: whose driver’s license are these stupid things driving under? In the US you definitely need a license to drive on public roads, so why are these things allowed out without even a driver, much less a license?

    1. Good luck finding a human being running 85 years without a glitch. Hell… Good luck finding a human being running a whole day without a glitch. When it happens to people they are called mistakes. Hell, the phrase isn’t “To err is machine”. We’re nothing but glitches muddling through as best we can.

    2. Driving is such a small subset of what the human brain does that many drivers do it “without thinking”. It’s a very passive form of intelligence. There’s not even that much abstraction needed: understanding signs, understanding the geometry of the environment.
      It’s not like, say, trying to parse meaning from this text.

      As for reliability, they already beat humans. Only on the subset of roads they have been tested on, sure, but that’s exactly why they are only on that subset of roads at the moment.

    3. “Until computers can process the same amount of information as a human brain”

      In a great many ways computers can already process far more information as relates to driving.

      >with the same ability to deal with abstract information

      Far more challenging, but autonomous vehicles don’t have to be able to do EVERYTHING that humans can do, as well as we do it, any more than humans should be prevented from driving because they’re not as good at many things as computers. What matters is the overall result: whether the sum of their abilities allows them to function competently on the roads with humans, and ultimately be safer. There is every reason to believe that by the time they are available for public use, that will be the case.

      > (ever see a computer run for 85 years without a glitch? Me neither)

      I’ve never seen a human run for 85 years without a glitch either. Again, what matters is the frequency and severity of said glitches compared to human drivers.

      >Another point: whose driver’s license are these stupid things driving under?

      At this point under the driver’s licence of the trained driver overseeing the test vehicle. Federal and state governments are working towards regulations and procedures that will license the vehicles themselves. They will undoubtedly have to pass far more rigorous testing than any human driver ever has.

      “so why are these things allowed out without even a driver, much less a license?”

      They’re not.

  10. Some subtle cues that may not be obvious to a robotic car, or that an AI may think silly:
    - Before steering at an intersection, other drivers steer or look toward the direction they want to go.
    - When waiting to turn behind another car, that car creeps and turns slowly as it advances (caused by the previous point).
    - Flashing high beams to signal “go” to a pedestrian.
    - Moving toward the side of the road to greet or offer a lift to a pedestrian they know.
    - Slowing to encourage a pedestrian to cross the road, then accelerating if the pedestrian doesn’t start to cross after noticing.
    - Disregarding a red traffic light if the road is clear and there is sufficient visibility of the intersecting roads.
    - Driving more carefully when another car is playing music at high volume.
    - Reversing, moving to the other side, or pulling onto a sidewalk to help an emergency vehicle pass.
    - Slowing down when it rains to avoid splashing pedestrians.

    1. I am inclined to agree with the comment; however, I disagree with the reasoning.

      In Australia we are constantly fed the mantra ‘speed kills’, and this is used to justify the massive amount of fines, including the contributions from private contractors (Redflex) who monitor speed.

      If self-driving cars were allowed, this massive source of money (we can’t actually call it revenue) would dry up; QED, they will never be allowed.

      Stan

      1. Sure, it will mean a loss of revenue streams, but it’s not like the government isn’t good at inventing new ways to tax and pillage. It will just be replaced with something else; there’s no reason to halt the progress of technology, and all the ways it might benefit not only the populace but government coffers, just because you might have to have a few meetings coming up with new ways to collect money.

    2. Not sure about that. There are certainly legal issues to work out; my preference is that Google, or whoever made the vehicle, also supplies the insurance. If they have faith in their creation, that insurance should be dirt cheap.

      But in general, why? These vehicles have more data, by a huge margin, than any human-human collision. Any accident will almost certainly be cut and dried as to who is to blame.

    3. Car crashes alone cost $871 billion per year in the US. Factor in reduced congestion, the value of free time, etc, and the benefit of autonomous vehicles could potentially reach trillions of dollars. Sure, we may well see more high dollar lawsuits against automakers and software developers for these vehicles than we see today, but it will be easily offset by other factors.

  11. Circa 1999, I helped drive home a point of road safety to a student driver. I was walking back to work with my lunch and as I got to an intersection, a student and instructor pulled up on the cross street from the left.

    The student was studiously looking to the left, only to the left. I decided to stay standing on the curb. The instructor and I made eye contact. I gave a little nod. The instructor gave a little nod.

    Then just as the student went to pull out to turn right and *finally* started to look to the right… I stepped off the curb. BRAKE!

    The instructor smiled, I smiled and waved then walked on to work. I never saw them again, but I’m certain the instructor used the encounter to teach a lesson that hopefully has served that driver well in the years since.

    Always look both ways, even when turning right (left for the UK, former colonies, and Japan), even if you’re turning onto a one-way road. You never know when there will be a pedestrian or a Wrong Way Willie. During Fiddle Week here in June, there’s always some out-of-towner ignoring the signs on the two one-way streets.

  12. Predictability is based on the experience of the human; surely as these vehicles get more common they will naturally get more predictable to us, because *we* will have adapted?
    While increasing feedback to humans probably doesn’t have a downside, I worry a little about the thinking that the cars need to act more like us. We should work out what the “ideal” is when the road is filled with them. Maybe we need to act more like them rather than vice versa.

  13. Two major reasons this is never going to be adopted outside of the California bubble:

    1. Snow. Those of us from much of the Midwest and Northeast know this is a completely insane idea.

    2. Ticket revenue. Either there is no one to charge with speeding, dangerous driving, etc., or the company that makes the car is responsible. Either possibility torpedoes wide adoption.

    1. There is no reason to believe snow is some kind of intractable problem. Google and others are only just beginning to tackle it. There is no reason to believe improvements in hardware and software won’t be able to fix current issues.

      As for ticket revenue that issue is highly overrated. Self driving vehicles will also save municipalities significant amounts of money, and there is no reason to believe lost revenue sources can’t be replaced by new taxes and fees. The government is good at that.
