Does Your Programmer Know How Fast You Were Going?

News reports were everywhere that an autonomous taxi operated by a company called Cruise was driving through San Francisco with no headlights. The local constabulary tried to stop the vehicle and were a bit thrown that there was no driver. Then the car moved beyond an intersection and pulled over, further bemusing the officers.

The company says the headlights being off was due to human error, and that the car had stopped at a light and then moved to a safe stop by design. This leads to the question of how people including police officers will interact with robot vehicles.

For Cruise’s part, they have a video informing law enforcement and others how to approach one of their vehicles (see second video, below). You have to wonder how many patrol cops have seen it though. We don’t think we’d get away with saying, “We mentioned our automatic defense system in our YouTube video.”

Honestly, we aren’t sure that in an emergency situation we would want to dig through a list of autonomous vehicle companies to find the right number to call. At the very least, you’d expect the number to be prominently displayed on the vehicle. Why the lights didn’t turn on automatically is an entirely different question.

We can’t imagine that regulations won’t be forthcoming as autonomous vehicles catch on. Just as fire departments have access to Knox boxes so they can let themselves into buildings, we are pretty sure a failsafe code that stops a vehicle dead and unlocks its doors regardless of brand is probably a good idea. Sure, a hacker could use it for bad purposes, but hackers can also break into Knox boxes. You’d have to make certain the stop code security was robust.

What do you think? What happens when a robot car gets pulled over? What happens when a taxi passenger has a heart attack? We’ve talked about the issues surrounding self-driving anomalies before. Some of the questions have no easy answers.

83 thoughts on “Does Your Programmer Know How Fast You Were Going?”

  1. The equivalence between a cryptographic backdoor key and a physical key box is as old as it is incorrect.

    The problem is that with the latter, someone has to physically be present, limiting the reach of a vulnerability or compromised key. With these cars, how would it be accomplished? Transmission over RF would mean a hacker could mess with every car in range. The internet would be far worse.

    Police light patterns are standardized. Just install a sensor for them that makes the car pull over as soon as practical.

    1. Are you only assuming “police light patterns” are standardized? And your solution would stop all autonomous vehicles in sight of the “police light pattern.”

      A better system would be _similar_ to two-factor authentication of the kind used with a code-generating fob. The second factor could be the license plate alphanumeric. The offending vehicle would accept the valid code plus its plate number and pull over.
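The fob-plus-plate idea above can be sketched in a few lines. This is a minimal illustration only, assuming a TOTP-style shared fob secret and a 6-digit code format; the secret, plate strings, and time window are invented for the example, not anything a real agency uses. The code is derived from the current time window plus the target plate, so it only works on one vehicle and only briefly:

```python
import hashlib
import hmac
import struct
import time

def stop_code(secret, plate, timestep=30, t=None):
    """Derive a short-lived stop code bound to one license plate.

    TOTP-style: the code depends on the current time window AND the
    target plate, so it can't be replayed later or against another car.
    """
    counter = int((time.time() if t is None else t) // timestep)
    msg = struct.pack(">Q", counter) + plate.encode()
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    # HOTP-style dynamic truncation down to a 6-digit code
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return "%06d" % (value % 1_000_000)

def vehicle_accepts(secret, plate, code, t=None):
    """Vehicle-side check, using a constant-time comparison."""
    return hmac.compare_digest(stop_code(secret, plate, t=t), code)
```

A code generated for one plate fails on any other plate and expires with the time window, which addresses the "stops every car in sight" objection above.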

      1. The stoplight overrides are certainly standardized.

        For vehicle override it would be wise to support validatable overrides. Nothing complicated is needed; any general public/private key systems used in signature mode would be sufficient; TLS is merely one specialized version of this.

        It might be wise to design the system to support “emergency vehicle priority” (i.e., non-emergency traffic keeps right and yields right-of-way until the emergency vehicle is past) as well as “vehicle with tag X, come to a stop until released”. Implementation-wise, the first is a broadcast message to all recipients; the second is a destination-targeted message that other vehicles won’t respond to.

        Also, for police safety, it would be wise to include a visual acknowledgement indicator. It would be dangerous if an officer assumed the vehicle was stopped-until-released when it was merely temporarily stopped due to other considerations.
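The broadcast-versus-targeted distinction described above is easy to prototype. The following is a toy sketch; a real deployment would use asymmetric signatures (e.g. Ed25519 certificates chained to an agency root) so vehicles never hold a signing secret. HMAC with a shared key stands in here only to keep the example runnable with the standard library, and AGENCY_KEY and the message fields are invented for illustration:

```python
import hashlib
import hmac
import json

# Hypothetical agency key; a placeholder for a proper signing cert.
AGENCY_KEY = b"agency-signing-key"

def sign(message):
    """Agency side: sign a canonical encoding of the message."""
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(AGENCY_KEY, payload, hashlib.sha256).digest()

def should_obey(vehicle_tag, message, signature):
    """Vehicle side: obey only validly signed messages addressed to
    everyone ("*", e.g. 'yield to emergency vehicle') or to this tag."""
    payload = json.dumps(message, sort_keys=True).encode()
    expected = hmac.new(AGENCY_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # forged or tampered message
    return message.get("to") in ("*", vehicle_tag)
```

A broadcast `{"to": "*", "cmd": "yield_right"}` is obeyed by every vehicle, while `{"to": "TAG1", "cmd": "stop_until_released"}` is ignored by everything except TAG1, and any message with a bad signature is ignored entirely.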

      2. “Are you only assuming “police light patterns” are standardized?”

        I wouldn’t, but it doesn’t seem like a hard problem on the surface. The real issues will arise when people start gaming ML/algorithmic* “police light pattern” detection implementations to make autonomous cars pull over on demand.

        Some secure exchange would be better, but harder and more costly to deploy (every emergency vehicle would need some hardware). Possibly this will be the norm in the long term.

        * “please click on the animated pictures with police lights flashing”

    2. Changing from radio to visible wavelengths wouldn’t make it any more hacker-proof though, now anyone with access to some blinking lights can stop any autonomous vehicle that passes by.

        1. Would 2 be true?
          If multiple frequencies were used at once, you could fool a human into believing they were seeing a steady white light with constant luminance in the parts of the spectrum their eyes are sensitive to, while simultaneously (still at constant luminance to the human) having lights appear to flash to a digital camera, using exactly the frequencies the silicon is more sensitive to.

          Hackers always going to hack.

          1. “Hey camera, only pay attention to red and blue for me, yeah? You see anything else, just ignore it.”

            Cameras, like our eyes, can in fact tell the difference between different wavelengths of light.

          2. @M
            Eyes have 3 sets of light sensors; digital cameras have 1 light sensor with 4 colour filters sitting above a group of four subpixels (red, 2x green, and blue) for each pixel. So it should be possible to shine a light in a part of the spectrum where our eye is less sensitive while simultaneously overloading the sensor. Look at the spectral sensitivity of human eyes. My assumption is that the camera filters are just as non-linear as human eyes, and that the two do not overlap perfectly.

            Let’s just focus on blue. You shine a solid, high-luminance blue at the wavelength of peak sensitivity for human eyes (~450nm), and then at, say, two wavelengths still in the blue part of the spectrum where the human eye is less sensitive (480nm to 500nm), you modulate a much lower-intensity blinking.

            My line of thinking is from audio: you cannot hear a near-ultrasonic whisper at the far side of the room if someone is shouting in your ear at the frequencies your ear is most sensitive to (basically how MP3 achieves its compression: remove what probably cannot be heard so it compresses better). My guess is that if the eye were overloaded it might not notice, or at least not notice as well as a computer algorithm designed to look for it.

          3. The point, M, isn’t that a camera can’t tell colours but that our eyes/brain have persistence, so a flickering light can look solid white. Cameras have a polling rate, so the flicker could contain blue-only flashes that sync with the camera’s polling rate: the camera sees only the blue flash, because it’s the only light on when it looks, while your eye saw the blue, red, and green flashes at about the same time. To your eye the light might pulse slightly and slowly in the rhythm of a police light, but it probably doesn’t even have to do that enough for you to notice, since presenting a tiny amount of extra blue only when the camera is ‘looking’ probably doesn’t have to be bright enough for our eyes to discern.

            Seems like it would be quite hard to pull off, but its certainly plausible.
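The sampling trick described above can be demonstrated numerically. This is a toy simulation; the frame rate, pulse timing, and the use of a plain RGB time average as a stand-in for human persistence of vision are all simplifying assumptions:

```python
# A light is steady white except for one blue-only millisecond per
# camera frame, timed to the sampling instants. Averaging over time
# (a crude stand-in for human persistence of vision) stays nearly
# white, while the camera's discrete samples come out pure blue.
FPS = 30
FRAME_MS = 1000 // FPS  # ~33 ms between camera samples

def emitted_color(t_ms):
    """RGB emitted at millisecond t_ms (channel values in 0..1)."""
    if t_ms % FRAME_MS == 0:
        return (0.0, 0.0, 1.0)  # brief blue-only pulse
    return (1.0, 1.0, 1.0)      # otherwise steady white

# One second of light, sampled every millisecond
trace = [emitted_color(t) for t in range(1000)]
human_avg = tuple(sum(c[i] for c in trace) / len(trace) for i in range(3))

# The camera only looks at the pulse instants
camera_frames = [emitted_color(t) for t in range(0, 1000, FRAME_MS)]

print("time-averaged RGB (human-ish):", [round(c, 3) for c in human_avg])
print("camera sees only blue:", all(f == (0.0, 0.0, 1.0) for f in camera_frames))
```

The time average works out to roughly 97% on the red and green channels (so the light still looks close to white), while every single camera frame is pure blue.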

          4. Not to mention other vehicles, like snow plows, that you really don’t want to stop in front of do not have a standardized blinking light color.

            I’ve seen yellow, red, and blue on various plows in various states.

          1. Yeah, you try stealing somebody’s tires from right in front of them, see how that works out for you. Just because it’s a self driving car doesn’t mean it’s unoccupied.

          2. Also, it’s very likely in realtime data contact with its base. The same system that uses cellular data to monitor location and schedule destinations generally also provides remote realtime surveillance archival. Burning the vehicle doesn’t destroy the video record. It’s also common for the vehicle to passively log bluetooth, wifi, and cellular IMSI identities of any nearby devices. While this is usually used as just another cue for proximity detection and collision/pedestrian-impact avoidance, it’s also useful data to identify perpetrators after a robot car gets mugged. With both sets of data combined, the result is usually an open-and-shut case for the local prosecutor.

            As the data collection is passive (as opposed to active IMSI-catchers), it doesn’t require unlicensed transmissions or other unlawful acts in most jurisdictions, and disclosure of intercepted data to law enforcement generally doesn’t qualify as “unauthorized disclosure to the public” even in jurisdictions where it’s not automatically lawful to receive arbitrary transmissions.

          3. Why would a “Self driving Taxi” be pulled over by the cops ?

            The only thing I can come up with is that it is not a real taxi, it is a fake taxi (it is working for a cartel as a drug mule, or a mobile candy store for a cartel).

          4. If there is a human passenger, the taxi could have a panic button. Lift cover and push and turn button to pull out of traffic and park in the nearest available safe zone.

      1. Radio provides no benefit when the stop itself is guaranteed to be a line-of-sight message; using optical minimizes interference. It’s not a security thing per se, that’s better taken care of by using chain-of-trust keys.

        The message itself is neither secret nor magic. What makes it valid is that the stop message is validly signed by a certificate, and that this certificate is signed by a known public agency cert. Whether you use actual SSL or merely some domain-specific asymmetric crypto system in signature mode, the result is that hackers would have to surreptitiously steal trusted certs rather than just manipulating messages.

      2. Not only can random folks spoof police lights – unintended consequences mean cars could potentially stop for nearby emergency vehicles, for police stopped at a scene trying to wave cars past, for police chases on TV screens in visible range, for drive-in movies…

        Honestly I think the whole self-driving thing is going to be fighting edge-cases and ML wrinkles for at least a decade… it’s starting to feel like the next Nuclear Fusion but with a shorter imaginary timescale, perpetually just a few years away from Full Self Driving…

    3. Things not to do:
      Search the internet for instructions on how to legally get a fire department ‘knox box’ and produce the key from it. Works in every city, so don’t do the research, you hypothetical ‘bad person’.

      NYC’s elevator override key has been compromised for _decades_…they can’t change it…having one of the keys is possession of burglars tools, so don’t.

      Cop car keys, one for each manufacturer. You should never have a copy of that key (your city will likely only own one type), it would be ‘wrong’.

      The list goes on…Lazy, stupid government employees (double redundant).

      Locks keep honest people honest (I’m told).

      I’m reminded of the first kid in my neighborhood that got bolt cutters…Padlocks just attracted him. Best to leave your bike unlocked, until he went to juvie.

      1. The first time I saw the battery-powered angle grinder in a friend’s tool bag, I joked: “You’ll probably never have a shortage of bikes again.” Of course I knew why he really needed it: he was helping me mount a few PV panels on the roof.

    4. Public and private key cryptography. Give the police/first responders access via a smartphone app that they scan a qr code on the side of the car to authenticate key pairs and shut off the vehicle.

    5. “Police light patterns are standardized. Just install a sensor for them that makes the car pull over as soon as practical.”
      In what country? The UK maybe, but in the USA, no, they are not. A) Fire departments differ from state to state; they are not standardized, and even the colors are different. B) The same goes for LEOs; patterns vary from precinct to precinct and even county to county, and highway patrol and local cops have different patterns and colors. C) Even government Forestry vehicles, which can arrest you, stop you, and ticket you, have different lights and patterns as well.

      So no, your comment is wrong and not actually thought out or researched. Also, here in the USA, IR emitters are used to automatically change traffic lights in an emergency vehicle’s favor via the right pulses (think IR remote for your TV). Yes, these can be hacked for personal use, but the light goes back to its original setup after the vehicle passes the intersection (here in Arizona especially), and the receivers actually vary from county to county and city to city.

      If you are talking about UK setups, I doubt they are standardized there either; not that I care, but it’s highly unlikely they have a standard. Flat out, don’t generalize that ‘police light patterns are standardized’ like the UPC across the globe, because they are not.

      I know this only because I install these systems in my area. They are not always standard, and they do NOT always work, even for the precinct/city/county that wants them installed, which annoys everyone in that area.

  2. “… we are pretty sure a failsafe code that stops a vehicle dead and unlocks its doors regardless of brand is probably a good idea”. 100%, completely, and unequivocally disagree with that statement. We must find some other means to accomplish the goal.

    1. It’s still a car, so stop strips, wheel webs, and PIT maneuvers should still work. How an ostensibly autonomous vehicle responds to those things is an open question. In the worst case (say, an autonomous vehicle loaded with explosives) there’s no standing law against shooting or exploding a robot to death. GAU-8 Avenger, anyone?

      1. Heh, every Sidewinder or Stinger is an autonomous vehicle loaded with explosives… Of course, this isn’t true for all missiles, just the self-guided (usually IR-homing) models.

        The big problem is that public safety authorities generally eventually get good at handling “the normal case”… but active-malice-and-intent situations generally are quite different. For example, many police departments would struggle to handle a transit bus being used as a weapon. If it happened enough, they’d get good at it, but there’s a difference between training and good instincts. You can create the first, but the second can only be discovered (and thus always in short supply).

        Being ready for the emergencies that happen every day is their job. Being ready for emergencies that happen every few months is generally pretty cheap and usually a good idea. Being ready for extremely rare emergencies drains funds from things that could save more lives daily.

        A runaway car (whether intentional or accidental) is definitely in the not-that-rare category. This isn’t limited to robot cars; they just bring their own unique kinks. But those can (and should) be figured out and prepared for…

  3. “This leads to the question of how people including police officers will interact with robot vehicles”

    AFAIC a computer should never replace a driver unless said driver becomes incapacitated while driving. Driverless-by-design vehicles shouldn’t EVER be allowed on public roads, and the fact that they are speaks volumes about just how thoroughly corporations have co-opted government.

    1. I think the government’s impression is that it will eventually become as safe as the air transport industry, but they’re ignoring two things: 1) a lot of people died before aviation became safe, and 2) the AV industry is making no attempt to learn from the historical mistakes of aviation or to employ the systems and processes that keep aviation safe today.

      To say AVs are safer than the average human driver is really a very low bar for technology-assisted transport if you include aviation.

      1. Claiming they are ‘safer than humans’ is an apples-to-oranges lie.

        Automated cars on the interstate are safer than humans in all conditions but more dangerous than humans on the interstate.

        The problem with neural nets is that nothing is ever ‘known done’. New training set, got to check all the things you thought were solved, again. Not unlike all software, but well designed software is at least modular.

        The majority aren’t even trying to do it without LIDAR. Which is an economic deal breaker.

        I see automated interstate highway truck convoys, with only the first truck having an alert driver. I also see horrible pileup accidents when that first driver and his assists screw up.

          1. see the Boeing 737 MAX… and its multiple failures. no thanks. my control is driven by the will to preserve life; any company is driven by the will to preserve profits.

    2. Possibly… The hazard isn’t the robots, it’s the people. Even the odd corner cases and unready-for-primetime errors occur at a much lower rate than the hazards caused by standard meatsack drivers. But the robots and the meatsacks both find each other hard to compensate for, so there’s more hazard at the point of overlap.

  4. How is it acceptable that to stop an autonomous car, police are expected to have watched a YouTube, call a number, wait on hold, provide information as to the legitimacy of their stop, and finally receive a code or follow instructions to disable the vehicle?
    Whereas for a human driver they just shout “stop” and shoot?

    1. “We’re experiencing an unusually large call volume at the moment. Please stay on the line and the first available autonomous car wrangler will assist you shortly.”

      *Elevator music*

      “We’re experiencing an unusually large call volume…”

    2. @Al Williams said: “This leads to the question of how people including police officers will interact with robot vehicles.”

      Answer: Autonomous robot vehicle moves when it should not. Shoot to kill.

      Look at the cops standing between the back of the obviously unhappy robot car and the front of the parked police cruiser – dumb. At that moment the robot car is thinking, Got ’em!, followed by reverse then FLOOR IT!

  5. Don’t know about a stop code being required.. I mean if you were in your car and you saw flashing police lights behind you’d stop too more often than not. Computer vision could do a similar job at identifying a fake cop (no uniform, wrong kind of paint scheme on the car) as an average driver can. Do autonomous vehicles require more protection than that?

      1. No, but you can call 911 if you’re followed by an unmarked police car and have them confirm it is an unmarked police car; if there is no police following you, they can send a real one to stop the impersonator.

      2. there used to be laws in place where you are not required to be detained by anyone no matter what unless you have a fully uniformed officer. i dont know if that is still true, but you wont see me doing anything but walkin tf away from anyone not in uniform, and even if in uniform, they will be questioned and i will answer no questions.

        “im with the fbi, i need to perform a cavity search” says a guy in a suit and a fake id.

        nope, no thanks. kindly leave.

  6. I love the police going up to the car and being like “can I please see your license and registration?” This situation is so totally outside of their procedures that they don’t know what to do at all.

    Which is not a bad metaphor for the rest of us either. Who is culpable when an empty automatic car runs a red light? We just haven’t decided that yet. We’re all like these cops just looking in the window and going “I dunno”. There is simply no legal framework in place for this situation.

    1) Develop autonomous vehicles
    2) Something about society and laws?
    3) Profit!

    Who wants to influence the formation of these future laws the most? And has the deepest pockets? I bet I know where Alphabet / Cruise will want to place responsibility. Is there an equivalent NGO / interest group to provide countervailing lobbying pressure?

    It’s very telling that the car manufacturer is putting out a video telling law enforcement how they should interact with their cars, rather than the other way around.

    1. There actually were similar issues (despite people being involved) during the early transition from carriages to internal-combustion vehicles. The new device, despite not actually breaking any existing laws, often badly violated assumptions in ways that reduced safety (and made for hilarious reading).

      Plus, at least one car got pulled over for lacking pollution-prevention devices (horse diapers do nothing on cars, but some jurisdictions had mandated them). I read that years ago in a microfiche archive, and have been looking for the reference ever since…

    2. “Who is culpable when an empty automatic car runs a red light?”

      Tesla was recently busted for programming their (very not) “Full Self Driving” to drive through controlled intersections without first coming to a complete stop. Tesla was found culpable and ordered to fix it.

  7. I wonder why a company that is spending considerable amounts of money developing this technology didn’t spend the (minimal) extra on a more obvious/memorable contact telephone number? F1 race cars have lights (red/green) on the top of vehicle structure that indicate to race marshals if the car (high voltage hybrid system) is safe to approach – would it not make sense to mandate something similar to visually confirm to first responders if the vehicle is safe for them to interact with?

  8. If it’s a robot driving the vehicle, you have to realize it’s simply no longer a human being you are dealing with. You’ll be dealing with preprogrammed responses. If the driver did something to enrage you, or made a mistake, you can’t blame that specific person, you have to blame the people who programmed the vehicle, or at least those who setup the system.

    Of course, if a human being is responsible for operating the lights of the vehicle, then it makes sense to start a conflict.

  9. An easy solution to all of these.. have a human operator.

    That’s right, an operator, not a driver. An operator can be a remote person sitting in an office somewhere. Does that defeat the purpose of self-driving? No, not entirely. An operator isn’t driving so one operator might be “operating” several cars at once. They aren’t doing the actual driving. That is still automatic.

    An operator watching a camera pointing behind the car can see they are being pulled over, send a signal to the car to pull over as soon as is safe and even communicate with the officer via voice. Placing the mic and speaker in the back would probably be appreciated by the officer because then they don’t have to stand next to the car, sticking out into traffic.

    Placing a camera on the occupant means the operator can see something is wrong when the passenger has a heart attack or other medical emergency. They could then pull over and call an ambulance. Or maybe re-direct to the hospital themselves although that would only be a good idea if the hospital is very close by since ambulances have trained people and equipment to start working on the patient immediately. They would probably just always choose the ambulance route for liability reasons.

    I imagine a single operator would have a large monitor split with views of several cars at once. Each would cycle between front/back/passenger view every so many seconds. A large company might have an office full of workers doing this similar to a call center. In that case maybe when they have to start interacting with one specific car, such as a police or medical situation maybe the system would automatically shuffle the other cars they are watching over to other employees.

    Better yet, this is post 2020. Instead of an office it could be many people running this software at home via a VPN.

      1. at which point the language barrier will be so bad and their scripted responses so terrible that the person trying to resolve the situation gives up and goes to best buy to purchase another crappy car.

    1. oh wow. yeah… and i thought my steven seagal quote from “hard to kill” put me on the temporal map. 😋

      “i missed…. i never miss… they must be smaller than i thought”

    2. Fiction: Robotic police officer fights crime and defeats AI-powered-mecha-armed-with-several-gatling-guns-gone-rogue.

      Reality: Robotic police officer is assigned to road duty and gives fines to robotic cars.

      That would be the worst movie ever :D

      Meh ^^

  10. “Why the lights didn’t turn on automatically is an entirely different question.”

    Full time lights would mean never worrying about it. Use extra low beams to prevent any complaints.

  11. First of all, not pulling to the side of the road for following you red/blue blinky lights is generally a violation. It is the reason most places in CONUS won’t allow forward-facing red/blue blinky lights. Obviously having a WTF moment prevented the officer from getting in front of the vehicle, which in ALL cases should stop the car AND alert the operating monitor.

    1. “not pulling to the side of the road for following you red/blue blinky lights is generally a violation”

      It depends on where on this globe you are. If you do that on a European highway, you might end up paying substantially more. Here, the cops will overtake and signal you to follow them to the next exit.

        1. In France, if you’re being followed by a police car with its lights on, you are not supposed to do anything unless it overtakes you and the officer gestures for you to slow down and park on the side.

          Police cars can have their alert lights on for a variety of reasons, and per the “rules of the road”, it only means that a car with such lights has special priority (it can cross through red lights and you are supposed to make way for it).
