Self-Driving Cars And The Fight Over The Necessity Of Lidar

If you haven’t lived underneath a rock for the past decade or so, you will have seen a lot of arguing in the media by prominent figures and their respective fanbases about what the right sensor package is for autonomous vehicles, or ‘self-driving cars’ in popular parlance. Since the task here is to effectively replicate what the human Mark 1 eyeball and its associated processing hardware achieve in the evolutionary layers of patched-together wetware (‘human brain’), it might seem tempting to think that a bunch of modern RGB cameras and a zippy computer system could do the same vision task quite easily.

This is where reality throws a couple of curveballs. Although RGB cameras lack evolutionary glitches like an inverted image sensor and a big dead spot where the optical nerve punches through said sensor layer, it turns out that the preprocessing performed in the retina, the processing in the visual cortex and the analysis in the rest of the brain are really quite good at detecting objects, no doubt helped by millions of years in which only those who managed to not get eaten by predators got to procreate in significant numbers.

Hence the solution of sticking something like a Lidar scanner on a car makes a lot of sense. Not only does this provide detailed range information about one’s surroundings, it is also bothered far less by rain and fog than an RGB camera is. Having more and better quality information makes subsequent processing easier and more effective, or so it would seem.

Computer Vision Things

A Waymo Jaguar I-Pace car in San Francisco. (Credit: Dllu, Wikimedia)

Giving machines the ability to see and recognize objects has been a dream for many decades, and the subject of a nearly infinite number of science-fiction works. For us humans this ability develops over the course of growing up, from a newborn with a still-developing visual cortex to a young adult who by then has hopefully learned how to identify objects in their environment, including details like which objects are edible and which are not.

As it turns out, just the first part of that challenge is pretty hard. Interpreting a scene captured by a camera involves a whole collection of algorithms that seek to extract edges, group them into objects based on various hints, and infer the distance to each object and whether it is moving or not. All of this just to answer the basic question of which objects exist in a scene, and what they are currently doing.
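
To get a feel for the ‘conventional’ end of that spectrum, below is a minimal sketch of a Sobel edge detector, one of the classic first steps in picking object outlines out of a camera frame. It assumes a grayscale image in a NumPy array; the function name and the naive convolution loop are purely for illustration.

    import numpy as np

    def sobel_edges(gray: np.ndarray) -> np.ndarray:
        """Return the per-pixel gradient magnitude of a grayscale image."""
        # Classic 3x3 Sobel kernels for horizontal and vertical intensity gradients.
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        h, w = gray.shape
        gx = np.zeros_like(gray, dtype=float)
        gy = np.zeros_like(gray, dtype=float)
        # Naive convolution over the image interior; slow, but fine for illustration.
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                patch = gray[y - 1:y + 2, x - 1:x + 2]
                gx[y, x] = np.sum(kx * patch)
                gy[y, x] = np.sum(ky * patch)
        # Strong gradient magnitudes mark likely object boundaries.
        return np.hypot(gx, gy)

Real pipelines then have to group those edges into candidate objects, estimate their distance and motion, and do all of it dozens of times per second, which is where the hard part begins.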

Approaches to object detection can be subdivided into conventional and neural network approaches, with methods employing convolutional neural networks (CNNs) being the most prevalent these days. These CNNs are typically trained on a dataset that is relevant to the objects that will be encountered, such as while navigating in traffic. This is what companies like Waymo and Tesla use for autonomous cars today, and it is why they need both access to a large dataset of traffic videos to train with and a large collection of employees who watch said videos in order to tag as many objects as possible. Once tagged and bundled, these videos become the CNN training datasets.
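
As a rough sketch of what inference with such a network looks like, here is the same idea using an off-the-shelf detector from torchvision, pretrained on the generic COCO dataset rather than anyone’s proprietary driving footage; the input file name is just a placeholder.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # A generic pretrained detector stands in for the purpose-trained networks
    # that Waymo or Tesla run on their own labeled driving data.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    frame = to_tensor(Image.open("dashcam_frame.jpg"))  # placeholder input image

    with torch.no_grad():
        # One dict per image: bounding boxes, class labels and a confidence score
        # for every detected object.
        detections = model([frame])[0]

    for box, label, score in zip(detections["boxes"], detections["labels"],
                                 detections["scores"]):
        if score > 0.5:  # arbitrary confidence cut-off
            print(f"class {int(label)} at {[round(v) for v in box.tolist()]} "
                  f"(score {score.item():.2f})")

The heavy lifting is in the labeled data: a detector is only as good as the examples it has seen, hence the armies of human taggers.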

This raises the question of how accurate this approach is. With purely RGB camera images as input, the answer appears to be ‘sorta’. Although Autopilot is only considered a Level 2 system on the SAE’s 0-5 automation scale, Tesla vehicles with the system engaged have failed to recognize hazards on multiple occasions, including the side of a white truck in 2016, a concrete barrier between a highway and an off-ramp in 2018, and a red light and the rear of a fire truck in 2019.

This pattern continues year after year, with the Autopilot system failing to recognize hazards and engage the brakes, including in the so-called ‘Full Self-Driving’ (FSD) mode. In April of 2024 a motorcyclist was run over by a Tesla in FSD mode when the system failed to stop and instead accelerated. This was the second fatality involving FSD mode, which has since been renamed ‘FSD Supervised’.

Compared to the considerably less crash-prone Level 4 Waymo cars with their hard-to-miss sensor packages strapped to the car, one could conceivably make the case that a couple of RGB cameras alone is not enough for reliable object detection, and that blending multiple sensor types is quite possibly the more reliable method.

Which is not to say that Waymo cars are perfect, of course. In 2024 one Waymo car managed to hit a utility pole at low speed during a pullover maneuver, after the car’s firmware incorrectly assessed a situation in which a ‘pole-like object’ was present but there was no hard edge between said pole and the road.

This gets us to the second issue with self-driving cars: making the right decision when confronted with a new situation.

Acting On Perception

The Tesla Hardware 4 mainboard with its redundant custom SoCs. (Credit: Autopilotreview.com)

Once you know which objects are in a scene and have merged this with the known state of the vehicle, the next step for an autonomous vehicle is to decide what to do with this information. Although the tempting answer might be to also use ‘something with neural networks’ here, this has turned out not to be a viable method on its own. Back in 2018 Waymo created a recurrent neural network (RNN) called ChauffeurNet, which was trained on both real-life and synthetic driving data to have it effectively imitate human drivers.

The conclusion of this experiment was that while deep learning has a place here, you need to lean mostly on a solid body of rules that provides explicit reasoning, which copes better with the so-called ‘long tail’ of possible situations, since you cannot put every conceivable situation into a dataset.

This thus again turns out to be a place where human input and intelligence are required: while an RNN or similar can be trained on an impressive dataset, it will never learn the reasons why a decision was made in a training video, nor provide its own reasoning and make reasonable adaptations when faced with a new situation. This is where human experts have to define explicit rules, taking into account the known facts about the current surroundings and the state of the vehicle.
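
To make that concrete, here is a deliberately simplified sketch of one such explicit rule, layered on top of whatever speed a learned planner proposes: never drive faster than you could brake from within the gap to the nearest obstacle. All names and figures are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        distance_m: float        # range to the obstacle along our path
        closing_speed_ms: float  # positive when we are approaching it

    def clamp_planner_speed(proposed_ms: float,
                            obstacles: list[Obstacle],
                            max_decel_ms2: float = 6.0) -> float:
        """Hard rule: never exceed a speed we could still brake away from."""
        allowed = proposed_ms
        for obs in obstacles:
            # v^2 = 2*a*d gives the highest speed from which we can still stop
            # within the available distance at the assumed maximum braking rate.
            stoppable = (2.0 * max_decel_ms2 * max(obs.distance_m, 0.0)) ** 0.5
            allowed = min(allowed, stoppable)
        return allowed

An obstacle reported 20 m ahead caps the speed at sqrt(2 · 6 · 20) ≈ 15.5 m/s, no matter how confident the planner happens to be that the road is clear.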

Here is where details like the exact distance to an obstacle, its relative speed and dimensions, and the amount of room available to divert around it are not just nice to have. Sensors like radar and Lidar measure these directly, whereas an RGB camera plus CNN may provide them if you’re lucky, but also maybe not quite. When you’re talking about highway speeds and potentially the lives of multiple people at risk, certainty always wins out.
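
As a concrete illustration of why a measured range and closing speed are so valuable: with those two numbers, a time-to-collision estimate is a single division rather than an inference. This is only a sketch; the threshold is made up, and real automatic emergency braking logic is far more involved.

    def time_to_collision(range_m: float, closing_speed_ms: float) -> float:
        """Seconds until impact if nothing changes; infinite if we are not closing."""
        if closing_speed_ms <= 0.0:
            return float("inf")
        return range_m / closing_speed_ms

    # Example: radar reports an object 45 m ahead, closing at 25 m/s (90 km/h).
    ttc = time_to_collision(45.0, 25.0)  # 1.8 s
    if ttc < 2.0:  # illustrative threshold, not a real AEB specification
        print("request emergency braking")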

Tesla Hardware And Sneaky Radars

Arbe Phoenix radar module installed in a Tesla car as part of the Hardware 4 Autopilot hardware. (Credit: @greentheonly, Twitter)

One of the poorly kept secrets about Tesla’s Autopilot system is that it has had a front-facing radar sensor for most of its existence. Starting with Hardware 1 (HW1), the system featured a single front-facing camera behind the top of the windshield and a radar behind the lower grille, in addition to 12 ultrasonic sensors around the vehicle.

Notably, Tesla did not initially use this radar in a primary object detection role, meaning that object detection and emergency stop functionality were handled by the RGB cameras. This changed after the camera system failed to notice a white trailer against a bright sky, resulting in a spectacular crash. The subsequent firmware update gave the radar system the same role as the camera system, a change which likely would have prevented that particular crash.

HW1 used Mobileye’s EyeQ3, but after Mobileye cut ties with Tesla, Nvidia’s Drive PX 2 was used instead for HW2. This upped the number of cameras to eight, providing a surround view of the car’s surroundings, with a similar forward-facing radar. After an intermediate HW2.5 revision, HW3 was the first to use a custom processor, featuring twelve Arm Cortex-A72 cores clocked at 2.6 GHz.

HW3 initially also had a radar sensor, but in 2021 this was eliminated in favor of the camera-only ‘Tesla Vision’ system, which resulted in a significant uptick in crashes. In 2022 it was announced that the ultrasonic sensors for short-range object detection would be removed as well.

Then in January of 2023 HW4 started shipping, with even more impressive computing specs and 5 MP cameras instead of the previous 1.2 MP ones. This revision also reintroduced the forward-facing radar, apparently the Arbe Phoenix radar with a 300 meter range, though not in the Model Y. This indicates that RGB camera-only perception is still the primary mode for Tesla’s cars.

Answering The Question

At this point we can say with a high degree of certainty that with just RGB cameras it is exceedingly hard to reliably stop a vehicle from smashing into objects, for the simple reason that you are reducing the amount of reliable data that goes into your decision-making software. While the object-detecting CNN may report a 29% probability of an object being right up ahead, the radar or Lidar will have told you that a big, rather solid-looking object is lying on the road. Your own eyes would have told you that it’s a large piece of concrete that fell off a truck in front of you.
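
One way to read that: the decision logic should be allowed to act on whichever sensor is most certain. A toy fusion rule along those lines, with every threshold invented purely for the sake of illustration:

    def should_brake(camera_confidence: float,
                     radar_confirms_object: bool,
                     radar_range_m: float) -> bool:
        """Brake when either sensor is sufficiently sure an obstacle is ahead."""
        if camera_confidence >= 0.7:  # the camera alone is convincing
            return True
        if radar_confirms_object and radar_range_m < 80.0:
            return True  # a solid radar return overrules a shrugging camera
        return False

    # The scenario above: camera at 29%, radar reporting a solid object 60 m out.
    assert should_brake(0.29, True, 60.0)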

This then mostly leaves the question of whether the front-facing radar that’s present in at least some Tesla cars is about as good as the Lidar units used by other car manufacturers like Volvo, or the roof-mounted version used by Waymo. After all, both work according to roughly the same basic principles.

That said, Lidar is superior when it comes to aspects like spatial resolution, as radar uses much longer wavelengths. At the same time a radar system is bothered less by weather conditions, while generally being cheaper. For Waymo the choice of Lidar over radar comes down to this improved detail, as it can create a detailed 3D image of the surroundings, down to the direction a pedestrian is facing and the hand signals given by cyclists.
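
A back-of-the-envelope way to see that resolution gap is the diffraction limit on beam width; the figures below are rough, typical values rather than the specs of any particular unit:

    \theta_{\text{beam}} \approx \frac{\lambda}{D}, \qquad \text{lateral spot size at range } R \approx R\,\theta_{\text{beam}}

    \text{77 GHz radar: } \lambda \approx 3.9\ \text{mm},\ D \approx 10\ \text{cm} \;\Rightarrow\; \theta \approx 0.039\ \text{rad} \;\Rightarrow\; \approx 4\ \text{m at } R = 100\ \text{m}

    \text{905 nm Lidar: practical beam divergence of order } 1\text{–}3\ \text{mrad} \;\Rightarrow\; \approx 0.1\text{–}0.3\ \text{m at } R = 100\ \text{m}

That order-of-magnitude difference in lateral detail is roughly the gap between ‘something is over there’ and being able to tell which way a pedestrian is facing; automotive radars claw some of it back with antenna arrays, but they never reach Lidar-level sharpness.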

Thus the shortest possible answer is that yes, Lidar is absolutely the best option, while radar is a pretty good option to at least not drive into that semitrailer and/or pedestrian. Assuming your firmware is properly configured to act on said object detection, natch.

72 thoughts on “Self-Driving Cars And The Fight Over The Necessity Of Lidar”

    1. I am reminded of that self-driving car in San Francisco that ran someone over and got them wedged in the wheel well. It didn’t have sound or haptic sensors, and the wheel well was in a blind spot, so it had no way of knowing it had picked up a passenger.

      If I recall correctly, it went something like a quarter mile before pulling over because it thought it had a flat tire.

    2. We drive by the seat of our pants, for sure. And there’s always “does that tire sound funny to you” which is most often just a rock or something that got jammed between the ridges, but could be an early warning.

      That one time I smelled the cabin filling with smoke…well a self-driving system probably wouldn’t have made it much worse anyway.

      But that’s a super interesting point about self-drivers in general, that they are designed for the “normal” situations, and may not react well when things get far enough out of the box — on the edge of traction over a sketchy gravel road, or with actual problems.

      1. And there’s always “does that tire sound funny to you” which is most often just a rock or something that got jammed between the ridges, but could be an early warning.

        I once heard a very slight metal tinging/clinking from the back while driving a relative’s car.
        Had to bring it in for a regular checkup I think (relatives were on holiday) and mentioned that noise to the shop.
        Turns out the tip of one of the rear suspension springs/dampeners(?) had broken off and pretty much made the car unsafe for the road (not in the USA, I assume).

        Of course assuming the car shop wasn’t grifting my relatives (I’m not 100% sure they weren’t).

        1. When you get a pilot’s certificate they check your vision and hearing. A friend of mine has a VERY nice airplane, a Cessna 210, and is significantly older. As we were taking off from his runway, I heard just the faintest brrip brrip brrip kinda sound as we accelerated and it stopped almost the moment we came off the ground, so I thought, huh, something’s going on with one of the tires. He didn’t hear a thing. When we landed at the next airport, it was a little louder. I mentioned it and we looked. Smaller aircraft have bolt-together wheels, two halves with (in this case) eight bolts holding them together clamped around the tire and tube. Four of the bolts had failed and the halves were slightly apart on one side, so the tire was wider and slightly touching the axle yoke, and the sound I heard was it brushing as the tire accelerated.
          Sounds like that matter. Or like when you hear your tire pick up a nail and it’s going tink tink tink and you think ah I’m about to get a flat.

          1. I had a similar experience last Xmas in my parents’ car. They’d come down from ‘Up north’ about 250 miles away. We all went out somewhere for lunch, chatting about the funny noise coming from the front, which seemed to go away at higher speeds. My spidey sense was triggered, so we ditched the ladies at home and popped round the local garage. After about 2 minutes the young mechanic comes over. Hey guys look at this! Front left wheel was held on by ONE tight-ish nut, one finger tight, which just fell to the floor when he poked at it. The other three bolts were AWOL. They would not have made it to their hotel that night without ending up in a ditch. Nowadays I check his wheel nuts before getting in. You know, just in case. Eek.

        2. It terrifies me that people who wouldn’t notice the problem are allowed to drive.

          Or people who just think “huh that’s a weird sound” rather than notice “wow that corner of the car really doesn’t want to settle after a bump”.

      I’m definitely a hearing-oriented person. I sometimes get up and run the waste disposal for a few seconds if the dishwasher sounds off.

      2. For sure, the ‘normal’ stuff is easy (it’s really not but meatsacks make it look easy).

        Where it gets difficult is when it’s not normal. I used to tag along with a friend who recovered cars for a living; some of the stuff we saw was just so random and off the wall you’d not believe it if you’d not seen it.

        Those situations are where humans fail to cope and make bad decisions purely because the situation is so unpredictable, rapidly changing etc., even though we have all the input data plus experience.

      3. Not that it’s relevant to self-driving cars, but I was thinking about this “seat of the pants” concept just the other day when I hopped in my classic, rarely driven, manual transmission pickup and considered what input I was processing to know when it was time to shift.

        Windows down in traffic, I couldn’t really hear the engine. It doesn’t have a tach. I don’t drive it enough to judge by speed. It was mostly just vibration, and maybe a little bit of sound.

      4. More worrying still are the drivers who can hear the sounds, feel the knocking and smell the excessive heat from the engine when they park – but carry on driving regardless, wilfully assuming that things will continue to be fine.

    3. Mark Rober’s Lidar vs Tesla Camera Tests

      6 tests conducted at 40 mph. Tesla with camera only vs other vehicle with lidar only (no camera).

      Test                          Lidar   Camera

      Standing person               Y       Y
      Person stepping out           Y       Y
      Heavy fog                     Y       N
      Heavy rain                    Y       N
      Bright lights in eyes         Y       Y
      Wile E. Coyote painted wall   Y       N

      See Mark’s test on YouTube.
      Result: No need for camera, Lidar only best.

      1. That test was a disgrace, sponsored by the Lidar company that provided the experimental Lidar car. Rain is not one of Lidar’s strongest points. It is unclear if the car stopped for the obstacle in the rain or for the wall of rain itself. The object was mostly invisible on the Lidar screen, because it was hidden in the rain.

        He didn’t do this on the latest-gen Tesla and AP version. He said later that he doesn’t believe this is relevant, because they still rely on cameras only, so it is no different from what he used. Well, it turns out HW4 does brake for a Wile E. Coyote wall.

        Of course Lidar does have advantages like providing a 3d model of your environment without guesswork and being able to see through some things that normal vision can’t. But Lidar alone won’t drive your car ever. Lidar can’t read signs for instance. Differentiating between a plastic bag and a rock is also quite difficult for Lidar.

        IMO everything is in the software. Whether you use Lidar or not, the software makes the difference between a safe drive and an unsafe one. Having Lidar probably makes it easier to write a safe driving agent. And Lidars are not that expensive anymore. But Waymo and the others have shown that having a lot of sensors is not enough to have a true autonomous driving system.

  1. What I want to know is how does this work in, for example, a Canadian winter? The road isn’t visible, there are no lines because they are under a foot of snow. Lidar won’t work in the snow I think, and the road surface changes from hard pack to powder to black ice without warning. How will the car know if it can get up a hill or if it’s going to slide down backwards?

    No matter how good AI gets, I don’t think it will ever be ready for real winters.

    1. The road conditions you describe (heavy snowfall, sufficient accumulation to hide even the presence of a road, frequent black ice) are too dangerous for any vehicle less substantial than a snowplow to navigate, regardless of what’s behind the wheel.

      Yes, I’m sure many of the rugged northern tough guys in the audience do it all the time. That doesn’t make it safe.

      1. In this part of the world at 2500 meters altitude school bus drivers do that 5 days a week from November to April. They even have self-deploying chains for the tires. The rest of us plebes make do with studded winter tires.

      2. And yet thousands of drivers manage that safely, including the snowfall, which if I’m not mistaken OP didn’t include in their description.
        I don’t know why you assume a snowplow would manage that more safely? They sure could be better if the snow on the road was over a foot deep, but again that’s not mentioned in the OP.
        Here the sides of the roads are marked with roadside poles for normal cars and snowplows alike; I assume Canadian roads are the same?

          1. The average Floridian is unable to drive if there’s 1/8″ of snow on the road. I’ve witnessed it in Jacksonville, where it snows once in a blue moon.

    2. The human mind is a really, really good signal processor. Think about the times that you have driven in a rain that overwhelmed your (streaky) windshield wipers, fog (both outside and fogged up windows), and driving snow – particularly at night. The fact that you are here shows that the mind is really good at inferring missing visual information from the little good info that it may have available.

      1. I think it also shows that people drive in really deplorable conditions, and that we as a society accept a level of risk every day driving these vehicles around, and that we’re essentially desensitized to it. If I were to make an art installation where you had a random naked steering wheel or dial and you had to spend 20 minutes keeping the wheel within some electrodes moving around like a game of Operation, and the electrodes would kill you or someone else like some Saw movie, we’d all think that’s insane. But in the context of a car commute it’s normal.

    3. Just because you drive a car in winter conditions doesn’t mean you have to be racist towards those who don’t. I am Vietnamese, living in Sweden, and during winter I use public transport because where I came from driving in snow was not part of the teaching course and I am afraid of causing an accident.

      1. Your skill issues are not somebody else’s racism. Quit projecting.

        That’s not to say you haven’t experienced real racial adversity – Sweden isn’t exactly known for its diversity. This just isn’t it.

    4. The author fails to explain, but it turns out that sufficiently trained and powered neural nets recognize and respond to a variety of conditions with the same sensors that humans have.

      1. Keyword being sufficiently.

        As it is, nobody has a computer that is powerful enough and trained enough to actually perform the task of self-driving, while using little enough energy to actually perform its duties. If you put 1000 Watts worth of GPUs in the vehicle, that actually starts to consume a significant amount of fuel/electricity.

        Lidars in a sense are a hack to bypass this issue, by reducing the complexity of the input to the point that simpler machines can deal with it, but that doesn’t make them good drivers. They’re very limited in things like object permanence (knowing that an unseen object still exists), to avoid hallucinations in the model. It’s like letting your 2-year-old drive the car.

        While lidars enable the computer to detect objects more reliably, they exclude other information like color, reflectivity, etc. A puddle to a lidar for example is a black hole because the laser doesn’t reflect back. To fill that information back in, you’d need the cameras, and you’re back to square one with the requirement for powerful image processing.

        Then there’s the problem of sensor interference with all other cars blasting lasers and sonars around you, which gives you spurious readings and masks your own sensors. As demonstrated recently, the lidar in your car can actually break camera sensors on other cars if the beam hits the sensor directly. It leaves dead pixels, which then confuse other driving AIs.

    5. Canada sounds quite scary in Winter, but I have many friends there and the photos they show often tempt me to make the 20 hour plane flight to enjoy it as well.

      I wonder how LIDAR reacts to scattering from snow. A quick search suggests there are ML techniques that can cope with the scattering of photons, and with the fact that the light permeates the surface of snowbanks, but there are a lot of gotchas in those papers.

      I guess all the more reason to have visual, LIDAR, and RADAR for self driving vehicles.

      As far as ice, traction control and ABS likely give some indication, but snow scares the halloween-pumpkins out of me though so I can’t comment on the additional skill driving in conditions like this requires.

  2. Proponents of the cameras-only approach like to point out that we humans have been driving cars with just two front-mounted cameras for decades.

    Brother, if I could have innate panoramic knowledge of the 3D environment around my car, do you think I wouldn’t want that?

        1. Stop all people, animals, rain and snow, from entering the roadway. Fence off and cover the roads completely; no mixed traffic, only cars.

          Embed electrical cables in the road to send out a signal that can be detected by the cars, to keep them in their lanes.

          Implement a central tracking system, so all cars know where all other cars are and can avoid collision.

      1. The problem is the computer, which still can’t make sense of 99% of the information you throw at it. Even if you have cameras on every corner, the processing power and algorithms to make sense of it are missing.

        LIDARs simply mask this problem by performing as a sort of “pre-filter” – so the computer gets less ambiguous information that it doesn’t need to process as much. If the laser says there’s something in the way, there probably really is something in the way – but what is it or what to do about it is an entirely different question.

        “Self-driving” was technically accomplished in the 1980’s when they put an 8-bit home computer in the trunk of a car and had it follow just a handful of pixels out of a B&W camera pointed at the painted line on the side of the road. It takes surprisingly little computation to stay on the road, and much more than anyone would like to admit to do it safely without supervision.

    1. I couldn’t agree more. Although I have been driving a car for years now, having to keep track of all those details around the car using just two Mk.1 eyeballs and a gaggle of mirrors will never stop being at least slightly nerve-wracking. Anything to make that experience easier would be awesome.

      Heck, we got sensors now to make parallel parking less of a struggle, so we may as well do the same for when we’re driving.

      1. Sensory overload becomes an issue.

        Modern cars already give you all sorts of warnings all the time, and it just goes ignored because 99% of the time it’s completely irrelevant. I know there’s a bus in the next lane right beside me, you don’t need to blink an amber light at me because I’m not going to be switching lanes right now. It’s like having a “co-driver” who keeps giving you unhelpful instructions and panicking about why you’re not braking yet, why didn’t you take that turn… WATCH THAT PEDESTRIAN! etc.

    2. Another argument for cameras only is that it’s a passive sensor. You’re not beaming out energy that would confuse other cars when it hits their sensors. In heavy traffic, you’ve got laser beams going everywhere, hitting everything. There was a recent video showing that putting a cellphone camera up to a car’s lidar actually kills the image sensor by blasting out some of the pixels.

      The same could be said about car headlights though, but that just illustrates the point: getting blinded by other people’s headlights is a perennial complaint. Thing is, there’s no “low beams” for lidar.

      1. Unlike car headlights, LiDAR beams are pulsed and scanning. The chances of a LiDAR interfering with another by aiming at another one at the EXACT same time the other one is looking directly back are extremely slim. An occasional “sparkle” that is likely to only occur for one rotation and is easily filtered out.

        As to damaging imaging sensors – AEye and Luminar LiDARs are pretty rare and only 1550nm units have this problem. They rely on the fact that the human eye can’t focus 1550nm and as a result the eye safety power limits are MUCH higher.

        Most LiDARs are in the roughly 850-900nm range and thus have strict eye safety limits. For non-visible light, it must be safe even with continuous exposure of the eyeball at the output lens of the device.

        Also – there are many LiDARs that carry functional safety certifications. I have NEVER seen any vision-based system that carried one without relying on structured illumination, modulated illumination (both of which start moving them towards being a form of LiDAR), or fiducials in the environment.

    3. Ah, that old argument. Except each camera swivels in its mount, and the entire mounting assembly can swivel across a significant arc. Those cameras have higher dynamic range capability than any Bayer-on-CMOS camera, and so far no one has developed an artificial vision sensor that exceeds the capabilities of Bayer-on-CMOS. Many have tried (Foveon, X-Trans, etc), all have failed.

      The system also contains smell, vibration, and audio sensing.

      (yes, I’m agreeing with you that a vision-only system is inferior)

    4. If I could choose between 8 eyes and the attention to watch them all the time, or Lidar replacing my nose, I’d choose the cameras every time. I can’t remember ever having had to guess how far away someone is. Sure, I don’t know exactly. And I’d love to know the speed of vehicles so I can adapt (which is something you can get with Lidar/Radar). But that is a comfort issue and never a safety issue.

      So a panoramic view would be my first choice, then additional Lidar.

  3. It takes years to master effective driving and many drivers never really achieve top level skills. Expecting such from a modern system is a bit ambitious. I feel there is still a long way to go before we can trust these systems.

    I’m reminded of training my grandson when he first started driving. Sure he could see the road, the signs, and the traffic. I was often pointing out various things like a car approaching a stop sign on a side street, a stale green light, a car slowing down while approaching a Starbucks up ahead. Many such details feed into expected situations and possible driving adjustments to accommodate what is happening around us. The ever evolving situation around driving requires a high degree of pattern recognition and situational awareness. Edge detection and range determination are the barest levels of understanding, and a higher resolution of a blocky perception is not really an improvement. A much higher level of comprehension around the driving experience is required for truly autonomous driving.

    Take the example of modern adaptive cruise control systems. They detect a slower vehicle in front of you and slow your speed so you don’t run into them. But an alert driver would recognize the slower vehicle and change lanes to avoid the slower car. But what about the car approaching from the rear, is it going faster? Can you wait for that car to pass then change lanes? Do you speed up and move around the slower car to minimize slowing the faster traffic?

    1. The amount of half-truths on this sub is amazingly high. Speak to people who use the software, and have been for more than a decade. If they are now on V14, they can clue you in on what you are missing.

      When radar and ultrasonics were tied into the feedback loop, we had phantom braking.

      In industrial automation we don’t combine sensors on critical equipment. The confusion and cross signaling creates a confusion state that can cause massive safety problems.

      Waymo is doing everything possible to ditch lidar and sensor redundancy. Ask AI if you don’t believe me. They have been leaning on this crutch because they don’t have the skills to code a fully AI model. They also don’t have access to the data set.

      Look at Tesla outgrowing their operations boundaries in TX in a matter of weeks.

      FSD V14 is getting a ton of attention because it’s that good.

      1. Yup. V14.1.4 is a game changer. I push a button in my driveway and arrive 300 miles later without touching the wheel or pedals. It pulls into my driveway, parks in any parking lot. This article is so out of date that it adds nothing to the discussion.

    2. I’m teaching my son how to drive and we use FSD a lot in that process. Thinking it can’t learn these things and that they are inherently human only, you are really missing the story.

      Don’t take my word for it – watch V14 FSD videos online.

  4. The goal is to build a product that will drive autonomously at least as well as the average human on an average road, and at a cost that is acceptable to the public. Most people would agree that adding more LIDAR, radar, ultrasonic, etc. sensors would help in specific edge cases. At some point, adding more sensors will return low marginal value for average situations, but will increase costs. Designing for every known edge case will not be practical, and new edge cases will be discovered. There will always be occasional accidents, and if this is not acceptable, then we should ban human drivers as well.
    I feel it is naive for someone who is not experienced with the design and trade-offs to flaunt their opinions, but this seems to be common these days.

  5. You didn’t mention the maintenance aspects. Those spinning Lidar sensors are expensive to calibrate and maintain. I suspect that a Waymo vehicle costs around twice as much to maintain as a Tesla Robotaxi. You can argue that cost isn’t important if it saves lives, but if it makes your business model too expensive, it doesn’t matter.

  6. I found the opening sentence a bit insulting. The “under a rock” phrasing assumes a lot about the reader’s context — both geographically and culturally. I’m quite tech-savvy, work in IT, and follow sites like Hackaday, Ars Technica, and others regularly, yet I haven’t seen much of the supposed media “arguing” about sensor packages for self-driving cars. That discussion seems mostly centered in the U.S. tech ecosystem.

    It’s a small thing, but it struck me as a lazy way to open an otherwise interesting and enjoyable article.

    1. So then, did you in fact live under a rock to reach this level of offense, or are you just reacting to how you feel other people subjected to such an environment should feel?

  7. “Although RGB cameras lack the evolutionary glitches like an inverted image sensor…” Actually, video cameras DO have inverted sensors. That is a result of optics, not biology or evolution. Just as our neural networks fix that, video cameras scan from bottom-to-top and right-to-left so that when processed the picture information is ordered from top-to-bottom and left-to-right.

    What they lack is a sensor that has an area of enhanced sensitivity and a servosystem that points that portion of the sensor toward the area of maximum interest in the visual field, so that computing can be concentrated on that area while still considering lower-resolution information from a very wide field of view.

    1. And they also lack the dynamic range of human eyes, which are more sensitive to light and motion at the periphery. Cameras become slow in the dark because they have to increase exposure time to gather enough light, so they effectively become night-blind much sooner than people.

  8. How about we concentrate on making vehicles less expensive! Now there’s a concept. The average price of cars has gone up one thousand dollars per year since 1989. People are paying on money they borrowed two cars ago. Some time down the road it’s going to hit the wall.

  9. This article is so uninformed. Waymos crash all the time, hence the meme “#needsmorelidar”. Tesla hasn’t had issues with perception in years; it’s a well-solved problem at this stage. None of the issues FSD has are because it didn’t see something, but because it didn’t behave correctly.

    1. Spot on. People with no expertise keep writing articles like this. It’s getting really old. At this point, Tesla FSD is better than most drivers. I have seen it react correctly to pedestrians that I was physically unable to see (blocked by A pillar).

  10. You need to perceive depth, so if you go camera only then you need freaking stereo cameras.
    Is it so hard to figure out? FFS.
    Having some AI guess depth will only work in very predictable situations, and that might be 85% of the time, but the accidents happen in those remaining 15%.
    So use freaking stereo cams. And outlaw camera-only systems that use AI instead to ‘guess’ depth.

  11. Sensors: my Honda has some sort of radar to alert me if I am about to run into the back of another car – BUT – it false trips when I drive across railroad tracks that are buried flush in the street at right angle to the street – but it has been nice a few times when it gave me a little extra warning – so nothing is perfect – – the right side video camera that displays the view when I turn on the right turn signal – very useful and makes driving in traffic much safer
