If you haven’t lived under a rock for the past decade or so, you will have seen plenty of arguing in the media by prominent figures and their respective fanbases about the right sensor package for autonomous vehicles, or ‘self-driving cars’ in popular parlance. Since the task here is effectively to replicate what the human Mark 1 eyeball and its associated processing hardware achieve in the evolutionary layers of patched-together wetware (‘human brain’), it is tempting to think that a bunch of modern RGB cameras and a zippy computer system could handle the same vision task quite easily.
This is where reality throws a couple of curveballs. Although RGB cameras lack evolutionary glitches like an inverted image sensor and a big dead spot where the optic nerve punches through said sensor layer, it turns out that the preprocessing performed in the retina, the processing in the visual cortex, and the analysis in the rest of the brain are remarkably good at detecting objects, no doubt helped by millions of years in which only those who managed not to get eaten by predators procreated in significant numbers.
Hence the solution of sticking something like a Lidar scanner on a car makes a lot of sense. Not only does this provide direct distance measurements of one’s surroundings, it is also less bothered by rain and fog than an RGB camera is. Having more and better-quality information makes subsequent processing easier and more effective, or so it would seem.
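To make the ‘direct distance measurements’ point concrete, here is a minimal sketch in Python of why a lidar return simplifies obstacle detection: depth arrives for free with every return, whereas an RGB pipeline must first infer it. The sample returns and the braking threshold are made up for illustration and don’t reflect any particular sensor’s output format.

```python
import math

# One simulated lidar return per tuple: (range_m, azimuth_rad, elevation_rad).
# Real scanners emit thousands of these per rotation; these values are made up.
sample_returns = [
    (12.4, math.radians(-5.0), math.radians(0.5)),
    (3.1,  math.radians(0.0),  math.radians(-1.0)),
    (45.0, math.radians(10.0), math.radians(2.0)),
]

def to_cartesian(r, az, el):
    """Convert a polar lidar return to sensor-frame x/y/z coordinates."""
    x = r * math.cos(el) * math.cos(az)  # forward
    y = r * math.cos(el) * math.sin(az)  # left/right
    z = r * math.sin(el)                 # up/down
    return (x, y, z)

# Distance is measured directly, so screening for nearby obstacles is a
# simple threshold test rather than a depth-estimation problem.
BRAKE_DISTANCE_M = 5.0  # hypothetical threshold, chosen for the example
for r, az, el in sample_returns:
    x, y, z = to_cartesian(r, az, el)
    if r < BRAKE_DISTANCE_M:
        print(f"Obstacle {x:.1f} m ahead, {y:.1f} m lateral -- brake!")
```

The equivalent camera-only pipeline would have to recover that range value from pixel data first, which is exactly the hard part the debate revolves around.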