Forgotten Tech — Self-Driving Cars

The notion of self-driving cars isn’t new, and you might be surprised at the number of such projects dating back to the 1920s. Many of these systems relied on external aids built into the roadways. Only recently have self-driving cars on existing roadways moved closer to reality than fiction, thanks to increased processing power, smaller and more power-efficient computers, and compact lidar and millimeter-wave radar sensors, to name just a few enabling technologies. In South Korea, [Prof Min-hong Han] and his team of students took advantage of these advances and built an autonomous car that successfully navigated the streets of Seoul in several field trials. A second version subsequently drove itself along the 300 km journey from Seoul to the southern port city of Busan. You might think this is boring news, until you realize it was accomplished back in the early 1990s using an Intel 386-powered desktop computer.

The project created a lot of buzz at the time and was shown at the Daejeon Expo ’93 international exposition. Alas, the government eventually cancelled the research program, as it didn’t fit their focus on heavy industries like shipbuilding and steel production. Given the tremendous focus on self-driving and autonomous vehicles today, and with the benefit of hindsight, we wonder if that was the best choice. It isn’t the only decision from Seoul that seems questionable when viewed from the present: Samsung executives famously declined to buy Andy Rubin’s new operating system for digital cameras and handsets back in late 2004, and a few weeks later Android was purchased by Google.

Check out [Prof Han]’s YouTube channel, which has videos from the car’s camera as it operates in various conditions, overlaid with lane-recognition markers and other information. I’ve driven the streets of Seoul, and that alone can be a frightening experience. But [Han] manages to stretch out in the back seat, so confident in his system that he doesn’t even wear a seatbelt.

29 thoughts on “Forgotten Tech — Self-Driving Cars”

  1. What happens with these self-driving cars if one encounters a vehicle with a stop sign, a zebra crossing, a person, an elephant, or perspective-angled lines painted on its back?

  2. Could have been an RC car for all we know. Not enough info here. No pics of the engine bay. I’m going to call BS on this one till somebody shows some real-time video from the desktop of that computer stashed in the floorboard.

    1. It seems to be a legitimate project, applied to at least three different cars as far as I can deduce from the various videos and reports I’ve read. All of them were full-sized vehicles, carrying one or two passengers in the back seat. You can see demonstration videos of the computer’s view, showing lane recognition and distance keeping as it navigates various streets and highways, on Prof Han’s YouTube channel. For example, here is one showing the Lane-Keeping System (LKS): https://www.youtube.com/watch?v=eqFo7aBwmkc&t=110s, and there are other similar videos as well. I’m curious to learn more, and have reached out to Prof Han to see if I can arrange an interview.

  3. There was also the VaMP built by German research labs in the early 90s:
    https://en.wikipedia.org/wiki/VaMP
    https://medium.com/@davidrostcheck/the-self-driving-car-from-1994-fb1ec617bd5a
    “Semi”-autonomous driving on a highway with little traffic, excluding construction areas and with a human making the decisions about changing lanes or directions, looks complicated but still doable for something based only on computer vision. It’s basically a complicated line-following robot with adaptive cruise control (a toy sketch follows below).
    “Fully”-autonomous driving in a busy city or on unpredictable country roads is much more complicated and still isn’t here yet.
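
    A toy sketch of that comparison, with hypothetical signal names and illustrative, untuned gains: the camera’s lane offset drives the line following, while the radar gap to the lead car drives the adaptive cruise control.

    ```python
    def highway_step(lane_offset_m, speed_mps, gap_m, set_speed_mps=30.0):
        """One control tick of a 'line follower with adaptive cruise control'."""
        steer = -0.5 * lane_offset_m                    # steer back toward lane centre
        safe_gap_m = 2.0 * speed_mps                    # two-second following rule
        if gap_m < safe_gap_m:
            accel = -0.3 * (safe_gap_m - gap_m)         # too close: drop back
        else:
            accel = 0.1 * (set_speed_mps - speed_mps)   # clear road: hold set speed
        return steer, accel

    # 0.4 m off centre at 28 m/s, 45 m behind the lead car:
    print(highway_step(0.4, 28.0, 45.0))                # ≈ (-0.2, -3.3): correct and brake
    ```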

      1. Not only does it require general intelligence, it also requires a human “theory of mind” whenever there is a sufficiently high density of human drivers in the system. People will insist that the vehicles understand human behavior and what makes sense to humans, with no corresponding requirement for humans to understand why vehicles might do something seemingly nonsensical.

        There’s an asymmetry in the expectations, such that vehicles would need to operate at a “superhuman” level in many respects. They will need to understand when someone intends to let them proceed, while not giving a human a false impression of their own intentions, etc. This kind of non-verbal “mind reading” is something people take for granted, and it is incredibly difficult, if not impossible, to replicate in machines that have not had the lifetime experience of a human in a human environment!

        In my own research, that is the one thing for which I’ve never been able to find a satisfactory substitute. We will not only need machines embedded in human environments to gain that experience, but will simultaneously have to evolve a brain human enough to match what a human would do.

        As someone on the autism spectrum, I can tell you first-hand how hard it is to act according to the expectations of most people, when I am in fact a human being with human experience, just slightly off in my own wiring – enough that it took a good 15-20 years of my life simply to readjust myself to human society, while I’ve watched ideas I had back in college come to fruition by others some 10-20 years later.

        Back then I was quite excited about artificial neural nets and had already realized the need to apply convolutions to vision inputs about 10 years before most others would. I was working on an AI IoT and a distributed cryptocurrency with chains of signatures similar to blockchain, but using a DAG structure for scalability to enable decentralized transactions, years before Bitcoin existed. Only now do you see things like that appearing, such as IOTA. I anticipated so much of what happened in the field since then, but wasn’t able to actively participate simply because I didn’t fit into human society. That’s why I know how hard it will be for machines far less human than I am to do the same!

        1. +1

          “I’ve watched ideas I had back in college come to fruition by others some 10-20 years later.”

          I can relate to this, I think. I already had the idea of a 3D printer in the mid-to-late 90s, when I was little.
          However, my idea was a bit more sophisticated than what’s mainstream now (it involved a laser). That being said, I didn’t consider my own idea ingenious whatsoever; it merely looked logical and useful to me. And sorta cool (a laser, yay!)

          1. I find that many people who “have ideas” actually have only a vague notion of something at a very abstract level. They later correct or augment the idea to match what actually exists, to “prove” themselves right and say “I told you so.”

            This happens very subtly, because they forget what they actually knew, thought, or said years, months, days, or even minutes ago, generating a false memory instead. It has the effect of “Lasers? Oh yeah, I thought about that when I was eight!”, when their actual 8-year-old self didn’t even know what photons were, and a “laser” was just a pew-pew gun in a cartoon.

            This may happen more easily to people who have an impaired theory of other minds, because your past self is also “an other mind”. There is a tendency to have the intuition that other people know the same facts and feel the same feelings as you do, and that intuition also applies to your own past self. Personal events in your history get auto-updated to match your present understanding as soon as the actual memory starts to fade – with the unfortunate side effect that the person starts to believe they’re a genius because they “already knew” all this stuff.

            So best keep a diary.

  4. “Many of these systems relied on external aids built into the roadways.”

    This! But let’s take it a step further.

    Totally self-driving cars that mix freely with human drivers on public roadways may one day be commonplace. But it’s a hard technical challenge, and even once it is totally solved, convincing enough people that it is safe, and getting the laws adjusted to make it legal, is going to take forever.

    You can’t even convince half the fools out there to take a COVID vaccine, even after vaccination eliminated smallpox. How are you going to convince the anti-tech public to trust self-driving cars?

    But there are a lot of unused railways out there. I know I’m not the first to propose this: let’s just start putting train wheels on cars. Add a transponder with a bit of logic and some sort of rule set, and they could be self-driving whenever they are on the tracks (one toy rule set is sketched below).
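
    One very simple rule set such a transponder could enforce (purely a sketch, nothing standard): divide the track into blocks, and let a car advance only into a block that no other car currently holds – essentially classic railway block signalling.

    ```python
    # Track occupancy as reported by the transponder network (toy data).
    occupied = {"block_3": "car_A"}

    def may_enter(block_id, car_id):
        holder = occupied.get(block_id)
        return holder is None or holder == car_id

    def advance(car_id, current_block, next_block):
        """Move into the next block if it is free; otherwise stop and wait."""
        if may_enter(next_block, car_id):
            occupied.pop(current_block, None)
            occupied[next_block] = car_id
            return True
        return False

    print(advance("car_B", "block_2", "block_3"))   # False: car_A holds block_3
    print(advance("car_A", "block_3", "block_4"))   # True: the way ahead is clear
    ```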

    1. One thing I wondered about, watching that old 386 computer plot the lane and obstacle recognition marks on the video feed, was: how does it know where to go?

      Something I learned in the long-distant past, about missiles and rockets, but which applies here too, is that the problem has two parts: Guidance (how to get there from here) and Control (keeping on the planned course). Modern autonomous vehicles are not completely autonomous – they take inputs from GPS satellites. This is more convenient than putting wires in the roadways, but it is still an external aid. Clearly GPS information is used to solve the guidance problem, like “where is the nearest pizzeria?” But how much is GPS used in controlling the vehicle as well? It seems to me that the data would be too coarse to be of substantial help, but maybe it could pick up on gross errors, like “oops, I’m on the wrong side of the freeway” (a toy sketch of this split follows below).

      Maps are another external aid, but I think it is fair to ignore them when comparing people and self-driving cars; after all, people also need maps or some basic input about the roads. So getting to Busan from Seoul is pretty easy once you’re on the freeway. It’s the city navigation that’s a challenge, for computers or people. Hmm, in the early 90s GPS was working, but selective availability was active. Still, maybe it could have been used to plot a course? If I get a chance to meet Prof Han, that’s certainly one of the questions I’ll ask.
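
      As a rough sketch of that split (everything here is hypothetical and simplified): guidance only needs a position good enough to pick the nearest node of a road graph, so coarse GPS suffices, while control steers from the camera’s lane estimate and never touches GPS.

      ```python
      import heapq

      def plan_route(road_graph, start, goal):
          """Guidance: decide which roads to take. GPS accurate to tens of
          metres is plenty, since it only needs to identify the nearest node."""
          dist, prev, done = {start: 0.0}, {}, set()
          heap = [(0.0, start)]
          while heap:
              d, u = heapq.heappop(heap)
              if u in done:
                  continue
              done.add(u)
              if u == goal:
                  break
              for v, w in road_graph.get(u, []):
                  if d + w < dist.get(v, float("inf")):
                      dist[v], prev[v] = d + w, u
                      heapq.heappush(heap, (d + w, v))
          path = [goal]
          while path[-1] != start:
              path.append(prev[path[-1]])
          return path[::-1]

      def steering(lane_offset_m, heading_error_rad):
          """Control: hold the planned course from camera data alone.
          Gains are illustrative; GPS is far too coarse for this loop."""
          return -(0.4 * lane_offset_m + 1.5 * heading_error_rad)

      graph = {"Seoul": [("Daejeon", 140)], "Daejeon": [("Daegu", 120)],
               "Daegu": [("Busan", 90)], "Busan": []}
      print(plan_route(graph, "Seoul", "Busan"))   # ['Seoul', 'Daejeon', 'Daegu', 'Busan']
      print(steering(-0.3, 0.02))                  # 0.09: a small corrective steer
      ```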

      “How are you going to convince the anti-tech public to trust self-driving cars?”

      That’s a problem in general, not just with tech. Most people, and I’m guilty myself, are not good at evaluating and judging statistical odds. Even if you demonstrate statistically that taking a robot-controlled taxi is safer than making a milkshake in your own home, people will still feel uncomfortable about it.

      1. > people, and I’m guilty myself, are not good at evaluating and judging statistical odds

        More importantly, our judgement about what we should or should not allow is not based on statistical odds. In the US, more than 10,000 people die every year from falling down stairs. Now, suppose we replaced all the stairs with escalators that prevent most of those falls, but that, due to occasional mechanical failure, kill 10 people per year. We would find those 10 deaths absolutely unacceptable, despite being statistically much better off than before.

        1. It’s about agency. People accept risks they have influence over. You choose to take the stairs even when you feel a bit wobbly, and if you should fall, it’s your own fault.

          For example, the risk of accidents in traffic depends largely on how you drive. If someone else, such as a robot, is driving, you haven’t got that choice. The AI car is a great equalizer – it increases the risk of accidents for people who choose not to engage in risky behavior, even while it lowers the overall accident rate.

          The distribution of risk in traffic is such that a relatively small minority of drivers carries most of the risk. For example, drunk drivers are implicated in a third of accidents, yet nowhere near a third of the drivers on the road at any given time are drunk. Because of this, statistically speaking, the majority of drivers are far better than the average, and the risk reduction would mostly apply to a minority of drivers (a toy calculation below makes this concrete).
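
          A toy calculation with made-up numbers (not real statistics) makes the “better than average” point concrete:

          ```python
          # Hypothetically: 2% of drivers are drunk and cause 1/3 of accidents.
          accidents = 300.0
          drunk_share, sober_share = 0.02, 0.98
          drunk_acc, sober_acc = accidents / 3, 2 * accidents / 3

          risk_drunk = drunk_acc / drunk_share   # accidents per share of drivers
          risk_sober = sober_acc / sober_share
          risk_avg = accidents                   # rate for the whole population

          print(risk_drunk / risk_sober)         # ~24.5: far riskier than a sober driver
          print(risk_sober / risk_avg)           # ~0.68: the sober 98% beat the average
          ```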

          The question is: are you ethically justified in throwing the majority of people under the proverbial bus to lower the risk for a minority? Or should there be another approach, such as using the AI to check and maintain the condition of the driver instead…

      2. Generally, the approaches fall along two lines: feature recognition and environmental mapping.

        The Google/Waymo approach is to first scan the entire roadside with a lidar, then reduce this data into a virtual representation of the surroundings. When the car drives itself, it makes a new scan and runs a best-fit algorithm against the known measured data to estimate its precise location. This has the added advantage that subtracting the model from the measured data makes all the differences pop out, so identifying non-static objects such as other cars against the background becomes almost trivial (a toy version below).
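
        A 1-D toy of that best-fit-then-subtract idea (real systems match full 3-D point clouds; every name and threshold here is invented for illustration):

        ```python
        import numpy as np

        def localize_and_segment(prior_map, scan, max_shift=20):
            """Slide the new scan along the stored map, keep the offset that
            fits best, then subtract the map so non-static objects pop out."""
            best_shift, best_err = 0, float("inf")
            for s in range(-max_shift, max_shift + 1):
                err = np.mean((scan - np.roll(prior_map, s)) ** 2)  # least squares
                if err < best_err:
                    best_shift, best_err = s, err
            residual = np.abs(scan - np.roll(prior_map, best_shift))
            return best_shift, residual > 3.0    # >3 m of difference = new object

        # Toy world: a scanned range profile, re-measured 4 samples along the
        # road, with a parked car (an 8 m bump) that isn't in the prior map.
        world = 10 * np.sin(np.linspace(0, 60, 200))
        scan = np.roll(world, 4).copy()
        scan[100:110] += 8.0
        shift, movers = localize_and_segment(world, scan)
        print(shift, movers.nonzero()[0])        # 4, samples 100..109
        ```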

        However, the more the environment or the measurement differs from what was previously recorded, the less accurate the fit becomes. Adding a new billboard, for example, may shift the algorithm’s estimate a couple of inches westwards. It also suffers from information overload: while it can detect other objects, it mostly has no idea what they are, or what to do about them, except slow down and try to go around.

        The Tesla approach is to rely on particular visual and environmental cues which are hard-coded (trained) into the system, such as lane markers, ditches, the visual appearance of the road surface, the tail lights of other cars, etc. The car knows where it is on the road because it deduces the center of each lane from these environmental cues, which is more along the lines of how people do it. It actually takes surprisingly little information – just a few pixels in a video feed – which is why this sort of “self-driving” was already accomplished in the late 80s and early 90s, and the last 30 years have been about honing it from 99% accuracy to 99.999…% (a toy version of the “few pixels” idea follows below).
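
        A toy version of that “few pixels” idea (invented numbers throughout): threshold the bright lane-marker pixels in a single image row and measure the midpoint between them.

        ```python
        import numpy as np

        def lane_center_offset(row, marker_threshold=200):
            """Offset (in pixels) of the lane centre from the image centre,
            taken from the bright paint pixels in one camera scanline."""
            cols = np.where(row > marker_threshold)[0]
            if cols.size < 2:
                return None              # cue missing: the failure mode described below
            center_px = (cols.min() + cols.max()) / 2
            return center_px - len(row) / 2

        row = np.zeros(320)
        row[40] = row[250] = 255         # two painted lane markers in this scanline
        print(lane_center_offset(row))   # -15.0: lane centre is left of image centre
        ```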

        However, the system is still very limited in how much information it processes and in how it processes that information, so when the very specific cues it is looking for are misleading, misidentified, or missing, the car tends to veer off the road or drive into things. It is a system that reacts to things it has been programmed (trained) with, and it doesn’t even try to deal with anything else.

        These differences largely explain why the Google cars act like scared mules in traffic, making emergency stops any time they see something they don’t understand and getting rear-ended in the process, while the Tesla cars just happily plow into trucks, road dividers, and toll booth barriers…

  5. It ain’t hard to drive a production vehicle on rails. UP & BNSF do it all the time. Getting permission is hard. Getting access is harder still, unless you bootleg it.
