Full Self-Driving, On A Budget

Self-driving is currently the Holy Grail of the automotive world, with a number of companies racing to build general-purpose autonomous vehicles that can get from point A to point B with no driver input. While no one has brought such a vehicle to market yet, at least one company has promised the feature, taken customers’ money for it, and then continually moved the goalposts for delivery as the difficulty of the problem became apparent. But it doesn’t need to be that hard or expensive to solve, at least in some situations.

The situation in question is driving along a single stretch of highway, and the system handles only steering, so it takes no accelerator or brake pedal input. The highway is first driven normally, with a webcam capturing images of the route and an Arduino logging the steering angle. The idea is that, with enough training, the system could eventually steer the car itself. But some math needs to happen on the training data first: since the steering wheel spends most of its time not turning the car at all, the data has to be rebalanced so that genuine steering events aren’t dismissed as statistical anomalies. After training, the system does a surprisingly good job of “driving” based on this data, and does it on a budget not much larger than the cost of a laptop, a microcontroller, and a webcam.
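The pipeline described here (log frames and steering angles while a human drives, then learn the mapping) is textbook behavioral cloning. Below is a minimal, self-contained sketch of that data flow; everything in it is illustrative, with a toy 16-value “frame” and a nearest-neighbour lookup standing in for the project’s actual webcam images and neural network:

```python
import math
import random

# Toy behavioral cloning: log (frame, steering angle) pairs while a
# simulated human drives, then predict the angle for a new frame by
# nearest-neighbour lookup. A real system trains a neural network,
# but the record-then-imitate data flow is the same.

def record_drive(n_samples=200, seed=0):
    """Simulate logging: each 16-value frame encodes the car's lane
    offset (plus pixel noise); the human steers proportionally
    against the offset (angle in degrees)."""
    rng = random.Random(seed)
    log = []
    for _ in range(n_samples):
        offset = rng.uniform(-1.0, 1.0)                # -1 = far left
        frame = [offset + rng.gauss(0, 0.05) for _ in range(16)]
        angle = -25.0 * offset                         # corrective steer
        log.append((frame, angle))
    return log

def predict_angle(log, frame):
    """Return the logged angle of the most similar logged frame."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, angle = min(log, key=lambda sample: dist(sample[0], frame))
    return angle

log = record_drive()
drifted_right = [0.5] * 16        # frame showing the car right of centre
angle = predict_angle(log, drifted_right)
print(angle)                      # negative: steer back to the left
```

Note that the rebalancing problem the article mentions doesn’t show up in this toy, because the simulated offsets are drawn uniformly; real highway logs are overwhelmingly wheel-centred frames, which is exactly why the recorded data has to be reweighted before training.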

Admittedly, this project was a proof of concept meant to investigate machine learning, neural networks, and the other statistical algorithms used in these sorts of systems, and it doesn’t actually drive any cars on any roadways. Even the creator says he wouldn’t trust it himself, but that he was pleasantly surprised by the results of such a simple system. It could also be expanded to handle the brake and accelerator pedals with separate neural networks. It’s not our first budget-friendly self-driving system, either. This one makes it happen with the enormous computing resources of a single Android smartphone.

35 thoughts on “Full Self-Driving, On A Budget”

  1. In the old days, a software project that stayed at a floating 90% done for years was a _failed project_. That project and its ‘owner’ were taken out back and shot before more resources could be wasted.

    Does ‘AI’ change that somehow? Explain.
    Is ‘impossible to debug’ a feature or a flaw of neural nets?

    1. “Self-driving cars” isn’t a singular software project. It’s a research goal. That’s like being incredulous we still research the traveling salesman problem or something.

      1. Tesla’s Full Self Driving, on the other hand, IS a failed software project. Same for Google’s ‘me too’. IIRC Apple abandoned their effort.

        The first cars that were sold with the feature are EOL and incapable of running newer versions of ‘not full self driving’.

      1. I guess the people that train trains are called trainers.
        I’m not quite sure if the passengers on those trains that are being trained are called trainees; perhaps it’s just the train itself that’s the trainee.

        Some trains have spots painted on them for decoration purposes, these are applied by a specialized highly trained group of painters referred to as trainspotters.

    1. They are trumped by the small issue that convincing people to vote for ripping up almost all their transportation infrastructure to replace it with trains isn’t going to happen.

    2. I guess you must have missed all those news articles about train collisions, thefts from freight cars and safety issues resulting from staffing cutbacks so that greedy CEOs can increase profits while sacrificing rail safety. A shame you didn’t get those memos.

      1. There’s a difference between the problems of “privatizing a public service resulted in it going to shit” and “the tech doesn’t work despite years of development, and even if it did, it really wouldn’t fix much”.

    3. Even if we got self driving cars, too, the amount of road space they take up is way more than a train. I always like to say – self driving and electric cars are a great way to save the auto industry, but not a great way to fix transit.

  2. Rather than attempting to fully integrate every function of operating a vehicle, it could be easier to handle tasks separately. Maintain Speed. STeering. Obstacle Detection. BRaking. Have each task ‘listen’ for interrupts from the others. If OD senses the vehicle ahead is too close, it ‘broadcasts’ an interrupt message with data on where the obstacle is. MS, ST, and BR code decides how to avoid the obstacle. If OD says it’s an obstacle moving in from off the right side of the road, then MS will disengage, BR will hit the brakes, and ST will turn to the left. If OD says the obstacle is straight ahead and moving the same direction, then MS will disengage. If that doesn’t result in a return to the safe following distance, then BR will slow the vehicle down.

    1. And your response, while seemingly detailed, highlights the differences between proof of concept and a deliverable product. It’s the edge cases and unexpected situations that contribute to software complexity and development time. With most projects, 5% of the effort goes to the POC but 95% of the effort toward aforementioned edge cases.

    2. Responding to emergencies that happen at speed fundamentally requires all the different parameters to be handled by a single, integrated system. For just a couple examples of why: tires produce a limited amount of traction. That “traction budget” can be used to change speed or direction, but the more you do of one the less you have left to do the other. Doing either also shifts the center of mass, increasing or decreasing the normal force (and thus traction) on individual wheels.

      3. That is a massively simplified idea of what it needs to do, and by hard coding it the number of scenarios it can react to is limited. Considering every road and every time on that road will be different and have different hazards, hard coding the rules doesn’t really work.

      Also, a lot of newer cars have the features you are describing, but they count as driver assistance, not autonomous driving, as they only engage when needed.

    1. That’s just fancy LKAS. On most vehicles it uses the built-in cruise control. It will also void any warranty you had; I have also seen insurers deny claims for cars that have it.
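The decomposition proposed at the top of this thread, independent tasks broadcasting interrupts to one another, can be mocked up as a tiny publish/subscribe loop. All the names and messages below are illustrative, not anyone’s actual design:

```python
from dataclasses import dataclass

# Mock-up of the MS / ST / BR / OD scheme: obstacle detection (OD)
# broadcasts an event; maintain-speed (MS), steering (ST) and
# braking (BR) each decide independently how to react.

@dataclass
class Obstacle:
    bearing: str       # "ahead", "left" or "right"
    closing: bool      # is the gap to the obstacle shrinking?

class Bus:
    def __init__(self):
        self.tasks = []
    def subscribe(self, task):
        self.tasks.append(task)
    def broadcast(self, event):
        return [task(event) for task in self.tasks]

def maintain_speed(ev):
    return "MS: disengage" if ev.closing else "MS: hold"

def steering(ev):
    if ev.bearing == "right":
        return "ST: turn left"
    if ev.bearing == "left":
        return "ST: turn right"
    return "ST: hold"

def braking(ev):
    return "BR: brake" if ev.closing else "BR: hold"

bus = Bus()
for task in (maintain_speed, steering, braking):
    bus.subscribe(task)

# OD spots an obstacle moving in from the right-hand side:
result = bus.broadcast(Obstacle(bearing="right", closing=True))
print(result)   # ['MS: disengage', 'ST: turn left', 'BR: brake']
```

As the replies point out, though, the hard part isn’t the plumbing: it’s that the three reactions share one traction budget and can’t really be decided independently.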

  3. >it doesn’t need to be that hard or expensive to solve, at least in some situations.

    I could not agree more. If you don’t mind running people over, hitting other cars, or driving into lakes, etc., it is easy.

  4. I’m beginning to think that the biggest reason machine learning self driving doesn’t actually do what it’s supposed to do is that the people designing the systems have absolutely no clue what a good driver actually does when driving. They just tell it how to operate the vehicle and to recognize the road, the signs, and obstacles, and they think they’ve covered everything a human does and just need to iterate from there.

    As for independent isolated systems for each control input, even a poor driver will drive worse if you teach them that their steering and pedal inputs are completely isolated from each other. If that’s what they learn, they might try applying maximum gas or brake at the same time as steering at a steep angle, which is a great way to make things worse than doing either thing on its own. Best to make sure that the actual decision maker has all the information and makes a single determination of what to attempt to make the vehicle do.

    1. If you think the people working on it have no idea what a human does when driving then why don’t you do it? I can guarantee it is nowhere near as easy as you seem to think.

      1. Because I don’t want it to exist or to be one of the ones who is working on it. Even if I did, I wouldn’t want to start from scratch. The whole point of my comment is that I think people who are fans of various popular uses of AI/ML/autonomous machines probably don’t appreciate the humans they’re trying to replace enough. With the amount of driving that’s done subconsciously, and the difference in skill between the average and the best drivers, and the unique skills needed for driving in various environments, it’s absurd to think you don’t need to spend time studying what a diverse group of expert drivers do, and figuring out what their decision making process is, even though it happens in a blink. Some of them will teach you about car handling, some will teach you about how to recognize when another car is about to enter your lane without a blinker, some will teach you how to make sure other cars know what you’re about to do when you are both at an intersection…
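The “traction budget” mentioned in the threads above has a standard textbook form, the friction circle: a tyre can deliver at most roughly μg of combined horizontal acceleration, shared between braking and cornering. A quick numeric sketch, with an illustrative friction coefficient and made-up demand values:

```python
import math

MU = 0.9     # illustrative friction coefficient, roughly dry asphalt
G = 9.81     # gravitational acceleration, m/s^2

def traction_left(brake_ax, corner_ay):
    """Remaining acceleration capacity (m/s^2) after the requested
    braking (longitudinal) and cornering (lateral) demands; negative
    means the tyre saturates and the car slides."""
    demand = math.hypot(brake_ax, corner_ay)   # combined demand vector
    return MU * G - demand

# Hard braking alone just fits inside the budget...
print(traction_left(8.8, 0.0) > 0)    # True
# ...but adding hard cornering on top blows through it (a skid):
print(traction_left(8.8, 4.0) > 0)    # False
```

This is the physical reason an emergency manoeuvre needs one integrated decision-maker rather than independent pedal and wheel tasks.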

  5. Creating a self driving car is easy. Creating one that is safe and reliable is hard. 99.9% safe is not enough. We expect a self driving car to be safer than humans in all possible scenarios. And stopping in case it’s confused doesn’t make it reliable. There are so many corner cases. A self driving car cannot think, it can only work within data that is close to its training set. Put it in a unique situation and it might make the wrong decision.

    1. Indeed. I stopped the video when he was explaining that buying some hardware to read some CAN messages was already too expensive for him. Those extra nines of reliability are hard to get. What if for example a single transistor fails in an H-bridge and the servo yanks on the steering wheel with full power? Things like that are hardly a concern for toy cars, but have serious implications for the car, and people in and around it.

  6. I’m 87% sure my $10 OBD Bluetooth dongle can read steering and acceleration data. And he taped an Arduino to his steering wheel to use as a spirit level? Quite the researcher…

  7. I had a self-driving Tesla a while ago in Chicago. It quickly and confidently blew every stop sign and red light I tested it at 😂

    Good thing I tried it out at night or the thing probably would have gotten me killed.
