Training A Neural Network To Play A Driving Game

Often, when we think of getting a computer to complete a task, we contemplate creating complex algorithms that take in the relevant inputs and produce the desired behaviour. For some tasks, like navigating a car down a road, the sheer multitude of input data and its relationship to the desired output is so complex that it becomes near-impossible to code a solution. In these cases, it can make more sense to create a neural network and train the computer to do the job, as one would a human. On a more basic level, [Gigante] did just that, teaching a neural network to play a basic driving game with a genetic algorithm.

The game itself is a basic top-down 2D driving game. The AI is given the distance to the edge of the track along five lines projected at different angles from the front of the vehicle, and it also knows its own speed and direction. Given these seven numbers, it calculates outputs for steering, braking, and acceleration to drive the car.
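
For a sense of how small such a controller can be, here's a minimal sketch in Python. Only the seven inputs and three outputs come from the article; the layer size, activation, and class name are assumptions for illustration, not details from [Gigante]'s write-up.

    import numpy as np

    # Hypothetical controller: 7 inputs (5 ray distances, speed, direction)
    # feed one small hidden layer, which produces 3 outputs
    # (steering, braking, acceleration).
    class CarController:
        def __init__(self, hidden=8, rng=None):
            rng = rng or np.random.default_rng()
            self.w1 = rng.normal(0, 1, (hidden, 7))
            self.b1 = rng.normal(0, 1, hidden)
            self.w2 = rng.normal(0, 1, (3, hidden))
            self.b2 = rng.normal(0, 1, 3)

        def act(self, ray_distances, speed, direction):
            # Stack the 7 scalar inputs and run them through the hidden layer.
            x = np.concatenate([ray_distances, [speed, direction]])
            h = np.tanh(self.w1 @ x + self.b1)
            steering, braking, acceleration = np.tanh(self.w2 @ h + self.b2)
            return steering, braking, acceleration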

To train the AI, [Gigante] started with 650 AIs and picked the best performer, which just barely managed to navigate the first two corners. This AI was marked as the parent of the next generation, and its offspring were created by applying random mutations. Each generation showed some improvement, with [Gigante] picking the best performers each time to parent the next generation. Within just four iterations, some of the cars are able to complete a full lap. With enough training, the cars can complete the course at speed without hitting the walls at all.
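
As a rough outline of that training loop, here's a minimal sketch reusing the CarController above. The population of 650 and the single-parent selection come from the article; the mutation strength and the evaluate_lap() simulation hook are assumptions.

    import copy
    import numpy as np

    POPULATION = 650       # population size from the article
    MUTATION_STD = 0.1     # assumed mutation strength

    def mutate(parent, rng):
        # Copy the parent and add small random noise to every weight.
        child = copy.deepcopy(parent)
        for w in (child.w1, child.b1, child.w2, child.b2):
            w += rng.normal(0, MUTATION_STD, w.shape)
        return child

    def evolve(generations, evaluate_lap, rng=None):
        # evaluate_lap(controller) -> fitness score; assumed to run the game simulation.
        rng = rng or np.random.default_rng()
        population = [CarController(rng=rng) for _ in range(POPULATION)]
        for _ in range(generations):
            best = max(population, key=evaluate_lap)   # pick the best performer
            population = [best] + [mutate(best, rng) for _ in range(POPULATION - 1)]
        return max(population, key=evaluate_lap)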

It’s a great example of machine learning and the use of genetic algorithms to improve fitness over time. [Gigante] points out that there’s no need for a human in the loop either, if the software is coded to self-measure the fitness of each generation. We’ve seen similar techniques used to play Mario, too. Video after the break.
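
That self-measured fitness only needs information the simulation already has. One common choice (an assumption here, not necessarily what [Gigante] used) is distance travelled along the track before crashing or running out of time, which requires no human judgement at all:

    def evaluate_lap(controller, track, max_steps=2000):
        # Hypothetical fitness: progress along the track before a crash or timeout.
        # `track` is an assumed simulation object exposing reset() and step().
        car = track.reset()
        for _ in range(max_steps):
            action = controller.act(car.ray_distances, car.speed, car.direction)
            car, crashed = track.step(action)
            if crashed:
                break
        return car.distance_along_track

In practice this would be bound to a specific track (for example with functools.partial) before being handed to the evolution loop above.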

12 thoughts on “Training A Neural Network To Play A Driving Game”

  1. I think the AI is more likely to learn that specific track, and unlikely to actually learn anything about driving. That’s like saying somebody who memorized the keys for a single song knows how to play piano.

  2. A fun demo, but it appears there is no “learning” here at all. It’s just naively selecting which of 650^4 = 178 billion random trajectories will complete the course. Splitting it into four iterations just makes it more computationally tractable.

    But, really, if you have those 7 number inputs, *every* car should be able to complete the course.

  3. I don’t think so.

    The inputs of the neural net are 5 distances, speed and the direction.

    Even if speed and direction could be used to estimate the car’s relative position, these 7 inputs are used to infer the next action (accelerate, brake, turn left or right…), whatever the position.

  4. people are still harping on “neural networks”?
    they are SIMULATED, and nothing more.

    … a set of algorithms to create further algorithms, there you admitted it. now stop referring to the technology that we have been fearing or waiting for for 50 years.

    if it has been developed, we don’t know about it. and i hope we never do. recently, we finally have a movie out that explains my point.

    when networks that contain actual neurons grow an AI (or we give it one),
    we are on the brink of near-extinction.
    we are flawed, and a flawless intelligence will undoubtedly decide we should be on the chopping block. not end of story; but OUR end of story. we will never live to see the result of further changes to the GENETICS and NEURONS of biological computing.

    1. You sound like a crackpot, do you know that?

      No person working on AI algorithms claims that neural networks have anything to do with biological neurons. There is some similarity, but basically it’s just a lot of algebra and a set of algorithms to find matrices of coefficients which approximate a dataset. They seem to work on some problems quite well, which makes them useful.

      All this talk about evil AI is pure nonsense. If we should fear something, it’s not algorithms turning evil – it’s people using these algorithms for malicious purposes.


    2. We already have lab-grown minibrains, and there are already ethical concerns (not about super AI though, but about the possibility that they are suffering).

      Strong AI does seem like a pretty bad idea though. There’s no hard proof that it would do any particular task all that much better than purpose-built weak AI, there are ethical concerns, and I think it’s pretty obviously Not The Best Idea, just like a Mars colony with anything resembling current tech.
