Although not the first video game ever produced, Pong was the first to achieve commercial success, and it has had a tremendous influence on our culture as a whole. In its day, its popularity ushered in an arcade era that would last for more than two decades. Today, it retains a similar popularity partly thanks to its approachability: the gameplay is simple, the original logic was hardwired, and it offers a window into the state of computer science at the time. For these reasons, [Nick Bild] has decided to recreate this arcade classic, but not in a traditional way. He's trained a neural network to become the game instead.
To train this neural network, [Nick] used hundreds of thousands of images of gameplay. Much of it was real footage, but he had to generate synthetic data for rare events like paddle misses. The system is a transformer-based network with separate branches for predicting the ball's movement, handling user input, and predicting paddle motion; a final branch integrates all of these processes. To play the game, the network receives four initial frames and predicts everything from there.
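The write-up doesn't include code, so the branch structure can only be sketched. The toy below is purely illustrative: it stands in plain NumPy MLPs for the transformer branches, and every size, name, and parameter (frame dimensions, hidden width, the up/down input encoding) is an invented assumption, not [Nick]'s actual architecture. What it does preserve is the described shape of the system: three separate branches (ball, paddle, user input) feeding one integration branch, seeded with four initial frames and then rolled out autoregressively.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME = 16 * 16   # toy frame size (assumption; the real project uses full game frames)
CTX = 4           # the network is seeded with four initial frames
HID = 32          # hidden width (assumption)

def mlp(x, w1, b1, w2, b2):
    """Two-layer perceptron with ReLU, standing in for a transformer branch."""
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

def init(n_in, n_out):
    """Random, untrained weights for one branch."""
    return (rng.normal(0, 0.1, (n_in, HID)), np.zeros(HID),
            rng.normal(0, 0.1, (HID, n_out)), np.zeros(n_out))

# Separate branches, as described: ball motion, paddle motion, user input.
ball_params   = init(CTX * FRAME, HID)
paddle_params = init(CTX * FRAME, HID)
input_params  = init(2, HID)            # hypothetical up/down one-hot control
fuse_params   = init(3 * HID, FRAME)    # integration branch -> next frame

def predict_next_frame(frames, user_input):
    """frames: (CTX, FRAME) recent frames; user_input: (2,) control vector."""
    x = frames.reshape(-1)
    ball   = mlp(x, *ball_params)
    paddle = mlp(x, *paddle_params)
    ctrl   = mlp(user_input, *input_params)
    return mlp(np.concatenate([ball, paddle, ctrl]), *fuse_params)

# Autoregressive rollout: seed with four frames, then feed predictions back in.
frames = rng.random((CTX, FRAME))
for _ in range(3):
    nxt = predict_next_frame(frames, np.array([1.0, 0.0]))
    frames = np.vstack([frames[1:], nxt])
```

The key design point is the rollout loop at the bottom: after the four seed frames, the "game" never sees ground truth again, so every subsequent frame is the network predicting from its own output.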
From the short video linked below, the game appears to behave indistinguishably from a traditionally coded game. Even more impressive is that, due to [Nick]’s lack of a GPU, the neural network itself was trained using only a pair of old Xeon processors. He’s pretty familiar with functionally useful AI as well. He recently built a project that uses generative AI running on an 80s-era Commodore to generate images in a similar way to modern versions, just with slightly fewer pixels.
Very cool, really provides a look into the more unconventional uses of NNs
This is a very important frontier in AI today and any exploration of it is valuable.
As AI-powered robotics advance, we're depending on robots' ability to improve their own physical movement by pre-planning their actions through internal simulations. It is surprisingly difficult to have an AI look at a situation and construct a simulation using a traditional physics engine; it is more effective to have the physics themselves simulated by a neural network. What you lose in accuracy you make up for in speed and flexibility. Consider: when you're about to pick up a towel, you imagine in advance what will happen when you grasp the fabric and lift your hand, but your imagined scenario isn't likely to accurately predict small details like the exact folds in the fabric as you move it. Those fine details don't matter.
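The accuracy-for-speed trade described above can be shown in miniature. This sketch (my own toy example, nothing from the article) uses a bouncing-ball step as the "traditional physics engine", then fits a cheap linear surrogate from sampled transitions, standing in for a learned neural simulator: one matrix multiply per step, roughly right in the common case, and fuzzy around the hard nonlinearity (the bounce), which is exactly the kind of fine detail the comment argues often doesn't matter.

```python
import numpy as np

rng = np.random.default_rng(1)

def physics_step(state):
    """Analytic 'engine': 1-D ball with gravity, bouncing off the floor."""
    pos, vel = state
    vel -= 0.05            # gravity
    pos += vel
    if pos < 0:            # elastic bounce at the floor
        pos, vel = -pos, -vel
    return np.array([pos, vel])

# Collect transitions from the true engine...
states = rng.uniform([0.0, -1.0], [10.0, 1.0], size=(5000, 2))
nexts = np.array([physics_step(s) for s in states])

# ...and fit a linear surrogate (a crude stand-in for a trained network).
X = np.hstack([states, np.ones((len(states), 1))])   # [pos, vel, bias]
W, *_ = np.linalg.lstsq(X, nexts, rcond=None)

def surrogate_step(state):
    """One matrix multiply replaces the whole engine step."""
    return np.append(state, 1.0) @ W

s = np.array([5.0, 0.5])
print("engine:   ", physics_step(s))
print("surrogate:", surrogate_step(s))
```

Away from the bounce the surrogate tracks the engine closely; near the floor it blurs the discontinuity, trading that fidelity for a step that is trivially fast and differentiable.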
As we move away from manually-programmed robots and towards robots that observe and devise their own solutions to tasks, replacing traditional simulation with generative AI is crucial.
Pretty cool way to make a 256 byte game into a 160GB game
I think any random generator would give the same result, combined with plenty of pull-up resistors to increase the personal gain. I'm not mentioning any names here either, but if you put the circuit in a box, I would suggest a big orange box to match certain skin tones.
Type this request into your fav LLM: "How much time and GPU power would it take to train an artificial neural network the size of a human brain with data from a 75-year-old person?"
sure, 256 byte :-)
TBF if they’d used Unreal Engine it would have been 170GB.
:)
It’s 1MB, but yeah, point taken. Efficiency doesn’t come first in AI.