Videogames have always existed in a weird place between high art and cutting-edge technology. Their consumer-facing nature has always forced them to be both eye-catching and affordable, while remaining tasteful enough to sit on retail shelves (both physical and digital). Running in real time is a necessity, so game creators cannot pre-render the incredibly complex visuals found in feature films. These pieces of software constantly ride the line between exploiting the hardware of the future and supporting the hardware of the past, where their true user base resides. Every pixel formed and every polygon assembled draws from the finite supply of floating-point operations today’s silicon can deliver. Compromises must be made.
Often one of the first areas in a game to fall victim to compromise is environmental model textures. Maintaining a viable framerate is paramount to a game’s playability, and elements of the background can end up getting pushed to “the background”. The resulting environments look blurrier than they would have had artists been given more time, or more computing resources, to optimize their creations. But what if you could update that ten-year-old game to take advantage of today’s processing capabilities and screen resolutions?
NVIDIA is currently using artificial intelligence to revise textures in many classic videogames to bring them up to spec with today’s monitors. Their neural network is able to fundamentally alter how a game looks without any human intervention. Is this a good thing?
“So you take this neural network, you give it a whole bunch of examples, you tell it what is the input and what is the exact expected output; and you give it a chance to try, and try, and try again trillions and trillions of times on a super computer. Eventually it trains, and does this amazing thing.”
– Jensen Huang, CEO of NVIDIA
Artificial Intelligence, Revisionist History
We all stopped being able to count on Moore’s Law as process nodes pushed toward 10 nm, and NVIDIA knew this better than most. Alongside the announcement of its RTX series of GPUs, the company stated that it would leverage neural network technology to boost the overall performance of its cards. By feeding a neural network thousands of game screenshots taken at a higher resolution than the GPU can render natively, the model learns to reconstruct that higher-quality imagery with no change to the on-board processing power. The press release called this process “AI Up-Res”.
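The training loop Huang describes can be sketched in miniature. The sketch below is purely illustrative and assumes nothing about NVIDIA’s actual architecture: a single linear map stands in for the deep network, synthetic random images stand in for game screenshots, and stochastic gradient descent fits the map by repeatedly comparing its output against the high-resolution target.

```python
import numpy as np

rng = np.random.default_rng(0)

def downscale(img):
    """Average 2x2 blocks: a stand-in for the GPU's native-resolution render."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def mse(a, b):
    return float(((a - b) ** 2).mean())

# Synthetic stand-ins for "screenshots taken at a higher resolution".
targets = [rng.random((8, 8)) for _ in range(500)]
inputs = [downscale(t) for t in targets]       # 4x4 low-res versions

# Hypothetical model: one linear map from 16 low-res to 64 high-res pixels.
W = rng.normal(scale=0.01, size=(64, 16))
lr = 0.05

for epoch in range(50):
    for lo, hi in zip(inputs, targets):
        x, y = lo.ravel(), hi.ravel()
        err = W @ x - y                 # pixel-wise reconstruction error
        W -= lr * np.outer(err, x)      # SGD step on mean-squared error

# Evaluate on an image the model never saw during training.
test_hi = rng.random((8, 8))
test_lo = downscale(test_hi)
upscaled = (W @ test_lo.ravel()).reshape(8, 8)
print(mse(upscaled, test_hi))
```

Even this toy version captures the essence of the approach: the “upscaler” is whatever set of weights drives a pixel-wise error down, and nothing more.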
AI Up-Res is essentially a hands-off approach to increasing the overall resolution of model textures in games … which is exactly the problem. The traditional method of increasing a game’s resolution was to port it to a newer, more powerful platform and have digital artists create new textures. Regardless of which development team performs the update, there is a back-and-forth approval process in which people intimately familiar with the game make decisions about its artistic direction. These types of projects additionally serve as proving grounds for up-and-coming developers who may go on to lead future creative projects.
A great example of this process is The Legend of Zelda: Twilight Princess, which has seen multiple releases in recent years. The original was created for the Nintendo GameCube, which output at 480i, at a time before high-definition televisions had reached mass-market adoption. A decade later, Nintendo commissioned Tantalus Media to port the game to the Wii U console at 1080p.
Each step in this remastering process was signed off on by the original game’s director, Eiji Aonuma, and required constant communication to ensure the artistic intent behind every texture was preserved. Nintendo also recently made the 2006 original available on NVIDIA’s Shield TV platform in China, which employs the AI Up-Res technology. So here we get to see how the AI stacks up against a team of humans.
Zelda makes a great testing ground for the new technology, as there are Legend of Zelda fans out there who care more deeply for Link than for members of their extended families. These same people carry a great deal of nostalgia that is only satiated by replaying these classic games unaltered. So how does the AI stack up? The results of each approach can be seen in the screenshots and video below.
Connected Consoles Disconnected From The Heart
This all comes at a time when the entire videogame industry is contemplating a switch to the cloud computing model. The concept potentially opens the door to true parity among all devices, but its requirement of constant connectivity makes every game an online-only experience. If anything has been learned from the lifespans of online-only games like World of Warcraft or Fortnite, it is that everything a player sees is subject to change. The 1.0 releases of those two games hardly resemble what they have gone on to become. Online-only games are continually under revision, for better or worse, and do not allow players to revisit them in the state they remember. It wouldn’t take much to envision a future fraught with multiple “Berenstein vs. Berenstain Bears” type conspiracies.
But we’re talking about revisiting the classics here. We certainly don’t prefer the AI textures. The “improved” textures generated by a neural network are larger than the originals they replace, but add nothing new or artistic in those extra pixels. Stochastic gradient descent cannot measure beauty, and it thrusts what is a purely subjective pursuit into an ill-fitting exercise in objectivity. NVIDIA is not alone here; a similar process has been used in an open-source capacity with Doom (1993). But if no one seeks to preserve a game’s original vision, we are destined to forget what made the game so special to begin with.
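The claim that stochastic gradient descent cannot measure beauty is easy to demonstrate. The hypothetical, numpy-only toy below builds two reconstructions of the same image with identical mean-squared error, the objective this kind of training typically minimizes: one spreads its error imperceptibly across every pixel, the other concentrates the same error budget into a handful of glaring speckles. The loss function cannot tell them apart.

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.random((16, 16))

def mse(a, b):
    return float(((a - b) ** 2).mean())

# Reconstruction A: a small error spread evenly over every pixel.
err_a = rng.normal(size=target.shape)
err_a *= 0.05 / np.sqrt((err_a ** 2).mean())   # normalise the error energy
recon_a = target + err_a

# Reconstruction B: the same error energy dumped onto just 8 pixels.
err_b = np.zeros_like(target)
idx = rng.choice(target.size, size=8, replace=False)
err_b.flat[idx] = 1.0
err_b *= 0.05 / np.sqrt((err_b ** 2).mean())
recon_b = target + err_b

# Both score identically under the training objective (0.0025 each),
# even though B's concentrated artefacts would be far more visible.
print(mse(recon_a, target), mse(recon_b, target))
```

To the optimizer these two images are interchangeable; to a player, they are not, and that gap is exactly where artistic judgment used to live.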