Side-by-side comparison of upscaling in the AGI engine

Upscaling The Sierras

If you played many games from the mid-80s through the 90s, you might remember the iconic graphics of Sierra On-Line's adventure games. They were brightly colored (all 16 colors of them) and dynamic, with a sense of depth. To pay homage, [eviltrout] worked to upscale the images. Even though the backgrounds were rendered at just 160×200 in 16 colors and then stretched for display, storing them all as bitmaps, even at only 4 bits per pixel, would have taken all the storage available on the floppy disk. The engineers on the game decided instead to take a vector approach to a raster problem.
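For a sense of the numbers, here's some back-of-the-envelope arithmetic of our own (not from the original write-up; exact floppy capacities varied by format):

```python
# Rough storage math for a single AGI background (our own estimate).
width, height = 160, 200     # native AGI picture resolution
bits_per_pixel = 4           # 16 colors -> 4 bits per pixel

bitmap_bytes = width * height * bits_per_pixel // 8
print(f"One raw background: {bitmap_bytes} bytes")   # 16000 bytes

floppy_bytes = 360 * 1024    # a 360 kB 5.25" disk, for the sake of argument
print(f"Backgrounds per disk: {floppy_bytes // bitmap_bytes}")  # about 23
```

At roughly 16 kB per room, a game with dozens of rooms would quickly outgrow a single disk, which is why a list of draw commands was so much more attractive.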

When [eviltrout] set out to upscale the backgrounds, he started by writing some code to extract the draw commands from the game's engine, known as the Adventure Game Interpreter (AGI). Comparing the vector commands to equivalent PNG versions with the best compression, the AGI vector versions came in at around half the size. Not bad for a couple of game developers in the '80s. Since it's all vector commands under the hood, it should be relatively simple to draw them at a much higher resolution. At least, that's what he thought. The first issue was with flood fills: since the canvas is larger, gaps open up between lines and the flood escapes. He tried a few approaches, such as using a low-resolution reference and marching squares, but neither was satisfactory. Eventually, [eviltrout] expanded the flood fills and used thicker lines. He also rendered to a lower resolution first and connected neighboring lines of the same color. Finally, he used ImageMagick to denoise white specks in the output.
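To see why the fills escape, here's a minimal sketch of the two pieces involved, a line rasterizer and a naive 4-connected flood fill. This is our own illustrative Python, not [eviltrout]'s code: when the endpoints of the original draw commands are simply multiplied by a scale factor, one-pixel lines that used to touch can leave diagonal gaps, and the fill leaks through them. Thickening the lines is one way to close those gaps.

```python
from collections import deque

def draw_line(canvas, x0, y0, x1, y1, color, thickness=1):
    """Bresenham-style line. A thickness > 1 helps close the diagonal
    gaps that appear once endpoints are scaled to a larger canvas."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        for ox in range(thickness):
            for oy in range(thickness):
                px, py = x0 + ox, y0 + oy
                if 0 <= py < len(canvas) and 0 <= px < len(canvas[0]):
                    canvas[py][px] = color
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def flood_fill(canvas, x, y, color):
    """Naive 4-connected flood fill of the region containing (x, y).
    Any gap in the surrounding outline lets the fill escape."""
    h, w = len(canvas), len(canvas[0])
    target = canvas[y][x]
    if target == color:
        return
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if 0 <= cx < w and 0 <= cy < h and canvas[cy][cx] == target:
            canvas[cy][cx] = color
            queue.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])
```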

We find the effect charming, but some might say it distorts the art into something the artist never intended. Then again, as with all graphical enhancements, some artistic liberties are taken without the original artist's involvement. The code is available on GitHub under an MIT license. Video after the break.


AI Upscaling And The Future Of Content Delivery

The rumor mill has recently been buzzing about Nintendo’s plans to introduce a new version of their extremely popular Switch console in time for the holidays. A faster CPU, more RAM, and an improved OLED display are all pretty much a given, as you’d expect for a mid-generation refresh. Those upgraded specifications will almost certainly come with an inflated price tag as well, but given the incredible demand for the current Switch, a $50 or even $100 bump is unlikely to dissuade many prospective buyers.

But according to a report from Bloomberg, the new Switch might have a bit more going on under the hood than you'd expect from the technologically conservative Nintendo. Their sources claim the new system will utilize an NVIDIA chipset capable of Deep Learning Super Sampling (DLSS), a feature currently only available on high-end GeForce RTX 20- and 30-series GPUs. The technology, which has already been employed by several notable PC games over the last few years, uses machine learning to upscale rendered images in real time. So rather than tasking the GPU with producing a native 4K image, the engine can render the game at a lower resolution and have DLSS make up the difference.
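DLSS itself is proprietary and leans on dedicated tensor hardware, but the underlying idea of training a network to map a low-resolution frame to a high-resolution one can be sketched in a few lines. Here's a purely illustrative toy model using the well-known sub-pixel convolution (ESPCN-style) approach in PyTorch; it is in no way NVIDIA's implementation:

```python
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    """Tiny ESPCN-style super-resolution net: learn features at low
    resolution, then rearrange channels into a higher-resolution image."""
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)  # (C*s*s, H, W) -> (C, H*s, W*s)

    def forward(self, x):
        return self.shuffle(self.body(x))

# Render at 960x540, let the network fill in the pixels for 1920x1080.
low_res = torch.rand(1, 3, 540, 960)
print(ToyUpscaler(scale=2)(low_res).shape)  # torch.Size([1, 3, 1080, 1920])
```

Training such a model on pairs of low- and high-resolution frames is where all the real work, and all of NVIDIA's secret sauce, lives.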

The current model Nintendo Switch

The implications of this technology, especially for computationally limited devices, are immense. For the Switch, which doubles as a battery-powered handheld when removed from its dock, the use of DLSS could allow it to produce visuals similar to the far larger and more expensive Xbox and PlayStation systems it's in competition with. If Nintendo and NVIDIA can prove DLSS to be viable on something as small as the Switch, we'll likely see the technology come to future smartphones and tablets to make up for their relatively limited GPUs.

But why stop there? If artificial intelligence systems like DLSS can scale up a video game, it stands to reason the same techniques could be applied to other forms of content. Rather than saturating your Internet connection with a 16K video stream, will TVs of the future simply make the best of what they have using a machine learning algorithm trained on popular shows and movies?


“Enhance” Is Now A Thing, But Don’t Believe What You See

It was a trope all too familiar in the 1990s — law enforcement in movies and TV taking a pixelated, blurry image and hitting the magic "enhance" button to reveal the suspects to be brought to justice. Creating data where there simply was none before was a great way to ruin immersion for anyone with a modicum of technical expertise, and it spoiled many movies and TV shows.

Of course, technology marches on and what was once an utter impossibility often becomes trivial in due time. These days, it’s expected that a sub-$100 computer can easily differentiate between a banana, a dog, and a human, something that was unfathomable at the dawn of the microcomputer era. This capability is rooted in the technology of neural networks, which can be trained to do all manner of tasks formerly considered difficult for computers.

With neural networks and plenty of processing power at hand, there has been a flood of projects aiming to "enhance" everything from low-resolution human faces to old film footage, increasing resolution and filling in data that simply isn't there. But what's really going on behind the scenes, and is this technology really capable of accurately enhancing anything?
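One way to see the core problem is to destroy detail and then try to bring it back by resampling alone. Here's a minimal sketch of our own using Pillow and NumPy; anything sharper than this result has to be invented by a model rather than recovered from the pixels:

```python
from PIL import Image
import numpy as np

# A synthetic test card: a 1-pixel checkerboard, i.e. pure high-frequency detail.
size = 256
pattern = np.indices((size, size)).sum(axis=0) % 2 * 255
original = Image.fromarray(pattern.astype(np.uint8))

# Throw away 15/16 of the pixels, then upscale back with bicubic interpolation.
small = original.resize((size // 4, size // 4), Image.BICUBIC)
restored = small.resize((size, size), Image.BICUBIC)

# The fine detail is gone: interpolation smears toward gray instead of recovering it.
diff = np.abs(np.asarray(original, dtype=float) - np.asarray(restored, dtype=float))
print(f"Mean per-pixel error after down/upscaling: {diff.mean():.1f} (out of 255)")
```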
