Side-by-side comparison of upscaling in the AGI engine.

Upscaling The Sierras

If you played many games back in the mid-'80s to '90s, you might remember the iconic graphics of Sierra On-Line's adventure games. They were brightly colored (all 16 colors) and dynamic, with some depth. To pay homage, [eviltrout] worked to upscale the images. The backgrounds were rendered at 160×200 in 16 colors and then stretched, but even at only 4 bits per pixel, storing all of them as bitmaps would have eaten up all the storage available on the floppy disk. The engineers on the game decided instead to take a vector approach to a raster problem.
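A quick back-of-the-envelope shows why raw bitmaps were off the table. The room count and floppy capacity below are assumptions for illustration, not figures from the original article:

```python
# Rough storage math for AGI backgrounds stored as raw bitmaps (all figures approximate).
width, height = 160, 200                                  # native background resolution
bits_per_pixel = 4                                        # 16 colors
bytes_per_room = width * height * bits_per_pixel // 8     # 16,000 bytes per background

rooms = 80                                                # assumed room count for a mid-80s adventure
floppy_kib = 360                                          # common 5.25" double-density floppy

total_kib = rooms * bytes_per_room / 1024                 # roughly 1,250 KiB of art alone
print(f"{bytes_per_room} bytes per room, about {total_kib:.0f} KiB total "
      f"vs. a {floppy_kib} KiB floppy")
```

Even with generous rounding, the art alone would have spanned several disks, while the vector picture resources fit comfortably alongside the rest of the game.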

When [eviltrout] set out to upscale the backgrounds, he started by writing some code to extract the draw commands from the game's engine, known as the Adventure Game Interpreter (AGI). Comparing the vector commands to equivalent PNG versions with the best compression, the AGI vector versions were around half the size. Not bad for a couple of game developers in the '80s. Since it is all vector commands under the hood, it should be relatively simple to draw them at a much higher resolution. At least, that's what he thought. The first issue was with flood fills: since the canvas is larger, gaps open up between lines and the flood escapes. A few approaches were tried, such as using a low-resolution reference and marching squares, but neither was satisfactory. Eventually, [eviltrout] expanded the flood fills and used thicker lines. He also rendered to a lower resolution first and connected neighboring lines of the same color. Finally, he used ImageMagick to denoise white specks in the output.
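To make the failure mode concrete, here is a minimal sketch (not [eviltrout]'s actual code) of replaying AGI-style line commands at a larger scale and running the kind of 4-connected flood fill the interpreter uses. The commands, scale factor, and color indices are made up for illustration; in real pictures, rounding where upscaled lines meet can open single-pixel gaps that let a fill like this escape, which is what the thicker lines and low-resolution reference pass work around.

```python
from collections import deque

WIDTH, HEIGHT, SCALE = 160, 200, 4   # native AGI canvas, upscaled 4x (factor is arbitrary)

def draw_line(canvas, x0, y0, x1, y1, color):
    """Plot a 1-pixel-wide Bresenham line onto a 2D list of color indices."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        canvas[y0][x0] = color
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def flood_fill(canvas, x, y, color):
    """4-connected fill: spreads until it hits any pixel that differs from the start pixel."""
    target = canvas[y][x]
    if target == color:
        return
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if 0 <= cx < len(canvas[0]) and 0 <= cy < len(canvas) and canvas[cy][cx] == target:
            canvas[cy][cx] = color
            queue.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])

# Hypothetical extracted commands: a closed triangle in native 160x200 coordinates.
commands = [("line", 10, 10, 50, 10), ("line", 50, 10, 30, 40), ("line", 30, 40, 10, 10)]

canvas = [[0] * (WIDTH * SCALE) for _ in range(HEIGHT * SCALE)]
for _, x0, y0, x1, y1 in commands:
    # Replaying the vectors at a higher resolution is just a matter of scaling the endpoints.
    draw_line(canvas, x0 * SCALE, y0 * SCALE, x1 * SCALE, y1 * SCALE, color=15)

# Fill the inside of the triangle. If the upscaled outline had a gap anywhere,
# this fill would leak out and paint the whole background.
flood_fill(canvas, 30 * SCALE, 20 * SCALE, color=4)
```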

We find the effect charming, but some might say it distorts the art into something the artist never intended. Then again, as with all graphical enhancements, some artistic liberties are being taken without the original artist's involvement. The code is available on GitHub under an MIT license. Video after the break.

Continue reading “Upscaling The Sierras”

“Enhance” Is Now A Thing, But Don’t Believe What You See

It was a trope all too familiar in the 1990s: law enforcement in movies and TV taking a pixelated, blurry image and hitting the magic "enhance" button to reveal suspects to be brought to justice. Creating data where there simply was none before was a great way to ruin immersion for anyone with a modicum of technical expertise, and it spoiled many movies and TV shows.

Of course, technology marches on and what was once an utter impossibility often becomes trivial in due time. These days, it’s expected that a sub-$100 computer can easily differentiate between a banana, a dog, and a human, something that was unfathomable at the dawn of the microcomputer era. This capability is rooted in the technology of neural networks, which can be trained to do all manner of tasks formerly considered difficult for computers.
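As a sketch of what that looks like in practice, a pretrained MobileNet classifier can make the banana/dog/human distinction in a few lines. This assumes torchvision 0.13 or later, and the image path is a placeholder:

```python
import torch
from PIL import Image
from torchvision import models

# Pretrained ImageNet classifier small enough to run on very modest hardware.
weights = models.MobileNet_V2_Weights.DEFAULT
model = models.mobilenet_v2(weights=weights).eval()
preprocess = weights.transforms()          # the resizing/normalization the model expects

img = Image.open("snapshot.jpg")           # placeholder path to any photo
batch = preprocess(img).unsqueeze(0)       # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top = probs.argmax(dim=1).item()
print(weights.meta["categories"][top], float(probs[0, top]))
```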

With neural networks and plenty of processing power at hand, there has been a flood of projects aiming to "enhance" everything from low-resolution human faces to old film footage, increasing resolution and filling in data that simply isn't there. But what's really going on behind the scenes, and is this technology really capable of accurately enhancing anything?
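For a sense of how such an "enhance" pipeline is wired up, here is a minimal sketch using OpenCV's dnn_superres module (shipped with opencv-contrib-python) and a pretrained ESPCN model. The model file and image names are placeholders you would have to supply yourself:

```python
import cv2

# dnn_superres comes with opencv-contrib-python; the pretrained ESPCN weights
# must be downloaded separately, and the path below is a placeholder.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")
sr.setModel("espcn", 4)                    # algorithm name and upscale factor

low_res = cv2.imread("blurry_face.png")    # placeholder input image
enhanced = sr.upsample(low_res)            # the network invents plausible detail; it cannot
                                           # recover information that was never captured
cv2.imwrite("enhanced.png", enhanced)
```

The output looks sharper, but the added detail is a statistical guess rather than recovered data, which is exactly why it shouldn't be trusted as evidence.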

Continue reading ““Enhance” Is Now A Thing, But Don’t Believe What You See”