Wolfenstein In 600 Lines Of Code

What’s more impressive, the fact that this Wolfenstein-like game is 600 lines of code, or that it’s written in AWK?

AWK is a language primarily used for text processing. But if you can write code, the world bows to your wishes. [Fedor Kalugin] leverages a Linux terminal’s color options to draw his game. The 3D effect is produced through ray-casting, which generates a 2D image from 3D coordinates.

Trying out the game is extremely simple: install gawk, clone the repo, and play:

sudo apt-get install gawk
git clone https://github.com/TheMozg/awk-raycaster.git
cd awk-raycaster/
gawk -f awkaster.awk

We really appreciate the four different display modes, which illustrate doing a lot with very little: black and white text, color text, color background, and a combination of color text and background. It’s an advanced texture technique every ANSI artist is familiar with.
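If you’ve never poked at terminal colors, the trick is just ANSI escape sequences. Here’s a tiny gawk sketch of the four flavors (ours, not [Fedor]’s code, and four_modes.awk is our own made-up filename):

# four_modes.awk -- a toy demo of ANSI colors, not code from awkaster.awk
BEGIN {
    esc = "\033["                                                    # ANSI escape introducer
    print "plain black and white text"                               # mode 1: plain text
    print esc "31m" "color text (red foreground)" esc "0m"           # mode 2: color text
    print esc "44m" "color background (blue)" esc "0m"               # mode 3: color background
    print esc "31;44m" "color text on color background" esc "0m"     # mode 4: both at once
}

Run it with gawk -f four_modes.awk; the trailing \033[0m resets the attributes so your prompt isn’t left tinted.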

[Image: awk-wolfenstein-textures]

Don’t limit yourself to playing with the script and losing interest. Crack that thing open. Try making a spinning wireframe cube based on this framework!
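As a nudge in that direction, here’s a rough gawk sketch (ours, not from the repo) that rotates a cube’s eight corners and projects them onto terminal coordinates. Drawing the connecting lines is left to you:

# cube.awk -- illustrative sketch, not part of awkaster.awk:
# rotate a unit cube's corners around Y and project them to a character grid.
BEGIN {
    W = 60; H = 24                           # pretend terminal size
    n = 0
    for (xi = -1; xi <= 1; xi += 2)
        for (yi = -1; yi <= 1; yi += 2)
            for (zi = -1; zi <= 1; zi += 2) {
                n++
                vx[n] = xi; vy[n] = yi; vz[n] = zi
            }
    a = 0.6                                  # rotation angle; bump it each frame to spin
    for (i = 1; i <= n; i++) {
        rx =  vx[i] * cos(a) + vz[i] * sin(a)       # rotate around the Y axis
        rz = -vx[i] * sin(a) + vz[i] * cos(a) + 4   # push the cube away from the camera
        sx = int(W / 2 + rx / rz * W / 2)           # perspective divide, then scale
        sy = int(H / 2 - vy[i] / rz * H / 2)        # to the character grid
        printf "corner %d -> column %2d, row %2d\n", i, sx, sy
    }
}

Redraw those points each frame with an increasing angle, add a line-drawing routine between the right pairs of corners, and you have your spinning cube.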

[via HackerNews]

62 thoughts on “Wolfenstein In 600 Lines Of Code”

        1. Perhaps not in objective reality. Although fractals are often described as having a fractional dimension.

          The point is, it’s 3D-looking, but not full 3D. Raycasting produces limited results, it can’t be used for many 3D purposes, but does well for some, and is fast to render.

          “2.5D” is a well-known phrase, particularly as applied to games like Wolfenstein 3D, and even Doom, which looked a lot more free and advanced than Wolfenstein but still had very rigid limits on what you could do. Clever level design worked within the limits, to give the impression of more graphical freedom than there really was. There are still a few tricks Doom level designers have come up with.

          And you couldn’t do a 3D cube with this. Well, you could, viewed from the inside, as a square room. But Wolfenstein-type engines have the floor fixed at the bottom half of the screen and the ceiling at the top. A cube would require it the other way round.
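          That fixed split is easy to see if you sketch how a single screen column gets filled: ceiling above the wall slice, floor below, with the slice height coming from the ray’s distance. A toy gawk version (ours, not awkaster’s code), printing one column sideways:

          # column.awk -- toy illustration of the fixed ceiling/wall/floor split
          BEGIN {
              rows = 24; wall_h = 10             # slice height would come from the ray's distance
              top = int((rows - wall_h) / 2)     # ceiling takes the space above the slice
              col = ""
              for (r = 1; r <= rows; r++) {
                  if (r <= top)               col = col " "   # ceiling
                  else if (r <= top + wall_h) col = col "#"   # wall slice
                  else                        col = col "."   # floor
              }
              print col                          # one screen column, shown as a horizontal strip
          }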

          1. @Dave, neither is space nor time. These are merely constructs that best describe our experience of macroscopic interactions with a universe complex enough that we haven’t figured it all out yet. At this stage everything is a demodulated form of thinly spread interactions, plus gravity.

            Sure, string theory is looking weaker than it did ten years ago, but supersymmetry has a long way to go, and n-brane models require exotic interactions to describe them.

            All I mean to say is, you’re right; we don’t experience a non-integer dimension. However, it is not excluded in our myriad ways of describing interactions, and the point is moot: even if there is no 2.5th dimension, it is a term commonly used in computer programming and consumer engineering to describe 3D systems where one axis (usually Z) is fixed or constrained (such as a pen plotter, where Z can essentially be “0” or “1”).

          2. Yes, important in the way that a family can have 2.2 kids. It is necessary to allow fractions in calculation, but it does not translate to a fraction of a dimension in real life.

        2. @Dave
          If you think the phrase “2.5D” reflects a genuine belief, on the part of the folks who use it, that such systems model two and a half spatial dimensions, you’re an idiot. It’s a tongue-in-cheek figure of speech intended to describe games that simulate 3D but aren’t actually 3D. Jesus christ.

    1. That’s a matter of perspective, what you consider 3D. You could have a 3D spinning cube with no perspective, but raycasting does have perspective. Both have something missing, so which is more 3D? All 3D on a computer ends up being projected back to 2D.

      1. Doom had “perspective correction”, Wolfenstein didn’t. If you spin on the spot in Wolfenstein 3D, you notice a sort of weird circular or sinusoidal quality to the rendering that modern games don’t have.

        Of course all computer screens (well, most) are 2D, but you can still model 3D objects in the computer. Wolfenstein doesn’t, though; it’s a hack. A very clever hack. Doom was even more convoluted, and more clever. John Carmack is a very rare talent. Until Doom came along, nothing like it existed on ordinary computers. Afterwards, when people had a bit of time to figure out most of what he was doing, there were loads.

        FPSes used to be called “Doom clones”, in the good old 486 days. Also the Internet was better.
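        For anyone wondering what that correction actually looks like in a raycaster: the usual fisheye fix is to scale each ray’s hit distance by the cosine of its offset from the view direction before turning it into a wall height. A little gawk illustration (ours, not lifted from any of these engines):

        # fisheye.awk -- toy illustration of the usual fisheye correction:
        # a flat wall sits 3 units straight ahead; each ray's raw length grows
        # toward the screen edges, and scaling by cos() flattens the wall again.
        BEGIN {
            pi = atan2(0, -1)
            fov = pi / 3                                  # 60 degree field of view
            screen_w = 80
            for (col = 0; col < screen_w; col += 16) {
                off = (col / (screen_w - 1) - 0.5) * fov  # ray angle relative to view centre
                raw = 3 / cos(off)                        # Euclidean distance the ray measures
                corrected = raw * cos(off)                # perpendicular distance: 3 for every column
                printf "column %2d: raw %.2f -> slice %d   corrected %.2f -> slice %d\n", col, raw, int(24 / raw), corrected, int(24 / corrected)
            }
        }

        Without the cos() scaling, the slices for a flat wall shrink toward the screen edges, which is exactly the bulge you notice when you spin on the spot.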

        1. I can make a cube spin on the screen with perspective, but you cannot walk around it or look up and down, because I didn’t code it that way. Is the cube more 3D than the walls of a raycaster that I can walk around? The raycaster world is defined in 2D with x,z and a fixed y height. With the correct rendering and maths it is as legitimately 3D as my cube. I could create a 3D engine with polygons and define the world in 2D with a fixed height… would this be 3D?

      1. What? Small intros are coded in assembly for running on MS-DOS; using Windows and/or Linux executables wastes too many bytes to be useful. And in MS-DOS there aren’t any graphics drivers to abuse.

        For larger intros the overheads of a real operating system can be worked around, e.g. one can use compression and run part of the code on a GPU (which can be made compact). Even then there isn’t any abuse though.

          1. They don’t even do that. There’s no space for it. :)
            Besides, mode 13h provides a 320×200, 256-color linear framebuffer and can be initialized in 5 bytes (mov ax, 0x13 ; int 0x10), which is pretty ideal for a tiny intro. No double-buffering, and non-square pixels, but those aren’t a huge problem. Making a 320×240 intro requires a lot of extra bytes (I don’t know exactly how many, but there are a number of VGA registers that need tweaking, and the drawing code becomes much more complicated and slower); even doing a 320×200 buffer centered in a 320×240 mode requires a bit of coding.

            Old-school to-the-metal demos for PC hardware are pretty uncommon nowadays, sadly, partly because tweaked modes etc. often require old computers to work correctly. But emulators are improving, I’m told, so maybe someday there will be a revival. :)

          2. Not all demos are stuck with 256 bytes, or 4K, or whatever. Lots of demos do weird things with VGA. Mode X, a planar mode, also has the advantage that, if you mess with the planes properly, you can blast pixels a lot faster.

        1. Also… back in the bad old days, graphics cards had VESA drivers for MS-DOS. Any game that wanted better than standard VGA used them. Usually they were in the card’s ROM, but sometimes you had to load a VESA extender. Dr Something or other was a popular one.

        2. Nowadays 256-byte demos DO run on Windows and feed Direct3D (usually). The size of the OS, DirectX, etc. doesn’t count in the competition, just your .exe size. How much code do YOU need to import the DLL and bang the specific functions?

    1. The story about their routine for removing unused code is one of my favorite “misadventures in game development” stories. They were running up against the deadline and their file size was still too large. They had to remove some code, but finding and removing unused code manually was going too slowly. So they wrote a routine that runs alongside the game as it’s played and flags any code that never gets executed, so it can be stripped automatically. If the player forgot to jump, guess what: the resulting build wouldn’t support jumping.

      Can’t seem to find the article where I read about this, but here are some stories along similar lines from various dev studios: http://www.gamasutra.com/view/feature/132500/dirty_coding_tricks.php?page=1

  1. Reminds me of when AALib came out, about 15 years ago. A very quick bitmap-to-ASCII-art converter. People ported Doom to it, as well as video players and the like. Never got a chance to use it on a serial terminal; wonder what sort of frame rate you’d get at 115,200 bps? 11.5, I suppose.

    You’d think people would be selling old Wyse terminals and the like for peanuts, but where the demand still exists, people will pay a good price when an old system relies on them. The rest, I suppose, end up in bits in Asia and Africa somewhere. Shame, I’d really like one. I used to have loads of fun programming the old Unix mini in college, with Wyse 60s; wrote Space Invaders and a great multi-player shooting fest, where 8-player games would last about a minute!
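    Back-of-the-envelope, assuming 8N1 framing and a full 80×25 redraw with no escape-code overhead (a gawk one-off, obviously not anyone’s real benchmark):

    # serial.awk -- rough frame-rate budget for ASCII video over a serial line
    BEGIN {
        bps = 115200
        bytes_per_sec = bps / 10           # 8 data bits + start + stop per character
        frame_bytes = 80 * 25              # one full text screen, redrawn every frame
        printf "%.0f chars/s -> about %.1f full redraws per second\n", bytes_per_sec, bytes_per_sec / frame_bytes
    }

    That works out to roughly 5.8 full redraws per second; the 11.5 guess holds if a frame only needs about half a screen’s worth of bytes.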

      1. Back in the day, I used this technique on the Atari 800 to produce nearly 1,000 simultaneous on-screen colors, even though its DAC was only capable of 256. Only for static images though. Way cool seeing it used for FMV.

  2. Someone could correct me if I’m wrong, but I’m fairly sure the wireframe cube (and wireframe in general) is done through ray tracing, whereas this is ray casting. The two methods are the inverse of each other.

    1. Nah, wireframe is done by simply drawing lines between points. You keep the 3D points in memory, do a bit of conversion with SIN and the like, to turn them into 2D points, then just draw the lines. Ray tracing is infinitely more complicated.

      For ray tracing, you do the inverse of what light does. You start off at a pixel on the screen. You draw an imaginary ray through that pixel, from the imaginary camera, and you note what the ray passes through on its way: which objects, and how transparent they are. You add up all the effects each object has on the ray, and the end result tells you what colour it ends up as. So you make that pixel that colour. Then on to the next pixel…

      Raycasting is a reduced version of ray tracing. You start off casting a ray, and at the first object it hits, you stop and render that colour. Quicker!
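      In a Wolfenstein-style engine the “ray” is just a walk across a 2D grid until it lands in a wall cell. A naive gawk version of that step (ours, not the repo’s actual code):

      # cast.awk -- naive fixed-step ray march through a text-grid map
      BEGIN {
          map[1] = "#####"
          map[2] = "#...#"
          map[3] = "#..##"
          map[4] = "#...#"
          map[5] = "#####"
          px = 2.5; py = 2.5                   # player position (column, row)
          angle = 0.3                          # ray direction in radians
          step = 0.05
          dx = cos(angle) * step; dy = sin(angle) * step
          x = px; y = py; dist = 0
          while (substr(map[int(y)], int(x), 1) != "#") {
              x += dx; y += dy; dist += step   # march until the ray enters a wall cell
          }
          printf "wall hit at (%.2f, %.2f), distance %.2f\n", x, y, dist
      }

      Do that once per screen column, sweeping the angle across the field of view, and the distances turn into wall-slice heights.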

  3. I like the way they did the map. You could do the same in any language. Since you need collision and bounds checks, there is no good procedural method.

    Few Notes:
    * drawEndX might cause a buffer overflow
    * edit line 173 for a better weapon
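    For anyone wanting the same map trick elsewhere, the generic idea is a text grid plus an index check for collision. A throwaway gawk sketch (not awkaster’s actual map format or variable names):

    # collide.awk -- generic text-grid map with a collision check
    function is_wall(x, y) {
        return substr(grid[int(y)], int(x), 1) == "#"
    }
    BEGIN {
        grid[1] = "########"
        grid[2] = "#......#"
        grid[3] = "#..##..#"
        grid[4] = "########"
        px = 2.5; py = 2.5                           # current player position
        nx = px + 0.4; ny = py                       # proposed move to the right
        if (!is_wall(nx, ny)) { px = nx; py = ny }   # only step into open cells
        printf "player is now at (%.1f, %.1f)\n", px, py
    }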

  4. Neither the author nor the code says anything about GNU/Linux. While most media is ignorant of the difference between Linux, which is a kernel, and GNU, which is all the tools of an entire OS, Hackaday shouldn’t be. Sure, it’s picking nits, but I’m sure the GNU world is tired of everyone calling their work “Linux”.
