How The Mazes Were Generated For Classic Berzerk Game

berzerk-random-maze

This is a screenshot from the Atari 5200 version of the classic game Berzerk. But the write-up we’re featuring actually looks at the original coin-op version. The maze for each level was generated on the fly by feeding a seed number into a rudimentary algorithm. Here’s a close look at how the maze-building code actually worked.

Recently we saw a talk by Pitfall creator [David Crane] as part of our Retrotechtacular series. That talk is a real gem of programming history, and one of our favorite takeaways was that the levels were not hardcoded, but built using a random number generator algorithm with a hardcoded seed (so that the game was the same each time you played it). Berzerk uses a similar method, but with a seed that changes as you play.
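To get a feel for how a hardcoded seed makes a “random” world perfectly repeatable, here’s a minimal sketch in C of an 8-bit shift-register style generator stepped from a fixed seed. The taps and seed value below are placeholders chosen for illustration, not the actual values used in Pitfall:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative 8-bit LFSR "level generator": stepping from a fixed seed
 * always produces the same sequence, so screens can be rebuilt on the fly
 * instead of being stored in ROM. Taps and seed are placeholders, not the
 * real Pitfall! values. */
static uint8_t lfsr_step(uint8_t s)
{
    uint8_t bit = ((s >> 7) ^ (s >> 5) ^ (s >> 4) ^ (s >> 3)) & 1;
    return (uint8_t)((s << 1) | bit);
}

int main(void)
{
    uint8_t seed = 0xC4;                 /* hardcoded seed: same game every run */
    for (int screen = 0; screen < 8; screen++) {
        seed = lfsr_step(seed);
        printf("screen %d -> layout byte 0x%02X\n", screen, (unsigned)seed);
    }
    return 0;
}
```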

The maze building was reverse engineered by observing the game in MAME and by digging through disassembled code. Each time the code is “cold started” the seed starts out at zero, but from there the room number is used as the next seed. This is fed through a very simple algorithm that generates directions for the walls, with a few bit-wise operations adding the pillars inside the rooms.
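To make that flow concrete, here’s a rough sketch of the idea in C: the room number seeds a tiny pseudo-random routine, and a couple of bits of each output pick which direction a wall extends from each pillar. The multiplier, increment, pillar count, and direction encoding are all assumptions made for this sketch; the actual constants come from the disassembled Berzerk ROM.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PILLARS 8   /* pillar count assumed for the sketch */

/* Tiny multiply-and-add style generator. The multiplier and increment are
 * placeholders; the routine recovered from the ROM differs. */
static uint16_t next_rand(uint16_t *seed)
{
    *seed = (uint16_t)(*seed * 7 + 0x31);
    return *seed;
}

/* Build one room: the room number is the seed, and two bits of each random
 * value pick which way (N/E/S/W) the wall grows from each pillar. */
static void build_room(uint16_t room_number, uint8_t walls[NUM_PILLARS])
{
    static const char *dir_name[4] = { "north", "east", "south", "west" };
    uint16_t seed = room_number;

    for (int pillar = 0; pillar < NUM_PILLARS; pillar++) {
        walls[pillar] = next_rand(&seed) & 0x03;   /* bit-wise trim to 0..3 */
        printf("room %u, pillar %d: wall points %s\n",
               (unsigned)room_number, pillar, dir_name[walls[pillar]]);
    }
}

int main(void)
{
    uint8_t walls[NUM_PILLARS];
    build_room(0, walls);   /* cold start: seed is zero */
    build_room(1, walls);   /* next room re-seeds with its room number */
    return 0;
}
```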

It’s a great thing to study if you’re writing games for your embedded projects. By generating the rooms programmatically you don’t use up as much program memory. Of course, these days even simple hobby controllers have far more storage to work with than [Alan McNeil] had when he designed Berzerk.

[via Reddit]

[Image Source]

16 thoughts on “How The Mazes Were Generated For Classic Berzerk Game”

  1. That’s really clever. Interesting what people come up with when resources are limited. Seems like I had heard that the original Super Mario Brothers used some insanely low amount of memory (probably a lot of other games too).

  2. Ahh, the coding of real programmers, not like the lazy bums who claim to be programmers today.

    Honestly, CS degrees should require a solution with a limit on the number of K it can take up. It would force these students to actually write tight and clean code.

    1. Why? It doesn’t matter anymore for nearly all code written by computer science engineers. Memory and CPU power are thousands of times cheaper than the wage of even an entry-level programmer.

      It is an issue for embedded systems if you sell millions of them, but that kind of application does not seem to be of interest to most CS schools.

      1. Today I explained to someone at length why their project was dropping interrupts. He was needlessly doing floating point math in the ISR, on an 8 MHz MCU with no FPU.

        And he holds a CS degree. I don’t know what they teach these days, but it sure seems deficient to me. You may rarely need to benchmark and optimize, but it is still a valuable skill, and part of problem solving skills in general which are always needed.

        1. And the percentage of programmers doing uC work?

          And the percentage of those who aren’t using a HLL anyway?

          Yup, bugger all. I used to religiously do the save a byte here, scrimp a cycle there stuff, but y’know, good riddance.

          1. I never stopped. To this day, assembly is still my favorite. That’s not to say I don’t appreciate higher level languages but I am a control freak.

          2. I’ll bet that comes in real handy for doing web pages, SQL DB stuff along with phone & tablet apps.

            That sort of thinking is what killed WordStar & Lotus 123.

          3. Just a guess, but I suspect the percentage of programmers doing uC work is rising rapidly. And even elsewhere, some judicious optimization in time-consuming inner loops can make a big difference in app performance. Especially in a HLL, as the overhead difference for doing certain things different ways can be massive.

          4. And very few of those do it in assembly – the entire point that was being made.

            Arduino? Nope.
            PicAxe? Nope.
            Bare PICs? Nope again.
            RPi? Not a uC, but nope anyway.

            That $3 chip too slow? Buy a $4 one, problem solved. Just like PCs, the uC has come a long way too, as have the compilers.

            It’s HLLs (mostly) all the way.

            If you think programming in assembler puts hairs on your chest, well, knock yourself out, you manly man you.

          5. @Tony: Assembly? No need to code up in straight assembly anymore, pretty much ever, unless you just want to. The point I was trying to make is that it’s still important to know how to optimize in general.

            The first MCU example I mentioned was C. It would be silly to upgrade the MCU just to address what was clearly poor coding. And as I was writing about HLLs, I was thinking specifically about a particular .NET program I wrote which took three minutes to do something; by using the equivalent direct API calls instead, it took only three seconds.

  3. “In a MAME emulator”? Really, Mike? MAME is the name of a specific emulator, it stands for “Multiple Arcade Machine Emulator”. In fact, the author of this article used MAME, not “a MAME emulator”.
