8-Bit Computers Crunch Advanced Scientific Computations

Although largely relegated to retrocomputing enthusiasts and to embedded systems and microcontrollers now, there was a time when there were no other computers available other than those with 8-bit processors. The late 70s and early 80s would have seen computers with processors like the Motorola 6800 or Intel 8080 as the top-of-the-line equipment and, while underpowered by modern standards, these machines can do quite a bit of useful work even today. Mathematician [Jean Michel Sellier] wanted to demonstrate this, so he set up a Commodore 64 to study concepts like simulating a quantum computer.

The computer programs he’s written to do this work are in BASIC, a common high-level language of the era designed for ease of use. To simulate the quantum computer, he sets up a matrix-vector multiplication but simplifies it using conditional logic. Everything is shown using the LIST command so those with access to older hardware like this can follow along. From there, the simulation even goes as far as demonstrating a quantum full adder.
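
As a rough illustration of that matrix-vector step (a minimal sketch in Commodore-style BASIC, not [Sellier]’s actual listing), applying a one-qubit gate such as the Hadamard means multiplying a 2x2 matrix into the two state amplitudes:

    10 REM APPLY A HADAMARD GATE TO THE ONE-QUBIT STATE (A0,A1)
    20 H=1/SQR(2)
    30 A0=1:A1=0:REM START IN STATE |0>
    40 B0=H*A0+H*A1
    50 B1=H*A0-H*A1
    60 PRINT "AMPLITUDES:";B0;B1
    70 PRINT "PROBABILITIES:";B0*B0;B1*B1

An n-qubit register needs 2^n amplitudes, which is why even toy simulations chew through the C64’s 64 kB surprisingly quickly.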

There are a number of other videos on other topics available as well. For example, there’s an AmigaBasic program that simulates quantum wave packets and a QBasic program that helps visualize the statistical likelihood of finding an electron at various locations around a hydrogen nucleus. While not likely to displace any supercomputing platforms anytime soon, it’s a good look at how you don’t need a lot of computing power in all situations. And, if you need a refresher on some of these concepts, there’s an overview on how modern quantum computers work here.
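
For the hydrogen example, the quantity being visualized is the 1s radial probability density, P(r) = 4r^2*exp(-2r) with r in Bohr radii, which peaks at r = 1. A few lines of BASIC (again just a hedged sketch, not the program from the video) chart it as a text histogram:

    10 REM TEXT PLOT OF THE HYDROGEN 1S RADIAL PROBABILITY
    20 REM P(R)=4*R*R*EXP(-2*R), R IN BOHR RADII, PEAK AT R=1
    30 FOR R=0 TO 5 STEP .25
    40 P=4*R*R*EXP(-2*R)
    50 PRINT R;TAB(8);
    55 IF INT(P*40)<1 THEN 70
    60 FOR J=1 TO INT(P*40):PRINT "*";:NEXT J
    70 PRINT
    80 NEXT R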

18 thoughts on “8-Bit Computers Crunch Advanced Scientific Computations”

  1. I do robotics with students. We needed the basic trig functions, but our programming environment didn’t support them. So I broke out routines I had from my PIC days and we used those. Turns out that two decimal places of accuracy is all you really need :-)

      1. Trig can be done with CORDIC calculations, which are basically rotation matrices. So, when Astro Jetson says 2dp, I think that really refers to 8-bit arithmetic, because a signed 8-bit fraction (±127/128, steps of about 0.008) gives roughly two decimal places. You can write a fractional multiply on an 8-bit PIC in a few instructions:

        Let’s say a and b are the source values and c is the destination for a × b, with a sign flag in sgn (sign handling is left out below).

        ; unsigned fractional multiply, c = a * b (0.8 fixed point, midrange PIC)
        ; C and Z are the STATUS carry and zero bits
        clrf c ;c = 0
        Mul1:
        bcf STATUS,C
        rrf a,f ;a = a/2, zero shifted in
        movf a,f ;rrf doesn't set Z, so test a here
        btfsc STATUS,Z
        goto Mul3 ;a exhausted, done
        rlf b,f ;carry = next MSB of b
        btfss STATUS,C ;multiplier bit set? then add
        goto Mul1 ;bit clear: no add this pass
        movf a,w
        addwf c,f ;c = c + a
        goto Mul1
        Mul3:
        retlw 0 ;OK, done!

        This sequence may still have a rough edge or two, but it’s basically right for a fractional multiply; I haven’t bothered fixing up the signs. It takes roughly 10 cycles per loop and up to 8 loops, so call it 70–80 cycles, in 13 instructions.

        A rotation matrix involves 4 multiplications, which is approx 280 cycles, maybe 300 to 320 with the overheads. So, at the 1 MIPS of a 4 MHz PIC, even an 8-bit part can do some kinds of trig efficiently (approx 3,000 rotations per second). A BASIC sketch of the rotation idea follows.
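
        For anyone who’d rather see the rotation idea than PIC opcodes, here is rotation-mode CORDIC sketched in BASIC (floating point for clarity, and the iteration count is only a reasonable guess; a real fixed-point CORDIC replaces the 2^(-I) factors with right shifts and precomputes the ATN table):

        10 REM CORDIC ROTATION MODE: COMPUTES COS(T) AND SIN(T)
        20 N=13:DIM A(N)
        30 FOR I=0 TO N:A(I)=ATN(2^(-I)):NEXT
        40 K=.607253:REM GAIN CORRECTION, 1/PROD(SQR(1+2^(-2*I)))
        50 T=.7:X=K:Y=0:Z=T
        60 FOR I=0 TO N
        70 D=1:IF Z<0 THEN D=-1
        80 XN=X-D*Y*2^(-I):Y=Y+D*X*2^(-I):X=XN
        90 Z=Z-D*A(I)
        100 NEXT
        110 PRINT "COS(T)=";X;" SIN(T)=";Y

        Each pass rotates the vector by ±ATN(2^(-I)) so Z converges to zero; the add-and-shift structure is why CORDIC maps so well onto small processors with no hardware multiplier.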

  2. “… there was a time when there were no other computers available other than those with 8-bit processors. The late 70s and early 80s would have seen computers with processors like the Motorola 6800 or Intel 8080 as the top-of-the-line equipment…”

    Yes, I’m glad this stark truth has come out. I suffered programming on a 3-bit PDP-8, 4-bit PDP-11 and Novas, had teletype access to a 6-bit CDC machine through the Cal-State computing network, and I heard some very lucky people had access to a Cray-1, a 7-1/2-bit machine. These 8-bit microprocessors were truly supercomputers of their time.

  3. Why is there a screenshot from an Amiga shown when the article links to something about a BASIC program on a C64? A screenshot from the YouTube video is all it would have taken to get it right. Now Hackaday readers all around the world are horribly confused for no reason.

    1. I guess to the less discerning eye, a 16/32-bit processor is in the same ballpark as an 8-bit one, compared to modern-day processors.

      The article does mention other computers (like the Amiga) being used as tools to simulate quantum stuff.

  4. We need to go back to the bit wars.

    I want something like 256 bits (or more) to become standard. I want us to be able to represent every point in the universe at Planck scale with int/fixed-point values. I want to be free of floats.

    1. Nowadays there are libraries and programming languages that support arbitrary bit widths for numbers. Computers are so fast that the width they handle natively usually doesn’t matter much. You can do 256-bit arithmetic with ease if you really want to; it’ll just be slower, but probably still within acceptable performance requirements. You could also likely speed things up a lot with SIMD if you really wanted to.
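
      For instance, multi-word arithmetic is just the schoolbook carry chain. Here is a 256-bit add sketched in BASIC (16 words of 16 bits, purely illustrative; real bignum libraries use the machine word size and the hardware carry flag):

      10 REM 256-BIT ADD: OPERANDS HELD AS 16 WORDS OF 16 BITS, LOW WORD FIRST
      20 DIM A(15),B(15),C(15)
      30 A(0)=65535:B(0)=1:REM CHOSEN TO FORCE A CARRY OUT OF WORD 0
      40 CY=0
      50 FOR I=0 TO 15
      60 S=A(I)+B(I)+CY:CY=0
      70 IF S>65535 THEN S=S-65536:CY=1
      80 C(I)=S
      90 NEXT
      100 PRINT "WORD 0:";C(0);" WORD 1:";C(1)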

  5. I have a smart friend who was able to handle queries to the GSC (Guide Star Catalog, published on CDs at the beginning of the HST’s operation) with a BASIC program. After that I started using a quote that goes something like:
    “what matters is not the language a program is written in but the mind that conceived it”.

  6. Even in those days those processors were very underpowered for a lot of tasks. My brother had a test program running on a DAI (8085) that drew a nice graph of third-degree polynomials, and it needed over two hours to run to completion (see the Horner’s rule sketch after this comment).

    On top of that, BASIC has always been a fairly mediocre language; the main reason it became popular was simply that there was no alternative on the home computers of the ’80s. Years later I rewrote the BASIC program in C to run on an 80386SX, and it finished in a handful of seconds. On an 80386DX with co-processor (at 33 MHz) it ran in a few hundred milliseconds. Switching the video mode from text to graphics and letting the monitor re-synchronize was slower than actually drawing the diagram.

    As long as your computer is Turing complete you can run any algorithm on it, but how much time would such an algorithm need on a C64 just to simulate a single clock cycle of a quantum computer?
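
    On the cubic-plot point above: the classic way to make that cheap is Horner’s rule, which evaluates a*x^3+b*x^2+c*x+d with just three multiplies and three adds per point. A generic BASIC sketch (not the DAI program):

    10 REM POINTS OF Y=A*X^3+B*X^2+C*X+D VIA HORNER'S RULE
    20 A=1:B=-2:C=0:D=1
    30 FOR X=-2 TO 2 STEP .1
    40 Y=((A*X+B)*X+C)*X+D
    50 PRINT X,Y
    60 NEXT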

  7. I’m sure 8-bit computers can do a lot of scientific processing, but BASIC seems like the wrong language to write these programs in… especially on the Commodore 64, where the interpreter is rather slow.

  8. I did a fair bit of early scientific computing on an Apple ][. Part of my impetus to learn ‘C’ was that trig functions in Applesoft BASIC took a quarter second to execute.

    The enthusiasm for C was tempered a lot by the 20-minute compile times for anything but the most trivial programs. And this was with dual floppies and a 128 kB RAM disk.
