Although 8-bit processors are now largely relegated to retrocomputing enthusiasts, embedded systems, and microcontrollers, there was a time when there were no other computers available other than those with 8-bit processors. The late 70s and early 80s would have seen computers with processors like the Motorola 6800 or Intel 8080 as the top-of-the-line equipment and, while underpowered by modern standards, these machines can do quite a bit of useful work even today. Mathematician [Jean Michel Sellier] wanted to demonstrate this, so he set up a Commodore 64 to study concepts like simulating a quantum computer.
The computer programs he’s written to do this work are in BASIC, a common high-level language of the era designed for ease of use. To simulate the quantum computer he sets up a matrix-vector multiplication, but simplifies it using conditional logic. Everything is shown using the LIST command so those with access to older hardware like this can follow along. From there, the simulated quantum computer even goes as far as demonstrating a quantum full adder.
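To give a flavor of what that looks like under the hood, here is a minimal sketch in C (not [Sellier]’s BASIC, and with illustrative names only): a quantum state is a vector of amplitudes, a gate is a matrix, and one simulation step is a matrix-vector multiply. This one applies a Hadamard gate to a single qubit.

#include <math.h>
#include <stdio.h>

int main(void) {
    double state[2] = {1.0, 0.0};          /* the qubit starts in |0> */
    double h = 1.0 / sqrt(2.0);
    double gate[2][2] = {{h, h}, {h, -h}}; /* the Hadamard matrix */
    double next[2] = {0.0, 0.0};

    /* next = gate * state: one "quantum computer" step */
    for (int r = 0; r < 2; r++)
        for (int c = 0; c < 2; c++)
            next[r] += gate[r][c] * state[c];

    printf("amplitudes: %f %f\n", next[0], next[1]); /* ~0.7071 each */
    return 0;
}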
There are a number of other videos on other topics available as well. For example, there’s an AmigaBasic program that simulates quantum wave packets and a QBasic program that helps visualize the statistical likelihood of finding an electron at various locations around a hydrogen nucleus. While not likely to displace any supercomputing platforms anytime soon, it’s a good look at how you don’t need a lot of computing power in every situation. And, if you need a refresher on some of these concepts, there’s an overview of how modern quantum computers work here.
I do robotics with students. There was a need for the basic trig functions, but the programming environment didn’t support them. So I broke out routines I had from my PIC days and we used them. Turns out that 2 decimal places of accuracy is all you really need :-)
What did you go for? A polynomial approximation?
Trig can be done with CORDIC calculations, which are basically rotation matrices. So when Astro Jetson says 2dp, I think that really refers to 8-bit arithmetic, because ±127/128 resolution is about 2 decimal places. You can probably write a fractional multiply on an 8-bit PIC in a few instructions:
Let’s say a and b are the source values and c is the destination for a × b, plus a flag in sgn.
        clrf    c               ;clear the result
Mul1:
        bcf     STATUS,C        ;clear carry before the shift
        rrf     a,f             ;a := a/2, low bit falls into carry
        movf    a,f             ;update the zero flag (rrf doesn't set it)
        btfsc   STATUS,Z
        goto    Mul3            ;a exhausted, nothing left to add
        rlf     b,f             ;b := b*2, high bit falls out into carry
        btfss   STATUS,C        ;was that bit set? then add a into the result
        goto    Mul1
        movf    a,w
        addwf   c,f
        goto    Mul1
Mul3:
        retlw   0               ;OK, done!
This sequence might still have a bug or two in it, but it’s basically right for a fractional multiply; I haven’t bothered with fixing the signs. So it takes about 10 cycles on average per loop and up to 7 loops, roughly 70 cycles, in a bit over a dozen instructions.
A rotation matrix involves 4 multiplications, which is approximately 280 cycles, maybe 300 to 320 with the overheads. So even an 8-bit PIC can do some kinds of trig efficiently (roughly 3K rotations per second).
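For anyone who wants to see the CORDIC idea spelled out, here is a rough fixed-point sketch in C (my own illustration, not the PIC routine above; the Q14 constants and names are assumptions): it rotates the vector (K, 0) toward the target angle using only shifts, adds and a small arctangent table, which is exactly the kind of arithmetic an 8-bit part is good at.

#include <stdio.h>

#define ITER 14
#define ONE  (1 << 14)                      /* Q14 scale factor */

/* atan(2^-i) in Q14 radians, i = 0..13 */
static const int atan_tab[ITER] = {
    12868, 7596, 4014, 2037, 1023, 512, 256, 128,
    64, 32, 16, 8, 4, 2
};

/* angle in Q14 radians (|angle| <= ~1.74, the CORDIC convergence range);
   returns cos in *c and sin in *s, both in Q14.
   Assumes arithmetic right shift for negative values (true on common compilers). */
static void cordic(int angle, int *c, int *s) {
    int x = 9949;                           /* K = 0.607253 in Q14, the gain correction */
    int y = 0;
    int z = angle;                          /* residual angle still to rotate through */
    for (int i = 0; i < ITER; i++) {
        int xs = x >> i, ys = y >> i;
        if (z >= 0) { x -= ys; y += xs; z -= atan_tab[i]; }
        else        { x += ys; y -= xs; z += atan_tab[i]; }
    }
    *c = x;
    *s = y;
}

int main(void) {
    int c, s;
    cordic(8579, &c, &s);                   /* 8579/16384 ~ 0.5236 rad = 30 degrees */
    printf("cos ~ %f  sin ~ %f\n", c / (double)ONE, s / (double)ONE);
    return 0;                               /* expect roughly 0.866 and 0.500 */
}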
I thought Astro Jetson said “ruh-row Rorge!”
B^)
ISTR the HP-35 series also used CORDIC
https://www.hp.com/hpinfo/abouthp/histnfacts/museum/personalsystems/0023/other/0023hpjournal03.pdf
The Sinclair Scientific calculator also used CORDIC-type algorithms. Except… they squeezed a scientific calculator’s functionality into the same ROM space, using the same chip as a four-function TI calculator!
http://files.righto.com/calculator/sinclair_scientific_simulator.html
When accuracy isn’t too critical, and with modern microcontrollers, where there’s often a fair bit of flash memory left over, lookup tables can be a fast and efficient way to get “good enough” results at almost no computational cost.
https://www.f3.to/portfolio/math/fastatan2_integer.htm here’s one I did a long time ago for navigation
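As a generic illustration of the lookup-table approach (a sketch, not the linked atan2 code): a 257-entry quarter-wave sine table in Q15, folded across the four quadrants, gives “good enough” sine values for the cost of an index and a sign flip. On a real microcontroller the table would be a const array in flash rather than built at startup.

#include <stdint.h>
#include <stdio.h>
#include <math.h>

static int16_t quarter[257];                /* sin(0..90 deg) in Q15 */

static void build_table(void) {
    const double pi = 3.14159265358979323846;
    for (int i = 0; i <= 256; i++)
        quarter[i] = (int16_t)lround(32767.0 * sin(pi * i / 512.0));
}

/* angle: 0..1023 covers one full turn; returns sin(angle) in Q15 */
static int16_t isin(unsigned angle) {
    unsigned a = angle & 1023;
    unsigned idx = a & 255;
    switch (a >> 8) {                       /* fold into the first quadrant */
    case 0:  return  quarter[idx];
    case 1:  return  quarter[256 - idx];
    case 2:  return -quarter[idx];
    default: return -quarter[256 - idx];
    }
}

int main(void) {
    build_table();
    printf("%f\n", isin(128) / 32767.0);    /* 45 degrees -> ~0.7071 */
    return 0;
}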
“… there was a time when there were no other computers available other than those with 8-bit processors. The late 70s and early 80s would have seen computers with processors like the Motorola 6800 or Intel 8080 as the top-of-the-line equipment…”
Yes, I’m glad this stark truth has come out. I suffered programming on a 3-bit PDP-8, 4 bit PDP-11 and Novas, had teletype access to a 6-bit CDC machine through the Cal-State computing network, and I heard some very lucky people had access to a Cray-1, a 7-1/2 bit machine. These 8-bit microprocessors were truly supercomputers of their time.
The PDP-8 was 12-bit, which I have always thought was about the right size for most sensor maths. 8-bit is really too small: signed, it only gives you ±7 bits.
The instruction set, however, is what defines “reduced”.
https://homepage.cs.uiowa.edu/~jones/pdp8/man/mri.html
Um … the PDP-8 was a 12-bit machine; the PDP-11 and the Nova were 16-bit. CDC machines had much longer words.
And how do you do “half a bit”?
I think he left off the /s. He’s being sarcastic because there were plenty of computers more powerful than the 8 bit micros in the 70s and 80s.
Why is there a screenshot from an Amiga shown when the article links to something about a BASIC program on a C64? A screenshot from the YouTube video is all it would have taken to get it right. Now Hackaday readers all around the world are horribly confused for no reason.
I came to comment here because I was confused about the screenshot.
Moreover, the Amiga’s Motorola 68000 is a 16/32-bit processor, not 8-bit.
I guess to the less discerning eye, a 16/32 bit processor is in the same ballpark as an 8 bit one, compared to modern day processors.
The article does mention other computers (like the Amiga) being used as tools to simulate quantum stuff.
That’s an explanation, good point.
Some people also think that an IBM PC/XT is a 16-Bit computer, even though it’s clearly an 8-Bit system from a hardware engineer’s point of view.
Like a Z80-based CP/M system (which had some 16-Bit registers, btw).
Yes, the PC’s processor understands 16-Bit instructions, but the bus, memory banks and so on are 8-Bit wide (the address bus is 20-Bit wide).
The PC/AT was a real 16-Bit computer, though. Same goes for the AT&T 6300 or the Amstrad PC1512/1640, equipped with an 8086 (or V30).
Likewise, the Amigas and Atari STs were 16-Bit personal computers, really.
While their 68000 processor could handle 32-Bit instructions, it had a 16-Bit wide data bus (and a 24-Bit address bus).
Was also going to say, there were also the DEC VAXen, and then a whole slew of 32 bit Unix workstations. Everything that wasn’t a shitty home computer (8 or 16 bit) seems to have been lost from the collective memory.
When Unix and Lisp are such foundational elements of hackerdom in the mid 20th century, you’d think they’d get more coverage here.
Don’t confuse character-set bit size and CPU word size… the CDC 6000 computers in the Cal-State system had 60-bit floating-point and 60-bit integer arithmetic ALUs.
A data bus that is narrower than the ALU width just means the CPU has to access memory several times to fetch a word of data.
True, tho’ a lot of science was done on computers like the IBM 704, which in both CPU speed and amount of memory is inferior to a Commodore 64.
We need to go back to the bit wars.
I want something like 256 bits (or more) to become standard. I want us to be able to represent every point in the universe at the Planck scale with int/fixed-point values. I want to be free of floats.
Nowadays there are libraries and programming languages that support arbitrary bit sizes for numbers. Computers are so fast that their native word size usually doesn’t really matter. You can do 256-bit arithmetic with ease if you really want to; it’ll just be slower, but probably within acceptable performance requirements. You can also probably speed things up a lot with SIMD if you really wanted to.
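To make that concrete, here is a minimal sketch (illustrative types and names, not any particular bignum library) of 256-bit addition done as four 64-bit limbs with the carry propagated by hand — essentially what an arbitrary-precision library does under the hood.

#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t limb[4]; } u256;  /* limb[0] is the least significant */

static u256 add256(u256 a, u256 b) {
    u256 r;
    uint64_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint64_t s = a.limb[i] + carry;
        carry = (s < carry);                /* carry from adding the old carry */
        s += b.limb[i];
        carry += (s < b.limb[i]);           /* carry from adding the limb */
        r.limb[i] = s;
    }
    return r;                               /* carry out of the top limb is dropped */
}

int main(void) {
    u256 a = {{ UINT64_MAX, UINT64_MAX, 0, 0 }};   /* 2^128 - 1 */
    u256 b = {{ 1, 0, 0, 0 }};
    u256 c = add256(a, b);                         /* expect 2^128, i.e. limb[2] == 1 */
    printf("%llu %llu %llu %llu\n",
           (unsigned long long)c.limb[0], (unsigned long long)c.limb[1],
           (unsigned long long)c.limb[2], (unsigned long long)c.limb[3]);
    return 0;
}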
We need it in the GPUs too. I want the next KSP to not need any tricks for universe scale simulation.
I think there’s an arbitrary precision “bignum” format, but I’m still trying to understand decimal128. IEEE754 documentation is heavy going for me.
I have a smart friend who was able to handle queries to the GSC (Guide Star Catalog, published on CDs at the beginning of the operation of the HST) with a BASIC program. After that I started using a quote that goes like:
“what matters is not the language a program is written in but the mind that conceived it”.
Even in those days those processors were very underpowered for a lot of tasks. My brother had a test program running on a DAI (8085) which drew a nice graph with 3rd-degree polynomials, and it needed over two hours to run to completion.
On top of that, BASIC has always been a quite mediocre language; the main reason it became popular was simply that there was no alternative on the home computers of the ’80s. Years later I rewrote the BASIC program in C to run it on an 80386SX, and it finished in a handful of seconds. On an 80386DX with co-processor (@33MHz) it ran in a few hundred ms. Switching the video mode from “text” to “graphics” and waiting for the monitor to re-synchronize took longer than actually drawing the diagram.
As long as your computer is Turing complete you can run any algorithm on it, but how much time do those algorithms need, when run on a C64, to simulate a single clock cycle of a quantum computer?
In the early ’80s there was a computer magazine in Yugoslavia called “Računari”, which had “Dejanove pitalice” (“Dejan’s riddles”). Each was some kind of story with a question, which usually required a rather complicated algorithm to solve, and people would send their algorithms to the magazine to be checked and published (and there were some prizes).
But it was the ’80s, and people had different computers at home, and some of the solutions took hours or days on a ZX-80 or a Galaksija (a homebrew computer popular in Yugoslavia). Some answers were in optimized assembler, but if the solution was in some popular programming language, or the algorithm was easily understandable, then they would take the solution to a fancy 16/32-bit computer and test it there. The differences in execution time were always astounding.
I’m sure 8-bit computers can do a lot of scientific processing, but BASIC seems like the wrong language to write these programs in… especially on the Commodore 64, where the interpreter is rather slow.
Amiga Basic, GFA Basic, Locomotive Basic (GEM), QuickBasic (PC, Mac) or PDS 7 were quite capable, on par with Turbo Pascal.
Heck, even MBASIC/BASIC-80 from 1979 was fine.
QuickBasic alternatives such as TurboBasic and PowerBasic were great, too!
The problem is that C64 BASIC ruined the reputation of BASIC as a high-level programming language once and for all.
So it’s understandable that some people now hate BASIC with a passion. Or the whole C64, for that matter. Thanks, Commodore!
That being said, there have been various BASIC dialects that were no toys but were used in microcontrollers (8052-AH BASIC) and minicomputers (HP 2000, Wang 2200, etc.).
They had support for interrupt control, subroutines, floating point, EPROM storage, labels, etc.
I did a fair bit of early scientific computing on an Apple ][. Part of my impetus to learn ‘C’ was that trig functions in Applesoft BASIC took a quarter second to execute.
The enthusiasm for C was tempered a lot by the 20-minute compile times for anything but the most trivial programs. And this was with dual floppies and a 128 kB RAM disk.
Did you have a Z80 SoftCard with CP/M and Turbo Pascal?
This was before my time, but I was told multiple times that CP/M-80 and Turbo Pascal were sort of an industry standard at school and university.
The latter was the reason many students bought cheap/outdated MS-DOS computers in the first place (or slow PC emulators for the Atari/Amiga).
They could run Turbo Pascal 3 and up without problems, even on something as ancient as a Sanyo MBC-55x or an 8088-based laptop with a 640×200 CGA screen.
No Z80 card, but I did have an 80 column card. But then I abandoned Apple and got a V20 with a numeric coprocessor and HGC 720×350 graphics. Much better in every respect.
Hey that Sanyo and TP2/3 saved me a ton of trips to uni when I was finishing my CS degree!
AI single-precision floating point apparently handles addition/subtraction by adjusting the smaller exponent to match the larger?
This results in the fraction of the smaller number being shifted to the right, possibly losing precision … and losing bits.
Memory floating point adjusts the larger exponent to match the smaller instead.
Example on an 8-bit machine: 1 decimal is 00000001 binary.
4 decimal is 00000001 binary with an 8-bit two’s-complement exponent of 0010 binary (2 decimal).
0.5 decimal is 00000001 binary with an exponent of -1.
The one’s complement of 00000001 is 11111110; plus 1 gives 11111111 for the two’s complement, so -0.5 is 11111111 with exponent -1.
4 is 00000001 with exponent 2, or 00000100 with exponent 0.
00000100 is shifted left 1 more bit to make the exponent -1 and match the 0.5 exponent:
00001000.
Add the -0.5:
00001000 + 11111111 = 00000111.
The low-order bit is the .5.
The seven high-order bits are a 3.
So the answer is 3.5? … provided the poster has not made a mistake.
Let a computer verify the computations above … even an 8-bit nanocomputer (a quick C check follows below).
An 8-bit computer can do 64-bit floating point accurately
… some costing <$10 US and using less than 1 W of power … using the gcc C compiler, of course.
:)
AI 32 bit floating point can be inaccurate, expensive and use LOTS OF POWER? :(
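And since the comment above invites it, here is a quick C check of that 8-bit worked example (a sketch of the same arithmetic, not the poster’s code): an 8-bit two’s-complement fraction with an explicit exponent, adding 4 and -0.5 after aligning both to exponent -1.

#include <math.h>
#include <stdio.h>

int main(void) {
    signed char big   = 0x08;  /* 00001000: fraction 8 with exponent -1, i.e. 4.0   */
    signed char small = -1;    /* 11111111: fraction -1 with exponent -1, i.e. -0.5 */
    signed char sum   = big + small;        /* 00000111: fraction 7 */
    printf("fraction = %d, value = %g\n", sum, ldexp((double)sum, -1));
    return 0;                               /* prints: fraction = 7, value = 3.5 */
}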