The Gigatron TTL microcomputer is an exercise in alternative history. What if, by some bizarre anomaly of invention and technology, the 1970s were not the age of the microprocessor? What if we could have had fast, high-density ROM and RAM in the late ’70s, but putting a microprocessor on silicon was beyond us? Obviously we would figure out a way to compute with what we had, and the Gigatron is the answer. It’s a computer from that era, designed around a CPU built from discrete logic whose higher-level instruction set is implemented entirely in microcode.
While the Gigatron is a popular product in the world of weird electronics kits, the creator, [Marcel van Kervinck], is going beyond what anyone thought possible. Now the Gigatron is emulating a 6502 processor, the same CPU found in the Apple II and almost every other retrocomputer that isn’t running a Z80.
There’s a thread over on the Gigatron forums for this. Although it’s still very early in development, the Gigatron can now run 6502 machine code, and in doing so the Gigatron is now the only dual-core computer without a CPU. All of the addressing modes have been implemented, along with half of the instructions and most of the status flags. All of this interacts with the Gigatron’s existing video subsystem, and code can switch between the Gigatron’s virtual CPU and 6502 mode with just a few instructions.
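The core of any software 6502 is a fetch-decode-execute loop stepping through machine code held in the host’s memory. The sketch below is a hypothetical illustration of that dispatch pattern, not the Gigatron’s actual ROM code, and it implements only three opcodes (with the carry flag ignored for brevity):

```python
# Minimal illustrative 6502-style interpreter loop (NOT the Gigatron's v6502).
mem = bytearray(65536)          # emulated 64 KB address space

class CPU6502:
    def __init__(self):
        self.a = 0              # accumulator
        self.pc = 0             # program counter
        self.z = False          # zero flag

    def step(self):
        """Fetch one opcode, decode it, execute it. Returns False on BRK."""
        op = mem[self.pc]; self.pc += 1
        if op == 0xA9:          # LDA #imm
            self.a = mem[self.pc]; self.pc += 1
            self.z = (self.a == 0)
        elif op == 0x69:        # ADC #imm (carry flag omitted in this sketch)
            self.a = (self.a + mem[self.pc]) & 0xFF; self.pc += 1
            self.z = (self.a == 0)
        elif op == 0x00:        # BRK: just stop in this sketch
            return False
        return True

cpu = CPU6502()
mem[0:5] = bytes([0xA9, 0x40, 0x69, 0x02, 0x00])  # LDA #$40 ; ADC #$02 ; BRK
while cpu.step():
    pass
print(hex(cpu.a))  # 0x42
```

A real implementation needs all 13 addressing modes and the full flag set, which is exactly the part the forum thread reports as done or in progress.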
This opens the door to a wide variety of software that’s already written. MicroChess is possible, as is MS BASIC. This is great; the biggest downside of the Gigatron was that there was no existing code for the machine when it was first designed. That changed when the Gigatron got a C compiler, but now somehow we’ve got a 6502 running on far fewer chips than are found in an Apple II. It’s not fast (about 1/8th the speed of a 1 MHz 6502), but in the video below you can see a munching squares demo.
25 thoughts on “Emulating A 6502 In ROM”
That’s amazing! It resembles some microprocessors that run microcode instead of having all instructions hardwired in logic circuit equivalents.
Hmm. What an interesting supposition – that we could have high density RAM and ROM, but not high density logic circuits. Sounds like the premise for a really bad movie. Or maybe a whole really bad genre – ROMpunk.
Uuuh, ROMpunk, *takes and runs with it*
I always loved the idea of a less-general-purpose computer, consisting of modules that could be swapped in and out: having a “text-editing block” and hooking it up in such a way that it could interface with some “command block” and maybe a general-purpose I/O block…
would be lovably cumbersome to use :D
Welcome to UNIX!
Emulating the CPU is not enough to run software: you must accurately emulate the whole machine, including its peripherals (keyboard, display, disk drive…).
Strictly speaking, you only need to emulate the facilities that the software in question makes use of, and only the subset of behaviour that it expects :)
True, but for a lot of vintage 6502 software including BASICs, a single entry point to keyboard input and another one for character output is all it takes to make them run. Here we have Microchess code running for the first time for example: http://forum.6502.org/viewtopic.php?p=69409#p69409
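The “single entry point” approach above can be sketched as address trapping: instead of emulating real hardware, the emulator intercepts jumps to the ROM’s character-in and character-out routines and services them on the host. The addresses below are the Apple monitor’s COUT/RDKEY entry points, used purely for illustration; any pair of vectors works:

```python
# Hypothetical sketch of I/O trapping for vintage 6502 software.
CHROUT = 0xFDED   # assumed output routine: print the character in A
CHRIN  = 0xFD0C   # assumed input routine: wait for a key, return it in A

def trap(pc, a_reg, keys, screen):
    """Called when the emulated PC reaches a JSR target; True if handled."""
    if pc == CHROUT:
        screen.append(chr(a_reg))   # the host handles the "display"
        return True
    if pc == CHRIN:
        return True                 # the host would pop keys[0] into A here
    return False

screen = []
trap(CHROUT, ord('>'), [], screen)  # e.g. BASIC printing its prompt
print(''.join(screen))              # ">"
```

Everything else — the interpreter loop, variable storage, expression evaluation — runs as plain 6502 code and needs no hardware-specific emulation at all.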
In fact, the Gigatron doesn’t need high-density ROM. About 4K is sufficient to run the video and the virtual CPUs. Another 3K stores BASIC. The kit comes with much more ROM, but that’s all used as cold storage for applications (and colorful demo pictures).
Microcoding, finite state machines… These sort of things have always fascinated me. Incredible complexity can be achieved with very minimal hardware, with the right skill set! Bravo on the impressive feat! It’s beyond my skill, but not beyond my awe.
The downside of emulating a CPU in ROM is mainly that ROMs aren’t all that fast. Though, if it is all one has, it is better than nothing.
In terms of the Gigatron, there is yet one more downside. The “magic” of how a CPU/computer works all gets hidden away in a ROM, making it a black box that isn’t all that useful as an educational product, instead of implementing a proper architecture with logic chips. (Though the downside of the latter is the “insane” cost of all the logic chips one would need…)
Depends on your point of view. Quite a few teachers use the Gigatron in their classes. The native instruction set and data flow are right in front of your eyes, laid out and annotated on the PCB. It goes down to the level to seeing the truth tables for each of the ALU operations and addressing modes.
A direct TTL implementation of the 6502 instruction set indeed takes about three times more chips, and isn’t as instructive for that reason. (And it won’t double as the video and sound generator, either…)
How educational a processor is, is mostly up to documentation and how the board is laid out.
A poorly documented mess is indeed far harder to wrap one’s head around compared to a well documented neatly put together product.
The Gigatron is fairly well laid out and documented, but hiding away part of the execution in lookup tables in the ROM kinda takes away part of the joy, per se.
Though, everyone tends to have their personal preferences.
What lookup tables are you referring to? The EPROM only has code, and that’s all commented and open source.
I mostly used “look up table” as an example.
I generally don’t care how one has implemented the functions of a CPU in ROM; my point is rather that there is no visible external logic performing those functions in the more traditional sense.
Yet again, everyone has their preferences on how they desire to learn things.
So the statement “A direct TTL implementation of the 6502 instruction set indeed takes about three times more chips, and isn’t as instructive for that reason” is at times true and at times totally incorrect; it depends on how an individual likes to view a topic and how they like to learn it. For some people, a direct TTL implementation is the preferred way to learn the architecture.
True. People learn in several different ways, and you usually need a carefully crafted teaching method for something to be truly educational. Any hardware should then support the method, not the other way around. This project seems to attract more hobbyists, as that’s where it came from. Quite a few are teachers still, it seems to be popular with them. With computer architecture you can’t visualise all abstraction levels at the same time at all times. But I doubt many learn the 6502 architecture by looking at a logic level implementation (or even the block diagram), but rather start with the programming model and instruction set. And there are plenty of 4-bit simple CPU designs you can build out of TTL in a lab course, but those systems are limited in what they can do.
Yes, the amount of abstraction should preferably reflect the thing one is aiming to learn.
If one desires to learn to program a specific architecture/platform, then knowledge of the rules for instruction ordering, a list of instructions with explanations of what they do, and an emulator to run code in, is sufficient.
While if one desires to learn about different hardware implementations, and why certain things are implemented in specific ways for certain applications, then one usually requires a deeper look into the actual hardware. A software implementation that merely executes the functions in a software-efficient way is not sufficient to gain this knowledge.
In most cases, even a well-implemented software emulation tends to fall far behind even a semi-crude hardware implementation (though this isn’t always the case). Not to mention that the goal of good software emulation of hardware features is to get the best performance with the hardware at hand, not to efficiently implement the hardware for the given application.
This is why I don’t consider the Gigatron as a good platform for learning about computer architectures. Since it practically is a RISC processor emulating more advanced functions through the use of software. (Though, if software emulation is the thing one desires to learn, then the Gigatron is a very good example.)
A crude example of differences in abstraction and the things we can focus and learn about would be addition.
A mathematical representation is: A + B = C. There really isn’t all that much more to it than that.
In software we can look at differences in variable size, and how the number is represented, to see how that affects our software (number representations: integer, fixed-point, floating-point, fractions, etc.). We can also dive into emulating a type of number representation that our platform lacks, or in a similar fashion play about with larger variable sizes.
While in hardware, we can look at variable size, number representation, and different methods of performing the addition itself. The method of adding is where we find a great degree of nuance relative to different applications, i.e., we can focus on different things like power efficiency, latency (time or cycles), throughput, etc.
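The software side of this addition example can be made concrete. A minimal sketch of “emulating a wider variable size than the platform has,” the way an 8-bit 6502 does a 16-bit add: two 8-bit additions chained through the carry:

```python
# Sketch: a 16-bit add built from two 8-bit adds plus a carry, mirroring
# the 6502 idiom CLC ; ADC lo ; ... ; ADC hi on an 8-bit machine.
def add16_via_8bit(a: int, b: int) -> int:
    lo = (a & 0xFF) + (b & 0xFF)                      # low-byte add
    carry = lo >> 8                                   # carry out of low byte
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry  # high-byte add + carry
    return ((hi & 0xFF) << 8) | (lo & 0xFF)           # wraps at 16 bits

print(hex(add16_via_8bit(0x12FF, 0x0001)))  # 0x1300
```

The hardware side of the same example would instead compare adder circuits (ripple-carry vs. carry-lookahead, say), which is exactly the power/latency/throughput trade-off mentioned above.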
Slow ROM? Lump together 16 ROM chips. Load them in parallel into a matrix buffer. Better yet, have a staggered clock and load them sequentially in a loop, each ROM read starting a fraction of a clock pulse after the previous one, and run the computer faster than the memory.
Though, then you will need a prefetching system, and you risk running into prefetch problems around branches. One could solve the branching problem with even more ROMs, to prefetch both sides of the branch. Or one can just stall…
Or replace it with a small RAM. Back in the day, Intel already offered 70ns and 55ns static RAM. More than fast enough.
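The staggered-ROM trick a couple of comments up can be sketched numerically; the bank count and access time below are made up for illustration, not real part timings:

```python
# Sketch of bank interleaving: with N slow ROMs selected by the low address
# bits, sequential fetches rotate through the banks, so each chip is only
# asked for data every N cycles and accesses can overlap (branch-free case).
N_BANKS = 8
ROM_ACCESS_NS = 200            # assumed access time of one slow ROM

def bank_of(addr: int) -> int:
    return addr % N_BANKS      # low address bits pick the bank

# Sequential code touches a different bank on every fetch...
print([bank_of(a) for a in range(8)])   # [0, 1, 2, 3, 4, 5, 6, 7]

# ...so in the ideal pipelined case the fetch period shrinks by N:
ideal_cycle_ns = ROM_ACCESS_NS / N_BANKS
print(ideal_cycle_ns)                   # 25.0 ns/fetch vs 200 ns for one ROM
```

The branch problem from the reply above shows up here too: a taken branch lands on an arbitrary bank that may still be mid-access, forcing a stall (or speculative fetch of both paths).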
6502 and Gigatron – what a great combination. And a great way to place Gigatron in a counterfactual history.
Uh… I thought I remembered that the Gigatron already ran WozMon at the Superconference in 2018?
From the link: that was WozMon ported to the 16-bit vCPU, not the 6502 original.
This is a bit like a flash FPGA made on PCB instead of on chip… Or like the old PAL devices.
Has the HCF command been implemented?
Marcel van Kervinck: “Drafting the v6502 is a surprisingly fun activity during which I can switch off my brain.”
Is this guy bragging, or does he just not realize how crazy smart he is?