Going Minimal: 64×4, The Fun In Functional Computing

If you’ve ever wondered what makes a computer tick, the Minimal 64×4 by [Slu4] is bound to grab your attention. It’s not a modern powerhouse, but a thoughtfully crafted throwback to the essence of computing. With just 61 logic ICs, VGA output, PS/2 input, and SSD storage, this DIY wonder packs four times the processing power of a Commodore 64.

What sets [Slu4]’s efforts apart is his refusal to follow the beaten track of CPU development. He imposes strict complexity limits on his designs, sticking to an ultra-minimalist Von Neumann architecture. His journey began with the ‘Minimal Ur-CPU’, a logic-chip-based computer that could crunch numbers but little else. Next came the ‘Minimal 64’, featuring VGA graphics and Space Invaders-level performance. The latest ‘Minimal 64×4’ takes it further, adding incredible speed while keeping the design so simple it’s almost ridiculous. It’s computing stripped to its rawest form—no fancy sound, no dazzling graphics, just raw resourcefulness.

For enthusiasts of retro-tech and DIY builds, this project is a treasure trove. From text editors to starfield simulations to Sokoban, [Slu4] proves you don’t need complexity to make magic.

38 thoughts on “Going Minimal: 64×4, The Fun In Functional Computing”

  1. I bet it runs Linux at like 1 FPS 😂 My old Pentium III PC with Windows XP can easily run GTA Vice City, something even modern Linux can’t do because it lacks GPU drivers. Talk about usability, not everyone needs their computer for crunching OpenOffice spreadsheets 16 hours a day.

    1. I could imagine that adventure games would play nicely on this computer, via serial terminal.
      Colossal Cave Adventure, The Great Underground Empire, Planetfall, The Hitchhiker’s Guide to The Galaxy, A Mind Forever Voyaging etc.
      Implementing something like a Z-Machine interpreter could make these IF games run.

    2. Actually Linux has no concept of FPS. If the terminal is set to 115200 then it’s pretty fast, 9600 is slow. Most users don’t need windoze and only use it because they don’t know they’re being spied on by the NSA.

        1. Not really…?

          The machine itself can operate without a virtual TTY display. It would be a functioning Linux machine that just piped its text output to a serial port. The thing about those terminal displays is that they’re basically a device emulator talking to the machine they’re running on. The screen buffer, while a part of the extended software suite of the OS, is not a necessary part of it. This is because the “TTY” part of “Virtual TTY” stands for TeleTYpe. The default terminal display is meant to emulate the printout of a remotely-driven typewriter.

          A slightly more modern take on this would have the 64×4 that runs *nix connected via a serial port to some device that pretends to be either a TTY device or (more likely) a VT100 (VT standing for Video Terminal). Whichever machine is running that would have to worry about screen buffers and FPS.

  2. Nice, very nice. In my opinion the processor should be on a separate motherboard and any peripherals on another board. Then others could make better graphics, or add access to other hardware (e.g. an LCD monitor), and a person could focus only on creating the processor itself and optimizing power consumption and speed.
    I think he should also make an asynchronous processor like in the MERA-400

    1. Nobody needs your opinion :D. He clearly stated his goal, and he achieved it. What you describe also exists, e.g. the JAM-1. There are many such projects. No need to go telling what someone else should do.

  3. Ah! My type of computer (and nothing to do with retro). Glad I’m not the only one thinking modern compute has lost the plot, with computers not being tools any more…or more precisely, it became more about the tool, and not the function it performs. We run multi-gigabyte operating systems, just so the computer can spend the majority of its life showing text in graphical format…eating away power at idle like nothing we’ve seen before.

    1. No thanks. 640 KB had already hit the wall in the 80s. MS-DOS users know the story.
      We were glad for EMS, the HMA and free UMBs in the UMA.
      Likewise, Turbo Pascal 4 finally broke the 64 KB limit by introducing EXE support.

      That being said, this behemoth here is a cool microcontroller.
      A few i/o ports, ADCs/DACs, serial and parallel ports would make for a nice addition.
      Like an Arduino of old. That’s “my type of computer” then (microcontroller).

      1. I was speaking more generally, but I get what you mean (I come from that era). What I was really hinting at, is that we need to spend more time to match compute capability to a task. These days almost everything embedded runs a Linux OS (which requires a processor capable of running it…and not just a microcontroller), with the excuse that it simplifies development…which it does to some degree, but the complexity invariably drags its own problems and bugs along, requiring even more software. Instead of ‘adding’, we need to spend more time removing, simplifying, and try to get to the root of problems. You’d think that after a decade-and-a-half of our scopes running full OS’, that things like USB drive functionality would now be flawless. Even on the high-end scopes I invariably run into ‘funny’ USB drive problems (and that is after you’ve figured out which version of FAT or NTFS will keep the bugger happy).

        Arduino is the one platform I am not fond of. I’d much rather grab an assembler and whip up some ASM code (or just plain C) than have to deal with ‘another’ layer of abstraction (which adds its own unique problems…without really eliminating the lower-layer problems). Abstractions work fine in massively complicated software systems…not on microcontrollers.

        1. “I’d much rather grab an assembler and whip up some ASM code (or just plain C)”

          It baffles me that we have 8 billion high level languages and as far as I can tell there’s literally nothing between assembly and C.

          There’s a huge gap between C and assembly and it drives me nuts. We’ve got so many compiler optimizations that we know about (tail-call optimization, inlining, etc.) all of which could be done without the huge overhead that C imposes (forced register calling conventions, lack of ability to track the carry/zero flags, etc.).

          1. “As far as I can tell there’s literally nothing between assembly and C.”

            There used to be Pascal, especially Turbo Pascal dialect (standard Pascal at the time was too limited).
            Pascal was one of the big three compiled languages (ASM, Pascal, C).

            Macintosh system software was written in Pascal, for example.
            Pascal was more sane than C; that’s why people in college, university, etc. used Pascal rather than C.

            Then there was Fortran, about as old as Algol 60.
            Tiny Fortran was used in late 70s/early 80s, still.
            Hudson Soft from Japan made some versions, I think.

          2. No, they’re all just as high level as C is. They all require calling conventions and abstract away a lot of stuff. It’s mainly just syntactic differences between them, plus some esoteric stuff.

            A ton of C can be straight converted into assembly just by minimal parsing: tools like astyle make it even more trivial by standardizing blocks/etc.

            But languages typically abstract away the carry/zero flags, so you can’t write stuff like a loop that iterates 8 times just by shifting, detecting when the starting one falls out the end.

            Even though it could be something like “val = 0x80; do { stuff(); val >>= 1; } while (!C);”.

            That’s just a super-simple example. Basically like a more universal assembly with optimizations as well. Like, I don’t need types, thanks, but could you please just implement a switch table in the best way compactly? Kthxbye.

          3. “No, they’re all just as high level as C is.”

            Dude. C is a glorified macro assembler, some sort of “super assembler”. Whole .h files with their definitions are nothing but macros.
            That’s the whole reason why C is used by operating system developers.
            Just look at Windows 1.x SDK and you’ll see.
            C allows doing low-level stuff in a pseudo high-level environment, which I admit was/is neat.

            By contrast, Pascal is (was) a real high level language.
            Something that could be used for sane, clean programming.
            Especially Turbo Pascal and its cousins. Plain Pascal was quite limited in the past.

          4. “By contrast, Pascal is (was) a real high level language.”

            And this is why it’s dead as disco while C and C++ are used everywhere, from simple 8 bit CPUs in our homes to massive AI GPU coprocessors in data centres.

          5. “Dude. C is a glorified macro assembler, some sort of “super assembler”. ”

            Yeah, no. As someone who’s actively developing a “glorified macro assembler” for a small softcore processor, I can tell you it ain’t within miles of a C compiler.

            What distinguishes it from a C compiler?

            To start with, no function arguments or return values. You’re completely neglecting the C runtime and the fact that it forces code into an algorithmic representation taking arguments and returning values. This is how you get a function interface to allow unrelated code to link together seamlessly. You don’t have to have that in a language.

            And like I said, C abandons the concept of having flags be a real thing (because they’re not always) so certain functions have to be done in embedded assembly. Plus it introduces the concept of types, which on many architectures is just Not A Thing. “if (A < -59)”? Yeah. I don’t need you worrying about signed/unsigned comparison, just translate it into 0xC5, please.

            Yes, by modern standards, C is “not a very high level” language. I’m older than that, so if you want to convert my “high level” name (… because C is high level to me) to “low level” and reserve “high level” for Pascal or whatever?

            Fine, but then I need terms for “sub-sub-low level” and I’m saying there should be a “sub-low level”. I can do my own interface tracking, I just want a common code/logic syntax that can be morphed into the best representation on an architecture.

          6. “And this is why it’s dead as disco while C and C++ are used everywhere, from simple 8 bit CPUs in our homes to massive AI GPU coprocessors in data centres.”

            Yeah, the easiest way to think about this is that it’s usually possible to port C to something, even if it might not be the most efficient language for a given architecture. By now processors are usually big enough with plenty of RAM and storage that it’s not a big deal. Plus most processors are designed to map well to modern coding conventions.

            But forcing code into an algorithmic/linkable representation can be really costly. It’d be nice if there were things like attributes/intrinsics which were ‘semi-standard’ that you could glom onto C and avoid the overhead while retaining common syntax.

            The processor here, for instance, is very register poor but it’s got a hardware data stack. It’d probably map fine to C because you’d have a lot of stack operations anyway. You’ve only got two registers, after all.

            But if you’ve got a register rich architecture without a stack, that… doesn’t map all that well, because you’re going to have lots of cases where you could just map everything in a piece of code to a different register set and not worry about a stack much at all.

      2. Adding that I/O is the most straightforward thing; you just memory map it. Von Neumann to the max! Special I/O ports made sense when your word size was likely to be some weird non-power of 2 to accommodate business math and your peripherals were mechanical printers and teletypes, but for microprocessors it’s kind of silly to have a parallel address space with its own access instructions. The 8080A did it the old way and lots of architectures just ignored the I/O space, then the MOS guys realized they could make the 6502 that much leaner by just blowing off the whole idea. As for 640K, you had memory problems mainly because (1) the 8088 instruction set is stupidly hoggy and needs twice the memory of the 8080 or 6502 for the same logic, and (2) those memory expanders were only ever really needed for certain classes of software, roughly in order being games, spreadsheets, games, autocad, games, games, development systems, and oh yeah games.

        1. That makes sense. Though C64 GEOS also had support for geoRAM and REU modules, which made working with the paint and word-processing applications much more comfortable.
          The C128 had 128 KB of RAM inside, and CP/M Plus supported bank-switching out of the box (a feature from MP/M).

  4. This is impressive. I would be curious to see the size if it was built with SMD components (which wouldn’t necessarily make it less “DIY”, as plenty of hobbyists can mill 2-layer boards and solder them in an oven).

      1. Easy but mostly pointless – if you’re going to build a softcore processor on an FPGA (outside of just doing it for fun mind you) – you almost certainly want to have it map well to the hard blocks inside the FPGA to efficiently use its resources.

        Still drives me nuts that most softcore processors on FPGAs don’t use the DSPs even though they’ve got a programmable ALU right there.
