Z80 Fuzix Is Like Old Fashioned Unix

Classic Z80 computers tend to run CP/M. If you’re a purist, you’ll be happy with that because it’s certainly what most serious Z80 computers ran back in the day. However, for actual use, CP/M feels dated these days. Linux is more comfortable but isn’t likely to run on a Z80. Or is it? Linux borrows from Unix, and back in the 1980s [Doug Braun] wrote a Unix-like OS for the Z80 called UZI. There have been lots of forks of it over the years, and a project called Fuzix aims to make a useful Z80 Unix-like OS.

Of course, Unix circa 1980 was a lot different from modern-day Linux, but it is still closer to a modern system than CP/M. Fuzix also adds several modern features like 30-character file names and up-to-date APIs. The kernel isn’t just for the Z80, by the way; it can target a variety of older processors including the 6502, the 6809, the 8086, and others. As you might expect, it all fits in a pretty small system.

The video below shows [Scott Baker’s] RC2014 computer running Fuzix. You’ll see it looks a lot like a Linux system, although that analogy only goes so far.

Although the kernel is pretty portable, there are some tool issues. According to the Fuzix page, there’s no 8086 compiler, and there are limitations in some of the other C compilers it targets. However, a large number of platforms are working, including Amstrad, Atari, Radio Shack computers, N8VEM boards, and many more.

We are always surprised we don’t see more retro computers running MINIX, which was a common Unix alternative back in the day. If you are interested in finding out more about the RC2014, we’ve reviewed it for you.

98 thoughts on “Z80 Fuzix Is Like Old Fashioned Unix”

    1. There was a port of Minix 1.x for the Atari ST – 68000 processor, no MMU. Using that plus Tanenbaum’s book on the subject was a terrific (and inexpensive) way to come up to speed quickly on Unix.

      1. I remember I had to patch the keyboard driver for French accented characters on MINIX 1.5! You were limited to 64KB code max and 64KB data max, which was annoying considering the money I spent on my brand new 80286 @ 10MHz computer with 4MB of RAM. But the experience was really instructive. You could explore the C sources of a true OS, and read Unix V7 books as if you had an actual Unix computer.

    2. That’s the same as saying “Linux won’t run on 20th century hardware” because latest distros are compiled to expect PAE support minimum. Then step back a few years and it’s 486 minimum etc etc.

      MINIX 1.0 was running on 8088s, no MMU there.

      1. I got Lubuntu 16.10 running on a Compaq NC6000 laptop. Had to use the forcepae install option. The CPU is Intel’s first mobile Pentium design that has PAE, but it doesn’t actually tell the OS it has PAE capability.

        Forcing PAE on one of these CPUs won’t actually do anything because PAE isn’t needed with less than 4 gig RAM. No laptop I know of with one of these CPUs was built to accept more than 2 gig RAM. That’s likely why Intel ‘soft disabled’ PAE on this CPU core. The next revision, with a different core name, had PAE that would properly jump up and say ‘present’ when probed, and IIRC those CPUs were used in laptops capable of using 100% of 4 gigs RAM.

        What gets me is why Linux installers aren’t programmed to check for this PAE case? Should be “Oh, one of *those* CPUs. It has PAE even though it’s not telling me it has it. Go ahead and install with PAE because it will work.”

        Better yet would be options to not install PAE on a computer that’s limited by hardware and/or firmware to less than 4 gig RAM. Why have code in the kernel that’s taking up space doing nothing?
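
        For what it’s worth, here is a hedged sketch of the kind of check being discussed, assuming GCC or Clang on x86 and the compiler-provided <cpuid.h> helper (the names and output are illustrative only): CPUID leaf 1 reports PAE in EDX bit 6, and the Banias-era Pentium M supports PAE without setting that bit, which is exactly what forcepae papers over.

        /* Sketch, not a recipe: query CPUID leaf 1 and print whether the
         * PAE feature bit (EDX bit 6) is reported.  Assumes GCC/Clang on
         * x86 with <cpuid.h>; early Pentium M ("Banias") parts support PAE
         * but leave this bit clear, which is what forcepae works around. */
        #include <stdio.h>
        #include <cpuid.h>

        int main(void)
        {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
                puts("CPUID leaf 1 not available");
                return 1;
            }
            printf("PAE flag: %s\n", (edx & (1u << 6)) ? "reported" : "not reported");
            return 0;
        }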

        1. Had the same thing with a P-M 1.6 in an Acer. I believe it is used if you give yourself a big enough swapfile.

          Yah, does seem a bit dumb it doesn’t check CPUID and force PAE by default…. I’m wondering if there’s something goofy about the CPUIDs though and it’s tangled in amongst PIIIs.

    3. You’re conflating MINIX 3.x with Minix 1.x. MINIX today is very different from the MINIX of 30 years ago. I think the following quotation settles the question of portability as a design goal, though there are more explicit statements in that regard.

      “The 8088’s memory management architecture is very primitive. It does not support virtual memory in any form and does not even detect stack overflow, a defect that has major implications for the way processes are laid out in memory.

      The portability issue argues for as simple a memory management scheme as possible. If MINIX used paging or segmentation, it would be difficult, if not impossible, to port it to machines not having these features.”

      “Operating Systems Design and Implementation”, A. S. Tanenbaum, Prentice-Hall, 1987, p. 226

      Prentice-Hall behaved like real jerks about the source code which is why Linux came to be. You were only allowed to distribute patches against the official source code.

      Fred van Kempen implemented virtual memory on the 80386 for Minix, but Prentice-Hall forced him to take it off the internet because he distributed the full source rather than just a patch. I never looked at Fred’s version, mostly because it had been taken down before I had internet access. Running MINIX was a stated requirement for the first x86 PC I bought. I tested it running on floppies in the store before I paid for it.

      Tanenbaum steadfastly refused to accept implementations of virtual memory in the early versions of MINIX because it unnecessarily complicated the kernel. He wrote MINIX as a replacement for Unix source code when AT&T stopped allowing the use of the source code in courses with the release of version 7.

      The 1st edition is still a good read 30 years later.

          1. @BrightBlueJim, no he said it in the famous debate. But he was talking about porting applications to Linux, which was easy because it used a standard POSIX interface, unlike Minix. He considered portability towards the applications more important than portability towards the hardware.

          2. Good insights from the Torvalds-Tanenbaum debate:

            If you write programs for linux today, you shouldn’t have too many
            surprises when you just recompile them for Hurd in the 21st century. As
            has been noted (not only by me), the linux kernel is a miniscule part of
            a complete system: Full sources for linux currently runs to about 200kB
            compressed – full sources to a somewhat complete developement system is
            at least 10MB compressed (and easily much, much more). And all of that
            source is portable, except for this tiny kernel that you can (provably:
            I did it) re-write totally from scratch in less than a year without
            having /any/ prior knowledge.

            In fact the /whole/ linux kernel is much smaller than the 386-dependent
            things in mach: i386.tar.Z for the current version of mach is well over
            800kB compressed (823391 bytes according to nic.funet.fi). Admittedly,
            mach is “somewhat” bigger and has more features, but that should still
            tell you something.

            Linus

      1. It became vaporware, for the most part. The current leader of the project is a local friend of mine who hasn’t got the time even to find someone else to manage it — and the systems that it targets have mostly been replaced with newer/”better” anyways.

  1. “We are always surprised we don’t see more retro computers running MINIX, which was a common Unix alternative back in the day.” This is why, though it’s not the whole story: https://en.wikipedia.org/wiki/MINIX#Licensing .. a couple of ports had been tried for other architectures, but on at least two occasions the publisher jumped down their throats with an army of lawyers and forced suspension of development and removal from the internet… there was quite some back and forth posturing on the newsgroups at the time. Anyway, regardless of legal merit vs the author’s intention, it had a chilling effect and made for a lot of FUD around MINIX development. I seem to recall the Amiga version was on and off Aminet more than once as the shenanigans went on. So as implied there, as soon as legally unencumbered FOSS ’nixes were taking off, everyone was all “screw this shit” and migrated.

          1. Apple doesn’t use a microkernel. The single crucial property of a microkernel is that different kernel tasks run in their own address spaces. Apple’s OS runs in a single address space, the same as some other kernels that people mistakenly call “microkernels”.

            Sure, you can find true microkernels on embedded applications, where scaling is not an issue.

          2. Actually, no, the defining characteristic is that the “kernel” runs in user-space; this CAN take the form of multiple components running in separate processes (it was the original vision, after all), but it has since been shown that it’s hardly the only configuration in which a microkernel is useful. Consider all the academic studies behind microkernels where students ported all of BSD to run “as a task”. Indeed, build for build, FAR MORE microkernel deployments use a single monolithic kernel running in user-space than, say, a more Amoeba-like constellation of services.

          3. Taking a monolithic kernel and running it as task in user space is pretty much the same as running the same kernel in kernel space, so it makes no sense to call that a “microkernel”, except to introduce a nice sounding buzzword.

        1. The biggest problem with microkernels is that they don’t scale. QNX is a niche OS, meant for small embedded applications that don’t need to scale, so it works okay for that. You won’t see a microkernel running a general purpose desktop PC, or a big server.

          1. The whole point of a microkernel is to have different tasks using their own protected memory space, so they can’t overwrite the memory of another task, and instead have to rely on message passing to get information from one to the other. In these “unpure microkernels”, or hybrid kernels, you’ll find that all the kernel tasks use the same memory space.

            Basically, they’re running a monolithic kernel on top of a thin microkernel layer. All the “problems” that the microkernel was supposed to fix, are just moved to another layer.
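
            As a purely illustrative sketch of the model described above (not any real kernel’s API, and with made-up names), here is the idea of two tasks in separate address spaces that can only exchange data by passing a message; a fork()ed process and a pipe stand in for kernel-managed tasks and IPC:

            /* Toy sketch: two processes in separate address spaces exchange
             * data only by passing a message over a channel (a pipe here,
             * standing in for kernel IPC).  Illustrative names throughout. */
            #include <stdio.h>
            #include <unistd.h>
            #include <sys/wait.h>

            struct msg { int type; char payload[32]; };

            int main(void)
            {
                int ch[2];
                if (pipe(ch) != 0) return 1;        /* the "message channel" */

                if (fork() == 0) {                  /* child: a "server" task */
                    struct msg m;
                    read(ch[0], &m, sizeof m);      /* block until a message arrives */
                    printf("server got type %d: %s\n", m.type, m.payload);
                    _exit(0);
                }

                struct msg m = { 1, "hello from client task" };
                write(ch[1], &m, sizeof m);         /* the only way to reach the server */
                wait(NULL);
                return 0;
            }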

          2. There are microkernels without any protection between components – actually the first versions of QNX were examples of that. There are microkernels without message passing, the Nemesis operating system used shared memory and one bit signaling (that didn’t cause context switching) between protection domains, the K42 system used protected procedure calls as did a lot of other designs. The Go design used the segmentation of x86 for protection which meant that pointers could be passed to other components to allow access, not shared memory and not message passing.

            The main problem microkernels solve is that of software complexity, applying good software development practices to operating system components.

            Neither Windows nor OS X is a microkernel.

          3. “The main problem microkernels solve is that of software complexity, applying good software development practices to operating system components”

            You don’t need microkernels to reduce complexity. You can make a monolithic kernel just as modular. And actual microkernels, where you don’t allow shared memory between tasks, only increase complexity because of the coherence problem.

    1. 16MB: 16 address lines from the CPU + 8 bits from external logic. If the system is based on an S100 or ECB bus, the address extension was usually 20-22 address bits, depending on how the vendor defined the few unused bus lines.
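
      A minimal sketch of that arithmetic (the names are made up for illustration): the CPU’s 16 address lines select within a 64K window, and an external 8-bit bank latch supplies the upper bits of a 24-bit, 16MB physical address.

      /* Sketch of the banked-address arithmetic described above: the Z80's
       * 16 address lines give 64 KB, and an external 8-bit bank latch
       * supplies the upper bits of a 24-bit (16 MB) physical address. */
      #include <stdio.h>
      #include <stdint.h>

      static uint32_t physical_addr(uint8_t bank, uint16_t cpu_addr)
      {
          return ((uint32_t)bank << 16) | cpu_addr;   /* 8 + 16 = 24 bits */
      }

      int main(void)
      {
          printf("bank 0x12, CPU addr 0x3456 -> 0x%06X\n", physical_addr(0x12, 0x3456));
          printf("top of the 16 MB space     -> 0x%06X\n", physical_addr(0xFF, 0xFFFF));
          return 0;
      }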

  2. It’s really not correct to say that *nix is more modern than CP/M. Thompson was familiar with the fork-exec model from the Berkeley timesharing system from the mid ’60s. Cf. “The Evolution of the Unix Timesharing System”.

    CP/M is pretty much a direct clone of an early DEC OS from a similar time frame. Quite simply, there is the fork-exec model used by Unix and the transient process space model used by DEC and best exemplified by VMS on the VAX 11/780. The “modernity” of *nix is simply the result of having achieved widespread use.

    IIRC Windows NT followed the DEC model of a transient process space, but it’s been a very long time since I read about the design of Windows NT. Dave Cutler designed both VMS and NT, so it would be logical for him to have used the same model for both.

    1. What I mean by that is that a Unix-like OS is more familiar to users who know Linux or Mac (which is, after all, Mach). Even on Windows you get some exposure to similar things, but no one types PIP or SYSGEN anymore unless they are running retro.

  3. Is building Fuzix for Z80-Pack (emulator) still broken or unmaintained?

    In Fuzix’s early days I had some fun playing with it but then some changes in the build system or my understanding thereof made local build attempts fail. :-(

    Fuzix on Z80-Pack would be a nice appetiser if it still were easy to build…

      1. An Apple II with a CP/M card? It’s been said to be the most common CP/M machine; there were lots of different 8080/Z80 computers that ran the OS, but no single computer sold like the Apple II with the CP/M card.
        Michael

        1. Possibly the Amstrad PCW. Which was sold as an out-of-the-box all-in-one word processor, complete with printer. But internally was a pretty flat Z80 system. And came with CP/M disks from the factory (3″ ones!). Some CP/M software I think was ported to it, but most of the software ran on native bare hardware. And when I say “most”, that’s most of not much.

          Anyway the PCW sold 8 million units, from 1985 through the early 1990s. Probably most users didn’t use CP/M but it came supporting it from the factory.

          1. Yah, if you want to screw around with a CP/M machine that’s 10 years less decrepit, looking out for a PCW 9512 would be a plan. The later ones came with 3.5″ drives at least, but I think they were still low density, which makes finding media a problem.

          2. The 3″ disks were first LD and single-sided, then DD and double-sided in the 8256 and 8512, and then it went to DD 3.5″ in the 9256/9512, which was a 720KB format.

            In general (not PC-centric), low density was 40 tracks, double density was 77-80 tracks, all at between 9 and 11 sectors per track, and high density was 77-80 tracks with 15-22 sectors a track… apart from 8″.
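
            For illustration, the capacity arithmetic behind a format like the 720KB 3.5″ DD disks mentioned above is just sides × tracks × sectors per track × bytes per sector; the figures below are the common MFM defaults, not taken from any specific machine.

            /* Sketch of the capacity arithmetic: sides x tracks x sectors/track
             * x bytes/sector, using common MFM defaults for a 720 KB DD disk. */
            #include <stdio.h>

            int main(void)
            {
                int sides = 2, tracks = 80, sectors = 9, bytes_per_sector = 512;
                long total = (long)sides * tracks * sectors * bytes_per_sector;

                printf("%d sides x %d tracks x %d sectors x %d bytes = %ld bytes (%ld KB)\n",
                       sides, tracks, sectors, bytes_per_sector, total, total / 1024);
                return 0;
            }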

  4. If all you want is a user interface that looks like Unix, it’s been done. Rich Conn wrote ZCPR and its successors, culminating with ZCPR3. It wasn’t Unix — not even close — but it was, in the *nix vernacular, a shell that was very Unix-like.

    The problem with a CP/M-to-Unix “upgrade” is that Unix was written to be a multi-user timeshare system, complete with time calculations to allow each user to be charged for the time used.

    NOBODY wants that on a Z80-based computer, or any other small computer. Who wants multiple users logging into a single microcomputer? Far better to follow Jerry Pournelle’s dictum:

    “At least one CPU per user.”

    Multitasking, now, is a whole ‘nother issue, and that would be nice. But for most retro computers, one user, one task should be the model.

    1. The jump from a multitasking operating system to a multi-user operating system is minuscule, amounting to exactly those bits that deal with user accounting.

      As for who wants multiple users on a small computer, well, schools for starters. When I was going through high school, classes often used small computers with networking equipment to allow multiple users to access files, to print, etc. At first, these started out as Commodore PET machines attached to 8050 drives and daisy-wheel printers. Later, it evolved to PCs running LANtastic, attached to hard drives and laser printers.

      So, yeah, try not to generalize about what people do and do not want. You *might* be a weeeeee-bit surprised at the results.

      1. Pfft. Small computers. That’s nothing. Used to teach BASIC to unwitting EE newbies on timeshared PDP-8s, each running up to six users in 32k words of memory. Each system had a couple of RK05s, at a couple of mega (sixbit) bytes apiece. One of the machines had core, the other had semiconductor memory. You could tell which was which in the fall, when the thunderstorms came through and the power got dodgy.

        Tell that to kids these days and they don’t believe you.

      2. I stand corrected. When I said “nobody wants timeshare on a Z-80 based computer,” I should have said, “Nobody living in the 21st CENTURY wants that.”

        FWIW, a network of single-user PC’s, sharing printers, is not a multi-user system, IMO.

          1. Yeah, but … but … We started out talking about running a more modern OS on our homebrew Z80 systems.

            I think it’s safe to say that system administrators at large corporations aren’t buying homebrew Z80 systems for their personnel to use doing payroll, market research, inventory control, etc.

    2. >> NOBODY wants that on a Z80-based computer, or any other small computer. Who wants multiple users logging into a single microcomputer?

      All the users of MP/M, for many.

          1. Historically, I think the MP/M-86 flavor would have been most popular; its heyday was in the early 80s, when travel agents, insurance brokers, and other local office types would have had a $3000 PC and several $400 terminals… with the PC hooking up to a mainframe over an X.25 leased line or dial-up modem.

          2. Any reason to think travel agents, etc., would use MP/M rather than Minix, or Concurrent (which was a multi-user PC-DOS, I think developed by DR originally)? For booking, I think travel agents (in the UK at least) used an online system, something like Viewdata (i.e. teletext over a modem) connecting to large central servers, presumably run by the travel agent’s head office, or maybe run by holiday providers and airlines.

            Ah, now I’ve looked it up, and Concurrent WAS MP/M, or at least a descendant of it, in the 1980s. I remember seeing it advertised in computer mags at the time. It was definitely MS-DOS compatible by then; that was one of its selling points. DR did that, and of course CP/M, so the connection’s obvious really.

          3. >> Any reason to think travel agents, etc., would use MP/M rather than Minix, or Concurrent (which was a multi-user PC-DOS, I think developed by DR originally)? For booking, I think travel agents (in the UK at least) used an online system, something like Viewdata (i.e. teletext over a modem) connecting to large central servers, presumably run by the travel agent’s head office, or maybe run by holiday providers and airlines.

            Color me extremely confused (not a unique state, I grant you). How did a discussion about retro Z80 machines end up as a discussion of travel agents and their modems??? Talk about your topic creep!!

      1. I never used MP/M myself, but from what I recall, most if not all MP/M systems had n+1 Z-80 processors, with a dedicated CPU, memory, and serial port for each user, plus an extra one to handle the mass storage and other shared peripherals. So we’re not really talking about sharing an 8-bit CPU.

          1. Running character terminals, a Z80 shouldn’t have much trouble supporting a few users. CP/M itself could support several users, just one of them at a time.

    3. It’s not about user interface, it’s about POSIX, right? Will it run Unix-type software, even if it needs a recompile. And maybe a *little* tweak to the source code is allowable. That’s the point of any OS, to run software.

      Unix is happy with any user experience; you could strap anything you liked onto the front. You can replace file systems. You can unplug and plug in something else, for almost anything, as long as it sticks to the interface standards.

      You could argue an OS doesn’t even need code, just specifications. Code is helpful, of course.

      1. >> It’s not about user interface, it’s about POSIX, right?

        Not even close! Who would want POSIX in a machine with 64K of RAM????

        I see that I might have offended some folks of the Unix-uber-alles persuasion (not saying you are one — I don’t know you well enough to say that). It’s always seemed a mystery to me how an OS designed for time-share users in 1969, as an alternative to the 1964-era Multics, is still considered by some to be the cutting edge of OS technology.

        If you seriously believe that, look me in the eye as you tell me that Vi is still your favorite editor.

        1. Er… actually Vi is my favourite editor, for coding at least. I probably wouldn’t write a CV with it. Once you know Vi, you can churn out line after line with a flourish of the wrist! I like the way it has a sort of grammar: you can type up quite complicated sequences of moving the cursor about and editing things, and then do the whole thing over again by pressing “.”. Or prefix it with 8 and do it 8 times over.

          It takes a while to get going, but once you do, you’ll amaze yourself with some of the command sequences you’ll come up with, off the cuff, without thinking, just as they’re needed.

          Yep, Unix was designed for 1969 machines with 4K RAM, but amazingly works on machines with 4GB and thousands of times more MIPS. Just because it was designed right! Can’t say that about DOS 1.0 or whatever DEC and IBM came up with (several different OSes for each line of hardware!). And because it’s a framework, a set of interfaces. As much a philosophy, or a style, as anything. It’s not designed around any particular hardware like so many other OSes. So it will run on anything, into the future, as long as people need servers and programming. If the day comes when you can just ask a computer to do what you want, then all programming is obsolete and maybe then Unix will be dead.

          It’s like C, it’s good for low-level back-room stuff. It’s never going to be for ordinary users, but it does a good job in the engine room providing the stuff that programs need. Then those programs are used by the ordinary users.

          Sure it’s got a lot of faults, and it’s user-hostile in places. Linux particularly shows its heritage as a heap of programs written by hundreds of people, none of whom collaborated, each making up their own standards and interfaces as they went along. If I need to change a setting in some Windows software I’ve never used, I can figure it out quickly enough. In Linux it’s a matter of finding out what smartarse name the programmer gave to its config file, then loading that into Vi and figuring out what the hell the guy was thinking when he laid down the config options. Quite probably on the fly, as needed, as he was writing the code.

          It might be possible for one company to fix all of this, take it all on board, and standardise and systematise a whole Linux setup, applications and all. It would be a big job though, on the sort of scale of Microsoft or Apple. Getting volunteers to do that instead would be like herding cats, the thing would be forking versions before it was even half-written.

          1. >> Er… actually Vi is my favourite editor …

            Of course it is ;-) ;-) ;-)

            >> It takes a while to get going, but once you do, you’ll amaze yourself with some of the command sequences you’ll come up with, off the cuff, without thinking, just as they’re needed.

            I have a story about that. In 1975, I was managing a tiny company, one of the first to do embedded software for microprocessors (can we say 4004 and 4040?). Our target system was a 4004 SBC, our development system an Intel Intellec 8, our console a Teletype ASR-33. The editor was an ED-class line editor.

            Our president had hired a CS grad student, who for some reason decided that the right way to edit/modify existing software was to sit down and program all the edit commands on a paper tape. I’d see him sitting for hours at his desk, writing the “program” to issue the commands. Then he’d punch up the command tape, load the program to be edited, then put the command tape in the reader and launch it. If everything went right, he’d get the edited file in a single click of a key.

            Of course, everything _NEVER_ went right. Somewhere in that command sequence, he’d make a mistake, and the editor would scramble his program file beyond all recognition.

            So did he learn from his mistake? Of course not. He’d sit down and write a NEW command file, to tell the editor how to fix the now FUBAR’d file.

            This guy was one of the main reasons the company failed, and we never became billionaires like Bill Gates.

            No editor command sequences for me, thanks. I much prefer to KISS.

          2. I’d like to make my take on Unix perfectly clear. I fell in love with the ORIGINAL Unix at first sight. Multics itself was HORRIBLE. The philosophy seemed to be, put everything anybody could ever conceive of into every single system utility. The result? The user manual was a rack of folders a good six feet long. The man “page” for readmail was 19 pages long.

            The Bell Labs team had a much better idea: have lots of small utilities, each doing only one thing, but doing it very well. Kernighan and Plauger laid out the concept in their wonderful book, Software Tools, which predated even C. It — and Unix — were the ultimate expression of KISS.

            My problem with Unix (and now Linux) is not how it started, but what it became. In 1983 I was using BSD 4.3 Unix on a VAX. Standard procedure was to shut it down and reboot it every night. It was the only way they could keep it stable. Even then, it often crashed during the day.

            Around 2008 I was doing a job for NASA that required serious number-crunching. We got a top-of-the-line industrial-quality PC cube with tons of memory and horsepower. I thought I’d find a huge stability advantage over our PCs (running Windows 98SE in those days). But in fact the Linux machine was even more crash-prone than the PCs.

          3. Jackcrenshaw, you were running Windows 98SE in 2008? Not XP (Released in 2001)? Even when Vista existed and Windows 7 was on the cusp of release (in 2009)? What?

        2. It absolutely *IS* my favorite editor. Windows, AmigaOS, and Linux — all three platforms, my editor of choice is Vim. On systems where Vim isn’t available, plain, stock, good old Vi.

          1. EMACS? Eighty Megabytes and Constantly Swapping :D

            EMACS: It’s a nice OS, but to compete with Linux or Windows it needs a better text editor. — Alexander Duscheleit

            Wylbur? Wow, I don’t think I’ve ever run across anyone else who even knew what that was!

            Editor of choice? JOE, mostly because the Wordstar keys are burned into my very soul :D

    1. In a world where we only have Z80 computers with 64K RAM, that would matter. Actually I don’t know if there are ANY Z80s in current hardware, even embedded ones. I know it’s available, but is it popular? You hear of 6502s turning up now and then in cores. Although embedded stuff isn’t really what CP/M was made for.

      1. I think some TI graphing calculators still have them. Probably mostly back-stock though. They went to ARM and rapid prototyping like everyone else.

        Another cool chip is the RISC PIC Radio Shack used to sell. Same level of abstraction, but RISC.

      2. >> In a world where we only have Z80 computers with 64K RAM, that would matter. Actually I don’t know if there are ANY Z80s in current hardware, even embedded ones. I know it’s available, but is it popular? You hear of 6502s turning up now and then in cores. Although embedded stuff isn’t really what CP/M was made for.

        A few years back (like maybe 10), I read that the Z80 was the most popular of all 8-bit microprocessors in embedded systems. The reason? Many FPGA vendors have the logic in their standard libraries. So they were putting them in auto ECUs etc.

  5. No mention of the best portable Z80 machine ever? I wonder if Fuzix could work with only 8 KB of RAM (though it can be expanded in the ROM cartridge). There’s also an improved version of this machine with a color screen and 32 KB RAM, so perhaps Fuzix could run on it. The CPU is not a standard Z80, but it’s supported by SDCC so it shouldn’t be hard to port it.

        1. Gameboy? GAMEBOY? I’m amazed no one yet has mentioned the Kaypro. They used hardware bank switching, putting the ROM and video RAM in one bank, user RAM in the other, to give a full 64K to user programs. They had firmware upgrades to give 800K on DSDD floppies (take that, IBM!).

          I did a bit of souping up of both the hardware and software. My system used ZCPR2, together with user-group utils like cp, ls, dl, etc. All of them supported full wild-card argument lists. ls lists were alphabetically sorted. The software included Wordstar (as both word processor and program editor), Turbo Pascal, Turbo Modula 2 (yes, there was one), FTL Modula 2, BDS C, and the greatest assembler package ever built, SLR. Also a wonderful disassembler from the user group, plus my own debugger.

          I added a 2K RAMdisk, that was big enough to hold all my programs at once. I had a boot script that would load the RAMdisk from floppies each morning. Later I added a 20MB HD. I wrote a menu-based directory “shell” that let me use the 16 user areas (including 0) as a first layer of subdirectories.

          I got dragged kicking and screaming into the world of the PC and MSDOS, not because I was such a reactionary luddite, but because the Kaypro OUTPERFORMED the PC-AT by every measure.

          The main thing I liked about the Kaypro? It worked. Always. And I understood every instruction in the OS. If I didn’t like the way it behaved, I could change it. Try _THAT_ with Windows 10.

          I used to say, “If Zilog ever comes out with a 20MHz Z80, I’m switching back.” But of course, they did, so I had to eat those words. But now there are Z80s in FPGAs, running many times that clock speed.

          As I type this, there’s a TI MSP430 Launchpad sitting here blinking. My dream is to put CP/M on it.

          Well, not CP/M exactly, and certainly NOT executing a Z80 emulator. But a CP/M-class OS that will behave much like a modernized single-user (but multitasking) system. And one simple enough so I can change it at will. Wish me luck.

          1. Yeah, eBay’s full of ’em. FWIW, the Kaypro 2 is newer, and more modern, than the Kaypro II. Go figure. I still have my original Kaypro IV, which became known on CompuServe’s CLMFOR as “The Purple People Eater” after I tore it down and got a custom auto shop to give it a purple & white paint job.

            I also have four or five others, bought in an eBay orgy circa 1999-2000. Along with a few TRS-80’s, and assorted S-100 boards.

        1. Great minds! Yep, it was beautiful, in so many ways. If you read about it in detail, the OS was very well designed too. All the menu/window stuff could be hooked into by third-party programs. It could also do full-screen bitmap graphics, but that’s not something that was in the manual. There are BASIC extensions available that add a few graphics commands.

      1. No, the Cambridge Z88. Have a look at it, tell me I’m wrong. I had a couple. They’re great. Sometimes I have dreams about a model that has a 16-line screen with colour-coded STN, the kind of LCD some organisers and calculators had in the 1990s. LCD, no backlight, with red, green, and blue as the only available colours, plus “clear” of course. Used a special kind of LCD, each pixel changed colour itself, no RGB triads.

        The actual reality version had a 640×80 LCD, full-size keyboard with good movement, a load of good built-in software, and was just generally very good. Weighed less than 1KG, ran for 20 hours on 4xAA batteries. Something Clive Sinclair got right.

  6. Xenix was a popular *nix clone as well.
    In the 1980s I developed software for Z80 and x86 platforms, including my own *nix clone,
    ZATIX, which was an in-joke name: Z AT(80) IX(9), or Z89, the terminal I used.
    I also sold computers under that name.
    Multi-banked Z80 computers were available and commonly used. One system I developed for had 16 serial ports, of which 14 could be used for terminals (1 was used for the console, 1 for the modem). It had a 4MHz Z80A and 512K of RAM supporting ALL those terminals. Of course, once better hardware was available most upgraded, but some still used the systems into the 1990s. Mainly used for simple tasks like invoicing, receipting, or parts lookup.

      1. Yeah – it was the Microsoft Unix Version 7 port to microcomputers that eventually got SCO’d. I ran it on a TRS-80 Model 2 modded to a 16B with a whole meg of RAM and a couple of 8-meg hard drives.

        1. When the first company I worked for was bought out, the place we went to sold a warehouse management app that ran on Thoroughbred BASIC on SCO. It was a treat to admin /sarcasm. Having a commercial *nix on my resume did help land a gig working the big iron as an HPUX admin (we had maybe a dozen SuperDomes, both PA-RISC and Itanic), so I guess I can’t complain too much.
