The Acorn Archimedes At 30

The trouble with being an incidental witness to the start of something that later becomes world-changing is that at the time you are rarely aware of what you are seeing. Take the Acorn Archimedes, the home computer for which the first ARM processor was developed, and which has just turned 30. If you were a British school pupil in 1987 who found a pair of the new machines alongside the row of BBC Micros in the school computer lab, it was certainly an exciting event; after all, these were the machines everyone was talking about. But the possibility that their unique and innovative processor would go on to spawn a line of successors that would eventually power so much of the world three decades later was something that probably never occurred to spotty ’80s teens.

[Computerphile] takes a look at some of the first Archimedes machines in the video below the break. We get a little of the history and a description of the OS, plus a look at an early model still in its box and one of the last of the Archimedes line. Familiar to owners of this era of hardware is the moment when a pile of floppies is leafed through to find one that still works; then we’re shown the defining game of the platform, [David Braben]’s Lander, which became the commercial Zarch and provided the template for his Virus and Virus 2000 games.

The Trojan Room Coffee Cam Archimedes, on display at the Cambridge University Computing Department.

We see the RiscOS operating system booting lightning-fast from ROM and still giving a good account of itself 20 years later, even on a vintage Philips composite monitor. If you were that kid in 1987, you were in for a shock when you reached university and sat down in front of the early Windows versions; it would be quite a few years before mainstream computers matched your first GUI.

The Archimedes line and its successors continued to be available into the mid-1990s, but faded away along with Acorn over the course of the decade. Even one of them powering the famous Trojan Room Coffee Cam couldn’t save the line from extinction. We’re told they can still be found in the broadcast industry, and until fairly recently they powered much of the electronic signage on British railways, but beyond that the original machines have largely disappeared. All is not lost though, because of course we all know about the ARM joint venture, which continues to this day. If you would like to experience something close to an Archimedes you can do so with another computer from Cambridge, because RiscOS is available for the Raspberry Pi.

Sit back and enjoy the video, and if you were one of those kids in 1987, be proud that you sampled a little piece of the future before everyone else did.

Thanks [AJCC] for the tip.

Archimedes header image: mikkohoo (CC BY-SA 4.0).

25 thoughts on “The Acorn Archimedes At 30”

  1. My memory of these was that it was a bit Betamax vs. VHS: the schools were saturated with BBC Micros and there were huge amounts of software for them (and the teachers understood them), and then PCs were gaining ground for “big school” things.

    1. PCs were hopelessly outclassed by the Acorn kit in both speed and usability until Windows 3.1 came out, at which point they started gaining ground. Have you ever noticed how the Windows 95 taskbar resembles RiscOS? That’s not entirely coincidental, aiui…

      1. Pretty much everything at an affordable price was outclassed by it at the time: PCs, 68K-based machines, the lot. Unfortunately it came from a niche British supplier of educational home computers, so nobody really noticed it.

        IIRC the ARM spin-off from Acorn was in conjunction with Apple, who were eyeing it up for what eventually became the Newton. But I may well be misremembering that, and I’m sure Hackaday readers will have among their number some people who were close to the action.

        1. Yay! I still have a Newton. Haven’t turned it on in 20 years… but can’t chuck it because it might have some ID info on it… Gah!
          Still, was a pretty cool machine for its time.

          1. …and VLSI (not just Acorn and Apple).
            The big money-spinner was slightly later with TI and the ARM7 (most notably the 7TDMI, which is still frequently found even nowadays – the early versions had very sly ways to make the device small and very low power by leveraging special gates, but that was lost when it was scaled down to the very small geometries where such tricks weren’t possible and it was simpler to use standard cells).

            There’s some stuff here:
            https://community.arm.com/processors/b/blog/posts/a-brief-history-of-arm-part-1

  2. I supplied Acorn computers to the schools around Coventry in the UK. What killed them off was pressure from the governors to become PC-compatible, because that was what was used in the businesses they ran. Even though the Acorn machines were designed to teach programming and had a very large base of educational software, they were bulldozed out of the market by large corporations like IBM.
    Acorn failed to capitalise on the lead it had in the education market, but realised it had a fantastic design in the ARM CPU; the rest is history.

    1. To further this comment, the common argument was: “What’s the point in teaching our kids to use these weirdo Acorn systems when they’ll be using IBM PCs as soon as they leave school?”

      1. It’s a pretty relevant argument.
        We all had Amigas or STs at home anyway.
        Once we got into the lab with 3.1 on top of DOS, things got more interesting. Here was a system used in the real world, and we knew more about it than the teachers.
        We owned the network. We ran it. We hacked things. We hooked them up to modems on the internal PABX system on the quiet.
        We downloaded more warez for our Amigas and STs.
        We went on work experience to companies that used the same systems, and we knew how to use them better than most of the employees.
        We felt inspired, and that all the BS we were subjected to daily in class might finally have a use after all.

  3. Many thanks from me to my then roommate Andy K.: he purposely bought an Archimedes when it came out, for himself (after his C128), and made fun of the PeeCees.
    I was already aware that there were various kinds of computers (by CPU architecture AND OS), as I had ditched my Olivetti 286 in favor of a Mac IIsi instead of following the sheeeeps toward Win3x (nothing to win there – bollocks naming).
    But thanks to Andy’s Archie my perception of the computing world did not stick to just the black-and-white model.

    About performance: that Archie ARM @ 8 MHz crunched away “about on par” with my ’030 @ 25 MHz.

    1. About performance: the main reason why some of the early machines outperformed the x86 architecture was that they had some trick up the sleeve that ultimately lost them the game because they were too specialized and became incompatible with their own future.

      The PC just kept on adding brute force, which was a scalable process, whereas the competition ran out of clever tricks to bypass their lack of performance – the Amiga was a prime example of this: it was one hack after another, and any upgrade to the system made the hacks non-functional and irrelevant.

      So the people who made fun of the PC were essentially proud before the fall.

        1. The way the Amiga did it: yes.

          Different functions were spread across a number of chips, there was “fast” and “slow” RAM with different access schemes depending on which chip it was connected to, HAM was a total hack…

          It was all because they wanted to save money by utilizing unused chip area to reduce the number of components. As a result, upgrading the chipset in any major way to sort out the mess broke software compatibility entirely.

          1. And the IBM PC also had co-processors: the 8087 math processor and the 8089 I/O processor. The trick is, you didn’t necessarily need them unless you had some special application, so you didn’t have to pay for them. With most people opting to go without, the software was written to be compatible with the basic system, and the PC could evolve into a more powerful system in piecemeal fashion, with every part replaced separately.

            It was totally unlike the Amiga, where the whole constellation of chips had to be present for anything to work at all, because the same chip that did audio also read the joystick buttons and the floppy drive, but for some strange reason did not read the mouse or joystick axes – that was delegated to the “GPU”, which in reality was only half of a GPU because the memory controller did half the graphics functions. Obviously, in such a clusterfuck, if you change one thing you have to change everything, and so Amiga upgrades were more or less bolting a whole other, software-incompatible computer on top of the existing one.

      1. Commodore started this mess even before the Amiga; the whole lineup from the C-16 to the C-64 was supposedly an evolution, but actually completely non-compatible. You just couldn’t run VIC-20 / C-16 programs on a C-64; you had to make a complete port. The same with the C-128: to be able to run C-64 programs it had a whole C-64 inside, and was actually THREE computers in one.
        THEN came the Amiga; while the A600 could run most of the A500 programs, many of them couldn’t be used because of dependence on the full A500 keyboard. Lots of programs only worked with a specific Kickstart version, so you had to swap the ROM chips back and forth.
        Totally insane.

        1. Why on earth are you comparing the VIC-20, which predates the C64, with the C16, which came out two years *after* the C64? The C16 was a cut-down version of the Plus/4, which actually had some great features, like 128 colors (121 usable) compared to the C64’s 16, and a considerably more capable version of BASIC. Of course you couldn’t run software made for the C16 and Plus/4 on a computer that came out two years before they did.

      2. “…the early machines outperformed the x86 architecture was that they had some trick up the sleeve”

        Yes, the trick was called ‘good design’. The x86, especially in its 8088/8086 incarnation, is simply a poor design, created because Intel had a customer who needed more than 64 KB of address space and their i432 project was going nowhere. Its primary muck-up was its unprotected segmentation scheme, which, by calculating addresses as Segment*16+Offset, meant it was incompatible with its protected-mode descendant, the i286 – which itself was messed up to the point where Bill Gates called it ‘brain-dead’.

        It only started improving with the i386 (which again is object-code incompatible in 32-bit mode with its predecessors, forcing the processor to implement emulation modes to support the legacy code).

        All this has an impact on performance / development cost / wasted heat etc.

        The PC ‘won’ simply because IBM chose it for its personal computer, though Intel being early into the 16-bit era and having a good marketing department helped a lot.

        1. It wasn’t a poor design, and anybody who claims it was can’t know much about the processor market at the time. If you feel a bit insulted (not that it was intended), it’s okay, because you just insulted a go… okay, _pretty_ good processor. :P

          The 8086 was partially designed as a stop-gap measure and partially as a microprocessor that was more capable than most at the time. Some things they got wrong, but name any real-world design that has no warts. They also got some things very right, which has enabled the processors most of us use to post on this website to be backwards compatible with a 1978 design. That was mostly by accident, as the 8086 wasn’t intended to power processors in the far future.

          You talk about unprotected segments. Yes, they were unprotected. As was the majority of the competition (I can’t actually remember any microprocessor for consumer applications that had any type of protection). Segments were a way to extend the addressable space – nothing more, nothing less. The 16-byte paragraph size (the minimum address difference between two different segments) was odd in two ways. Because the (shifted) segment wasn’t simply concatenated or ORed with the rest of the address, computing an address required an extra full addition rather than a simple bit concatenation. And because the paragraph size wasn’t larger, less addressable memory was left for future extension of the ISA.
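
          For illustration, here’s a minimal C sketch (not from the comment, just the arithmetic it describes) of how a real-mode segment:offset pair maps to a 20-bit physical address, and how different pairs alias the same byte:

          ```c
          /* Real-mode address arithmetic: physical = segment*16 + offset,
             wrapped to 20 bits (1 MiB) as on an 8086 with no A20 line. */
          #include <stdint.h>
          #include <stdio.h>

          static uint32_t phys(uint16_t seg, uint16_t off)
          {
              return (((uint32_t)seg << 4) + off) & 0xFFFFF; /* the "extra addition" */
          }

          int main(void)
          {
              printf("%05X\n", (unsigned)phys(0xB800, 0x0000)); /* 0xB8000: text-mode video RAM */
              printf("%05X\n", (unsigned)phys(0x1234, 0x0010)); /* 0x12350 ...                  */
              printf("%05X\n", (unsigned)phys(0x1235, 0x0000)); /* ...same byte, different pair */
              return 0;
          }
          ```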

          The 286 protected mode isn’t compatible with real mode by choice. It was intended to add that protection you mentioned above, and did so by incorporating features from earlier capability systems that used segmentation. Note that it was still possible to write code that was compatible with both real mode and 286 protected mode; IMHO that was a bit messy.
          About that Bill Gates quote: many people thought that Intel had made a stupid design in the 80286, but there was really only one thing they complained about – the fact that once the 80286 entered protected mode, it couldn’t switch back into real mode via software.
          That led to code mixing real-mode and protected-mode code only being able to switch back via a BIOS-software-plus-external-hardware hack, which added quite a bit of complication. In short, one wrote a flag value and a handler address into the RTC (real-time clock) RAM and then used the keyboard controller to reset the computer. When the BIOS got control again it checked the flag, and if it was set, the processor branched to the handler.
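
          For the curious, here is a rough sketch of that reset dance in hypothetical bare-metal C (GCC-style inline asm assumed, ring 0 only). The port numbers are the usual AT ones; the exact CMOS shutdown code and the resume-vector convention varied between BIOSes, so treat the values as illustrative rather than definitive:

          ```c
          #include <stdint.h>

          static inline void outb(uint16_t port, uint8_t val)
          {
              __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
          }

          static void reset_back_to_real_mode(void)
          {
              /* 1. Store the real-mode resume address where the BIOS will look
                    for it after the reset (commonly 0040:0067 -- not shown).   */

              /* 2. Write a shutdown flag into CMOS/RTC RAM, offset 0x0F.       */
              outb(0x70, 0x8F);   /* select byte 0x0F, keep NMI disabled        */
              outb(0x71, 0x0A);   /* "resume via saved vector" -- value varies  */

              /* 3. Ask the keyboard controller to pulse the CPU reset line.    */
              outb(0x64, 0xFE);

              for (;;) { }        /* wait for the reset to take effect          */
          }
          ```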

          The 386 is compatible with real mode, 16-bit protected mode and 32-bit protected mode. The modes aren’t emulation, just modes. Protected-mode programs can co-exist without problem and can call each other while keeping protection going; real-mode programs are handled somewhat differently, as they can’t be allowed direct access to hardware (otherwise one could start a real-mode program and ignore the protection features of the system). So there is a lightweight virtualisation mechanism (virtual 8086 mode) that allows real-mode code to run natively, without emulation, until it hits a protection boundary. In that way multiple real-mode programs can run, each in their own VM using virtualised hardware accesses, while the system as a whole stays protected.

          1. Some more information if anyone finds that text block from a search or something:

            One example of how one can see that protected mode and real mode are just modes, without emulation: loading a segment register in real mode also loads two shadow data fields (visible to hardware, not software) indicating the base address (segment value * 16) and the segment limit (how large the segment is) – 64 KiB. Protected mode instead loads those fields from a descriptor table, with some differences between 16- and 32-bit protected mode. But one can switch to protected mode, load the segment limit with a larger value and then switch back to real mode – this is commonly called unreal mode (I called it extended real mode back in the day). The result is real-mode programs that can access up to 4 GiB of memory.
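
            To make the shadow fields concrete, a purely illustrative C struct (a mental model, not a real CPU interface) of what a segment register carries besides its visible value:

            ```c
            #include <stdint.h>

            /* Conceptual model of one segment register and its hidden descriptor cache. */
            struct seg_reg {
                uint16_t selector; /* the value software loads and reads                  */
                uint32_t base;     /* hidden: selector*16 in real mode, or read from a    */
                                   /* descriptor table in protected mode                  */
                uint32_t limit;    /* hidden: 64 KiB in real mode; "unreal mode" works by */
                                   /* leaving a larger protected-mode limit behind        */
            };
            ```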
            Another example is that real mode interrupts can be changed by the protected mode LIDT (Load Interrupt Descriptor Table).

            Resetting the 286 to do a mode switch via the keyboard controller added cycles to the already slow (hacked) switch, but it was what IBM did at the time. Later 286 systems tended to add a faster way to do that reset using an extra port (I learned that long after I stopped using 286 systems – I’d never understood the hardware documentation that mentioned the feature).
            Even faster was letting the processor reset itself by abusing(?) the protection mechanism: if the processor gets one type of exception while handling another exception, it will try to use a special double-fault handler. That’s because, while this is an indication of something being seriously fucked up, the system may still be able to correct the error (e.g. by rewriting/reloading critical hardware tables). But if the system encounters yet another exception while trying to execute the double-fault handler, it must be in extremely bad shape and the only valid error-handling strategy is to simply stop executing code – which is done by resetting the processor (but not other devices).

            The easy way to trigger this mechanism is to make sure no exception handler can be fetched – by loading an invalid interrupt table – and then causing any type of exception. The smallest trigger is the INT3 instruction (one byte); special cases can use an invalid instruction (which will trigger the invalid-opcode exception). This is still the fastest way to reset any x86 processor from protected-mode ring 0 (the inner sanctum of protected mode).
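
            As a concrete illustration of that trick, a hedged bare-metal C sketch (GCC-style inline asm, ring 0 only; it obviously can’t run as a normal user-space program):

            ```c
            #include <stdint.h>

            struct __attribute__((packed)) idtr {
                uint16_t  limit;
                uintptr_t base;
            };

            static void triple_fault_reset(void)
            {
                struct idtr empty = { 0, 0 };                /* an IDT with no valid entries          */
                __asm__ volatile ("lidt %0" : : "m"(empty)); /* any exception now cascades            */
                __asm__ volatile ("int3");                   /* #BP, cascading faults -> triple fault */
                for (;;) { }                                 /* never reached: the CPU resets         */
            }
            ```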

            BTW the AMD64 long mode is still just a mode, but it makes some significant changes to the x86 ISA. Real mode is still not emulated, but segmented protection isn’t as fast as it could be (then again, that has been true for x86 for a long time).

  4. Fascinating information. As a kid of the 80s I would have loved to have had access to one of these computers, but I am on the wrong side of the Atlantic so I was completely unaware of their existence. Instead I learned BASIC on my TI-99/4A, which was also a really cool computer at the time but certainly nowhere near as powerful as the Archimedes.

    The only question I have is this: If that really was a Sirius Cybernetics elevator, why wasn’t it acting neurotic?

  5. I went to teaching college in 1992, my only experience of computers was an Apple II in Uni and an Amiga at home. Never saw a computer while a pupil at school. Teaching college was mostly Macs, sometimes PC and I think maybe an Archimedes. A few of my early schools as a teacher had Archimedes, particularly in Technical and Art departments but from then on it was mostly PCs in schools.

  6. To see what WE in the Eastern Bloc had even 10 years after the Archimedes, look up the IQ-151 on Wikipedia. You will literally lose all of your sh#t, but not for the reason you may think before you see it LOL

  7. A significant step in computing that few know much about. A pragmatic RISC design married to a pragmatic system design. Sadly it was more expensive and less available than the competition.
    Can’t say I like the operating system, even though it has some nice features – what other system has quick access to a powerful BASIC with integrated assembly support?

    Will always wonder what would have happened with the Archimedes if someone like Commodore had taken an interest in it. Would we all be running Armigas instead of IBM PC type computers?

  8. Another interesting concept of Risc-OS, which Apple later re-used as ‘packages’, was the fact that any application was in fact a directory with a ! in front of its name. Hence graphics, libraries etc. could easily be added to the code, and in fact it was possible to create professional apps using the built-in BASIC interpreter.

    I can remember writing a Mandelbrot generator in BASIC which ran circles around the PC version. Later I converted the code to ARM assembly and I could generate full-screen Mandelbrot sets in 3 seconds. In 1987! This was at a time when generating a set was measured in hours.
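
    (Not the commenter’s code, of course, but for anyone who hasn’t written one: a minimal escape-time Mandelbrot loop in C, showing the per-pixel iteration being described.)

    ```c
    #include <stdio.h>

    int main(void)
    {
        const int W = 78, H = 24, MAXIT = 64;
        for (int y = 0; y < H; y++) {
            for (int x = 0; x < W; x++) {
                double cr = -2.0 + 3.0 * x / W;  /* map pixel to the complex plane */
                double ci = -1.2 + 2.4 * y / H;
                double zr = 0.0, zi = 0.0;
                int it = 0;
                while (zr * zr + zi * zi < 4.0 && it < MAXIT) {
                    double t = zr * zr - zi * zi + cr;  /* z = z*z + c */
                    zi = 2.0 * zr * zi + ci;
                    zr = t;
                    it++;
                }
                putchar(it == MAXIT ? '#' : ' ');
            }
            putchar('\n');
        }
        return 0;
    }
    ```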

    Still have my Archimedes!

  9. Pure nostalgia. I worked with an Archimedes as a lab tech in the ’90s, using educational software to show acceleration on an air track and such things to classes. I have to say it all just worked, and in a satisfying way; only a couple of years later I remember my first experiences of computer frustration with serial port connections, crashes and the like on early Windows machines.
