The Flex Computer System: UK’s Forgotten Capability Computer Architecture


During the 1970s many different computer architectures were under development, a good number of them aimed at making computer systems easier and more effective to use. The Flex Machine, developed at the UK Ministry of Defence’s Royal Signals and Radar Establishment (RSRE), was one of them, falling in the category of capability architectures. These architectures required programmable microcode, which meant either custom hardware or a computer system like the Xerox Alto-inspired ICL PERQ (pictured). What’s interesting about Flex is that it didn’t just remain a quaint 1980s footnote but, as detailed by [Martin C. Atkins] – who worked on the system – evolved into the Ten15 system, which was later renamed TenDRA.

Capability architectures have a long history – including the Intel iAPX 432 and more recent implementations – but what they all have in common is that they effectively implement an object-based memory architecture, rather than the low-level, flat memory space that we usually see in computer systems. These object-based capabilities, as they were termed, provide a level of memory protection and security that would be hard to implement otherwise. The book Capability-Based Computer Systems by [Henry M. Levy] is a good introduction here.
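
To make this concrete, here’s a minimal sketch – in Java, purely as a software analogy, with every name invented for the example – of what a capability is: an unforgeable reference that bundles an object with the rights its holder has over it, so that every access is checked against those rights rather than against a flat address space.

    // Illustrative sketch of a capability, not actual Flex code: an
    // unforgeable reference bundling a memory segment with access rights.
    public final class Capability {
        private static final int READ = 1, WRITE = 2;

        private final byte[] segment; // the object this capability designates
        private final int rights;     // what the holder may do with it

        private Capability(byte[] segment, int rights) {
            this.segment = segment;
            this.rights = rights;
        }

        // Only the allocator can mint a fresh capability.
        public static Capability allocate(int size) {
            return new Capability(new byte[size], READ | WRITE);
        }

        // Derive a weaker capability to hand to less-trusted code.
        public Capability readOnly() {
            return new Capability(segment, READ);
        }

        // Every access is checked against the rights the reference carries.
        public byte load(int offset) {
            if ((rights & READ) == 0) throw new SecurityException("no read right");
            return segment[offset]; // bounds-checked, so no stray accesses
        }

        public void store(int offset, byte value) {
            if ((rights & WRITE) == 0) throw new SecurityException("no write right");
            segment[offset] = value;
        }
    }

On a capability machine the equivalent checks happen in hardware on every access, and – unlike a C pointer – a capability cannot be conjured up from an integer.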

Detailed information on the Flex System is somewhat scattered, with much of it surviving only in scans of the original paper documentation, such as this introduction to the Flex Computer System from 1979 and the documentation on the instruction set and firmware architecture from 1981.

The TenDRA project is, as described on the project page, a ‘compiler framework for architecture neutral representation of compiled programs’. The Wikipedia entry for the TenDRA Compiler describes it as a C/C++ compiler for POSIX-compatible operating systems, now split across two projects: Ten15 and TenDRA. TenDRA compiles to the Architecture Neutral Distribution Format (ANDF) as its intermediate format, an Open Software Foundation definition based on the Ten15 Distribution Format, continuing the Flex System legacy.

Although the Flex System and TenDRA are not well known, and ANDF never got much traction in a space ruled by .NET, the JVM, and others, one could argue that this is still a relevant and very much alive set of technologies today.

Thanks to [gnif] for the tip.

Top image: Two ICL PERQ 1 workstation computers, Department of Computer Science, North Machine Hall, James Clerk Maxwell Building, University of Edinburgh. (Credit: J. Gordon Hughes)

19 thoughts on “The Flex Computer System: UK’s Forgotten Capability Computer Architecture”

  1. At the Large Scale Systems Museum in Pittsburgh, we’ve got a PERQ that a few of our volunteers are working on getting running again. It’s definitely a “unique” CPU design, which makes it confusing to debug.

    They also used a Z80 as a dedicated CPU just for the I/O controller, which makes for a really interesting box.

  2. >object-based memory architecture

    That sounds like performance issues and obscure bugs lurking beneath the surface, because programmers don’t have access to information about the physical memory mapping – or, in the worst case, about how much memory is being used and where it lives.

    1. Early capability systems and descriptor architectures tended to require explicit allocation, even if it followed a stack discipline.

      That didn’t prevent people from later figuring out how to do resource accounting correctly in capability systems. KeyKOS for the System/370 (the author has some great technical detail on cap-lore.com) introduced the concept of the SpaceBank, a way to safely pass around the authority to allocate, and those systems that care about resource accounting still use some form of this pattern today.
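
      Roughly, the pattern looks like this – a hypothetical Java sketch of the idea, not KeyKOS’s actual interface: a bank is itself a capability, allocation draws against its quota, and handing out a sub-bank delegates a bounded slice of your own authority.

        // Hypothetical sketch of the SpaceBank pattern, not the real KeyKOS API.
        interface SpaceBank {
            byte[] allocate(int bytes);         // fails once the quota is exhausted
            SpaceBank subBank(long quotaBytes); // delegate a bounded slice of authority
            void revoke();                      // reclaim everything this bank handed out
        }

      Because a client can only allocate through a bank it was explicitly given, the system can always account for – and revoke – whatever that client holds.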

  3. RSRE developed the Viper processor, the first to be formally proven correct from architecture down to silicon implementation. A company disputed that claim and started litigation, but it was liquidated before the case came to court, which unfortunately meant a court never had to make a decision about mathematical proofs. Having said that, apparently no errors were ever found.

    There was an associated safety-critical language, “NewSpeak”, so named because it would be impossible to express an incorrect thought. I’ve no idea of the details, but it looks like some of its features have been incorporated into newer languages.

    https://en.wikipedia.org/wiki/VIPER_microprocessor
    See chapter 7 of
    http://monoskop.org/images/4/49/MacKenzie_Donald_Knowing_Machines_Essays_on_Technical_Change.pdf

    1. >so named because it would be impossible to express an incorrect thought.

      The 1984 version was about obscuring and changing the meaning of language so that it couldn’t support abstract thinking. Every thought is reduced to a simple term, usually through contraction, which then loses all complex meaning once people are made to forget, through censorship of the past, what idea it originally represented. The meaning of the term is replaced with whatever the Party wants it to mean.

      The idea was not so much to prevent the expression of incorrect thoughts as to prevent you from thinking them in the first place, because you lack the mental language to even process the concepts. That plot point was built on the Sapir–Whorf hypothesis, which suggests that our experience of the world is shaped by our language (linguistic determinism).

      As a computer language, it would be something that not only doesn’t parse illegal commands, but downright lacks any functionality which could lead to illegal operation. E.g. if you could cause a buffer overflow by referring to an illegal memory address, instead of catching the error, the language would simply remove your ability to refer to any memory address. If you could make the program get stuck in a loop, the language would remove your ability to form loops… etc.

      If the language was that strict about it, it would have been almost completely useless.

      1. From the VIPER microprocessor wiki article (linked by [Tom] above):
        “A safety critical programming language named Newspeak was designed by Ian Currie of RSRE in 1984 for use with VIPER. Its principal characteristic was that all exceptional behaviour in programs must be dealt with at compile time.”

        I guess that is something akin to Java’s (et al.) mandatory exception handling.
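
        In Java that looks like this – the compiler refuses to build the program until the exceptional path is either handled or re-declared:

          import java.io.FileReader;
          import java.io.IOException;

          class Checked {
              // The throws clause makes the failure mode part of the signature.
              static int firstByte(String path) throws IOException {
                  try (FileReader r = new FileReader(path)) {
                      return r.read();
                  }
              }

              public static void main(String[] args) {
                  try {
                      System.out.println(firstByte("/etc/hostname"));
                  } catch (IOException e) { // delete this handler and it won't compile
                      System.err.println("unreadable: " + e);
                  }
              }
          }

        From the description above, Newspeak went further: rather than merely forcing you to write a handler, the exceptional behaviour had to be dealt with (ruled out) at compile time.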

          1. The concept of numbers or quantities doesn’t strictly require number words. You could count with your fingers and toes, and develop whole counting systems that way. It’s just that counting and numbers are not innate, so obviously if you’ve never counted anything you will struggle with the concept.

          2. @Dude: counting and numbers do actually seem to be innate. Humans are not the only animals that count, and nobody is teaching the other animals that have been seen counting.

        1. The way you get around the hypothesis is to note that we don’t think solely in spoken language – we have access to abstract thinking in all information domains, so what you can’t think of in words you could think of in pictures, emotions, gestures, etc., which may be less efficient or precise, but still usable.

      2. There are very useful languages that can’t express an infinite loop – quite a lot of the mathematics community programs in Agda, which is a total language.

        The property of capability safety sits between memory safety and referential transparency, fwiw. It lets you understand which modules have access to which functionality: exactly the functionality that has been passed to them. If we built module systems this way today, most of our supply-chain attacks would be a lot harder to pull off.
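
        A tiny invented Java example of that module discipline: the module’s constructor receives exactly the capabilities it needs, and it has no ambient way to reach anything else.

          // Invented example of capability discipline at module granularity.
          interface Appender { void append(String line); }

          final class AuditLog {
              private final Appender sink; // this module's entire authority

              AuditLog(Appender sink) { this.sink = sink; }

              void record(String event) {
                  sink.append(java.time.Instant.now() + " " + event);
              }
              // No file paths, no network, no globals: a compromised AuditLog
              // can do no more than append lines to the sink it was handed.
          }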

  4. Wasn’t the Perq from Three Rivers Computer Corporation in Pittsburgh, PA? I used one at CMU in the mid-eighties, but never heard of an ICL connection. Classic bit-sliced microcode machine…

  5. I have a paper copy of RSRE Report 87014, “The Viper Microprocessor”, by J. Kershaw.
    This describes the Viper processor and an assembler called Vista. Viper was to be fabricated in silicon-on-sapphire (SOS) by Marconi Electronic Devices, where I was involved in ECAD software development.

  6. Thank you, Maya, for introducing us to this architecture. I had not heard of it, since my introduction to the history of object-capability systems was also mostly Hank Levy’s fantastic book (and though Norm Hardy’s website cap-lore.com is not primarily about capability _hardware_, it also collects some fantastic historical references). I did know that RSRE also had a Plessey System 250, a very elegant UK capability machine from 1972. I’m looking forward to finding more details about this Flex/TenDRA system though!

    I’m a capability theorist, working on operating systems + web.

    Some points you (and readers) may find useful:

    Many capability architectures, including many early ocap systems, were not microcoded. Some had tag decoding wired directly into hardware, such as the B6500 and IBM 801. The reason this is important is that a capability-based MMU can often be faster than a hierarchical page-table-walking MMU, as all of the access control can be carried as part of the unboxed capability as represented in registers, or it can live in the page table entry if you don’t have extra bits to spare. For example, while not a capability system itself, the insight that made the MIT CADR performant despite its typed memory is an example of the first style; and the fastest kernel today, seL4, implements the second pattern.
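
    To illustrate the fast-path point with a software cartoon (in Java, not any real machine’s check): because bounds and rights ride along in the capability itself, validating an access is a few compares on register-resident values, with no memory traffic for a table walk.

      // Software cartoon of a capability access check: everything needed is
      // in the capability (base, length, perms), so no page-table walk occurs.
      final class CapCheck {
          static final int PERM_READ = 1, PERM_WRITE = 2;

          static boolean allows(long base, long length, int perms,
                                long addr, int size, int wanted) {
              return (perms & wanted) == wanted    // rights check
                  && addr >= base                  // lower bound
                  && addr + size <= base + length; // upper bound
          }
      }

    A TLB miss on a conventional MMU, by contrast, has to fetch several levels of page table from memory before it can make the same decision.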

    I think we can understand that in the days of LSI, microcode was probably the _most cost-effective_ way to implement a capability system.

    Many modern capability systems appear to use capabilities for memory protection only, but early capability systems used unforgeable references for interacting with hardware, scheduling, and inter-process communication. I only point this out because a lot of CPU designs that use statistical methods for pointer tagging are getting compared to capability architectures lately, yet they are a much weaker model. There is such a rich history to draw from, and these designs often perform better than existing protection methods.

    1. I didn’t know about the Plessey 250 (thanks), but I do know that RSRE asked Logica to build them a capability machine to RSRE’s specification. This was the Flex hardware that was used at RSRE before the Perq. I was also told that the Flex mainframe “became” a secure communications processor, and so I infer that the Plessey 250 was fairly closely related to the RSRE Flex machine.

      I agree very much with this point about “statistical methods for pointer tagging”. A modern counter-example is the Cambridge CHERI architecture extension, which provides full capabilities. The derived CHERIoT processor is also moving towards using capabilities for more than “just” memory protection.
      Both are interesting, in active development, and should be easy to find with Google!

  7. As the Martin Atkins referred to above, I’d like to thank Maya for the reference to my webpages. However, I see that they are embarrassingly out-of-date, and incomplete. I will try to address this!

    I would also like to point out that I used Flex when working on a Ten15 sub-project, but didn’t contribute anything (except awe!) to the development of Flex itself.
    The Perq was indeed from Three Rivers; ICL re-badged it and ported Unix to it for the UK Science and Engineering Research Council (SERC).
