Back in 2016, we took you to a collection of slightly dilapidated prefabricated huts in the English Home Counties, and showed you a computer. The place was the National Museum of Computing, next to the famous Bletchley Park codebreaking museum, and the machine was their reconstruction of Colossus, the world’s first fully electronic digital computer. Its designer was a telephone engineer named Tommy Flowers, and the Guardian has a piece detailing his efforts in its creation.
TNMOC’s Colossus MkII.
It’s a piece written for a non-technical audience so you’ll have to forgive it glossing over some of the more interesting details, but nevertheless it sets out to dispel the long-held myth that the machine was the work of the mathematician Alan Turing. Flowers led the research department at the British Post Office, which ran the country’s telephone system, and was instrumental both in proposing the use of electronic switches in computing, and in producing a working machine. The connection is obvious when you see Colossus, as its racks are the same as those used in British telephone exchanges of the era.
Over on YouTube, [The Modern Rogue] created an interesting video showing a slide-rule-like encryption device called the Réglette. This was a hardware implementation of a Vigenère-like cipher, technically referred to as a manual polyalphabetic substitution cipher. The device requires no batteries, is fully waterproof, daylight readable, and easy to pack, making it really useful if you find yourself in a muddy trench in the middle of winter during a world war. Obviously, because it’s a slide rule.
Anyway, so how does this cipher work? Well, the ‘polyalphabetic’ bit implies the need for a key phrase, which is indeed the first thing all parties need to agree upon. Secondly, a number is required as a reference point. As you can see from the video, the sliding part of the device has letters of the alphabet, as well as numbers and a special symbol. The body has two series of numbers, with the same spacing as the central, sliding part. A second copy of the sliding part is also needed to slide in behind the first unit. This second copy is neatly stowed below the body during storage.
With each message letter, you look up the corresponding ciphertext number, then shift the slider to the next key phrase letter.
The cipher works by first aligning the starting letter of the (variable-length) key phrase with the reference number. Next, encode the first symbol of the cleartext message (the thing you want to encrypt): simply look up the letter on the slide and read off either of the two numbers next to it. Randomly choosing the left or right set adds a little extra strength to the code thanks to the increased entropy. That number is the first symbol of your ciphertext (the thing you want to transmit to the receiver). Then move on to the next symbol of the cleartext: align the next letter of the key phrase with the reference number, look up the next message letter, and send the number beside it. When you run out of key phrase letters, you loop back to the start, and the cycle repeats.
The special symbol we mentioned earlier is not really a ‘blank’; it is a control symbol used to transmit a new reference number with the existing setup. To change the reference number, the control symbol is encoded and sent, followed by the new reference number. When it is received at the other end, the code that follows is taken as the new reference number, and the key phrase position is reset to the first letter, restarting the cycle anew. Simple, yes. Effective? Well, not really by modern standards, but in an era of limited computing power (i.e. pen and paper, perhaps a mechanical calculator at best), it would have been sufficient for some uses for a couple of decades.
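To make that procedure concrete, here’s a minimal Python sketch of the scheme as described above. Be warned that the layout is our own guess for illustration: we model the slide as 26 letters plus a ‘*’ control symbol, and the body as two number tracks (0–26 and 27–53), so each letter sits over two equally valid cipher numbers. The real device’s numbering may well differ, and the mid-message reference reset is left out for brevity.

```python
import random

# A sketch of the Réglette-style cipher described above. The layout is assumed:
# the slide holds 26 letters plus a '*' control symbol, and the body carries two
# number tracks (0-26 and 27-53), so every letter sits over two valid numbers.
SLIDE = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"
N = len(SLIDE)  # 27 positions on the slide

def encrypt(message, key, ref):
    numbers = []
    for i, ch in enumerate(message):
        k = SLIDE.index(key[i % len(key)])      # align the next key letter with ref
        num = (SLIDE.index(ch) - k + ref) % N   # read the number under the letter
        if random.random() < 0.5:               # randomly take the other track
            num += N
        numbers.append(num)
    return numbers

def decrypt(numbers, key, ref):
    letters = []
    for i, num in enumerate(numbers):
        k = SLIDE.index(key[i % len(key)])
        letters.append(SLIDE[(num % N - ref + k) % N])  # either track decodes alike
    return "".join(letters)

ct = encrypt("ATTACKATDAWN", "LEMON", ref=7)
print(ct)
print(decrypt(ct, "LEMON", ref=7))  # -> ATTACKATDAWN
```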
Why is this Vigenère-like? Well, a true Vigenère cipher maps letters to other letters, whereas the Réglette maps each letter to one of two randomly chosen numbers, adding entropy, and it includes the control code to change the cipher parameters mid-message. This makes it harder to attack; the original Vigenère was considered first-rate cryptography for centuries.
If you’d like to play along at home and learn some other simple ciphers, check this out. Kings and queens of old frequently used cryptography, including the famous Mary, Queen of Scots. Of course, we simply can’t close out an article on cryptography without mentioning the Enigma machine. Here’s one built out of Meccano!
The physical layout of the SCHEME-78 LISP-based microprocessor by Steele and Sussman. (Source: ACM, Vol 23, Issue 11, 1980)
During the AI research boom of the 1970s, the LISP language – from LISt Processor – saw a major surge in use and development, with many new dialects appearing. One of these dialects was Scheme, developed by [Guy L. Steele] and [Gerald Jay Sussman], who wrote a number of articles that were published by the Massachusetts Institute of Technology (MIT) AI Lab as part of the AI Memos. This subset, called the Lambda Papers, covers both men’s ideas about lambda calculus and its application to LISP, culminating in the 1980 paper on the design of a LISP-based microprocessor.
Scheme is notable here because it influenced the development of what would be standardized in 1994 as Common Lisp, which is what can fairly be called ‘modern Lisp’. The idea of creating dedicated LISP machines, driven by the processing requirements of AI systems, was not a new one. The mismatch between LISP’s S-expressions and the way the CPUs of the era were typically programmed in assembly led to the development of processors with dedicated hardware support for LISP.
The design described by [Steele] and [Sussman] in their 1980 paper, as featured in the Communications of the ACM, features an instruction set architecture (ISA) that matches the LISP language much more closely. As described, it is effectively a hardware-based LISP interpreter, implemented in a VLSI chip called the SCHEME-78. By moving as much of the interpreter as possible into hardware, performance is greatly improved, somewhat like how today’s AI boom is built around dedicated vector processors that handle inference far better than general-purpose CPUs.
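To give a rough feel for the kind of work such a chip takes on, here is a toy Python sketch of the classic eval/apply loop for a tiny Scheme-like subset. It is emphatically not the SCHEME-78 microcode, just an illustration of the tag-dispatching and list-walking that a conventional CPU has to grind through in software and that a LISP machine can support directly.

```python
# A toy evaluator for a tiny Scheme-like subset: numbers, symbols, quote, if,
# lambda, and application. Every step involves inspecting a tag and walking a
# list structure, which is exactly the work a LISP machine accelerates.

def evaluate(expr, env):
    if isinstance(expr, (int, float)):            # self-evaluating atom
        return expr
    if isinstance(expr, str):                     # symbol: look it up in the environment
        return env[expr]
    op, *args = expr                              # otherwise a list (a chain of cons cells)
    if op == "quote":
        return args[0]
    if op == "if":
        test, conseq, alt = args
        return evaluate(conseq if evaluate(test, env) else alt, env)
    if op == "lambda":
        params, body = args                       # close over the current environment
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)                        # application: evaluate the operator...
    return fn(*[evaluate(a, env) for a in args])  # ...then the operands, then apply

env = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
print(evaluate(["*", ["+", 1, 2], 4], env))                        # -> 12
print(evaluate([["lambda", ["x"], ["*", "x", "x"]], 5], env))      # -> 25
```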
During the 1980s, LISP machines began to integrate more and more hardware features, with the Symbolics and LMI systems featuring heavily. Later, these systems were also marketed for non-AI uses like 3D modelling and computer graphics. However, as funding for AI research dried up and commodity hardware began to outpace specialized processors, these systems vanished.
Top image: Symbolics 3620 and LMI Lambda Lisp machines (Credit: Jason Riedy)
If you want to print, say, a book, you will probably type it into a word processor. Someone else will take your file and produce pages on a printer. Your words will directly control a laser beam or something similar to put words on paper. But for a long time, printing meant creating some physical representation of what you wanted to print that could stamp an imprint on a piece of paper.
The process of carving something out of wood or some other material to stamp out printing is very old. But the revolution came when the Chinese and, later, Europeans realized it would be more flexible to make individual symbols that you could assemble texts from. Moveable type. The ability to mass-produce books and other written material had a huge influence on society.
But there is one problem. A book might have hundreds of pages, and each page has hundreds of letters. Someone has to find the right letters, put them together in the right order, and bind them together in a printing press’ chase so it can produce the page in question. Then you have to take it apart again to make more pages. Well, if you have enough type, you might not have to take it apart right away, but eventually you will.
The overall theme of the early part of the Cold War was that of subterfuge — with scientific missions often providing excellent cover for placing missiles right on the USSR’s doorstep. Recently, NASA rediscovered Camp Century while testing an airplane-based synthetic aperture radar instrument (UAVSAR) over Greenland. Although it was established on the surface in 1959 as a polar research site, and actually produced good science from e.g. ice core samples, beneath this benign facade lay the secretive Project Iceworm.
By 1967 the base had to be abandoned due to the shifting ice, which would eventually bury the site under more than 30 meters of ice. Before that, the scientists tested out the PM-2A small modular reactor, which not only provided 2 MW of electrical power and heat to the base, but was itself subjected to various experiments. Alongside this public face, Project Iceworm sought to set up a network of mobile nuclear missile launch sites for Minuteman missiles. These would be located below the ice sheet, capable of surviving a first-strike scenario by the USSR. A lack of Danish permission, among other complications, led to the project eventually being abandoned.
It was this base that popped up during the NASA scan of the ice bed. Although it was thought that the crushed remains would be safely entombed, it’s estimated that by the year 2100 global warming will have led to the site being exposed again, including the thousands of liters of diesel and tons of hazardous waste that were left behind back in 1967. The positive news here is probably that with this SAR instrument we can keep much better tabs on the condition of the site as the ice cap continues to grind it into a fine paste.
Top image: Camp Century in happier times. (Source: US Army, Wikimedia)
At 5:20 PM on November 9, 1965, the Tuesday rush hour was in full bloom outside the studios of WABC in Manhattan’s Upper West Side. The drive-time DJ was Big Dan Ingram, who had just dropped the needle on Jonathan King’s “Everyone’s Gone to the Moon.” To Dan’s trained ear, something was wrong with the sound, as if the turntable speed was drifting — sometimes running at the usual speed, sometimes running slow. But being a pro, he carried on with his show, injecting practiced patter between ad reads and Top 40 songs, cracking a few jokes about the sound quality along the way.
Within a few minutes, with the studio cart machines now suffering a similar fate and the lights in the studio flickering, it became obvious that something was wrong. Big Dan and the rest of New York City were about to learn that they were on the tail end of a cascading wave of power outages that started minutes before at Niagara Falls before sweeping south and east. The warbling turntable and cartridge machines were just a leading indicator of what was to come, their synchronous motors keeping time with the ever-widening gyrations in power line frequency as grid operators scattered across six states and one Canadian province fought to keep the lights on.
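The physics behind the warble is straightforward: a synchronous motor turns in lockstep with the line frequency, so a deck designed for 60 Hz plays proportionally slow as the grid sags. As a back-of-the-envelope illustration in Python (the sag values below are hypothetical, not measurements from that evening):

```python
import math

# Rough illustration only: a synchronous turntable motor designed for 60 Hz
# plays at (f / 60) of its intended speed as the line frequency f sags.

def playback(line_hz, nominal_rpm=45.0, design_hz=60.0):
    rpm = nominal_rpm * line_hz / design_hz
    semitones = 12 * math.log2(line_hz / design_hz)   # audible pitch shift
    return rpm, semitones

for f in (60.0, 59.0, 57.0):                          # hypothetical sag values
    rpm, st = playback(f)
    print(f"{f:4.1f} Hz: {rpm:5.2f} RPM, {st:+.2f} semitones")
```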
They would fail, of course, with the result that 30 million people across 80,000 square miles (207,000 km²) were plunged into darkness. The Great Northeast Blackout of 1965 was underway, and when it wrapped up a mere thirteen hours later, it left plenty of lessons about how to engineer a safe and reliable grid, lessons that still echo through the power engineering community 60 years later.
These days, we take it for granted that you can connect a cheap piece of hardware to a microcontroller and have an amazing debugging experience. Stop the program. Examine memory and registers. You can see and usually change anything. There are only a handful of ways this is done on modern CPUs, and they vary only in the details. But this wasn’t always the case. Getting that kind of view into an actual running system used to be an expensive proposition.
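For a sense of what that cheap, comfortable experience looks like in practice, here is a short sketch using the pyOCD library with an ARM Cortex-M target behind a SWD/JTAG probe. pyOCD is just one example of this kind of tooling, picked for illustration, and the address shown is arbitrary.

```python
# A minimal sketch of a modern on-chip debug session, assuming an ARM Cortex-M
# target attached through a SWD/JTAG probe and the pyOCD library. The RAM
# address used below is only an example.
from pyocd.core.helpers import ConnectHelper

with ConnectHelper.session_with_chosen_probe() as session:
    target = session.board.target

    target.halt()                                    # stop the program
    pc = target.read_core_register("pc")             # peek at the program counter
    sp = target.read_core_register("sp")
    print(f"halted at pc=0x{pc:08x}, sp=0x{sp:08x}")

    ram = target.read_memory_block32(0x20000000, 8)  # examine some RAM
    print([hex(w) for w in ram])

    target.write_core_register("r0", 0)              # ...and change state if we like
    target.resume()                                  # let the program run again
```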
Today, you typically have some serial interface, often JTAG, and enough hardware in the IC to communicate with a host computer to reveal and change internal state, set breakpoints, and the rest. But that wasn’t always easy. In the bad old days, transistors were large and die were small. You couldn’t afford to add little debugging pins to each processor you produced.
This led to some very interesting workarounds. Of course, you could always run simulators on a larger computer. But that might not work in real time, and almost certainly didn’t have all the external things you wanted to connect to, unless you also simulated them.