If I mention nuclear reactor accidents, you’d probably think of Three Mile Island, Fukushima, or maybe Chernobyl (or, now, Chornobyl). But there have been others that, for whatever reason, aren’t as well publicized. Did you know there is an International Nuclear Event Scale? Like the Richter scale, but for nuclear events. A zero on the scale is a little oopsie. A seven is like Chernobyl or Fukushima, the only two events at that level so far. Three Mile Island and the event you’ll read about in this post were both level five events. That other level five event? The Windscale fire of October 1957.
If you imagine this might have something to do with the Cold War, you are correct. It all started back in the 1940s. The British decided they needed a nuclear bomb and started their own version of the Manhattan Project, code-named “Tube Alloys.” But in 1943, they decided to merge the project with the American program.
The British, rightfully so, saw themselves as co-creators of the first two atomic bombs. However, amid postwar paranoia, the United States shut down all cooperation on atomic secrets with the 1946 McMahon Act.
We Are Not Amused
The British were not amused, and knew that to secure a future seat at the world table, Britain would need to develop its own nuclear capability, so it resurrected Tube Alloys. If you want a detour into the history of Britain’s bomb program, the BBC has a video for you that you can see below.
We’ll be honest. If you had told us a few decades ago that we’d teach computers to do what we want, that it would only work some of the time, and that we wouldn’t really be able to explain or predict exactly what they were going to do, we’d have thought you were crazy. Why not just get a person? But the dream of AI goes back to the earliest days of computers, or even further if you count Samuel Butler’s 1863 letter musing on machines evolving into life, a theme he would revisit in his 1872 book Erewhon.
Of course, early real-life AI was nothing like you wanted. Eliza seemed pretty conversational, but you could quickly confuse the program. Hexapawn learned how to play an extremely simplified version of chess, but you could just as easily teach it to lose.
But the real AI work that looked promising was the field of expert systems. Unlike our current AI friends, expert systems were highly predictable. Of course, like any computer program, they could be wrong, but if they were, you could figure out why.
Experts?
As the name implies, expert systems drew from human experts. In theory, a specialized person known as a “knowledge engineer” would work with a human expert to distill his or her knowledge down to an essential form that the computer could handle.
This could range from the simple to the fiendishly complex, and if you think it was hard to do well, you aren’t wrong. Before we get into the details, the short example below will help you see how it works.
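To make that concrete, here is a minimal sketch of a forward-chaining rule engine in Python. The rules and facts are invented for illustration; real systems like MYCIN used far richer rule languages, but the flavor is the same: facts go in, rules fire, and every conclusion can be traced back to the rules that produced it.

```python
# A toy forward-chaining rule engine. Each rule pairs a set of conditions
# with a conclusion; inference keeps firing rules until nothing new can
# be concluded. The rule content here is invented for illustration.
RULES = [
    ({"engine cranks", "no spark"}, "suspect ignition coil"),
    ({"engine cranks", "spark ok", "no fuel at rail"}, "suspect fuel pump"),
    ({"suspect ignition coil", "coil tests bad"}, "replace ignition coil"),
]

def infer(facts):
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                print(f"rule fired: {sorted(conditions)} -> {conclusion}")
                facts.add(conclusion)
                fired = True
    return facts

infer({"engine cranks", "no spark", "coil tests bad"})
```

Unlike today’s neural networks, when a program like this reaches a wrong conclusion, the trace of fired rules shows exactly which piece of captured expert knowledge to blame.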
These days, most of the media we consume is digital. We still watch movies and TV shows, but they’re all packaged in digital files that cram in many millions of pixels and as many audio channels as we could possibly desire.
Back in the day, though, engineering limitations meant that media on film or tape were limited to analog stereo audio at best. And yet, the masterminds at Dolby were able to create a surround sound format that could operate within those very limitations, turning two channels into four. What started out as a cinematic format would bring surround sound to the home, all the way back in 1982!
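The trick is a 4:2:4 matrix: center and surround are folded into the left/right pair at reduced gain, with the surround in anti-phase, and the decoder pulls them back out of the sum and difference. Below is a rough Python sketch of that idea, not Dolby’s actual circuit; the real system also phase-shifts the surround by 90 degrees and delays and bandlimits it on playback, all of which we skip here.

```python
import math

G = 1 / math.sqrt(2)  # -3 dB, so the folded-in channels don't overload the track

def encode(left, center, right, surround):
    """Fold four channels (L, C, R, S) into a stereo pair (Lt, Rt)."""
    lt = left + G * center + G * surround
    rt = right + G * center - G * surround  # surround rides in anti-phase
    return lt, rt

def decode(lt, rt):
    """Passive matrix decode back to four channels."""
    center = G * (lt + rt)    # in-phase content emerges from the sum
    surround = G * (lt - rt)  # anti-phase content emerges from the difference
    return lt, center, rt, surround

lt, rt = encode(left=0.0, center=1.0, right=0.0, surround=0.0)
print(decode(lt, rt))  # center comes back, but note it leaks into L and R
```

That leakage is the passive decoder’s weakness, and it is what later active decoders like Pro Logic steer against.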
Full marks for clarity of message. Credit: Euro Route materials
When the Channel Tunnel opened in 1994, the undersea rail link brought Britain closer to the European mainland than ever before. However, had things gone a little differently, history might have taken a very different turn. Among the competing proposals for a fixed Channel crossing was a massive bridge, a scheme so audacious that fate would never allow it to come to fruition.
Forget the double handling involved in putting cars on trains and doing everything by rail. Instead, the aptly-named Euro Route proposed that motorists simply drive across the Channel, perhaps stopping for duty-free shopping in the middle of the sea along the way.
Back in 2016, we took you to a collection of slightly dilapidated prefabricated huts in the English Home Counties, and showed you a computer. The place was the National Museum of Computing, next to the famous Bletchley Park codebreaking museum, and the machine was their reconstruction of Colossus, the world’s first programmable, fully electronic digital computer. Its designer was a telephone engineer named Tommy Flowers, and the Guardian has a piece detailing his efforts in its creation.
TNMOC’s Colossus MkII.
It’s a piece written for a non-technical audience, so you’ll have to forgive it glossing over some of the more interesting details, but nevertheless it sets out to dispel a long-held myth that the machine was instead the work of the mathematician Alan Turing. Flowers led the research department at the British Post Office, which ran the country’s telephone system, and was instrumental both in proposing the use of electronic switches in computing and in producing a working machine. The connection is obvious when you see Colossus, as its racks are the same as those used in British telephone exchanges of the era.
Over on YouTube, [The Modern Rogue] created an interesting video showing a slide-rule-like encryption device called the Réglette. This was a hardware implementation of a Vigenère-like cipher, technically referred to as a manual polyalphabetic substitution cipher. The device requires no batteries, is fully waterproof, daylight-readable, and easy to pack, making it really useful if you find yourself in a muddy trench in the middle of winter during a world war. Obviously, because it’s a slide rule.
Anyway, so how does this cipher work? Well, the ‘polyalphabetic’ bit implies the need for a key phrase, which is indeed the first thing all parties need to agree upon. Secondly, a number is required as a reference point. As you can see from the video, the sliding part of the device has letters of the alphabet, as well as numbers and a special symbol. The body has two series of numbers, with the same spacing as the central, sliding part. A second copy of the sliding part is also needed to slide in behind the first unit. This second copy is neatly stowed below the body during storage.
With each message letter, you look up the corresponding ciphertext number, then shift the slider to the next key phrase letter.
The cipher works by first aligning the starting letter of the (variable-length) key phrase with the reference number. Next, encode the first symbol from the cleartext message (the thing you want to encrypt): simply look up the letter on the slide and read off either of the numbers next to it. Randomly selecting the left or right set adds an extra bit of strength to the code due to increased entropy. That number is the first symbol of your ciphertext (the thing you want to transmit to the receiver). Then move on to the next symbol in the cleartext message: align the next letter of the key phrase with the reference number, look up the next letter of the message, and send the corresponding number. When you run out of key phrase letters, you loop back to the start, and the cycle repeats.
The special symbol we mentioned earlier is not really a ‘blank’; it is a control symbol used to transmit a new reference number with the existing setup. To change the reference number, the blank character is encoded and sent, followed by the new reference number. When the blank symbol is received at the other end, the following code is used as the reference number, and the key phrase position is reset to point back to the first letter, restarting the cycle anew. Simple, yes. Effective? Well, not really by modern standards, but in an era of limited computing power (i.e. pen and paper, perhaps a mechanical calculator at best), it would have been sufficient for some uses for a couple of decades.
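Here’s the procedure as a Python sketch. Be warned that the geometry is our assumption, not taken from the device: we model the slide as 27 symbols (A–Z plus the control symbol) and the two body rows as the numbers 1–27 and 28–54, and we leave out the mid-message re-keying for brevity. The real Réglette’s tracks may differ.

```python
import random

# Simplified model: 27 slide symbols, '*' standing in for the control symbol.
SYMBOLS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"
N = len(SYMBOLS)

def body_position(symbol, key_letter, ref):
    """Where a slide symbol sits on the body once the current key
    phrase letter has been aligned with the reference number."""
    return (SYMBOLS.index(symbol) - SYMBOLS.index(key_letter) + ref) % N

def encrypt(message, key, ref):
    numbers = []
    for i, ch in enumerate(message):
        k = key[i % len(key)]          # advance one key letter per symbol
        p = body_position(ch, k, ref)
        numbers.append(p + 1 + random.choice((0, N)))  # pick either row
    return numbers

def decrypt(numbers, key, ref):
    text = []
    for i, n in enumerate(numbers):
        k = key[i % len(key)]
        p = (n - 1) % N                # both rows collapse to one position
        text.append(SYMBOLS[(p - ref + SYMBOLS.index(k)) % N])
    return "".join(text)

codes = encrypt("ATTACKATDAWN", "LEMON", ref=5)
print(codes)                           # differs run to run (row choice)
print(decrypt(codes, "LEMON", ref=5))  # ATTACKATDAWN
```

Run it twice and the ciphertext differs even for the same message and key, which is exactly the extra entropy the two number rows buy you.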
Why is this Vigenère-like? Well, an actual Vigenère cipher maps letters to other letters, but the Réglette maps them to randomly selected numbers, adding entropy, and includes the control code to allow changing the cipher parameters mid-message. This makes it harder to attack; the original Vigenère was considered first-rate cryptography for centuries.
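For contrast, a classic Vigenère fits in a few lines: each letter maps straight to one letter, with no choice of output symbol and no re-keying.

```python
def vigenere(msg, key):
    # Shift each letter by the corresponding key letter, repeating the key.
    return "".join(chr((ord(c) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
                   for i, c in enumerate(msg))

print(vigenere("ATTACKATDAWN", "LEMON"))  # LXFOPVEFRNHR
```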
If you’d like to play along at home and learn some other simple ciphers, check this out. Kings and queens of old frequently used cryptography, including the famous Mary, Queen of Scots. Of course, we simply can’t close out an article on cryptography without mentioning the Enigma machine. Here’s one built out of Meccano!
The physical layout of the SCHEME-78 LISP-based microprocessor by Steele and Sussman. (Source: ACM, Vol 23, Issue 11, 1980)
During the AI research boom of the 1970s, the LISP language – from LISt Processor – saw a major surge in use and development, including many dialects being developed. One of these dialects was Scheme, developed by [Guy L. Steele] and [Gerald Jay Sussman], who wrote a number of articles that were published by the Massachusetts Institute of Technology (MIT) AI Lab as part of the AI Memos. This subset, called the Lambda Papers, covers both men’s ideas about lambda calculus and its application to LISP, culminating in the 1980 paper on the design of a LISP-based microprocessor.
Scheme is notable here because it influenced the development of what would be standardized in 1994 as Common Lisp, which is what can be called ‘modern Lisp’. The idea of creating dedicated LISP machines, driven by the processing requirements of AI systems, was not a new one. The mismatch between LISP’s S-expressions and the way assembly language drove the CPUs of the era led to the development of CPUs with dedicated hardware support for LISP.
The design described by [Steele] and [Sussman] in their 1980 paper, as featured in the Communications of the ACM, features an instruction set architecture (ISA) that matches the LISP language more closely. As described, it is effectively a hardware-based LISP interpreter, implemented in a VLSI chip, called the SCHEME-78. By moving as much of the interpreter as possible into hardware, performance is much improved. This is somewhat like how today’s AI boom is based around dedicated vector processors that excel at inference, unlike generic CPUs.
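To get a feel for what ‘a hardware-based LISP interpreter’ means, here is the software equivalent in miniature. This is purely our illustrative analogy, not the SCHEME-78 design itself: an evaluator that walks S-expressions directly instead of compiling them down to a conventional instruction stream. SCHEME-78 casts this kind of eval/apply loop into silicon.

```python
# A miniature S-expression evaluator (lambda, if, and application only).
def evaluate(expr, env):
    if isinstance(expr, str):       # a variable reference
        return env[expr]
    if not isinstance(expr, list):  # a self-evaluating atom, e.g. a number
        return expr
    op, *args = expr
    if op == "lambda":              # (lambda (params) body) -> a closure
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    if op == "if":                  # (if test then else)
        test, then, other = args
        return evaluate(then if evaluate(test, env) else other, env)
    fn = evaluate(op, env)          # otherwise: evaluate operator, then apply
    return fn(*[evaluate(a, env) for a in args])

env = {"+": lambda a, b: a + b}
# ((lambda (x) (+ x 1)) 41)  =>  42
print(evaluate([["lambda", ["x"], ["+", "x", 1]], 41], env))
```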
During the 1980s, LISP machines began to integrate more and more hardware features, with the Symbolics and LMI systems featuring heavily. Later, these systems also began to be marketed towards non-AI uses like 3D modelling and computer graphics. However, as funding for AI research dried up and commodity hardware began to outpace specialized processors, these systems vanished.
Top image: Symbolics 3620 and LMI Lambda Lisp machines (Credit: Jason Riedy)