Listening To The Sounds Of A 1960s Military Computer

Restoring vintage computers is a favorite pastime of many hardware hackers. Retrocomputing probably makes you think of home computer brands like Commodore, Amiga, or Apple, but [Erik Baigar] is deeply into collecting early military computers from the UK-based Elliott company. Earlier this year he made a detailed video showing how he successfully brought an Elliott 920M from the 1960s back to life.

It is quite amazing that Elliott managed to fit their 1960s computer into a shoebox-sized footprint. As computers had not yet settled on the common 8-bit word size back then, the Elliott 900 series are rather exotic 18-bit or 12-bit machines. The 920M was used as a guidance computer for European space rockets in the 1960s and ’70s, but also for navigational purposes in fighter jets until as late as 2010.

Opening up the innards of this machine reveals some exotic quirks of early electronics manufacturing. The logic modules contain multilayer PCBs whose components were welded rather than soldered onto thin sheets of Mylar foil, which were then potted in Araldite.

To get the computer running, [Erik Baigar] first had to recreate the custom connectors using a milling machine. He then used an Arduino to simulate a paper tape reader and load programs into the machine. An interesting hack is how he makes memory reads and writes audible by simply placing a radio next to the machine. [Erik Baigar] finishes off his demonstration of the computer by running some classic BASIC games like tic-tac-toe and a maze creator.
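
If you are wondering what such a tape-reader stand-in might look like in code, here is a minimal Arduino-style sketch of the general idea. The pin assignments, handshake, and tape contents are purely illustrative assumptions, not [Erik Baigar]'s actual wiring or the real Elliott interface.

    const uint8_t DATA_PINS[8] = {2, 3, 4, 5, 6, 7, 8, 9};  // tape data bits 0..7
    const uint8_t REQ_PIN   = 10;   // computer requests the next character here (assumption)
    const uint8_t READY_PIN = 11;   // we strobe this when the character is valid (assumption)

    // A tiny stand-in for a punched tape; a real loader tape is much longer.
    const uint8_t tape[] = {0x15, 0x00, 0x20, 0x7F, 0x04};
    size_t pos = 0;

    void setup() {
      for (uint8_t i = 0; i < 8; i++) pinMode(DATA_PINS[i], OUTPUT);
      pinMode(REQ_PIN, INPUT);
      pinMode(READY_PIN, OUTPUT);
      digitalWrite(READY_PIN, LOW);
    }

    void loop() {
      if (digitalRead(REQ_PIN) == HIGH && pos < sizeof(tape)) {
        uint8_t ch = tape[pos++];
        for (uint8_t i = 0; i < 8; i++)          // put one character on the bus
          digitalWrite(DATA_PINS[i], (ch >> i) & 1);
        digitalWrite(READY_PIN, HIGH);           // pulse "data valid"
        delayMicroseconds(100);
        digitalWrite(READY_PIN, LOW);
        while (digitalRead(REQ_PIN) == HIGH) {}  // wait for the request to drop
      }
    }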

If you would like to code your own BASIC programs on more modern hardware, you should check out this BASIC interpreter for the Raspberry Pi Pico.

Video after the break.

 

44 thoughts on “Listening To The Sounds Of A 1960s Military Computer”

      1. Did you know there was a military version of the 920 (16-bit word length) that was used in tanks and later by the Dutch Air Force? I know, because I wrote software for it between 1962 and 1966.

    1. There was a technology that competed with wire-wrap that involved spot welding the wire to pads. The boards and equipment were expensive, but it was much faster than getting a PCB made or doing wire-wrap. I can’t recall the name, but a company that made it ran full-page ads in the electronics magazines of the 1970s and early ’80s.

      1. I don’t know if we’re talking about the same technology, but what you say reminds me of a company competing against wire-wrap that had an automated machine which strung wire between points on a board (I don’t remember whether it was welded or soldered or what); the board was subsequently potted, making a kind of thick circuit board. Their big selling point was rapid prototyping.

        1. That is probably a different technology. The (also thick, around 3/8″) mainboards of the military/space-borne 920M indeed consist of 20-40 Mylar sheets (I have not disassembled one to count) with the nickel traces on them, which are stacked on top of each other and hidden in a white housing. Pins stick out where the modules sit, with the module pins running parallel to the mainboard pins and the two wrapped together. Here you can see some wrappings removed (pins parallel) and some still wrapped together: http://www.baigar.de/TornadoComputerUnit/920M-33Funleashed.jpg

    2. Some materials are easier to weld than to solder, and in a case like this crimping is not possible. Take a look around the 8:20 mark to see which components they were talking about.

      I see welded leads on old components all the time. They tend to be things like old tantalum capacitors that have nickel leads welded on to allow them to be soldered.

    3. The Apollo computer developed at MIT used welded connections from the signal leads of very basic logic gates to X-Y wire grids in modules. The wire grids were separated by Mylar sheets with holes punched where vias and/or connections to the IC leads were to be made by welding. Once the circuits were welded, they were rolled up and potted in plastic. It was a very compact and robust construction method for its day in 1965.

      The TV series MIT Science Reporter did several Apollo-related shows in the 1960s which aired on WGBH in Boston, MA, USA and possibly other public TV stations. The following show, Computer For Apollo (~20 minutes), is a very good look at the computer. The Apollo computer was a true tour de force and performed a surprising amount of advanced processing on what is, by current standards, a laughably limited computing resource.

      https://m.youtube.com/watch?v=ndvmFlg1WmE

  1. 12 and 18 bit word sizes weren’t exotic at all in the 1960s and ’70s. When every bit cost money, you didn’t use any more bits than you had to, so word size was usually a compromise between how much range you needed in your integers, how many bits you needed to address all the memory you’d ever be able to connect, and that sort of thing. And then there’s text. No, you didn’t need 8 bits to represent all of the characters you needed; six was plenty. For that, 12 and 18 bit words made more sense than, say, 16.
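
    As a tiny illustration of the six-bit argument, here is a generic C++ sketch (my own example, not from any Elliott software) that packs three 6-bit character codes into one 18-bit word and pulls them back out:

      #include <cstdint>
      #include <cstdio>

      // Pack three 6-bit character codes into one 18-bit word (held in a uint32_t).
      uint32_t pack18(uint8_t a, uint8_t b, uint8_t c) {
          return ((uint32_t)(a & 0x3F) << 12) | ((uint32_t)(b & 0x3F) << 6) | (c & 0x3F);
      }

      // Unpack character i (0 = leftmost) from an 18-bit word.
      uint8_t unpack18(uint32_t w, int i) {
          return (w >> (12 - 6 * i)) & 0x3F;
      }

      int main() {
          uint32_t w = pack18(010, 022, 031);   // three arbitrary 6-bit codes (octal)
          printf("word  = %06o (octal)\n", w);  // 18-bit words print nicely in octal
          printf("chars = %02o %02o %02o\n", unpack18(w, 0), unpack18(w, 1), unpack18(w, 2));
      }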

    1. Absolutely right. The trouble in Elliott’s case just started later, when they re-implemented the 18-bit architecture using AMD 29XX bit-slice processors after the old RTL logic and core memory were not obtainable any more: now they had to take special care using the 4-bit wide slices to create an 18-bit (4.5 × 4) ALU ;-)
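
      To get a feel for why 18 bits is awkward on 4-bit slices, here is a rough software analogy (plain C++, nothing to do with the real 920M microcode): an 18-bit add built from 4-bit slice additions needs five slices with ripple carry, and the top slice only contributes two useful bits:

        #include <cstdint>
        #include <cstdio>

        // Add two 18-bit words using repeated 4-bit "slice" additions with ripple
        // carry — roughly the situation when an 18-bit ALU is built from 4-bit parts.
        uint32_t add18_by_slices(uint32_t a, uint32_t b) {
            uint32_t result = 0, carry = 0;
            for (int shift = 0; shift < 20; shift += 4) {  // 5 slices to cover 18 bits
                uint32_t sum = ((a >> shift) & 0xF) + ((b >> shift) & 0xF) + carry;
                result |= (sum & 0xF) << shift;            // one 4-bit slice result
                carry = sum >> 4;                          // ripple carry to the next slice
            }
            return result & 0x3FFFF;                       // fifth slice only contributes 2 bits
        }

        int main() {
            uint32_t a = 0x2FFFF, b = 0x00001;
            printf("%05X + %05X = %05X (mod 2^18)\n", a, b, add18_by_slices(a, b));
        }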

  2. The idea behind the selection of 18 bits for the 920M was that, with standard coordinates, any position on earth can be represented with better than 200 m accuracy in 18 bits. So a perfect compromise for jet aircraft or spaceflight. The 12-bit 12/12, on the other hand, was popular for autothrottle computers or gunnery applications.

    1. One must wonder about the 18-bit representations of the old times vs. the multi-million-line source code of current times. In conclusion, men will be weaker with every generation, and coders will be stupider with every new generation too.

      1. Hard to know for sure regarding “stupidity”. But the 18-bit machine in the navigation role used what is called a “fractional integer”, so 2^17−1 represented e.g. 179.999° east, whereas the bit pattern 2^17 (i.e. −2^17 in two’s complement) represented 180.000° west. So flying over the date line was taken care of automatically, by the integers simply overflowing and doing the right thing, without nasty if/then/else in the code. There is a very famous recent example of what happens when floating point numbers are used and the ±180° issue is not handled properly (OK, in such complex systems that is a hard task, to be honest!): https://www.defenseindustrydaily.com/f22-squadron-shot-down-by-the-international-date-line-03087/
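
        For anyone who wants to see the trick in miniature, here is a small C++ sketch (my own illustration, not the original Elliott code) of longitude held as an 18-bit two’s-complement fraction of ±180°, where ordinary overflow does the date-line wrap for free:

          #include <cstdint>
          #include <cstdio>

          // Longitude as an 18-bit two's-complement fraction of the half circle:
          // +2^17-1 is just short of 180 deg east, -2^17 is exactly 180 deg west.
          const int BITS = 18;

          // Keep only 18 bits and sign-extend back into a normal int.
          int32_t wrap18(int32_t x) {
              x &= (1 << BITS) - 1;
              if (x & (1 << (BITS - 1))) x -= (1 << BITS);
              return x;
          }

          double degrees(int32_t x) { return 180.0 * x / (1 << (BITS - 1)); }

          int main() {
              int32_t lon = (1 << 17) - 2;                   // just short of 180 deg east
              printf("before: %+9.4f deg\n", degrees(lon));
              lon = wrap18(lon + 5);                         // overflow does the date-line wrap
              printf("after:  %+9.4f deg\n", degrees(lon));  // now just west of the date line
          }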

        1. Brought to mind due to the relevance to aircraft – old MicroProse combat flight simulators had some weird math that would make your nose dodge around ±90 degrees pitch (i.e. they would never allow you to point straight up or down), perhaps trying to avoid some generally similar issue.

      2. The 18 bits are there because the hardware was specifically designed for its use case. Consequently the code is not excessively long and is far less complex, reducing the opportunity for coding errors.

        Conversely, when you use a “general purpose” computer you end up with extremely long and complex code to do the same thing. So I don’t think it is “stupidity” per se, but rather that vastly increasing the opportunity for human error consequently increases the number of errors.

  3. Just watched the video. The voice sounds a lot like that of our typical German electronics students. :D A bit more emphasis/emotion would make everything sound smoother, I guess. That being said, the English itself was fine, as far as I can tell. Thumbs up.

  4. Everything had the bit as its basic unit by design, so there was much flexibility.

    Core memory was one bit wide, so for 12 bits you needed 12 sense amplifiers, 12 write-back drivers, one row decoder and one column decoder.

    Early DRAM was one bit wide too, so if you had 1024-bit chips you used 8 for a byte or 12 for 12-bit words. Later DRAM was 4 bits wide, so you used 2 for bytes or 3 for 12-bit words; however, anything but 8-bit words (bytes) was rare by then. I don’t think I ever saw 8-bit-wide DRAM back then.

    Then with SRAM it was most often 8 bits wide, but in odd cases 4 bits.

    It’s still more common than you may think.

    Many Harvard architecture MCUs like PIC use odd bit widths for their instruction memory.

    1. Yeah, it’s always a question of what makes most sense when splitting the data word into identical subunits: 16 = 4×4 in the case of the AMD29xx, or 32 = 4×8 for the famous SIMM modules in 386 PCs. In the 12-bit variant of the 920M computer, for example, they put 6 bits of accumulator and ALU on one PCB and used two identical ones to implement the 12-bit machine. See http://www.baigar.de/TornadoComputerUnit/ProgrammerElectronicControlPCBs.pdf, page 8/15.

      1. I assume you mean AM29xx made by AMD.

        In the case of a Computer Control Unit (AM2901 CCU), 4-bit slices provide scalability, which in turn increases computational power without increasing clock speed. The same could be said for n-bit ALUs (74181). The drawback is that you need external fast-carry support, which increases in complexity or reduces clock speed as the width expands.

        I wonder if Chuck Peddle’s special Set Overflow (SO) pin (which was never externally implemented) was an effort to allow for paralleling two or more 6502 CPUs.

        Memory is a different story. The trade-off is between power consumption and speed.

        If you have an 8192 × 8-bit RAM chip then you have one address decoder and 64 kbits of RAM. If you have 8 of 8192 × 1-bit chips then you have 8 address decoders and 64 kbits of RAM. The addressing is faster, but 8 address decoders obviously use 8 times as much power.

        Imagine you have an address decoder that addresses an 8-bit block. The access time is roughly the decode time plus the cell access time.

        If instead you have a decoder that accesses a 64-bit block, it may decode (slightly) faster; then you still have the cell access time. At the other end you may have 8 of 8-to-1 de-multiplexers. The de-multiplexers still take time, but they can start at the same time as the decoders, so the correct channels are already selected by the time the cell access completes, which makes it significantly faster. This is the reason we had specific RAM layout parameters for faster RAM: they were simply the formats that worked faster by balancing the number of gate stages at the beginning and end, within practical limits for SRAM. DRAM, on the other hand, had extra design constraints relating to refresh.

        1. Of course I meant the AM29xx as you suggested – sorry for the typo. Regarding memory – esp. core: for mobile applications, space and ruggedness are also important requirements. Core layout was very often optimized towards square matrices, to get along with as few X and Y drivers as possible. Therefore core most often came in multiples of 4k (a 64×64 matrix). Minimum configurations: Elliott 12/12: 4k, Elliott 920M: 8k, Rolm 1602 and DEC PDP-8: 4k, …

  5. …more details on the 12-bit Elliott 900 machine can be found here:
    http://www.programmer-electronic-control.de/index.html

    …of course the command set of these machines is quite limited/archaic, as can be seen from the
    booklets on programming these machines:
    12-bit machine 902: http://www.baigar.de/TornadoComputerUnit/Elliott902FactsCard.pdf
    18-bitters like the 920 and 903: http://www.programmer-electronic-control.de/Elliott920FactsCard.pdf

    1. Very cool. Looking at the booklet for the 12-bit machine, it’s quite interesting that numeric values are treated as a fraction. To me this implies that they intended to do floating point math in software, storing the exponent in a separate register. But whoever made the card got their numbers wrong: 2^0 is not -1, but 1.

      This is not that primitive a machine – it has hardware multiply and divide, performing both in about 12 μs, which is just one memory cycle per bit, so there’s some stuff going on in parallel there. It also has a real-time clock, DMA (“autonomous transfer unit”), and A/D and D/A converters.

      Every time I see details about an ancient machine like this, I imagine implementing it in some mechanical form. Don’t ask why.

      1. ;-) You obviously had a detailed look and got the stuff – esp. the typo!

        In my 12/12 (12-bit) and 920M/ME, the A/D, D/A and DMA are not present, and the divide command is indeed somewhat broken by design: the last bit is often not correct, and the remainder is not delivered by the divide instruction. Therefore the programmers of the old days tended to use multiply instead whenever possible.

        Also, the “realtime clock” in the 12/12 is really complex: it compares a counter against a settable register. On every clock cycle where counter >= register, an interrupt is issued, and one has to manually set the register to a newly calculated value to get rid of the interrupt. The catch: it is hard to handle the overflow of the counter, and it took me many hours of experimenting until I recognized that one of the serial links is driven by the counter as well, and the interrupt of this serial link can be abused to handle the overflow ;-) So programming was a hard task in those days – esp. on the 12-bit machine!

    1. …and there is also the famous series of AN/UYK-19 computers, also known as Rolm MIL-SPEC computers: https://computerhistory.org/blog/if-it-moves-it-should-be-ruggednova/. These had a big share of military computing in the 1970s (at sea, sub-sea, airborne, and for navigation in one of the first satellite navigation systems ever (NAVSTAR)). BTW: I have a ported version of SpaceWar! running on a 1602: http://www.baigar.de/TornadoComputerUnit/RolmSpaceWar-Setup.JPG – these machines were competing with the Elliott 920M and later models like the 920ATC…

      1. Based on the description, I think that when my dad worked at IBM he was in charge of flying out to Air Force maintenance locations with replacement airborne computers. They might even have been ROLM versions, with IBM being a primary contractor for electronic warfare systems for the B-52 among others. He told me they were the size of a shoebox and cost $50,000. (I guess that’s after IBM configured and programmed them.)

        1. Yes, that might well be true. Rolm at some point sold the mil-spec computing business to IBM. The price tag seems reasonable for the earlier models 1602 and 1666 (I have a factory repair and price list for the Rolm machines dating from 1996), whereas later models (Hawk/32: 32-bit, MMU, …) were more in the $250k range. But part of the success of the Rolm machines was that they retained compatibility of the IO boards connecting to the expensive hardware (the aircraft, stores etc.) from 1970 up to the very end in the late ’90s.

        2. Shoebox-sized electronics modules are kind of the thing for military planes, so a lot of avionics modules all look the same – black rectangular boxes with round many-pinned connectors.

          1. The strange scope is a C1-122 made by the Soviet Research and Development Institute of Radio Measuring Instruments. It’s documented on TekWiki with the manuals. It’s sort of a loose interpretation of a Tek 7603; nothing is actually Tek compatible.

          1. Amazing. Yeah, clearly the CRT doesn’t have the flat plate glass faceplate that all of the 7000 series had, but they got little details right, like the dyed anodized legends on the panels!

            Thanks for sharing your work with us.

            2. Yeah, these oscilloscopes are different stuff. I got mine after the reunification of East and West Germany, as this was a cheap way of getting 100 MHz analog bandwidth in 1990. But somehow they cause headaches because there are RF analog switches (hybrid part number 04KN009) which fail VERY often. So as a hobby, I rebuilt these on a small PCB (http://www.04kn009.de/) in 2010. Meanwhile there are a lot of clones available on eBay ;-)

    2. Not only the US Navy but the Canadian military had them too. I maintained them as well as taught courses on how to maintain and repair them, which included machine language programming. Did it for 17 years. Loved it. By the way, I’m looking for some of the technical documentation for them, like the logic diagrams, diagnostic program listings and the technical manuals.

  6. My friend Ann was one of the original 900 series programmers in the ’60s. I, a comparative youngster, did do a bit of programming on the 1212p – a cut-down version. Until I was put to work on an Intel 8080, and the rest really is history.
