If you are interested in historical big computers, you probably think of IBM, with maybe a little thought of Sperry Rand or, if you go smaller, HP, DEC, and companies like Data General. But you may not have heard of Tandem Computers unless you have dealt with systems where downtime was unacceptable. Printing bills or payroll checks can afford some downtime while you reboot or replace a bad board. But if your computer services ATMs, cash registers, or a factory, that’s another type of operation altogether. That was where Tandem computers made their mark, and [Asianometry] recounts their history in a recent video that you can watch below.
When IBM was king, your best bet for having a computer running nonstop was to have more than one computer. But that’s pricey. Computers might have some redundancy, but it is difficult to avoid single points of failure. For example, if you have two computers that share a single network connection and a single disk drive, a failure in either the network connection or the disk drive will still take the whole system down.
The idea started with an HP engineer, but HP wasn’t interested. Tandem was founded on the idea of building a computer that would run continuously. In fact, the product line was named “NonStop.” The idea was that smaller computer systems could be combined to equal the performance of a big computer, and if any single constituent system failed, the computer would still function — just more slowly. Even the bus that tied the computers together was redundant. Power supplies had batteries so the machines would keep working even through short power failures.
Not only does this guard against failures, but it also allows you to take a single computer down for repair or maintenance without stopping the system. You could also scale performance by simply adding more computers.
Citibank was the first customer, and the ATM industry widely adopted the system. The only issue was that Tandem programs required special handling to leverage the hardware redundancy. Competitors were able to eat market share by providing hardware-only solutions.
The changing computer landscape didn’t help Tandem, either. Tandem was formed at a time when computer hardware was expensive, so using a mostly software solution to a problem made sense. But over time, hardware became both more reliable and less expensive. Software, meanwhile, got more expensive. You can see where this is going.
The company flailed and eventually would try to reinvent itself as a software company. Before that transition could work or fail, Compaq bought the company in 1997. Compaq, of course, would also buy DEC, and then it was all bought up by HP — oddly enough, where the idea for Tandem all started.
There’s a lot of detail in the video, and if you fondly remember Tandem, you’ll enjoy all the photos and details on the company. If you need redundancy down at the component level, you’ll probably need voting.
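Voting at the component level usually means triple modular redundancy (TMR): run three copies of a module and accept whatever answer a majority agrees on, so a single faulty module is simply outvoted. Here is a minimal sketch of that idea in Python; the names and structure are illustrative, not taken from any Tandem or NonStop API.

```python
# Minimal sketch of majority voting (triple modular redundancy).
# All names here are illustrative assumptions, not a real fault-tolerance API.
from collections import Counter

def vote(results):
    """Return the value a strict majority of redundant modules agree on.

    Raises RuntimeError if no value has a strict majority
    (e.g., all three modules disagree).
    """
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: redundant modules disagree")
    return value

# Three redundant modules compute the same result; one has failed.
readings = [42, 42, 99]   # third module returns a bad value
print(vote(readings))     # the faulty module is outvoted -> 42
```

With three modules the voter masks any single fault; note that the voter itself becomes a single point of failure unless it, too, is replicated, which is why hardware TMR designs often triplicate the voters as well.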

At computer shows, they would sometimes let you pull out a random board from a running system.
They did that all the time at ITUG conferences. Open up the back of the rack and start pulling out cards. And the software vendors in the rest of the hall, whose software was running on the system, never noticed.
IIRC, they might even have had the various CPUs in the system cross-check each other’s work (common in safety-rated Arm MCUs now, as a matter of fact). It was a single system, not multiple separate computers as the article suggests.
My first professional job after university was writing SCOBOL code using the TEDIT editor on a Tandem. TEDIT didn’t have a “Save” button, which I’ve always considered to be the coolest thing ever.
The company I was working for could only afford one Tandem system, so it was not running in full fault-tolerant mode. The system crashed once, and when they brought it back up, my TEDIT session hadn’t lost so much as a single character.
I spent 28 hours upgrading a Tandem EXT (2 processor system) from Guardian A30 to B30 in order to support new NCR ATMs my company was getting ready to deploy.
I recall these were being evaluated when I was at GPT in the ’90s for the Intelligent Network telephone system. I managed to blag one of their wonderful dual redundant coffee mugs with two handles, one on each side.
I worked with former employees of Stratus, a Tandem competitor. An analysis they shared with me was that more than doubling the amount of hardware (the arbiter + 2 computers) also more than doubled the risk of hardware failure, so you were worse off than buying a single, non-redundant computer of the era. VLSI CPU chips and the elimination of board-to-board connectors killed this market entirely. After that, engineers realized it is better to implement fault-tolerant algorithms and protocols instead. Redundancy has its place, but more to offer graceful degradation, protection against power outages, and the like. Though I’ve also heard of two failure cases where redundant generators… weren’t. In one of them, my company had to pay a big SLA fine. We switched data centers after that.
And from that the world somehow downgraded to ATMs getting stuck on a Windows login screen.
The “if you go smaller” statement should include NCR Corporation. Even though the company is associated with the common cash register, its mini and mainframe computers were competitive. There were many technological innovations produced by the company over the decades (until the AT&T hostile takeover and later discarding).