How IBM Stumbled Onto RISC

There are a ton of inventions out in the world that are almost complete accidents but are still ubiquitous in our day-to-day lives. Think of bubble wrap, which was originally intended to be wallpaper, or superglue, a plastic compound whose sticky properties were only discovered later on. IBM found themselves in a similar situation in the 1970s after working on a mainframe computer intended to be a telephone switch. The phone switch was eventually abandoned in favor of a general-purpose processor, but not before they stumbled onto the RISC design that eventually became the IBM 801.

As [Paul] explains, the dominant design philosophy at the time was to give processors a large number of instructions, each built to handle a specific task. When designing the special-purpose phone switch processor, IBM stripped out many of these instructions and then, after the project was cancelled, ran some tests on the incomplete platform to see how it fared as a general-purpose computer. They found that by eliminating all but a handful of instructions and running those without a microcode layer, the processor performed far better than expected: up to three times as fast as comparable hardware.

These first forays into the world of simplified processor architecture not only paved the way for the RISC platforms we know today, such as ARM and RISC-V, but also helped CISC platforms make tremendous performance gains. In fact, RISC-V is a direct descendant of these early RISC processors, with three intermediate designs between then and now. If you want to play with RISC-V yourself, our own [Jonathan Bennett] took a look at a recent RISC-V SBC and its software this past March.

Thanks to [Stephen] for the tip!

Photo via Wikimedia Commons

Apple Archeology: The Future Once Had Server Side Computing In It

To read the IT press in the early 1990s, those far-off days just before the Web was the go-to source of information, was to be fed a rosy vision of a future in which desktop and server computing would be a unified and powerful experience. IBM and Apple would unite behind a new OS called Taligent that would run Apple, OS/2, and 16-bit Windows code, and coupled with UNIX-based servers, this would revolutionise computing.

We know that this never quite happened as prophesied, but along the way, it did deliver a few forgotten but interesting technologies. [Old VCR] has a look at one of these: a feature of IBM’s AIX, which shipped with mid-90s Apple servers as a result of this partnership, that let Mac client applications have server-side components, allowing them to offload work to the more powerful machine.

The full article is long but packed with interesting nuggets of forgotten 1990s computing history, and it’s a reminder that DOS/Windows and Novell NetWare weren’t the only games in town. The Taligent/AIX combo never happened as planned, but its legacy found its way into the subsequent products of both companies; by the middle of the decade, even Microsoft had famously been caught out by the rapid rise of the Web. [Old VCR] finishes off by building a simple sample application using the server-side computing feature: a native Mac OS application that calls a server component to grab the latest Hacker News stories. Unexpectedly, this wasn’t the only 1990s venture from Apple involving another company’s operating system. Sometimes, you just want to run Doom.

Restoration Of A Thinkpad 701C

This is like ASMR for hackers: restoration specialist [Polymatt] has put together a video of his work restoring a 1995 IBM ThinkPad 701C, the famous butterfly-keyboard laptop. It’s an incredible bit of restoration, with a complete teardown and rebuild, even including remaking the decals and rubber feet.

[Polymatt] runs Project Butterfly, an excellent site for those who love these iconic laptops, offering advice and spare parts for restoring them. In this video, he does a complete teardown, taking the laptop completely apart, cleaning it out, and replacing the parts that are beyond salvaging, like the battery. Finally, he puts the whole thing back together again and watches it boot up. It’s a great video, which we’ve put below the break, and well worth watching if you’ve ever wondered how much work this sort of thing involves: the entire process took him over two years.

We’ve covered some of his work in the past, including the surprisingly complicated business of analyzing and replacing the Ni-Cad battery that the original laptop used. Continue reading “Restoration Of A Thinkpad 701C”

Bus Sniffing The Model 5150 For Better Emulation

At the risk of stating the obvious, a PC is more than just its processor. And if you want to accurately emulate what’s going on inside the CPU, you’d do well to pay attention to the rest of the machine, as [GloriousCow] shows us by bus-sniffing the original IBM Model 5150.

A little background is perhaps in order. Earlier this year, [GloriousCow] revealed MartyPC, a cycle-accurate 8088 emulator written entirely in Rust. Cycle-accurate emulation of the original IBM PC may sound like overkill, unless of course you need to run something like Area 5150, a demo that stretches what’s possible with the original PC architecture but is notoriously finicky about what hardware it runs on.

Getting Area 5150 running on an emulator wasn’t enough for [GloriousCow], though, so a deep dive into exactly what’s happening on the bus of an original IBM Model 5150 was in order. After toying with and wisely dismissing several homebrew logic analyzer solutions, [GloriousCow] drafted a DSLogic U3Pro32 logic analyzer into the project.

Fitting the probes for the 32-channel instrument could have been a problem, were it not for the rarely populated socket for the 8087 floating-point coprocessor on the motherboard. A custom adapter gave access to most of the interesting lines, including the address and data buses, while a few more signals, like the CGA sync lines, were tapped directly off the video card.

Capturing one second of operation yielded a whopping 1.48 GB CSV file, but a little massaging with Python trimmed the file considerably. That’s when the real fun began, strangely enough in Excel, which [GloriousCow] used as an ad hoc but quite effective visualization tool, thanks to the clever use of custom formatting. We especially like the column that shows low-to-high transitions as a square wave — going down the column, sure, but still really useful.
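
The write-up doesn’t show the exact Python involved, but the general idea of shrinking a raw logic-analyzer export down to just the interesting samples can be sketched in a few lines. In this minimal sketch, the column names, file names, and choice of signals are assumptions for illustration, not details of the actual capture format:

```python
import csv

# Minimal sketch: compress a raw logic-analyzer CSV by keeping only the rows
# where at least one monitored signal changed state. Signal names, file names,
# and the CSV layout here are assumptions, not the actual export format.
SIGNALS = ["ALE", "RD", "WR", "IO/M", "HSYNC", "VSYNC"]

def compress_capture(infile: str, outfile: str) -> None:
    with open(infile, newline="") as src, open(outfile, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        previous = None
        for row in reader:
            state = tuple(row[name] for name in SIGNALS)
            if state != previous:  # keep edges, drop the idle samples between them
                writer.writerow(row)
                previous = state

if __name__ == "__main__":
    compress_capture("capture_raw.csv", "capture_trimmed.csv")
```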

The whole thing adds up to a powerful toolkit for exploring the action on the bus during the execution of Area 5150, an exploration [GloriousCow] has only just begun. We’ll be eagerly awaiting the next steps on this one — maybe it’ll even help get the demo running as well as 8088MPH on a modded Book8088.

FPGA Runs IBM 5151 MDA Display

When it comes to driving a display, you can do all kinds of fancy tricks with microcontrollers to get an image up. Really, though, FPGAs are the weapon of choice for playing with these kinds of signals. [Ted Fried] put one to work driving an ancient IBM 5151 MDA display and shared his results on Hackaday.io.

The build relies on a Digilent Arty Z7-20 SoC FPGA development board, which packs a beefy 600 MHz ARM processor alongside the programmable logic. It also carries 500 MB of DRAM, more than enough to store pixel data for an ancient display.

To drive the old display, [Ted] whipped up a state machine on the FPGA. It’s tasked with fetching display data from RAM and generating the appropriate timings for the MDA display interface. The images are stored directly in an array in C code running on the ARM core, and from there they are copied into the FPGA’s RAM for trucking out to the display. The 720×350 images are stored at 1 bit per pixel, just 31,500 bytes per frame, and are created by converting the original JPEGs into single-bit bitmaps in GIMP before a final conversion into a C array via a utility of [Ted’s] own design.
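
That converter isn’t published here, but the job it does is simple enough that a rough Python equivalent can illustrate the idea: pack a 1-bit, 720×350 bitmap into bytes and emit them as a C array. The Pillow dependency, file names, array name, and MSB-first packing order below are all assumptions, not details of [Ted’s] actual tool:

```python
from PIL import Image  # pip install pillow

WIDTH, HEIGHT = 720, 350  # display resolution used by the project

def image_to_c_array(path: str, name: str = "mda_frame") -> str:
    """Pack a monochrome image into bytes, MSB first, and emit a C array."""
    img = Image.open(path).convert("1").resize((WIDTH, HEIGHT))
    pixels = img.load()
    data = bytearray()
    for y in range(HEIGHT):
        byte, bits = 0, 0
        for x in range(WIDTH):  # 720 is divisible by 8, so rows pack cleanly
            byte = (byte << 1) | (1 if pixels[x, y] else 0)
            bits += 1
            if bits == 8:
                data.append(byte)
                byte, bits = 0, 0
    body = ",".join(f"0x{b:02X}" for b in data)
    return f"const unsigned char {name}[{len(data)}] = {{{body}}};\n"

if __name__ == "__main__":
    print(image_to_c_array("frame.png"))
```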

If you’ve ever wanted to display your images in resplendent amber or green, then this could be the project for you. It’s also just a great way to learn about using FPGAs and interfacing with alternative display technologies. If you’ve been whipping up your own retro display hacks, don’t hesitate to drop us a line.

Retrotechtacular: The Computer Center Of 1973

You might expect Bell Labs would have state-of-the-art computers, and they did. But it is jarring to realize just how little that was in 1973, fifty years ago. If you started work at Bell’s Holmdel Computing Center back then, you might have watched one of the orientation videos below. Your first clue about how far things have come might be the reference to the IBM 370/165, which had “3 million bytes of core, 2 million of which are available for programmer use.” Even our laptops today have at least 8 gigabytes of RAM. There were at least two other smaller IBM 370s, too. Plenty of 029 card punches are visible.

If you were trying to run something between 8:00 AM and 5:30 PM, you had to limit your job to three minutes of run time, 4,000 lines of output, and no more than 1,000 cards in and 5,000 cards out. Oh, and don’t use more than 384 kB of that core memory, either. If you fell within those limits, you could hand your card deck over at the express counter and get your results in only five or ten minutes. If you weren’t in the express line but still rated "premium" service, you could expect to wait a half hour.

Continue reading “Retrotechtacular: The Computer Center Of 1973”

Selectric Typewriter Goes From Trash Can To Linux Terminal

If there’s only one lesson to be learned from [alnwlsn]’s conversion of an IBM Selectric typewriter into a serial terminal for Linux, it’s that we’ve been hanging around the wrong garbage cans. Because that’s where he found the donor machine for this project, and it wasn’t even the first one he’s come across in the trash. The best we’ve ever done is a nasty old microwave.

For being a dumpster find, the Selectric II was actually in pretty decent shape. The first couple of minutes of the video after the break show not only the minimal repairs needed to get the typewriter back on its feet, but also a whirlwind tour of the remarkably complex mechanisms that turn keypresses into characters on the page. As it turns out, knowing how the mechanical linkages work is the secret behind converting the Selectric into a teletype, entirely within the original enclosure and with as few modifications to the existing mechanism as possible.

Keypresses are mimicked with a mere thirteen solenoids — six for the "latch interposers" that interface with the famous whiffletree mechanism, which converts binary input to a specific character on the typeball, and six more that control things like the cycle bail and control keys. The thirteenth solenoid controls an added bell, because every good teletype needs a bell. For sensing the keypresses — this is to be a duplex terminal, after all — [alnwlsn] pulled a page from the Soviet Cold War fieldcraft manual and used opto-interrupters to monitor the positions of the latch interposers as keys are pressed, plus more for the control keys.

The electronics are pretty straightforward — a bunch of MOSFETs to drive the solenoids, plus an AVR microcontroller. The terminal speaks RS-232, as one would expect, and within the limitations of keyboard and character set differences over the 50-odd years since the Selectric was introduced, it works fantastically as a Linux terminal. The back half of the video is loaded with demos, some of which aptly demonstrate why a lot of Unix commands look the way they do, along with some neat hybrid stuff, like a ChatGPT client.
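
On the host side there’s nothing more exotic than a serial port involved, so before handing the line over to a proper login getty, a quick pyserial smoke test is one way to confirm the conversion is alive. The port name, baud rate, and greeting below are assumptions for illustration, not settings from the video:

```python
import serial  # pip install pyserial

# Smoke test for an RS-232 teletype conversion: send a line of text for the
# solenoids to type, then echo back whatever the keypress sensors report.
# Port name and baud rate are assumptions; adjust to suit the hardware.
with serial.Serial("/dev/ttyUSB0", baudrate=1200, timeout=1) as port:
    port.write(b"HELLO, SELECTRIC\r\n")
    try:
        while True:
            data = port.read(64)  # returns whatever arrived within the timeout
            if data:
                print(data.decode("ascii", errors="replace"), end="", flush=True)
    except KeyboardInterrupt:
        pass
```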

Hats off to [alnwlsn] for tackling a difficult project while maintaining the integrity of the original hardware.

Continue reading “Selectric Typewriter Goes From Trash Can To Linux Terminal”