Upgrade Your Computer The 1985 Way

Today when you want to upgrade your computer you slap in a card; back in the early '80s things were not always so simple. When [Carsten] was digging around the house he found his old, heavily modified Rockwell AIM 65 single-board computer, flipped the switch, and the primitive 6502 machine popped to life.

Added to the computer was a pile of wires and PCBs to expand the RAM and the I/O into a “crate bus”, and of course tons of LED blinkenlights! On that bus a few cards were installed, including a decoder board to handle all the slots, a monitor controller, a massive GPIO card, and even a universal EEPROM programmer.
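
To give a flavour of how a bus-mounted GPIO card like that is driven from a 6502, here is a minimal C sketch of memory-mapped I/O (the sort of thing you might compile with cc65). The 0x9000 base address and the 6522-VIA-style register layout are assumptions for illustration only, not [Carsten]'s actual memory map.

```c
/* Sketch: blinking an LED on a memory-mapped GPIO card.
 * Hypothetical example for a 6502-style bus; the 0x9000 base address
 * and 6522-VIA-like register offsets are assumptions, not the real map. */
#include <stdint.h>

#define VIA_BASE  0x9000u
#define VIA_ORB   (*(volatile uint8_t *)(VIA_BASE + 0x0))  /* output register B */
#define VIA_DDRB  (*(volatile uint8_t *)(VIA_BASE + 0x2))  /* data direction B  */

static void delay(void)
{
    volatile uint16_t i;
    for (i = 0; i < 10000; i++)
        ;                      /* crude busy-wait */
}

void blink(void)
{
    VIA_DDRB = 0x01;           /* pin PB0 as output */
    for (;;) {
        VIA_ORB ^= 0x01;       /* toggle the blinkenlight */
        delay();
    }
}
```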

If that was not enough, there was even an OS upgrade from the standard-issue BASIC to a dual-boot BASIC and FORTH. Unlike today, where upgrading your OS takes a button click and a reboot, all of these upgrades were planned out on paper, and those plans have been scanned for any retro computer buff to pore over.

[Carsten] posted a video of this computer loading the CRT initialization program from a cassette. You can watch, but shouldn’t listen to, that video here.

39 thoughts on “Upgrade Your Computer The 1985 Way”

  1. Seems to be an area that’s dropped off the DIY radar. There were a few DIY ISA cards (notably the XT-IDE adapter board), but I can’t help but think there’s some opportunity being missed to hack more modern stuff.

    1. Modern busses are generally too fast to support that type of hacking, unfortunately, and sticking stuff onto, say, a PCIe bus takes a lot more overhead in supporting the protocol and so on. I suppose USB has taken over from that to a greater or lesser extent, as it’s many times the speed of the bus in the article.

      1. It would be so nice if modern PCs would add a hacker’s slot. Maybe a few SPIs, GPIOs, I2C, USB and some power connectors.
        Not going to happen, for the simple reason that it would cost a few cents more per board and would only sell a few dozen extra boards.

          1. HDMI also has an I2C bus. And on the few motherboards that still have them, parallel ports can be used as GPIOs (see the sketch after this comment).

            But it would be nice if someone made an affordable PCIe card with a prototyping area. As in, with an FPGA or similar to convert the PCIe into something easier to connect to homemade circuits.
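
            A minimal sketch of driving a legacy parallel port as GPIO from user space on Linux; it assumes the classic LPT1 I/O base of 0x378, a real (non-USB) parallel port, and root privileges, so treat the address as an example rather than a given.

            ```c
            /* Sketch: toggling legacy parallel-port data pins as GPIO on Linux.
             * Assumes the classic LPT1 base address 0x378 and a real (non-USB)
             * parallel port; must run as root for ioperm(). */
            #include <stdio.h>
            #include <unistd.h>
            #include <sys/io.h>

            #define LPT1_DATA 0x378   /* data register: pins D0..D7 */

            int main(void)
            {
                if (ioperm(LPT1_DATA, 3, 1) != 0) {   /* request access to the port */
                    perror("ioperm");
                    return 1;
                }
                for (int i = 0; i < 10; i++) {
                    outb(0xFF, LPT1_DATA);            /* all eight data pins high */
                    usleep(500000);
                    outb(0x00, LPT1_DATA);            /* all low */
                    usleep(500000);
                }
                ioperm(LPT1_DATA, 3, 0);              /* release the port */
                return 0;
            }
            ```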

        1. Back in 1995, the BeBox (which ran BeOS) had a GeekPort

          “GeekPort” (37-pin D-shell) An experimental-electronic-development oriented port, backed by three fuses on the mainboard.
          Digital and analog I/O and DC power connector, 37-pin connector on the ISA bus.
          Two independent, bidirectional 8-bit ports
          Four A/D pins routing to a 12-bit A/D converter
          Four D/A pins connected to an independent 8-bit D/A converter
          Two signal ground reference pins
          Eleven power and ground pins: Two at +5 V, one at +12 V, one at -12 V, seven ground pins.

          This info was copied from
          https://en.wikipedia.org/wiki/BeBox

          It would be nice to see it as an option (such as a PCI board) on modern PCs.

    1. I wish AMD would stop trying to directly compete with Intel, and adopt this sort of idea:

      Imagine if your computer could have a secondary processing plane, where cards containing multiple CPUs and their RAM could be added to increase parallel computation capability.

      Such a device might use a shared memory architecture, where the system RAM would have contacts on both the top and bottom, allowing the SPP device read and limited-write access to the system RAM. Further, it could be tied into the PCIe bus to access a secondary SATA controller, allowing the device to dump post-processed data directly to disk.

      Before you dismiss the idea: yes, there are other ways of accomplishing the same task. However, it costs a lot of money to purchase and operate cluster hardware. Instead, why not reduce these costs by eliminating duplicated hardware?

      If a business like AMD would put a little thought into it, they might realize that they could sell 10+ CPUs per PC, providing the end-user with many thousands of GPU cores, and that they could force a shift in several markets that are largely dominated by Intel and nVidia.

      1. You can’t just share modern RAM like that.

        Aside from that, you described the Opteron HyperTransport ccNUMA architecture. On paper at least, you can connect HT buses between CPUs, PCIe bridges, south bridges, and a small handful of other HT peripherals, and the fabric will be automatically routed by the CPUs on power-up. The RAM hooked to each CPU becomes part of the global memory space in the fabric.

        In practice, it’s much harder than that because of signal integrity. There are some high-density servers that have daughtercards with RAM and a CPU, which connect to a backplane motherboard with the NB and SB. But the cost of the connectors alone is prohibitive. (A rough sketch of how that global memory space looks to software follows below.)
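
        On a working ccNUMA box, the per-socket RAM really does show up as one address space, and software can ask for memory homed on a particular node. Here is a minimal sketch using Linux’s libnuma (link with -lnuma); the node numbers are whatever the firmware reports, and nothing here is specific to Opteron.

        ```c
        /* Sketch: allocating memory on a specific NUMA node with libnuma.
         * On a ccNUMA system (e.g. a multi-socket Opteron), each CPU's local
         * RAM is part of one global address space but has a "home" node. */
        #include <stdio.h>
        #include <string.h>
        #include <numa.h>          /* link with -lnuma */

        int main(void)
        {
            if (numa_available() < 0) {
                fprintf(stderr, "NUMA not supported on this system\n");
                return 1;
            }

            int last = numa_max_node();            /* highest node number present */
            printf("nodes 0..%d visible\n", last);

            size_t len = 64 * 1024 * 1024;
            /* Ask for 64 MiB backed by node 0's local RAM; any CPU can still
             * reach it, remote CPUs just pay extra latency over the fabric. */
            void *buf = numa_alloc_onnode(len, 0);
            if (buf == NULL) {
                fprintf(stderr, "allocation failed\n");
                return 1;
            }
            memset(buf, 0, len);                   /* touch it so pages get placed */
            numa_free(buf, len);
            return 0;
        }
        ```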

        1. Well, then they have a good head start…

          As for the RAM, why? It’s just bits stored in registers; it’s not like the bits are going to change if I copy a reserved block from the system RAM to the SPP RAM. So, why couldn’t they add a second set of contacts to the top of the DIMM?

      1. Ah…APL. Where programs are indistinguishable from line noise…on a Greek Teletype.
        I once designed a glass Teletype terminal to support backspaces and overstrikes for the APL character set. Good times.
