SCSI: The Disk Bus For Everything

Early home PCs usually had a floppy disk and a simple hard drive controller. Later, IDE hard drives became the de facto standard. Of course, these days, you are more likely to find some version of SATA and — lately — NVMe connectors. But a standard predating all of this was very common in high-end systems: SCSI. [RetroBytes] recently did a video on the bus which he calls the “USB of the 80s.”

Historically, Shugart — a maker of disks — was tired of producing custom drive electronics for each device they made. Instead, they made disks with a standard interface and then produced a single interface board for each computer they wanted to support. The interface was very generic, and they were able to get it standardized with ANSI — an early example of the benefit of opening up a standard.

SCSI could connect to many things besides disks, like scanners and tape drives. You could even find SCSI to network adapters. It was fast for its day, too. There were also updated standards that pushed performance higher over time. In addition to a standard hardware interface, most SCSI devices didn’t need special device drivers.

There were a few cheap SCSI host adapters for the PC, like the Seagate ST01 and ST02, but they weren’t good performers. Fast interfaces were pretty expensive. The other hard drive connectors were cheaper and didn’t require complicated termination and expensive cables. So SCSI rapidly lost ground in personal computers, and as the PC market grew, that disadvantage only compounded. But high-end workstations used SCSI because it performed better, and [RetroBytes] has an extensive explanation of why it was faster than early hard drive standards.

The video does a nice job of showing off some grand old hardware and many use cases for SCSI like RAID arrays and shared storage. If you have the urge to walk down memory lane or you like learning about old technology, this video is worth a watch. SCSI is still around, by the way, although now it is a serial standard that is very similar to SATA. It just isn’t nearly as prevalent as it used to be. You can bridge USB and SCSI, if you can find the hardware. You can also put today’s tiny computers on the bus and have them pretend to be disk drives.

66 thoughts on “SCSI: The Disk Bus For Everything”

        1. Also, I believe the header on the right of the bus is for the address. It’s weird that it has no jumpers since address 0 was mostly used for the host card? (Disclaimer: didn’t watch the video, but own an Amiga 2000 and a Sun 3/60, both with SCSI)

          1. …and then things really became interesting when you started building clusters with multiple initiators, from physically separate nodes, on the same electrical bus. If the sparkies didn’t do the mains wiring properly, you didn’t need caffeine or nicotine for the rest of the week.

            There was a reason why IT toolkits in those days were kitted with hammers and hacksaws…

          2. I had an initiator on 0 and 7 both on the same bus. Amiga and PC with a shared drive.
            Worked well.

            Header for address and termination. Many devices used a dip switch and only gave you a couple of address choices. Some gave you no choice at all (initiator often).
            I recall having a drive which used resistor packs for the termination instead of a switch.

            I’ve got easily 20kg worth of legacy scsi cables kicking around somewhere along with lots of other legacy PC junk I really should dispose of.
            Local computing museum may regret asking for donations :)

  1. I looked at the information from 2001 (22 years ago):
    Interface: Ultra-320 SCSI
    Width (bits): 16
    Clock: 80 MHz DDR
    Throughput: 320 MB/s (2560 Mbit/s)
    Length: 12 m

    And thought: well, what if SCSI were still parallel and was now 64 bits wide, what would the throughput be if nothing else changed?
    Throughput: 1.28 GB/s (10.24 Gbit/s).

    The cables would be huge and expensive, but it is kind of cool that 20+ year old tech, with only one physical layer parameter modernised, could potentially give that performance.
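    The arithmetic above is easy to sanity-check in a few lines of Python (a quick sketch; the 64-bit-wide variant is hypothetical and never existed):

```python
# Ultra-320 SCSI figures quoted above (2001 spec)
clock_hz = 80e6          # 80 MHz bus clock
ddr_factor = 2           # double data rate: two transfers per clock
width_bits = 16          # parallel bus width

throughput_bytes = clock_hz * ddr_factor * width_bits / 8
print(throughput_bytes / 1e6)   # 320.0 MB/s

# Hypothetical 64-bit-wide bus, everything else unchanged
hypothetical_bytes = clock_hz * ddr_factor * 64 / 8
print(hypothetical_bytes / 1e9)  # 1.28 GB/s
```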

    1. Some years ago as SATA was replacing PATA and USB was replacing almost every other interface I thought the computer hardware community had lost their minds. Speed was often mentioned as one of the advantages of the newer tech. But I had always been taught up to that point, and it certainly is logical, that sending more bits at the same time gets your data there faster than sending only 1 bit at a time. Kind of a no-brainer, right?

      I thought maybe it was a generational thing and the kids just had no concept of what parallel and serial actually meant besides older and bulkier vs newer and smaller.

      Then I finally found someone that explained it. As speeds increased keeping all those channels synchronized became harder. At those frequencies, a slight imperfection making one conductor longer or shorter than another by even a little bit was enough to take signals out of phase with one another. In other words you might be receiving bit 0 of one byte and bit 7 of the next byte at the same time rather than 0-7 of the same byte.

      Switching to serial, of course, divided the throughput by 8, but it also allowed them to multiply the speed by more than 8, thus more than making up for the loss.

      So in other words, yes, if an old parallel interface could be made to go at the same speeds as the new serial ones you would have something eight times as fast as what we have today. But it’s not a simple matter of just replacing the old silicon with new. There’s an underlying problem that would have to be solved to make that practical.
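      The skew failure mode described above can be modeled in a few lines of Python (a toy sketch, not real bus electronics; the byte values are arbitrary):

```python
# Toy model of parallel-bus skew: each of 8 lines carries one bit
# position of every byte. If one line lags by a whole bit period,
# the receiver samples bit 7 of the PREVIOUS byte together with
# bits 0-6 of the current one, corrupting the data.

def transmit(data, skewed_line=None):
    """Sample 8 lines per byte; a skewed line is one bit-time late."""
    received = []
    for i, byte in enumerate(data):
        out = 0
        for bit in range(8):
            if bit == skewed_line and i > 0:
                src = data[i - 1]   # the late line still shows the old byte
            else:
                src = byte
            out |= ((src >> bit) & 1) << bit
        received.append(out)
    return received

data = [0x0F, 0xF0, 0xAA]
print(transmit(data))                   # no skew: bytes arrive intact
print(transmit(data, skewed_line=7))    # line 7 lagging garbles the stream
```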

      1. The reason parallel was scrapped in favour of serial might also have been ITAR/EAR restrictions.
        I thought about 12 meter long cables that could potentially be pulsed with an accuracy and precision of at least 12.5 nanoseconds and my mind went to 5:29 a.m. on 1945-07-16.

        CATEGORY 3 – ELECTRONICS
        “e.1. Digital time delay generators with a
        resolution of 50 nanoseconds or less over time
        intervals of 1 microsecond or greater; or
        e.2. Multi-channel (three or more) or modular
        time interval meter and chronometry equipment
        with resolution of 50 nanoseconds or less over
        time intervals of 1 microsecond or greater;”

        1. That’s not actually parallel though. It’s multiple serial interfaces that can work on the payload in parallel. With traditional parallel interfaces, you need all signals to be perfectly in phase. This gets increasingly more difficult as speeds go up.

          With multiple serial lanes, you “only” need the ability to reassemble the datastream after it arrives spread out across two or more lanes. Slight differences in phase can be tolerated relatively easily.

          Historically, dis- and re-assembly would have been prohibitively compute intensive. These days, it’s a relatively easy task to implement in hardware.

          1. It’s parallel in all practical senses of the word, even if the implementation doesn’t follow the old-school parallel method. It just has a more robust way for dealing with bits not all arriving at the same time.

      2. Keeping parallel lines in phase could certainly be an issue, but it’s manageable. Ethernet manages to do it perfectly fine. Modern controllers can send known signal patterns during negotiation to discover and adapt to out-of-phase signals, if the cable element lengths are a potential source of phase shift.

        Here’s the real problem with parallel lines: EMI. As frequency (aka data rate) increases, you get more cross talk between adjacent wires, and that cross talk gets stronger. The reason we can get extremely high speeds with Ethernet, despite having multiple parallel signals, is that we use twisted pair wiring, which keeps the actual data signals isolated between each line and a corresponding ground wire, so that there’s no interference with adjacent conductors. (At RF frequencies, it’s more useful to think of signals as existing between conductors rather than inside of them, because that’s where the actual electric field is. They typically exist between the signal wire and either a ground plane/bus or a power plane/bus. If there’s another signal wire between a high frequency wire and its reference plane, that wire in between will pick up interference from that high frequency wire’s signal, because it exists in the space where that signal actually is.)

        So basically, the problem is one of cost. A traditional parallel line might have only one or two ground wires and 7 to 40 or more signal wires. Once you get into higher frequencies (which are necessary for higher data rates) though, your signals will be totally garbled due to cross talk, unless you have a ground reference “plane” for every signal wire, where there is no signal wire between any other signal wire and ground. The ideal way to do this is actually to put a grounded shield around each signal wire, but that’s far too expensive. Twisted pair is close enough for pretty high speeds though. Ribbon cable might work with a solid copper ground plane that stretches the width of the cable parallel to all of the signal wires, but it would make it more expensive and less flexible (which is already a problem with ribbon cable). On top of that, there are also EMI regulations. Even if your signal wires don’t have enough cross talk to interfere with each other, they can still emit enough EMI to violate Federal regulations.

        In the end, it’s just way simpler and cheaper to do a single serial connection with twisted pair wiring to minimize cross talk and shielding to stay within Federal EMI regulations, and if you do it just right, you might even be able to forego the shielding. The phase issue with high speed parallel communications is small beans compared to the RF issues.

        1. I don’t quibble with what you say, but I wonder: why not an insulated conductive sleeve over the parallel bus? One piece? We certainly see them in HDMI and USB 3.0 and faster.

          1. I used to have some ribbon cable that had the signal/ground pairs and a separate ground mesh on one side for high frequency transfers. Very expensive and hard to fasten the connector.

        2. I thought Ethernet cables used balanced signals: that’s why the twisted pairs work well and it’s not multiple coaxial cables. Balanced signals do not normally have a ground referenced shield wire (Cat 7 does provide shielding between each pair for better isolation between pairs, and all STP cables have an overall shield around the whole cable, but that helps with EMI from outside sources, not between the pairs).
          Balanced signals have signal (+) on one wire and a polarity reversed copy (-) on the other wire. At the receiver side of the circuit, the (-) wire’s signal gets reversed back to normal and is added to the (+) signal. This can be done using silicon amplifiers (transistors / ICs) or a simple transformer that can transfer the frequency range in use.
          When they are twisted around each other, if any EMI/RFI “noise” gets into the cable, it is equally picked up by both wires (same phase differential in both wires), and when the signal reaches its destination, the polarity reversal of one wire puts the noise out of phase with the opposite wire and the addition of both signals cancels out the noise completely.
          This also offers the ability to add DC power to both conductors (technically called biasing) within a pair, and extract the voltage at the other end without affecting the data going through the pair… Of course alternate pairs can provide a reference (0V) and/or (-DC) power to make the biased pair’s voltage effective. This is the basis of how PoE transmits power.
          If the signals were unbalanced (1 signal wire and ground/shield) then you would need coax to properly transmit that any distance without picking up all types of EMI noise from other nearby cables/sources; even power cables at 60 Hz next to your data cable would cause issues!
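          The cancellation described above is easy to demonstrate numerically (a minimal sketch with made-up signal and noise values):

```python
# Sketch of differential (balanced) signaling: both wires pick up
# the same common-mode noise, and taking the difference at the
# receiver cancels the noise while preserving the wanted signal.

signal = [1.0, -1.0, 1.0, 1.0, -1.0]   # transmitted symbols
noise  = [0.3, -0.2, 0.5, 0.1, -0.4]   # EMI coupled equally into both wires

wire_plus  = [s + n for s, n in zip(signal, noise)]   # (+) wire
wire_minus = [-s + n for s, n in zip(signal, noise)]  # (-) wire, inverted copy

# Receiver takes the difference: the common-mode noise terms cancel
recovered = [(p - m) / 2 for p, m in zip(wire_plus, wire_minus)]
print(recovered)   # matches the original signal, noise removed
```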

      3. The explanation I heard was that crosstalk and ringing had become a cause of errors at higher frequencies. Eliminate the parallel conductors and you limit that problem.

        I would love to know the real reason, other than the knobs in the Wintel world made another stupid decision (cough Intel) and the lemmings are numerous, and so, stupid also became cheaper due to economies of scale.

  2. The IBM PC was such an amazing example of standardization. Now it seems people are much more accepting of one off unique designs in almost every product. New standards seem to come around less often, existing standards get replaced with proprietary standards or worse, made to fit one-offs.

    But the standards we do have are just amazing so it makes up for it.

    1. What I liked about the PC/XT/AT generations was their openness.

      Not only did they use discrete parts freely available, but IBM’s documents were quite comprehensive.

      This wasn’t exclusive to IBM perhaps, since it was still a common practice of the zeitgeist of that era to include nicely drawn schematics and listings in the manuals, but it wasn’t a matter of course, either. Not for a company that big.

      Also interesting were the many different workarounds to extend the platform.
      The PC platform was a living example of an interim solution that lasted.

      The evolution of the internet or www/Gopher was similar, perhaps. There were standards, but they weren’t set into stone. The early years were full of experimentation.

      Now it’s all fully commercialized and uniform, sadly.
      The chaotic days of flashing GIF animations and very individualistic web page design will be missed.

      Or more precisely, the open mind towards crazy new ideas/concepts. I would love to see a post-smartphone and post-app era.

  3. SCSI was an awesome bus. The article only touches on the bare minimum of uses, but you could put pretty much any contemporary peripheral on a SCSI bus if you wanted to. Heck, I’ve got SCSI floppy drives somewhere, I had a SCSI document scanner (still have it somewhere), and I’ve seen SCSI to Ethernet/Token Ring and other more arcane networking technologies…

    I’m positive I’ve only touched on a very few examples and there are many more which will come back to my aged brain.

    1. You could have multiple computers attached to the same scsi device too. We shared a tape drive between our mirrored servers this way where I used to work. Not only that (with the right driver) you could do tcp/ip on the SCSI bus itself as a high speed network link.

      1. And now it’s the other way around: Earlier this week, I assisted a vendor with attaching an expansion shelf to one of [RedactedCo]’s storage appliances. It used two pairs of 100GbE QSFP twinax connectors as the shelf data interconnects.

  4. In 1996 I bought an HP Omnibook 800 notebook specifically because it had a SCSI port.
    A SCSI port on a 2.9 lb computer was faintly ridiculous because of the cable stiffness, but it worked very well for connecting to a desktop RAID.

    That was a wonderful little machine. I loved it. It came from HP in the years BC (before Carly), and they’ve never matched product quality since.

      1. SCSI: Small Computer System Interface. It’s still a standard in use today; USB 3 Gen 2 and later comply with the standard. It’s kind of a serial SCSI instead of parallel cable SCSI.

  5. SCSI was so much better than IDE/ATA. People harp on setting SCSI IDs and using termination as the reason it failed to win adoption. It was really poor support from M$ that caused its doom.

    Lots of SCSI adapters supported bus mastering. However, to work with SMARTDRV.EXE you had to use double buffering. This not only negated the advantages of bus mastering but introduced a performance penalty. Then with Windows 3.1, SCSI was not supported with 32-bit disk access, meaning you had to use 16-bit code to do disk access. Other OSs like Mac OS or OS/2 had no problems with SCSI.

    1. Why SCSI adoption failed?

      My understanding was that in datacenters SCSI was the majority. Was that not true?
      Did the makers of SCSI devices ever intend for it to be adopted more widely than that?

      It was at home and on workstations that SCSI failed to be adopted.
      Manufacturers have always marketed lesser tech to the home.

      SCSI drives and adapters were usually priced for the high end users in a datacenter.
      I’m not sure it’s necessary to look beyond that for a reason that IDE was king!

      I know I for one would have switched back in the day had it been priced comparably.

    2. I remember that OS/2 2.11 and 3.0 supported my Pro AudioSpectrum 16 soundcard and its Trantor SCSI controller.
      The attached CD-ROM drive worked flawlessly all the time. ^^

      “Then with Windows 3.1 SCSI was not supported with 32bit disk access meaning you had to use 16 bit code to do disk access.”

      That’s true for the out-of-box experience, yes.
      Windows 3.1x shipped with a WD1003 “Fast Disk” driver.

      Unfortunately, WD1003 was a controller standard from the 80s, when MFM/RLL HDDs with ST506 interface were still around.

      While ESDI, AT-Bus HDDs (IDE HDDs) were fully backwards compatible, the then new E-IDE or ATA-2 HDDs weren’t.
      They still speak the WD1003 language more or less, but their register behavior was slightly different in detail. DOS doesn’t care, but the Windows 3.1 HDD driver does disable itself for ATA-2, for safety reasons, because it thinks the HDD isn’t compatible.

      Luckily, a few third-party drivers existed at the time, like that Micro House driver (part of EZ-Drive Dynamic Disk Overlay). For SCSI, a few drivers existed as well, I believe. By Future Domain, for example. But maybe they checked the HDD model brand and didn’t work with unsupported drives from other manufacturers. Like those “free” backup or anti-virus programs for certain HDDs these days.

      http://files.mpoli.fi/hardware/HDD/OTHER/

    3. Another interesting feature of SCSI HDDs was support for TCQ.
      It allowed a form of multitasking for the HDD, so it could handle multiple requests. Technically, this would have been useful to a multi-tasking OS like OS/2.

      https://en.wikipedia.org/wiki/Tagged_Command_Queuing

      Well, except on the ISA bus maybe, it was quite useful.
      Thankfully, there were other buses available in the 386/486 era on the PC that were better suited for SCSI.
      EISA, MCA, VLB, PCI, Opti Local Bus etc..
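      A toy sketch of why command queuing helps (hypothetical LBA values; real drives also optimize on rotational position, not just seek distance):

```python
# With several requests outstanding (as TCQ allows), the drive can
# service them in positional order (an elevator sweep) instead of
# arrival order, cutting total head travel.

def seek_distance(start, order):
    """Total head travel to service requests in the given order."""
    total, pos = 0, start
    for lba in order:
        total += abs(lba - pos)
        pos = lba
    return total

requests = [900, 100, 850, 150]               # arrival order (made-up LBAs)
fifo  = seek_distance(0, requests)            # one-at-a-time, arrival order
swept = seek_distance(0, sorted(requests))    # queued and reordered

print(fifo, swept)   # the reordered sweep travels far less
```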

    4. Fun thing is that SCSI kind of won.

      IDE evolved into ATA, ATA added the ATAPI standard for CDROMs and other devices. ATAPI is SCSI carried over the ATA bus.

      Serial ATA is a new standard but largely based on the SCSI protocol once again. A good example of how this is the case is that you can connect SATA disks to a SAS controller, SAS being Serial Attached SCSI as used in servers etc.

      So in the end most of us are using SCSI daily and IDE is long dead and buried.

  6. I had a friend back in the 90’s with the hard-to-spell last name Skozelas(that’s not right – I think it was Greek). Everyone called him “Skuzzy”… of course, I always spelled it “SCSI”, being a BBS nerd.

  7. The Video Editing Rigs in the Library at Kent State University 2003-2005 were Dell Precision Workstation 650s, with Dual Xeon Processors, 10K RPM SCSI Drives, and Adobe CS2.

  8. Adaptec controllers pretty much became the standard for SCSI when I was in IT. I built a number of RAID 10 arrays using two controllers with seven drives on each controller.

  9. I haven’t WTFV’d yet, but I don’t see anyone in the comments talking about SCSI networking.

    Not a SCSI-to-Ethernet adapter, no, I mean IP-over-SCSI, to connect up to 8 machines in the same rack or room together, since that’s how many device addresses you could have on a bus. I remember talk of a 64-node Beowulf cluster, organized as an 8-by-8 grid with SCSI cables connecting the “rows” and “columns”.

    Of course sticking an IP layer in there was probably suboptimal performance-wise, but it allowed a lot of standard tools to be used.

  10. About SCSI devices, has anyone heard of ATG Gigadisc 14” Magneto-Optical cartridges? ATG was a French company, and they developed MO storage starting from 1GB up to 16GB (per disk or side, I’m unsure). They went bankrupt quite a while ago. I’m looking for a drive to read the very first iteration GM-1001 cartridges. If anybody has any contact that could help, you’re very welcome.

  11. SCSI is the USB of the 80s/90s for the rich people. It was super expensive. Optical disc drives actually use the SCSI protocol over ATAPI, and USB uses it with USB Attached SCSI.

    1. Rich people? I had 4 SCSI HDs, a SCSI cdrom, and scanner on a PC and wasn’t rich at that time. It was a bit expensive but if you knew where to shop there were bargains on even used equipment.

  12. I didn’t see mentioned… NCR (National Cash Register Corp.) teamed up with Shugart and Assoc. to develop an intelligent interface for hard drives in 1980. In 1981 they both convinced the X3T9 committee to adopt SCSI as a working document for an ANSI standard interface. In 1982 a subcommittee led by an NCR representative worked out the standards for the new interface naming it the Small Computer System Interface or SCSI. In 1982 in a design lab in Wichita, Kansas NCR engineers tasked with putting it on silicon produced a SCSI interface chip.

  13. I worked at Commodore on the Amiga, and was in charge of all the disk drivers for much of my time there (and qualifying drives).
    We used SCSI for most machines until the A4000/a600/a1200, which used ATA. I wrote the drivers for the A4090 and A4000T using an advanced NCR scsi chip, which had a rudimentary scsi-cpu, and could handle disconnect/reconnect without interrupting the main cpu. I also wrote the drivers for the A1200/a4000’s ATA interfaces, which basically made them look like SCSI devices. (This made later support for ATAPI CD-ROMs trivial.)
    The reason ATA won in PC’s at the time was simple: cost. The drives were cheaper (though some of that was just volume), and the interfaces were cheaper.

  14. ‘USB of the 80’s’ only if USB were patent locked down and any implementation would make any device using it orders of magnitude more expensive.

    Funnily enough we now have $300+ mice and $300+ keyboards, so we are sort of emulating how that would be.
