Remembering Seymour Cray

If you think of supercomputers, it is hard not to think of Seymour Cray. He built giant computers at Control Data Corporation and went on to build the famous Cray supercomputers. While those computers aren’t especially amazing today, for their time, they were modern marvels. [Asianometry] has a great history of Cray, starting with his work at ERA, which would, of course, eventually produce the computer known as the Univac 1103.

ERA was bought up by Remington Rand, which eventually became Sperry Rand. Due to internal conflict, some of the ERA staff left to form Control Data Corporation, and Cray went with them. The new company decided to focus on computers for simulation work, such as modeling nuclear tests.

To save money, the new company used out-of-spec transistors, pairing them so they’d work correctly. In 1960, the company delivered the CDC 1604, a million-dollar computer that ran at 200 kHz. It was the most powerful computer of its day. It was solid state with a 48-bit word. Core memory was 32K words (192 Kbytes). The company touted its “small size” (fits in a 20-foot by 20-foot room!).

Cray would eventually sour on CDC and founded Cray Research in 1972. Years later, Cray stepped down as CEO of Cray Research and, in 1989, founded the Cray Computer Corporation.

While early Cray designs were technically successful, growing technology allowed other companies to produce cheaper supercomputers. In addition, the need for supercomputers and how they were built was changing. Cray Computer Corporation went bankrupt in 1995. Cray Research continued without Cray at the helm, but attempts to access a broader market didn’t really work out.

Silicon Graphics bought Cray Research in 1996, selling part of the business to Sun. That was the same year Seymour Cray died in a traffic accident at age 71. By 2000, Cray Research had been sold again, this time to Tera Computer, which changed its name to Cray. However, they also had a rocky road in the supercomputer market. They sold some assets to Intel in 2012, and in 2019 the company was bought by Hewlett Packard Enterprise.

There is a lot of history in this video, and it would be amazing to see what Seymour Cray could have done with an unlimited budget and no business necessities.

Want to play with a Cray? Simulation is going to be easier than buying surplus. We’ve done our own biography of Mr. Cray, if you want some additional reading.

28 thoughts on “Remembering Seymour Cray”

    1. I was part of a group of educators given access to the LNL Y-MP for a while about the same time. It was a math/science project to bring high-end computing to classrooms. We ran ray tracing and climate models and more. The 64-bit OS and 3-terabit tape drive memory towers… well, OK, 30 years later my laptop can do all that. Cray led the way as far as I’m concerned.

    1. Tera-era Cray didn’t have a rock(y) road into the supercomputer market. They did very well over those almost 20 years, frequently placing #1 in the TOP500, making bank off the big data hype (and with retooled old hardware at that), and winning back a lot of sites that hadn’t owned a Cray since the ’80s.

      The 2012 transfer of the silicon IP team wasn’t due to hard times. It happened because Intel convinced Cray’s execs that the OPI fabric they would build into Xeons was the future of high-speed interconnects. Cray was convinced that its custom network fabric would no longer be competitive, and that it could make a few bucks getting rid of the IP and the people who understood it. After a couple of years, OPI fizzled out, and Cray ended up buying a networking ASIC company to build the Ethernet-based Slingshot network that’s used in their current lineup.

  1. Maybe the silliest rundown on Cray I’ve ever read… disrespectful, even. And yeah, I lived through it from the digital signal processing side since the ’70s (real-time supercomputing).

  2. Our Boy Scout troop visited NCAR in Boulder, CO when my son was younger. They had a retired Cray 1 in the lobby. My wife thought it was place to sit and when she did one of the lower panels fell off. I gave her a hard time about breaking a multi-million dollar supercomputer.

  3. Early ’70s, I was a Statistics and Operations Research major at the University of Texas at Austin. I spent many nights and hours running optimization models on the CDC 6600 there. It was a happy day when I was put in the early adopter group for the MNF Fortran compiler. The 6600 just ate it up. Fond memories.

  4. We had a CDC 1604-A at the Texas A&M University Chemistry department. Cornered the market on 2N1711As, you betcha! It was surplus from the Navy. They used 3 of them for tracking submarines in the Atlantic, according to lore. Amazingly advanced for its age. Tape OS, using CDC 606 tape drives. Fortran-63 for the main language. Talked to CDC 160-A peripheral processors over a DMA data channel off the tape drive controller, for remote entry. Still got a lot of the manuals. Wonderful beastie. Couldn’t add; it used a borrow-pyramid subtraction scheme instead. Took a lot of beer to understand that one! Had a memory problem crop up. Split the core stack and found an unsoldered connection, thank God! This was in 1974! We sure took his name in vain many times, but that box was amazing! The first job when we got it up was the “Girl on the Barstool” printer program. Did a lot of crystallography processing. The A register had a speaker across the first 3 bits. You could listen and tell where it was in the program. Good for telling the wonks when they were about to get a printout pass. The OS had several musical prompts to tell us when we needed to do things. Got surplused and scrapped. Truly a tragedy.
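The “couldn’t add” remark checks out in spirit: the 1604 was a ones’-complement machine built around subtraction. The comment doesn’t spell out the actual borrow-pyramid logic, so this is only a toy sketch (all names here are mine, not CDC’s) of how addition can be synthesized on a subtract-only datapath, using the ones’-complement identity a + b = a − ~b:

```python
MASK = (1 << 48) - 1  # 48-bit word, as on the CDC 1604

def oc_sub(a, b):
    """Ones'-complement subtract: add the bitwise complement of b,
    then fold the end-around carry back into the low bit."""
    s = a + (MASK ^ b)
    return (s & MASK) + (s >> 48)

def oc_add(a, b):
    """Addition on a subtract-only machine: a + b == a - (~b)."""
    return oc_sub(a, MASK ^ b)

print(oc_add(5, 7))          # 12
print(oc_sub(12, 7))         # 5
print(oc_add(10, MASK ^ 3))  # 7, i.e. 10 + (-3)
```

One quirk this sketch shares with real ones’-complement hardware: subtracting a number from itself yields “negative zero” (all ones) rather than zero.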

  5. I always wonder how industry/domain leaders can go bankrupt/become irrelevant. Sears, once the undisputed king of mail-order-everything could have become what Amazon is now. Kodak and Polaroid, names once synonymous with photography, could have been titans in the digital photography realm, but are now largely irrelevant and forgotten. DEC and Cray, creators of ground-breaking and historical computer equipment, both now footnotes in time. The list goes on.

    I suppose reasons vary, depending on specifics. However, it seems that in each instance the story starts with a visionary who creates something that did not exist before, followed by business/MBA types who ultimately drive it into the ground.

    I’m reminded of complex math functions with local minima and maxima. Business types seek to “maximize” profits, but are generally too narrow-minded or risk averse to bother looking past the nearest local maximum on the graph. So a business initially grows and thrives, but ultimately remains anchored to one segment of the graph, eventually leading to atrophy.

    I would not describe myself as a Musk fan-boy, but it’s evident he’s not afraid of incurring huge losses on blown-up rockets, so long as the lessons learned can slide his position up the graph to the next-higher maximum. Fearlessness is required for leadership.

    That said, based upon numerous historical examples, I would predict that after Musk retires, SpaceX, like the other companies I mentioned, will ossify in the hands of MBA-types. Like the other examples (and for the same reasons) it will first cede industry leadership, then become second-tier, and may eventually cease to exist altogether. Wash-rinse-repeat.

    1. Nice point.
      I read somewhere, in an article about electric cars (the one about VW buying Rivian’s IP for billions), that new industries are not constrained by their own legacy products and can therefore innovate better than established companies. Specifically, VW still has to make all its ICE cars, while Rivian can focus exclusively on its solitary task.
      I’m sure many have analyzed the rise and fall of companies. At some point, I guess, the choice becomes: abandon your aging, no-longer-bleeding-edge product and roll the dice again, incurring huge losses you hope to make up (and also firing the people who worked for you the last 20-30 years), or simply keep doing what you’re doing and go out in a long, slow, petering death, with a whimper.

    2. That’s basically what the “Innovator’s Dilemma” thing is all about. Companies that have a highly successful business rarely invest in developing alternatives to their core products. Which basically means that other companies develop the alternatives instead, and the market shifts.

      If you’re making big money on film cameras (and the film!), why would you spend a ton of R&D on digital photography technology that will ruin your core business and enable new competitors? Having seen what happened to Kodak, it’s easy to see why… but at the time, to the people on the inside, it wasn’t.

      With decades of examples to learn from, you’d think that modern business leaders would have figured this out. And yet Tesla was able to make electric cars viable years before Ford / GM / Chrysler. Perplexity made AI-based “search” useful months ago, and Google is still just embarrassing itself.

      1. “Which basically means that other companies develop the alternatives instead”

        Your company should always be willing to undermine your existing cash cow with a new product, because if you don’t, your competitors surely will.

    3. The amount of noise you use matters, and it’s better if you’re smart enough to figure out where there’s a maxima/minima you want to reach and tunnel to it, or find a catalyst to reduce the activation energy required to get over the hump and into the next slot. Just being too aggressively different can kick you out of the existing slot without letting you fall into a better one, and makes it harder to even fall back into the same one after you do what you can to try and reach any others. If you imagine a bouncy ball vibrating all over the graph, it has to bounce around some to get out of its original spot, but it can’t go too fast or it won’t end up in any of the minima, because they’re not deep enough to catch it otherwise.

    1. Early transistors were like cookies: some of a batch were off more than others, and two or more grades were sorted out. Aerospace/mil dollars got the best of the picks. Using a balanced design, Cray could use the cheaper, or even “floor sweepings,” grade.

    2. I am with you on that; perhaps fine to play with, but when they got to any kind of production they would need a larger supply. Also, that was still the era of “Made in Japan” being a stigma, and to be fair, a lot of junk was made in Japan for export. I suspect a lot of substandard parts were used in consumer goods where tight specs were not a big deal. The thing is, if you are using substandard parts, there needs to be a giant market for the real parts to generate that many rejects, and manufacturers try very hard to tune the process to minimize off-spec parts.
      There is also an old mantra in the electronics assembly industry I was involved in for many, many years: a defective or wrong part caught “on the line” (that is, right after board stuffing) costs 2 cents to fix, if caught on the spot. That jumps up to 2 bucks if optical QC catches it pre-soldering. It is still a bit of a pain, as the defect needs to be labeled and the board routed back to the line making them. The factory, contrary to popular belief, is not a mother lode of parts; it is more like a restaurant that has specific parts for specific jobs. If that line had been “pulled,” the board would have to go to someone in rework, who would go and “pull” the job out of the warehouse (usually a few tubs of parts and a folder with all the details) and get the correct part. On some of the jobs (aerospace, military, and some industrial things) they were very picky about using the exact part. If an Allen-Bradley resistor was called out, that had better be in the slot. That same defect, if it was soldered, became a 20-dollar issue if it was caught in optical or automated circuit-testing QA. That same issue became a 200-dollar issue if it wound up on one of the hand test/troubleshoot benches: not only expensive because of the humans and time involved, but it got kicked back, so it had to pass all the other QA again as well. And God only knows how expensive a field failure would be. These places are factories; the idea is to keep the lines flowing all the time.
      One thing that used to really irk me: I was good with the universal parts placement machines. They got the least expensive option set for them, so we would have to send a new board off to be laser scanned, and they would send us back a paper tape for our machine. The results were always off. I would spend half a night working on little tweaks and have the machines hitting as close to 100% as they could, then come in the next day to see the PM had stuck a new job on the thing. “Oh, we only had a few thousand of them to do, so we just wanted to bang them out.” That was about what it took to get the process well tuned. It was even worse when it was just a few hundred or less. They had both hand-stuffing lines and these things that were like pantographs: they took the reeled-up parts and had a bunch of paper arrows with sequential numbers on them, and you stuck a blank board on the pins on one side, put the pointer of the pantograph in hole 1, pressed the button on the pointer, and poof, it was stuffed. And the girls who ran that line were crazy fast, like it was a video game. That was great for stuffing a few hundred boards.

      But I really digress here. The bottom line is, if you are using substandard pieces and they fail, especially once the end user has the thing, that is a very, very expensive fix.

      Again digressing, but there used to be places like Poly Paks that would sell infamous assortments of things like diodes (including some “monodes”) and transistors, like the parts sweepings from some big factory. 300 diodes for a buck, untested… OK to play with, but I would not want to use them in production or anything critical.

  6. Growing up near Chippewa Falls, Wisconsin in the 90’s, Cray was a well-recognized name and left quite a legacy for tech manufacturing. Thanks for the trip down memory lane!

  7. I remember, back in the late seventies, our research group had some Cray time up at NCAR in Boulder. We would process our balloon-borne IR FFT spectrometer data (million-point FFTs). You would submit your data (ours was on mag tape), then watch the input queue monitor in the user computer room as an hour ticked by while the data was converted from the CDC computer’s format to the Cray’s format. The FFT itself then took about 256 milliseconds to run on the Cray, followed by another hour to get the results back from the CDC computer. The other option was doing the FFT on our lab’s Data General NOVA minicomputer, which had hardware vector processing. The issue was that it could only do 8K FFTs, so we had to break the data into 8K chunks, do an FFT on each, and then restitch the FFTs back together, which was about a 2-day process…
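Restitching small FFTs into one big transform is a standard technique, not just a workaround. The comment doesn’t describe the NOVA’s actual stitching scheme, but one classic way it can work is the radix-2 decimation-in-time identity, where two half-size transforms are merged with a single pass of “twiddle factor” multiplies; a hedged NumPy sketch:

```python
import numpy as np

def fft_from_halves(x):
    """N-point FFT assembled from two N/2-point FFTs (radix-2
    decimation-in-time):  X[k]       = E[k] + W^k * O[k]
                          X[k + N/2] = E[k] - W^k * O[k]"""
    N = len(x)
    E = np.fft.fft(x[0::2])  # transform of even-indexed samples
    O = np.fft.fft(x[1::2])  # transform of odd-indexed samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)  # twiddle factors
    return np.concatenate([E + W * O, E - W * O])

# Merge two 4K-point transforms into one 8K-point result
x = np.random.default_rng(1).standard_normal(8192)
assert np.allclose(fft_from_halves(x), np.fft.fft(x))
```

Applied recursively, the same merge step combines any power-of-two number of small chunk transforms into one long transform, which is roughly the kind of restitching the commenter describes.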

  8. I used a Cray Y-MP as an intern working for the Atmospheric Prediction Branch of the US Air Force back in 1992. I was running a weather model that took 18 hours to run and modeled 36 hours in the future. It cost $1500/hour for computer time. At the time it was one of the fastest computers in the world. The funny thing was that we got a Sun Ultra 10 (~$10k) that could run the model in 24 hours. Since we were just tuning the model rather than using it in production it ended up making a lot more sense to use the comparably tiny Sun box. I wonder what it would be like to run that model these days?
