Seymour Cray, Father Of The Supercomputer

Somewhere in the recesses of my memory there lives a small photograph, from one of the many magazines that fed my young interests in science and electronics – it was probably Popular Science. In my mind I see a man standing before a large machine. The man looks awkward; he clearly didn’t want to pose for the magazine photographer. The machine behind him was an amazing computer, its insides a riot of wires all of the same color; the accompanying text told me each piece was cut to a precise length so that signals could be synchronized to arrive at their destinations at exactly the right time.

My young mind was agog that a machine could be so precisely timed that a few centimeters of wire could make a difference to a signal propagating at nearly the speed of light. As a result, I never forgot the name of the man in the photo – Seymour Cray, the creator of the supercomputer. The machine was his iconic Cray-1, for years the fastest scientific computer in the world, a machine that would go on to design nuclear weapons, model crashes to make cars safer, and help predict the weather.

Very few people get to have their name attached so firmly to a product, let alone have it become a registered trademark. The name Cray became synonymous with performance computing, but Seymour Cray contributed so much more to the computing industry than just the company that bears his name that it’s worth taking a look at his life, and how his machines created the future.

Continue reading “Seymour Cray, Father Of The Supercomputer”

ILLIAC Was HAL 9000’s Granddaddy

Science fiction is usually rooted in fact, and it’s fun to look at an iconic computer like HAL 9000 and trace the origins of this artificial intelligence gone wrong. You might be surprised to find that you can trace HAL’s origins to a computer built for the US Army in 1952.

If you are a fan of the novel and movie 2001: A Space Odyssey, you may recall that the HAL 9000 computer was “born” in Urbana, Illinois. Why pick such an odd location? Urbana is hardly a household name unless you know central Illinois well. But Urbana has a place in real-life computer history. As the home of the University of Illinois at Urbana–Champaign, it produced a line of computers known as ILLIAC, several of which had historical significance. In particular, the ILLIAC IV was a dream of a supercomputer that, while not entirely successful, pointed the way for later supercomputers. Sometimes you learn more from failure than from success, and at least one of the ILLIAC series is the poster child for that.

The Urbana story starts in the early 1950s. This was a time when the 1945 report “First Draft of a Report on the EDVAC” was sweeping through the country from its University of Pennsylvania origins. The report outlined the design of EDVAC, the Army computer that succeeded ENIAC. In it, von Neumann described a stored-program computer: one that keeps its instructions in the same memory as its data.

Continue reading “ILLIAC Was HAL 9000’s Granddaddy”

Everyone Needs A Personal Supercomputer

When you think of supercomputers, visions of big boxes and blinkenlights filling server rooms immediately appear. Since the 90s or thereabouts, these supercomputers have been clusters of computers, all working together on a single problem. For the last twenty years, people have been building their own ‘supercomputers’ in their homes, and now we have cheap ARM single board computers to play with. What does this mean? Personal supercomputers. That’s what [Jason Gullickson] is building for his entry to the Hackaday Prize.

The goal of [Jason]’s project isn’t to break into the Top 500, and it’s doubtful it’ll be more powerful than a sufficiently modern desktop workstation. The goal for this project is to give anyone a system that has the same architecture as a large-scale cluster to facilitate learning about high-performance applications. It also has a front panel covered in LEDs.

The design of this system is built around the PINE64 SOPINE module, basically a 64-bit quad-core CPU stuck onto a board that fits in a SODIMM socket. If that sounds like the Raspberry Pi Compute Module, you get a cookie. Unlike the Pi Compute Module, though, the SOPINE has something its makers call a ‘Clusterboard’: eight vertical SODIMM sockets tied together with a single controller, power supply, and Ethernet jack. Yes, it’s a board meant for cluster computing.

To this, [Jason] is adding his own twist on a standard, off-the-shelf breakout board. This Clusterboard is mounted to a beautiful aluminum enclosure, and the front panel is loaded up with a whole bunch of almost vintage-looking red LEDs. These LEDs indicate the current load on each bit of the cluster, providing immediate visual feedback on how those computations are going. With the right art — perhaps something in harvest gold, brown, and avocado — this supercomputer would look like it’s right out of the era of beautiful computers. At any rate, it’s a great entry for the Hackaday Prize.
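Driving those load LEDs is a fun little problem in its own right. [Jason]’s actual implementation isn’t detailed here, but a sketch of the idea could be as simple as the following, assuming each SOPINE node runs Linux (so the load average is sitting in /proc/loadavg) and that the panel hangs off some writable interface; the core count, LED count, and console output are placeholder assumptions, not details from the project:

```cpp
#include <cstdio>
#include <fstream>

// Hypothetical sketch: scale this node's 1-minute load average to an
// 8-step LED bar. /proc/loadavg is standard Linux; the console output
// stands in for whatever the front panel really uses (GPIO, SPI, etc.).
int main() {
    std::ifstream loadavg("/proc/loadavg");
    double load1 = 0.0;
    loadavg >> load1;                 // first field: 1-minute load average

    const int cores = 4;              // SOPINE's quad-core A64 (assumption)
    const int steps = 8;              // LEDs per node on the panel (assumption)
    int lit = static_cast<int>(load1 / cores * steps);
    if (lit > steps) lit = steps;

    for (int i = 0; i < steps; i++)   // a real build would push a bitmask to GPIO
        putchar(i < lit ? '*' : '.');
    putchar('\n');
    return 0;
}
```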

CUDA Is Like Owning A Supercomputer

The word supercomputer gets thrown around quite a bit. The original Cray-1, for example, operated at about 150 MIPS and had about eight megabytes of memory. A modern Intel i7 CPU can hit almost 250,000 MIPS and is unlikely to have less than eight gigabytes of memory, and probably has quite a bit more. Sure, MIPS isn’t a great performance number, but clearly, a top-end PC is way more powerful than the old Cray. The problem is, it’s never enough.

Today’s computers have to process huge numbers of pixels, video and audio streams, neural networks, and encryption with long keys. Because of this, video cards have become what in the old days would have been called vector processors: they are optimized to perform the same operation on many data items in parallel. There are a few standards for putting a video card’s processing power to work on general computation, and today I’m going to show you how simple it is to use CUDA, NVIDIA’s proprietary library for the task. You can also use OpenCL, which works with many different kinds of hardware, but it is a bit more verbose.
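Before you click through, here’s a taste of the programming model. This isn’t the article’s code, just a minimal sketch of the canonical CUDA “hello world”: a SAXPY kernel, in which every GPU thread handles a single element of the vectors.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One GPU thread per element -- the vector-processor idea in action.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;               // a million elements
    size_t bytes = n * sizeof(float);

    float *x, *y;
    cudaMallocManaged(&x, bytes);        // unified memory: visible to CPU and GPU
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);   // one launch does all the work
    cudaDeviceSynchronize();             // wait for the GPU to finish

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Compile it with nvcc and the million multiply-adds happen in a single kernel launch; the equivalent CPU loop would grind through the elements a few at a time at best.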
Continue reading “CUDA Is Like Owning A Supercomputer”

Just In Time For The Holidays: Give The Gift Of Cray

The name Cray, as in [Seymour Cray], is synonymous with supercomputing. If you hurry, you can bid on a Cray J90/J916 on eBay. You might want to think about where to put it, though. It is mounted on a trailer, requires 480V power, and shipping is $3,000!

First introduced in 1994, the J90 was an “entry level” machine. This particular machine supported up to 16 CPUs (each CPU was actually two chips) running at a blazing 100 MHz. The memory system was more impressive, achieving 48 GB/s.

The Cray T90 was much faster (and more expensive), but neither machine approaches the performance of a typical PC’s graphics card these days. Even your phone may have more raw computing power, depending on how you choose to measure it. Don’t fear, though. Cray still makes supercomputers that can eat your phone for lunch.

Still, at the time, this was big iron. The I/O system used SPARC processors that would have powered entire workstations in that era. The eBay listing says it might need a little work (we weren’t clear whether the seller meant in general or just the cooling system), but you can assume this is a fixer-upper. Apparently, the Retro-Computing Society of Rhode Island restored a similar beast, so it can be done.

If your holiday budget doesn’t have room for a real supercomputer, here’s one that is 1/10 the size and much less expensive. Or, you could just pretend.

Designing A High Performance Parallel Personal Cluster

Kristina Kapanova is a PhD student at the Bulgarian Academy of Sciences. Her research has her simulating quantum effects in semiconductor devices, a field of study that demands a supercomputer for its billions of calculations. The academy had a proper supercomputer and was getting a new one, but for a while, Kristina and her fellow ramen-eating colleagues were without a big box of computing. To solve this problem, Kristina built her own supercomputer from off-the-shelf ARM boards.

Continue reading “Designing A High Performance Parallel Personal Cluster”

Moore’s Law Of Raspberry Pi Clusters

[James J. Guthrie] just published a rather formal announcement that his 4-node Raspberry Pi cluster greatly outperforms a 64-node version. The differentiating factor, of course, is the hardware generation: [James] is using the Raspberry Pi 2, while the larger cluster used the original Model B.

We covered that original build almost three years ago: a cluster called the Iridis-Pi supercomputer. The difference is a 700 MHz single core versus a 900 MHz quad-core with double the RAM. This let [James] benchmark his four-node wonder at 3.048 gigaflops. A bit fuzzy about what a gigaflops is, exactly? So were we… it’s a billion floating-point operations per second, which doesn’t mean much to your human brain; it’s a ruler with which you can take one type of measurement. That figure is triple the performance at 1/16th the number of nodes. The cost difference is staggering too, with the Iridis-Pi ringing in at around £2500 and the lightweight four-node build at just £120. That’s more than an order of magnitude.
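For fun, you can put the price-performance claim into numbers using nothing but the figures quoted above; the Iridis-Pi’s throughput is backed out of the “triple the performance” claim rather than quoted directly, so treat it as an estimate. The quick arithmetic below works out to roughly sixty times as many gigaflops per pound:

```cpp
#include <cstdio>

int main() {
    // Figures from the article; the Iridis-Pi number is inferred
    // from "triple the performance", not independently measured.
    double new_gflops = 3.048, new_cost = 120.0;  // 4-node Raspberry Pi 2 build
    double old_gflops = new_gflops / 3.0;         // ~1.0 GFLOPS, 64-node Model B
    double old_cost   = 2500.0;

    printf("cost ratio: %.1fx\n", old_cost / new_cost);  // ~20.8x
    printf("GFLOPS per pound: old %.5f, new %.4f, improvement %.0fx\n",
           old_gflops / old_cost,                        // ~0.00041
           new_gflops / new_cost,                        // ~0.0254
           (new_gflops / new_cost) / (old_gflops / old_cost));  // ~62x
    return 0;
}
```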

Look, there’s nothing fancy to see in [James’] project announcement. Yet. But it seems somewhat monumental to stand back and think that a $35 computer aimed at education is being used to build clusters for crunching PhD-level research projects.