The Apple Silicon That Never Was

Over Apple’s decades-long history, they have been quick to adopt new processor technology when they see an opportunity. Their switch from PowerPC to Intel in the mid-2000s made Apple machines more accessible to the wider PC world, which was already accustomed to x86 processors, and a decade earlier they had moved from the Motorola 68000 family to take advantage of the scalability, performance-per-watt, and raw speed of the PowerPC platform. They’ve recently made the switch to their own in-house silicon, but, as reported by [The Chip Letter], this wasn’t the first time they attempted to design their own chips from the ground up rather than using parts from other companies like Motorola or Intel.

In the mid-1980s, Apple was already looking to move away from the Motorola 68000 for performance reasons, and part of the reason the switch took so long is that in the intervening years they launched Project Aquarius, an attempt to design their own silicon from scratch. As the article linked above explains, they needed a huge amount of computing power to pull it off and purchased a Cray X-MP/48 supercomputer to help, assigning a large number of engineers and designers to see the project through to the finish. A critical error was made, though, when they decided to build the design around a stack architecture rather than a RISC one. The team eventually switched to a RISC design, but the project still struggled to ever get a working prototype. In the end the whole effort was scrapped and the company moved on to PowerPC, but not without a tremendous loss of time and money.

Interestingly enough, another team was designing its own architecture at about the same time and ended up creating what would eventually become the modern-day ARM architecture, which Apple was involved with and currently licenses to build their M1 and M2 chips as well as their mobile processors. It was only by accident, then, that Apple didn’t settle on a RISC design in time for their personal computers. The computing world might look a lot different today had Apple not languished in the early 00s as the ultimate result of their failure to develop a competitive system in the mid-80s. Apple’s distance from PowerPC now doesn’t mean that architecture has been completely abandoned, though.

Thanks to [Stephen] for the tip!

A History Of NASA Supercomputers, Among Others

The History Guy on YouTube has posted an interesting video on the history of supercomputers, with a specific focus on their use by NASA for computational fluid dynamics (CFD) modeling of aeronautical assemblies.

The aero designers of the day were quickly finding out the limitations of the wind tunnel testing approach, especially for so-called transonic flow conditions. These occur when an object moving through a fluid (which is how air can be modeled) produces regions of supersonic flow mixed in with subsonic flow, making for additional drag that severely impacts aircraft performance. Not accounting for these effects is not an option, hence the great industry interest in CFD modeling. But the governing equations (usually based around the Navier-Stokes system) are non-linear and extremely computationally intensive.
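To get a feel for why those solvers devour floating-point operations, here’s a minimal sketch of our own (not anything NASA actually ran): the 1D inviscid Burgers’ equation, a classic toy stand-in for the non-linear convection term in Navier-Stokes, stepped with an explicit upwind finite-difference scheme. Every grid point gets updated every time step, so the cost grows with the resolution in each dimension times the number of steps, which is exactly what pushed the aero world toward supercomputers once the grids went 3D.

```python
import numpy as np

# Toy 1D inviscid Burgers' equation: du/dt + u * du/dx = 0.
# Explicit upwind finite differences, a stand-in for the non-linear
# convection term that makes real Navier-Stokes solvers so expensive.
nx, nt = 200, 500                 # grid points, time steps
dx = 2.0 / (nx - 1)
dt = 0.4 * dx                     # keeps the CFL number below 1 for stability

x = np.linspace(0.0, 2.0, nx)
u = np.where((x >= 0.5) & (x <= 1.0), 2.0, 1.0)   # initial square pulse

for _ in range(nt):
    un = u.copy()
    # each point depends on itself (non-linear!) and its upwind neighbour
    u[1:] = un[1:] - un[1:] * dt / dx * (un[1:] - un[:-1])
    u[0] = 1.0                    # fixed inflow boundary

print("final peak velocity:", u.max())
```

Scale the same idea up to three dimensions, add viscosity and a turbulence model, and multiply by millions of cells, and the appetite for a Cray becomes obvious.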

Obviously, a certain Mr. Cray is a prominent player in this story: as the story goes, he exhausted the financial tolerance of his employer, CDC, subsequently formed Cray Research Inc., and the rest is (an interesting) history. Many Cray machines were instrumental in the development of the space program, and now adorn computing museums the world over. You simply haven’t lived until you’ve sipped your weak lemon drink whilst sitting on the ‘bench’ around an early Cray machine.

You see, supercomputers are a different beast from the machines mere mortals have access to, or at least the earlier ones were. The focus is on pure performance, ideally for floating-point computation, with cost far less of a concern than reaching the next computational milestone. The Cray-1, for example, was a 64-bit machine capable of 80 MIPS of scalar performance (whilst eating over 100 kW of juice), along with some very limited parallel processing ability.

While this was immensely faster than anything else available at the time, the modern approach to supercomputing is less about fancy processor design and more about massive parallelism across huge numbers of existing chips, with lots of fast local storage mixed in. Every hacker out there should experience these old machines if they can, because the tricks they used and the lengths the designers went to in order to squeeze out every ounce of processing grunt can be a real eye-opener.

Want to see what happens when you really push the boat out and use a whole wafer for parallel computation? Check out Cerebras. If your needs are somewhat less, but dabbling in parallel computing gets you all pumped, you could build a small array out of Pine64s. Finally, the story wouldn’t be complete without talking about the life and sad early demise of Seymour Cray.
Continue reading “A History Of NASA Supercomputers, Among Others”

The 13.5 Million Core Computer

Having a dual- or quad-core CPU is not very exotic these days and CPUs with 12 or even 16 cores aren’t that rare. The Andromeda from Cerebras is a supercomputer with 13.5 million cores. The company claims it is one of the largest AI supercomputers ever built (but not the largest) and can perform 120 Petaflops of “dense compute.”

We aren’t sure about the methodology, but they also claim more than one exaflop of “AI computing.” The computer has a fabric backplane that can handle 96.8 terabits per second between nodes. According to a post on Extreme Tech, the core technology is a 3-plane wafer processor, WSE-2. One plane is for communications, one holds 40 GB of static RAM, and the math plane has 850,000 independent cores and 3.4 million floating point units.

The data is sent to the cores and collected by a bank of 64-core AMD EPYC 3 processors. Andromeda is optimized to handle sparse matrix computations. The company claims that the performance scales “almost linearly.” That is, as you double the number of cores used, you roughly halve the total run time.
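Near-linear scaling is the best case. As a quick illustration (with made-up numbers, not Cerebras benchmarks), Amdahl’s law shows how even a tiny fraction of work that can’t be parallelized erodes the ideal speedup as the core count climbs:

```python
# Amdahl's law: speedup with n workers when a fraction of the job
# stays serial. Numbers here are purely illustrative.
def speedup(n_workers, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

for n in (2, 8, 64, 1024):
    real = speedup(n, serial_fraction=0.01)   # assume 1% serial work
    print(f"{n:5d} workers: ideal {n:6.1f}x, with 1% serial {real:6.1f}x")
```

Sparse, highly regular AI workloads of the sort Andromeda targets keep that serial fraction very small, which is how “almost linear” stays honest.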

The machine is available for remote use and cost about $35 million to build. Since it uses 500 kW at peak run times, it isn’t free to operate, either. Extreme Tech notes that the Frontier computer at Oak Ridge National Labs is both larger and more precise, but it cost $600 million, so you’d expect it to be more capable.

Most homebrew “supercomputers” we see are more for learning how to work with clusters than trying to hit this sort of performance. Of course, if you have a modern graphics card, OpenCL and CUDA will let you do some of this, too, but on a much smaller scale.
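If you just want the flavor of that divide-the-work idea without any special hardware, here’s a minimal sketch using Python’s standard multiprocessing module (not CUDA or OpenCL, just the same split-the-problem pattern in miniature):

```python
from multiprocessing import Pool

# Split a big dot product across worker processes: the same
# divide-the-work pattern a cluster or GPU uses, just in miniature.
def partial_dot(chunk):
    a, b = chunk
    return sum(x * y for x, y in zip(a, b))

if __name__ == "__main__":
    n, workers = 1_000_000, 4
    a = list(range(n))
    b = list(range(n))
    step = n // workers
    chunks = [(a[i:i + step], b[i:i + step]) for i in range(0, n, step)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_dot, chunks))
    print("dot product:", total)
```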

The Descendants Of Ancient Computers

Building computers from discrete components is a fairly common hobby project, but until integrated circuits came on the scene it was the only way to build a computer. If you’re living in modern times, however, you can get a computer like this running easily enough, but if you want to dive deep into high performance you’ll need to understand how those components work on a fundamental level.

[Tim] and [Yann] have been working on replicating circuitry found in the CDC 6600, the Seymour Cray-designed machine from the 1960s that is widely considered the first supercomputer. Part of what made this computer remarkable was its insane (for the time) clock speed of 10 MHz. This was achieved by using bipolar junction transistors (BJTs) that were capable of switching much more quickly than typical transistors, and by making sure that the supporting network of resistors and capacitors was tuned to get everything working as efficiently as possible.

The duo found that not only are the BJTs used in the original Cray supercomputer long out of production, but the successors to those transistors are also out of production. Luckily they were able to find one that meets their needs, but it doesn’t seem like there is much demand for a BJT with these characteristics anymore.

[Tim] also posted an interesting discussion about some other methods of speeding up circuitry like this, namely by using reach-through capacitors and Baker clamps. It’s worth a read in its own right, but if you want to see some highlights be sure to check out this 16-bit computer built from individual transistors.


Hackaday Links: March 29, 2020

It turns out that whacking busted things to fix them works as well on Mars as it does on Earth, as NASA managed to fix its wonky “mole” with a little help from the InSight lander’s robotic arm. Calling it “percussive maintenance” is perhaps a touch overwrought; as we explained last week, NASA prepped carefully for this last-ditch effort to salvage the HP³ experiment, and it was really more of a gentle nudge than a solid smack with the spacecraft’s backhoe bucket. From the before and after pictures, it still looks like the mole is a little off-kilter, and there was talk that the shovel fix was only the first step in a more involved repair. We’ll keep an ear open for more details — this kind of stuff is fascinating, and beats the news from Earth these days by a long shot.

Of course, the COVID-19 pandemic news isn’t all bad. Yes, the death toll is rising, the number of cases is still growing exponentially, and billions of people are living in fear and isolation. But ironically, we’re getting good at community again, and the hacker community is no exception. People really want to pitch in and do something useful, and we’ve put together some resources to help. Check out our Hackaday How You Can Help spreadsheet, a comprehensive list of which efforts are currently looking for help, plus what’s out there in terms of Discord and Slack channels, lists of materials you might need if you choose to volunteer to build something, and even a list of recent COVID-19 Hackaday articles if you need inspiration. You’ll also want to check out our calendar of free events and classes, which might be a great way to use the isolation time to better your lot.

Individual hackers aren’t the only ones pitching in, of course. Many of the companies in the hacker and maker space are doing what they can to help, too. Ponoko is offering heavy discounts for hardware startups to help them survive the current economic pinch. They’ve also enlisted other companies, like Adafruit and PCBWay, to join with them in offering similar breaks to certain customers.

More good news from the fight against COVID-19: Folding@Home, the distributed computing network that is currently working on folding models of many of the SARS-CoV-2 virus proteins, has broken the exaFLOP barrier and is now the most powerful computer ever built. True, not every core is active at any given time, but the 4.6 million cores and 400,000-plus GPUs in the network pushed it well past petaFLOP-range machines like IBM’s Summit, until recently the most powerful supercomputer in the world. Also good news is that Team Hackaday is forming a large chunk of the soul of this new machine, with 3,900 users and almost a million work units completed. Got an old machine around? Read Mike Szczys’ article on getting started and join Team Hackaday.

And finally, just because we all need a little joy in our lives right now, and because many of you are going through sports withdrawal, we present what could prove to be the new spectator sports sensation: marble racing. Longtime readers will no doubt recognize the mad genius of Martin and his Marble Machine X, the magnificent marble-dropping music machine that’s intended as a follow-up to the original Marble Machine. It’s also a great racetrack, and Martin does an amazing job doing both the color and turn-by-turn commentary in the mock race. It’s hugely entertaining, and a great tour of the 15,000-piece contraption. And when you’re done with the race, it’s nice to go back to listen to the original Marble Machine tune — it’s a happy little song for these trying times.

The New Xbox: Just How Fast Is 12 TeraFLOPS?

Microsoft’s new Xbox Series X, formerly known as Project Scarlett, is slated for release in the holiday period of 2020. Like any new console release, it promises better graphics, more immersive gameplay, and all manner of other superlatives in the press releases. In a sharp change from previous generations, however, suddenly everybody is talking about FLOPS. Let’s dive in and explore what this means, and what bearing it has on performance.
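As a quick back-of-the-envelope check, a GPU’s theoretical peak is just its shader ALU count times its clock times two, the two because a fused multiply-add counts as a pair of floating-point operations. The Series X figures below come from Microsoft’s published GPU specs rather than from this article, so treat the sketch as illustrative:

```python
# Theoretical peak FP32 throughput: ALUs x clock x 2 (one FMA = 2 FLOPs).
# Series X GPU figures are Microsoft's published specs, quoted from memory;
# treat them as illustrative rather than gospel.
shader_alus = 52 * 64             # 52 compute units, 64 ALUs each
clock_hz = 1.825e9                # GPU clock in Hz
flops_per_alu_per_cycle = 2       # one fused multiply-add per cycle

peak_flops = shader_alus * clock_hz * flops_per_alu_per_cycle
print(f"{peak_flops / 1e12:.2f} TFLOPS")   # roughly 12.15 TFLOPS
```

That peak assumes every ALU retires a fused multiply-add every single cycle, which real workloads never manage, so the marketing number is very much a ceiling rather than a promise.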

Continue reading “The New Xbox: Just How Fast Is 12 TeraFLOPS?”

Add A Bit Of Soviet-Era Super-Computing To Your FPGA

The MESM-6 project is focused on bringing the 1960s Soviet BESM-6 computer into the modern age of FPGAs and HDLs. At the moment the team behind this preservation effort consists of [Evgeniy Khaluev], [Serge Vakulenko] and [Leo Broukhis], who are documenting their efforts on the Russian-language project page.

The BESM-6 (in Russian: БЭСМ-6, ‘Bolshaya Elektronno-Schetnaya Mashina’ or ‘large electronic computing machine’) was a high-performance Soviet supercomputer that was first launched in 1968 and remained in production for the next 19 years. Its system clock ran at 9 MHz, and it was built from an astounding number of discrete components, including 60,000 transistors and 170,000 diodes, with the ability to address 192 kB of memory in total. Of the 355 built, a few survive to this day, with one on display at the London Science Museum (pictured above). Many more images and information can be found on its Russian Wikipedia page.

For those not gifted with knowledge of the Russian language, the machine-translated summary reveals that the project goal is to make a softcore in SystemVerilog that is compatible with the BESM-6’s user mode, using the same Pascal compiler that was originally used with that system. Further goals include at least 24 kB of data memory, 96 kB of command memory, and the addition of modern peripherals such as SPI and I2C.

The system is meant to be integrated with the Arduino IDE, using the Pascal compiler to make it highly accessible to anyone with an interest in programming a system like this. Considering the MIT license for the project, one could conceivably use a bit of Soviet-era computing might in one’s future FPGA efforts.

If after watching the BESM-6 video — included below — you feel inspired to start your own Soviet-computing project, we’d like to wish you luck the Russian way: Ни пуха ни пера! (roughly, “break a leg!”)

Continue reading “Add A Bit Of Soviet-Era Super-Computing To Your FPGA”