Hackaday Links: March 29, 2020

It turns out that whacking busted things to fix them works as well on Mars as it does on Earth, as NASA managed to fix its wonky “mole” with a little help from the InSight lander’s robotic arm. Calling it “percussive maintenance” is perhaps a touch overwrought; as we explained last week, NASA prepped carefully for this last-ditch effort to salvage the HP³ experiment, and it was really more of a gentle nudge than a solid smack with the spacecraft’s backhoe bucket. From the before and after pictures, it still looks like the mole is a little off-kilter, and there was talk that the shovel fix was only the first step in a more involved repair. We’ll keep an ear open for more details — this kind of stuff is fascinating, and beats the news from Earth these days by a long shot.

Of course, the COVID-19 pandemic news isn’t all bad. Yes, the death toll is rising, the number of cases is still growing exponentially, and billions of people are living in fear and isolation. But ironically, we’re getting good at community again, and the hacker community is no exception. People really want to pitch in and do something to help, and we’ve put together some resources to get you started. Check out our Hackaday How You Can Help spreadsheet, a comprehensive list of which efforts are currently looking for help, plus what’s out there in terms of Discord and Slack channels, lists of materials you might need if you choose to volunteer to build something, and even a list of recent COVID-19 Hackaday articles if you need inspiration. You’ll also want to check out our calendar of free events and classes, which might be a great way to use the isolation time to better your lot.

Individual hackers aren’t the only ones pitching in, of course. Many of the companies in the hacker and maker space are doing what they can to help, too. Ponoko is offering heavy discounts for hardware startups to help them survive the current economic pinch. They’ve also enlisted other companies, like Adafruit and PCBWay, to join with them in offering similar breaks to certain customers.

More good news from the fight against COVID-19. Folding@Home, the distributed computing network that is currently working on folding models of many of the SARS-CoV-2 virus proteins, has broken the exaFLOP barrier and is now the most powerful computer ever built. True, not every core is active at any given time, but the 4.6 million cores and 400,000-plus GPUs in the network pushed it well past the petaFLOP range of machines like IBM’s Summit, until recently the world’s fastest supercomputer. Also good news is that Team Hackaday is forming a large chunk of the soul of this new machine, with 3,900 users and almost a million work units completed. Got an old machine around? Read Mike Szczys’s article on getting started and join Team Hackaday.

And finally, just because we all need a little joy in our lives right now, and because many of you are going through sports withdrawal, we present what could prove to be the new spectator sports sensation: marble racing. Longtime readers will no doubt recognize the mad genius of Martin and his Marble Machine X, the magnificent marble-dropping music machine that’s intended as a follow-up to the original Marble Machine. It’s also a great racetrack, and Martin does an amazing job doing both the color and turn-by-turn commentary in the mock race. It’s hugely entertaining, and a great tour of the 15,000-piece contraption. And when you’re done with the race, it’s nice to go back to listen to the original Marble Machine tune — it’s a happy little song for these trying times.

The New Xbox: Just How Fast Is 12 TeraFLOPS?

Microsoft’s new Xbox Series X, formerly known as Project Scarlet, is slated for release in the holiday period of 2020. Like any new console release, it promises better graphics, more immersive gameplay, and all manner of other superlatives in the press releases. In a sharp change from previous generations, however, suddenly everybody is talking about FLOPS. Let’s dive in and explore what this means, and what bearing it has on performance.
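That 12 teraFLOPS headline number isn’t measured — it’s a theoretical peak, derived from the GPU’s shader count and clock speed. As a rough sketch (the shader count and clock figures below are Microsoft’s announced Series X specs, and the two-operations-per-clock convention assumes a fused multiply-add counts as two floating-point operations):

```python
# Theoretical peak FP32 throughput: each shader ALU is assumed to retire one
# fused multiply-add (FMA) per clock, which counts as two floating-point ops.
def peak_tflops(shader_units: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    # shader_units * GHz gives billions of cycles of work per second (GFLOPS
    # per op-per-clock); divide by 1000 to convert GFLOPS to TFLOPS
    return shader_units * clock_ghz * ops_per_clock / 1000

# Announced Xbox Series X GPU: 52 compute units x 64 shaders = 3328 ALUs at 1.825 GHz
xbox_series_x = peak_tflops(3328, 1.825)
print(f"{xbox_series_x:.2f} TFLOPS")  # ~12.15
```

Real-world throughput is always lower, of course — memory bandwidth, occupancy, and workload shape all get a vote — which is exactly why the article digs into what the number does and doesn’t tell you.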

Continue reading “The New Xbox: Just How Fast Is 12 TeraFLOPS?”

Add A Bit Of Soviet-Era Super-Computing To Your FPGA

The MESM-6 project is focused on bringing the 1960s Soviet BESM-6 computer to the modern age of FPGAs and HDLs. At the moment the team behind this preservation effort consists of [Evgeniy Khaluev], [Serge Vakulenko] and [Leo Broukhis], who are covering the effort on the Russian-language project page.

The BESM-6 (in Russian: БЭСМ-6, ‘Bolshaya Elektronno-Schetnaya Mashina’ or ‘large electronic computing machine’) was a high-performance Soviet supercomputer, first launched in 1968 and in production for the next 19 years. Its system clock ran at 9 MHz, and it was built from an astounding number of discrete components (some 60,000 transistors and 170,000 diodes), with the machine capable of addressing 192 kB of memory in total. Of the 355 built, a few survive to this day, with one on display at the London Science Museum (pictured above). Many more images and information can be found on its Russian Wikipedia page.

For those not gifted with knowledge of the Russian language, the machine-translated summary reveals that the project goal is to make a softcore in SystemVerilog that is compatible with user mode BESM-6, using the same Pascal compiler as originally used with that system. Further goals include at least 24 kB of data memory, 96 kB of command memory and the addition of modern peripherals such as SPI and I2C.

The system is meant to be integrated with the Arduino IDE, using the Pascal compiler to make it highly accessible to anyone with an interest in programming a system like this. Considering the MIT license for the project, one could conceivably use a bit of Soviet-era computing might in one’s future FPGA efforts.

If after watching the BESM-6 video — included below — you feel inspired to start your own Soviet-computing project, we’d like to wish you luck the Russian way: Ни пуха ни пера! (Roughly, “break a leg!”)

Continue reading “Add A Bit Of Soviet-Era Super-Computing To Your FPGA”

Seymour Cray, Father Of The Supercomputer

Somewhere in the recesses of my memory there lives a small photograph, from one of the many magazines that fed my young interests in science and electronics – it was probably Popular Science. In my mind I see a man standing before a large machine. The man looks awkward; he clearly didn’t want to pose for the magazine photographer. The machine behind him was an amazing computer, its insides a riot of wires all of the same color; the accompanying text told me each piece was cut to a precise length so that signals could be synchronized to arrive at their destinations at exactly the right time.

My young mind was agog that a machine could be so precisely timed that a few centimeters could make a difference to a signal propagating at the speed of light. As a result, I never forgot the name of the man in the photo – Seymour Cray, the creator of the supercomputer. The machine was his iconic Cray-1, the fastest scientific computer in the world for years, which would go on to design nuclear weapons, model crashes to make cars safer, and help predict the weather.

Very few people get to have their name attached so firmly to a product, let alone have it become a registered trademark. The name Cray became synonymous with performance computing, but Seymour Cray contributed so much more to the computing industry than just the company that bears his name that it’s worth taking a look at his life, and how his machines created the future.

Continue reading “Seymour Cray, Father Of The Supercomputer”

ILLIAC Was HAL 9000’s Granddaddy

Science fiction is usually rooted in fact, and it’s fun to look at an iconic computer like HAL 9000 and trace the origins of this artificial intelligence gone wrong. You might be surprised to find that you can trace HAL’s origins to a computer built for the US Army in 1952.

If you are a fan of the novel and movie 2001: A Space Odyssey, you may recall that the HAL 9000 computer was “born” in Urbana, Illinois. Why pick such an odd location? Urbana is hardly a household name unless you know Illinois well. But Urbana has a place in real-life computer history. As the home of the University of Illinois at Urbana–Champaign, Urbana was known for producing a line of computers known as ILLIAC, several of which had historical significance. In particular, the ILLIAC IV was a dream of a supercomputer that — while not entirely successful — pointed the way for later supercomputers. Sometimes you learn more from failure than from success, and at least one of the ILLIAC series is the poster child for that.

The Urbana story starts in the early 1950s. This was a time when the 1945 paper “First Draft of a Report on the EDVAC” was sweeping through the country from its Princeton origins. The report outlined the design and construction of the Army computer that succeeded ENIAC. In it, von Neumann proposed changes to EDVAC that would make it a stored program computer — that is, a computer that treats data and instructions the same.

Continue reading “ILLIAC Was HAL 9000’s Granddaddy”

Everyone Needs A Personal Supercomputer

When you think of supercomputers, visions of big boxes and blinkenlights filling server rooms immediately appear. Since the 90s or thereabouts, these supercomputers have been clusters of computers, all working together on a single problem. For the last twenty years, people have been building their own ‘supercomputers’ in their homes, and now we have cheap ARM single board computers to play with. What does this mean? Personal supercomputers. That’s what [Jason] is building for his entry to the Hackaday Prize.

The goal of [Jason]’s project isn’t to break into the Top 500, and it’s doubtful it’ll be more powerful than a sufficiently modern desktop workstation. The goal for this project is to give anyone a system that has the same architecture as a large-scale cluster to facilitate learning about high-performance applications. It also has a front panel covered in LEDs.

The design of this system is built around the PINE64 SOPINE module, or basically a 64-bit quad-core CPU stuck onto a board that fits in an SODIMM socket. If that sounds like the Raspberry Pi Compute Module, you get a cookie. Unlike the Pi Compute Module, the people behind the SOPINE have created something called a ‘Clusterboard’, or eight vertical SODIMM sockets tied together with a single controller, power supply, and an Ethernet jack. Yes, it’s a board meant for cluster computing.

To this, [Jason] is adding his own twist on a standard, off-the-shelf breakout board. This Clusterboard is mounted to a beautiful aluminum enclosure, and the front panel is loaded up with a whole bunch of almost vintage-looking red LEDs. These LEDs indicate the current load on each bit of the cluster, providing immediate visual feedback on how those computations are going. With the right art — perhaps something in harvest gold, brown, and avocado — this supercomputer would look like it’s right out of the era of beautiful computers. At any rate, it’s a great entry for the Hackaday Prize.

CUDA Is Like Owning A Supercomputer

The word supercomputer gets thrown around quite a bit. The original Cray-1, for example, operated at about 150 MIPS and had about eight megabytes of memory. A modern Intel i7 CPU can hit almost 250,000 MIPS and is unlikely to have less than eight gigabytes of memory, and probably has quite a bit more. Sure, MIPS isn’t a great performance number, but clearly, a top-end PC is way more powerful than the old Cray. The problem is, it’s never enough.
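The gulf in that comparison is worth putting in concrete numbers. A quick back-of-the-envelope calculation, using the ballpark figures quoted above (MIPS is admittedly a crude metric):

```python
# Rough speed and memory ratios between a modern desktop CPU and the Cray-1,
# using the approximate figures from the paragraph above.
cray1_mips, cray1_mem_mb = 150, 8            # Cray-1: ~150 MIPS, 8 MB
i7_mips, i7_mem_mb = 250_000, 8 * 1024       # modern i7: ~250,000 MIPS, 8 GB

print(f"Speed ratio:  ~{i7_mips // cray1_mips}x")     # roughly 1666x
print(f"Memory ratio: ~{i7_mem_mb // cray1_mem_mb}x") # 1024x
```

So the machine on your desk outruns the old Cray by three orders of magnitude — and, as the next paragraph argues, it still isn’t enough.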

Today’s computers have to process huge numbers of pixels, video data, audio data, neural networks, and long-key encryption. Because of this, video cards have become what in the old days would have been called vector processors. That is, they are optimized to do operations on multiple data items in parallel. There are a few standards for using a video card’s processing power for computation, and today I’m going to show you how simple it is to use CUDA — the NVIDIA proprietary library for this task. You can also use OpenCL, which works with many different kinds of hardware, but I’ll show you that it is a bit more verbose.

Continue reading “CUDA Is Like Owning A Supercomputer”