Kristina Kapanova is a PhD student at the Bulgarian Academy of Sciences. Her research involves simulating quantum effects in semiconductor devices, a field of study that demands a supercomputer for its billions of calculations. The academy had a proper supercomputer and was getting a new one, but for a while, Kristina and her fellow ramen-eating colleagues were without a big box of computing. To solve this problem, Kristina built her own supercomputer from off-the-shelf ARM boards.
The AlphaGo computer has been in the news recently for beating the top Go player in the world in four out of five games. This evolution in computing is a giant leap from the 90s, when computers were still struggling to beat humans at chess. The landscape has indeed changed, as [Folkert] shows us with his chess computer based on a Raspberry Pi 3 and (by his own admission) too many LEDs.
The entire build is housed inside a chessboard with real pieces (presumably to aid the human player) and an LED on every square. When the human makes a move, he or she inputs it into the computer via a small touchscreen display. After that, the computer makes its move, indicated by lighting up the LEDs on the board and printing the move on the display. The Raspberry Pi runs the Embla chess program, which plays at an Elo rating of about 1600.
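For a flavor of what the output side involves, mapping a move in coordinate notation onto an 8×8 grid of LEDs is just index arithmetic. Here's a hypothetical sketch; the square-to-index convention is our assumption, not something lifted from [Folkert]'s firmware:

```python
# Hypothetical sketch: turn a coordinate-notation move like "e2e4" into
# (row, col) positions on an 8x8 LED grid. The a1-at-(0, 0) convention
# is our assumption; [Folkert]'s actual wiring may differ.

def square_to_rc(square):
    """'e2' -> (row, col), with a1 at (0, 0)."""
    col = ord(square[0]) - ord('a')   # files a..h -> columns 0..7
    row = int(square[1]) - 1          # ranks 1..8 -> rows 0..7
    return (row, col)

def leds_for_move(move):
    """Return the origin and destination squares to light for a move."""
    return square_to_rc(move[:2]), square_to_rc(move[2:4])

print(leds_for_move("e2e4"))   # ((1, 4), (3, 4)): light e2 and e4
```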
While the computer isn't quite powerful enough to beat Magnus Carlsen, we can only imagine how much better computers will be in the future. After all, this credit-card-sized computer is doing what supercomputers did only a few decades ago. With enough Raspberry Pis, you might even be able to beat a grandmaster with your chess computer. Computing power aside, think of the advancements in fabrication technology (and access to it); a mechanical build like this would have been a wonder back in the 90s too.
[James J. Guthrie] just published a rather formal announcement that his 4-node Raspberry Pi cluster greatly outperforms a 64-node version. Of course, the differentiating factor is the version of the hardware. [James] is using the Raspberry Pi 2 while the larger version used the Model B.
We covered that original build almost three years ago. It's a cluster called the Iridis-Pi supercomputer. The difference is a 700 MHz single core versus a 900 MHz quad-core with double the RAM. This let [James] benchmark his four-node wonder at 3.048 gigaflops. A bit fuzzy about what a gigaflops is, exactly? So were we… it's a billion floating-point operations per second, which doesn't mean much to your human brain; it's simply a ruler with which you can take one type of measurement. This is triple the performance at 1/16th the number of nodes. The cost difference is staggering, with the Iridis-Pi ringing in at around £2500 and the lightweight four-node build at just £120. That's more than an order of magnitude.
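To see just how lopsided that comparison is per node, here's a quick sanity check using only the figures above (the Iridis-Pi's total throughput is inferred from "triple the performance", not quoted directly):

```python
# Quick sanity check on the cluster comparison, using the article's figures.
# flops_old is inferred from "triple the performance"; it's an assumption.
nodes_new, flops_new, cost_new = 4, 3.048, 120    # Pi 2 cluster: GFLOPS, GBP
nodes_old, cost_old = 64, 2500                    # Iridis-Pi
flops_old = flops_new / 3                         # ~1.0 GFLOPS, inferred

print(f"per node: {flops_new / nodes_new:.3f} vs {flops_old / nodes_old:.3f} GFLOPS")
print(f"cost ratio: {cost_old / cost_new:.0f}x")  # ~21x: over an order of magnitude
```

That works out to roughly 0.76 GFLOPS per Pi 2 node against 0.016 per original node, a factor of nearly fifty per board.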
Look, there's nothing fancy to see in [James'] project announcement. Yet. But it seems somewhat monumental to stand back and think that a $35 computer aimed at education is being used to build clusters for crunching Ph.D.-level research projects.
Noting that funding for science has run dry for many researchers, [Gaurav] has built a supercomputer from 200 PlayStation 3 consoles housed and chilled inside an old refrigerated shipping trailer. His National Science Foundation-funded mission at UMass Dartmouth is simulating black hole collisions, with an eye toward learning something about gravitational waves.
Dr. [Gaurav Khanna] is no stranger to using PS3 supercomputers to do meaningful science. Seven years ago he proposed a 16-PS3 supercomputer running Linux and managed to convince Sony to donate four consoles. The university kicked in funding for another eight, and [Gaurav] ponied up for the last four out of his own pocket. He dubbed it the "PS3 Gravity Grid" and received international attention for the cluster, which delivered equivalent performance at only 10% of the price of a traditional supercomputer. This led to published papers on both hacked supercomputers and gravitational waves. But that rig is looking a little old today. Enter the Air Force.
Dr. [Khanna] was not the only one using PS3s to crunch data: back in 2010, the US Air Force built the "Condor Cluster" of 1,760 PS3s to perform radar imaging of entire cities and do neuromorphic AI research. With that hardware now retired, the Air Force donated 200 of the PS3s to [Gaurav] for his new build. Now that he has wired them up, the Air Force is donating another 220 for a not-snicker-proof total of 420.
For those sceptical that the now eight-year-old hardware is still cost-effective: even with free consoles, it's marginal. RAM is an issue, and a single modern graphics card is equivalent to about 20 PS3s. Ever the popular target these days, Sony has the PS4's OS locked down from the get-go (thanks, Sony). The next cluster planned will be built from PCs and graphics cards. For now, [Gaurav] has plenty of calculations that need crunching, and a queue of colleagues has formed behind him.
Multi-node RasPi clusters seem to be a rite of passage these days for hackers working with distributed computing. [Dave’s] 40-node cluster is the latest of the super-Pi creations, and while it’s not the biggest we’ve featured here, it may be the sleekest.
The goal of this project, aside from the obvious desire to test distributed software, was to keep the entire package below the size of a full-tower desktop. [Dave's] design packs the Pis in groups of four across ten individual cards that easily slide out for access. Each is wired (through beautiful cable management, we must say) to one of two 24-port switches at the bottom of the case. The build uses an ATX power supply up top that feeds individual power to the Pis and everything else, including his hard drive array (five 1 TB drives, expandable to twelve), a wireless router, and a hefty fan assembly.
Perhaps the greatest achievement is the custom acrylic case, which [Dave] lasered out at the Dallas Makerspace (we featured it here last month). Each panel slides off with the press of a button, and the front/back panels provide convenient access to the internal network via some jacks. If you’ve ever been remotely curious about a build like this one, you should cruise over to [Dave’s] page immediately: it’s one of the most meticulously well-documented projects we’ve seen in a long time. Videos after the break.
In retrospect, it was only a matter of time before someone turned a bunch of Raspberry Pis into a supercomputer.
The Raspi supercomputer is the result of a project headed up by University of Southampton professor [Simon Cox]. Included in the team are a gaggle of grad students and [Simon]'s 6-year-old son, who graciously provided the material, design, and logistics for the custom LEGO case.
The Iridis-Pi supercomputer, as the team calls their creation, consists of 64 Raspberry Pis, all configured for parallel processing using a lightweight version of MPI. [Simon] was kind enough to put up an excellent guide for turning two (or more) Raspberry Pis into a supercomputer.
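MPI programs all follow the same basic pattern, whatever the cluster underneath: every node runs the same program, learns its rank, works on its own slice of the job, and a root node gathers the results. Here's a minimal sketch of that pattern using the mpi4py bindings; it's an illustration of the idea, not code from [Simon]'s guide, and it assumes mpi4py is installed on every node.

```python
# Minimal MPI sketch: every node sums its own slice of 0..N-1 and
# rank 0 collects the grand total. Illustration only, not taken from
# [Simon]'s guide; mpi4py on every node is our assumption.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this node's ID, 0..size-1
size = comm.Get_size()   # total number of nodes in the job

N = 10_000_000
partial = sum(range(rank, N, size))   # this node's strided slice

total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{size} nodes agree: total = {total}")
```

Launched across the cluster with something like `mpiexec -n 64 -hostfile hosts python3 partial_sum.py`, the same script runs on all 64 Pis at once.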
The machine has a full 1 TB of storage, provided by a 16 GB SD card in each of the 64 nodes (64 × 16 GB = 1024 GB). Although the press release doesn't go over the computational capabilities of the Iridis-Pi, the entire system can be powered from a single 13 A supply.
If you're wondering what it would take to get a Raspberry Pi supercomputer into the TOP500 list of supercomputers, a bit of back-of-the-envelope computation, given the Raspi's performance and the fact that the 500th-fastest machine cranks out about 60 teraFLOPS, suggests about 1.4 million Raspis would be needed. At least it's a start.
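Showing our work on that estimate (the per-Pi figure below is a commonly quoted Linpack ballpark for the original Model B, not a number from the press release):

```python
# Back-of-the-envelope check of the "1.4 million Raspis" estimate.
# The per-Pi Linpack figure is an assumed ballpark, not from the article.
top500_entry = 60e12    # ~60 teraFLOPS for the 500th-place machine
per_raspi = 0.04e9      # ~40 megaFLOPS per original Model B (assumed)

print(f"{top500_entry / per_raspi:,.0f} Raspis")  # 1,500,000: same ballpark
```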
In August 2010, [Alexander Yee] and [Shigeru Kondo] won a respectable amount of praise for calculating pi to more digits than anyone else. They're back again, this time doubling the number of digits to 10 trillion.
The previous calculation of 5 trillion digits of pi took 90 days on a beast of a workstation: two Xeon processors running at 3.33 GHz, 96 gigabytes of RAM, and 32 terabytes worth of hard drives. The 10-trillion-digit attempt used the same hardware, but needed 48 terabytes of disk to store everything.
Unfortunately, the time needed to calculate 10 trillion digits didn't scale linearly: [Alex] and [Shigeru] waited three hundred and seventy-one days for the computer to finish the calculations. The guys used y-cruncher, a multithreaded pi benchmarking tool written by [Alex]. y-cruncher calculates hexadecimal digits of pi, and conveniently, it's fairly easy to find the nth hex digit of pi independently for verification.
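That verification trick rests on the Bailey-Borwein-Plouffe (BBP) formula, which lets you compute an arbitrary hex digit of pi without computing any of the digits before it. As a rough illustration of the principle (not the actual verification code used for the record), a few lines of Python will do it:

```python
# BBP digit extraction sketch: hex digits of pi starting at position n,
# without computing earlier digits. Double-precision floats limit this
# to modest n; illustration only, not the record-setting code.

def _series(j, n):
    """Fractional part of the sum over k of 16^(n-k) / (8k + j)."""
    total = 0.0
    for k in range(n + 1):                 # head terms: modular exponentiation
        total = (total + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    for k in range(n + 1, n + 10):         # tail terms: already < 1
        total = (total + 16.0 ** (n - k) / (8 * k + j)) % 1.0
    return total

def pi_hex_digit(n):
    """Hex digit of pi at position n after the point (n = 0 gives '2')."""
    x = (4 * _series(1, n) - 2 * _series(4, n)
         - _series(5, n) - _series(6, n)) % 1.0
    return "%X" % int(x * 16)

print("".join(pi_hex_digit(i) for i in range(8)))   # pi = 3.243F6A88...
```

Spot-check a handful of random positions deep into the result this way and you gain real confidence that the main computation didn't silently go off the rails.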
If you're wondering whether it would be faster to calculate pi on a TOP500 supercomputer, you'd be right. Those boxes are a little busy predicting climate change, modeling nuclear weapon yields, and curing cancer, though. Doing something nobody else has ever done is still an admirable goal, especially if it means building an awesome computer.