Folding@Home And Rosetta, For ARM

Most readers will be aware of the various distributed computing projects that provide supercomputer-level resources to researchers by farming out computing tasks across a multitude of distributed CPUs and GPUs. The best known of these are probably Folding@Home and Rosetta, which have both this year been performing sterling service in the quest to understand the mechanisms of the SARS-CoV-2 virus. So far these two platforms have remained available almost exclusively for Intel-derived architectures, leaving the vast number of ARM-based devices out in the cold. It’s something the commercial distributed-computing-on-your-phone company Neocortix have addressed, as they have successfully produced ARM64 clients for both platforms that will be incorporated into the official clients in due course.

So it seems that mundane devices such as mobile phones and the more capable Raspberry Pi boards will now be able to fold proteins like a boss, and the overall efforts to deliver computational research will receive a welcome boost. But will there be any other benefits? It’s a Received Opinion that ARM chips are more power-efficient than their Intel-derived cousins, but will this deliver more energy-efficient distributed computing? The answer is “probably”, but the jury’s out on that one as computationally intensive tasks are said to erode the advantage significantly.

Folding@Home was catapulted by the influx of COVID-19 volunteers into first place as the world’s largest supercomputer earlier this year, and we’re pleased to say that Hackaday readers have played their part in that story. As this is being written the July 2020 stats show our team ranked at #39 worldwide, having racked up 14,005,664,882 points across 824,842 work units. Well done everybody, and we look forward to your ARM phones and other devices boosting that figure. If you haven’t done so yet, download the client and join us.
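As a back-of-the-envelope sanity check on those team figures, the quoted totals work out to roughly 17,000 points per work unit:

```python
# Quick arithmetic on the July 2020 team stats quoted above.
total_points = 14_005_664_882  # team points
work_units = 824_842           # completed work units

avg = total_points / work_units
print(f"{avg:,.0f} points per work unit")  # roughly 16,980
```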

Via HPCwire. Thanks to our colleague [Sophi] for the tip.

19 thoughts on “Folding@Home And Rosetta, For ARM”

  1. I can see a lot of people installing it on their phones.
    Then uninstalling it when their battery life is suddenly best measured in minutes rather than hours.

    Then there will be a whole bunch of really slick builds of Pi-Zero clusters.
    Which will have circles run around them by outdated Intel machines rescued for free from people’s closets.

    And finally a few people will build clusters from Pi 3s and 4s. Those will actually have reasonable performance.
    But as those clusters grow it starts getting expensive.

    When the second generation of Arm office workstations comes out causing the first to hit the closets, that’s when this finally becomes exciting. But when should we expect that first generation?

    I wonder if the real reason for this has something to do with Apple’s recent announcement?

    Then again, maybe if it could be set to only come on when the phone is plugged into the charger AND the battery is already at 100%. Then they might be on to something. Better make that the default option though if they want masses of users. I know my phone stays plugged in all night long as I sleep and is usually plugged in when I am driving my car.

    1. Depending on the device I have used root scripts, smart switches with Tasker routines, and a product called ‘Chargie’, all to get my different Androids to stop charging around 78%. I do this for battery health. Requiring a phone or tablet to have 100% battery to begin computing would not work for me. I like how the (sadly very out of date) Android BOINC client allows me to choose its run conditions.

    2. I wonder how well this would perform on the NVIDIA Tegra X1. It’s the chip found in the NVIDIA Shield TV and the Nintendo Switch. The Shield TV is physically bigger than a phone, has a built in fan, and doesn’t have a battery. It also sits there asleep for hours at a time until the owners want to watch something. It seems like the perfect platform for this kind of thing.

      1. I have concerns that this might not work well or at all unless the app allows customization of run conditions. The Shield TV will not report battery status (because it hasn’t got one) and a hard requirement for some battery % will cause the app to refuse to compute. The system may or may not report charging either, I’m not sure of that.

      1. The processor cores in the Raspberry Pi’s SoC were originally licensed mostly for the mobile handset market.

        So obviously it will do better than some, worse than others. It is, at its heart, a mobile phone chip.

  2. I run BOINC on my phone. By default it only runs when it’s plugged in and the battery charge level is above 90%, so most users won’t notice a difference. It does heat up the phone when it’s running, but there’s also a default cutoff of 45C for battery temperature, so it’s not an issue. I think it’s pretty well made and it’s constantly being refined. I feel like the phone isn’t doing much work compared to the PCs, though.
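    The run conditions described above correspond to battery-related fields in BOINC’s computing preferences (stored in `global_prefs.xml`). A minimal sketch, with illustrative values matching this comment — defaults may differ between client versions:

    ```xml
    <global_preferences>
      <!-- compute only while plugged in -->
      <run_on_batteries>0</run_on_batteries>
      <!-- require at least 90% charge before computing -->
      <battery_charge_min_pct>90</battery_charge_min_pct>
      <!-- suspend computation above this battery temperature (Celsius) -->
      <battery_max_temperature>45</battery_max_temperature>
    </global_preferences>
    ```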

  3. Exciting times for the distributed computing volunteer projects. I first ran the original UC Berkeley SETI@home project way back in 1999 on my custom-built Pentium 3 desktop machine. Work units took days to complete back in those days. No GPU support was available back then either.

    As the years passed there were many great improvements and expanded hardware and operating system support. Some of the ones that come to mind: user-created teams, the point system, and badges, which really brought excitement to the volunteers. Aftermarket optimized software add-ons (Lunatics) for specific multi-core and hyper-threading processors. GPU graphics card support for both Nvidia and AMD (formerly ATi at the time). This eventually shrunk the multi-day work unit runtimes to a handful of minutes. This also allowed the SLI and Crossfire multi-GPU desktop gaming sector to join.

    Support for the PlayStation 3 gaming system finally allowed the console gaming sector to join and donate computing power. Support for the Apple operating system was eventually developed, along with Linux and Unix.

    The growing number of increasingly powerful multi-core Android ARM-based devices were eventually supported. This brought support for tablets, smartphones, and set-top media devices. Raspberry Pi boards were supported soon after.

    To me personally, the support for Android and ARM CPUs was the most exciting, followed secondly by GPU graphics card support. I find it so exciting to use such incredibly efficient and portable devices, which are pretty much available in all households. I love to repurpose my old tablets and smartphones as dedicated distributed computing machines. I apply Enzotech copper-finned heatsinks (made for desktop northbridges) on the octa-core ARM processors, using Arctic Silver thermally conductive, non-electrically-conductive two-part epoxy. I also use a Noctua 12 volt 40 mm fan on the copper heatsink in a bottom-up configuration. This keeps temperatures down while allowing all 8 ARM processor cores to compute at 100% all the time.

    I really look forward to running Folding@home and Rosetta on my ARM processor based machines.

    1. The important part is the GPU, not the CPU. These projects are doing simple, fast, trivially parallelizable computation, which GPUs are much better at. The GPU on a phone or RasPi is a lot more efficient than the basic CPU, just as it is on a PC.

      1. Rosetta@home is entirely CPU-based, however, and doesn’t distribute GPU jobs. Folding@home does run on the GPU, but from what I understand not all jobs can run on the GPU, and the CPU is still required for some jobs.
