A Xilinx Zynq Linux FPGA Board For Under $20? The Windfall Of Decommissioned Crypto Mining

One of the exciting trends in hardware availability is the inexorable move of FPGA boards and modules towards affordability. What was once an eye-watering price is now merely an expensive one, and no doubt in years to come will become a commodity. There’s still an affordability gap at the bottom of the market though, so spotting sub-$20 Xilinx Zynq boards on AliExpress that combine a Linux-capable ARM core and an FPGA on the same silicon is definitely something of great interest. A hackerspace community friend of mine ordered one, and yesterday it arrived in the usual anonymous package from China.

There’s a Catch, But It’s Only A Small One

The heftier of the two boards, in all its glory.

There are two boards to be found for sale, one featuring the Zynq 7007 and the other the 7010, which the Xilinx product selector tells us both have the same ARM Cortex-A9 cores and Artix-7 FPGA fabric on board. The 7007 has a single core and 23k logic cells, while the 7010 is a dual-core part with 28k. It was the latter that my friend had ordered.

So there’s the good news, but there has to be a catch, right? True, but it’s not an insurmountable one. These aren’t new products, instead they’re the controller boards for an older generation of AntMiner cryptocurrency mining rigs. The components have 2017 date codes, so they’ve spent the last three years hooked up to a stack of ASIC hashboards in a mining data centre somewhere. The relentless pace of cryptocurrency hardware means that they’re now redundant, and we’re the lucky beneficiaries via the surplus market.

Getting To The Linux Shell Is This Easy!

Linux, in minutes!

On the PCB is the Zynq chip in a hefty BGA with its I/O lines brought out to a row of sockets for the miner boards, Ethernet, an SD card slot, a few LEDs and buttons, and an ATX 12V power socket. The serial and JTAG ports are easily identifiable and readily accessible, and connecting a USB-to-serial adapter to the former brought us to a Linux login prompt. A little bootloader shell wizardry allowed the password to be reset, and there we were with a usable shell on the thing. Changing a jumper allows booting from the SD card, so it would be extremely straightforward to bring your own ARM Linux build onto the device to replace the AntMiner one, and since the Zynq can load its FPGA code from within Linux this makes for a remarkably accessible FPGA dev board for the price.
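For the curious, loading FPGA code from a running system really is that direct. Here's a minimal sketch, assuming the kernel on the board exposes the Zynq configuration interface as /dev/xdevcfg (the usual arrangement on Xilinx kernels of that era; newer mainline kernels use the FPGA manager framework under /sys/class/fpga_manager instead), and that the bitstream has already been converted from Vivado's .bit to the raw .bin format the driver expects (Xilinx's bootgen can do the conversion):

    /* Minimal sketch: configure the Zynq PL from Linux by streaming a
     * bitstream into the xdevcfg character device. Assumes the kernel
     * provides /dev/xdevcfg (newer mainline kernels use the FPGA manager
     * framework instead) and that design.bit.bin is already in the raw
     * .bin format the driver expects. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s design.bit.bin\n", argv[0]);
            return 1;
        }

        int src = open(argv[1], O_RDONLY);
        int dst = open("/dev/xdevcfg", O_WRONLY);
        if (src < 0 || dst < 0) {
            perror("open");
            return 1;
        }

        char buf[4096];
        ssize_t n;
        while ((n = read(src, buf, sizeof buf)) > 0) {
            if (write(dst, buf, n) != n) {
                perror("write");
                return 1;
            }
        }

        close(src);
        close(dst);
        puts("bitstream written; check prog_done in sysfs to confirm");
        return 0;
    }

Compile it natively on the board or with an ARM cross-compiler; the whole reconfiguration takes a fraction of a second.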

These boards seem to be offered by multiple vendors, which indicates that there must be quite a few in the supply chain. Stocks will inevitably run out though so don’t despair if you fail to snag one. Instead they are indicative of a growing trend of application specific FPGA boards being reimagined as general purpose dev boards by our community (for example the Lattice FPGA in a hackable LED driver board we featured back in January). It’s a fair certainty that they’ll be joined by others as their generation of FPGA tech starts to be replaced.

We’ll be keeping our eye out for any others and we’re sure you’ll drop us a tip if you see any.

80 thoughts on “A Xilinx Zynq Linux FPGA Board For Under $20? The Windfall Of Decommissioned Crypto Mining”

  1. A word of warning on these AntMiner rigs: they have a tendency to “dew up” in humid environments, and the end result is corrosion around the FPGA. I think Rossmann did a video on one (or it might have been an EEVblog two-minute teardown).

    1. Ha! There’s not even a Zynq 7007. It’s actually a 7007s. (The ‘s’ suffix stands for “single” meaning one of the two ARM cores has been disabled by a fuse.) It’s the same die as the 7010. All 28k logic cells are there, but the software (not the chip) limits you to using only 23k of them at a time. The ‘7007s block RAM is limited to 5/6 of the ‘7010 capacity in the same way.

      1. A lot of jokes here, but the best use of an FPGA is robotics or controlling other complex hardware. The other use is processing that is simple but needs to be done faster than the CPU can manage; typically this is signal processing like video decoding, or encrypting a high-speed interface.

        Use the FPGA any place you wish you had specialized hardware.

        1. With this sheet… sure. It’s un-lined paper, so it might be harder than dealing with a RasPi though…
          There are probably IP blocks for SATA, USB, etc, but I don’t know how far 28k is going to go… but with a 12-bit 1 MSPS ADC with differential inputs, I suppose you could even try to rig up an analog MFM RAID setup…

    1. Except this new one has *no* technical info other than the AliExpress link. It mentions the JTAG and serial connections, but *not* how to use them. There aren’t any links on reverse engineering. So this is a weak one compared to the original.

    2. Good point, but not quite. This is a different board, and it only serves to further underline what I said at the end of the piece, that we’re in an age of cheap second hand FPGA boards.

    1. Yes: as you can load the bitfile from the ARM under Linux, you can use part of the DDR for your RISC-V core while still having access to it from the ARM (requires some work). And ~20k LUTs allows for a decent RISC-V core (though probably not the big 64-bit ones; need to check).
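      A rough sketch of the ARM side of such a setup is below. The 16 MB window at 0x1F000000 is a made-up example: you’d carve it out of DDR with a reserved-memory node in the device tree so Linux leaves it alone, and point the soft core at the same physical addresses through one of the Zynq’s HP AXI slave ports.

          /* ARM-side sketch of a shared-DDR mailbox for a soft RISC-V
           * core in the PL. The 16 MB region at 0x1F000000 is a
           * made-up example, assumed to be excluded from Linux with a
           * reserved-memory device tree node and reachable by the
           * soft core through an HP AXI slave port. */
          #include <fcntl.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <sys/mman.h>
          #include <unistd.h>

          #define SHARED_BASE 0x1F000000UL
          #define SHARED_SIZE (16 * 1024 * 1024)

          int main(void)
          {
              int fd = open("/dev/mem", O_RDWR | O_SYNC);
              if (fd < 0) { perror("open /dev/mem"); return 1; }

              volatile uint32_t *shm = mmap(NULL, SHARED_SIZE,
                                            PROT_READ | PROT_WRITE,
                                            MAP_SHARED, fd, SHARED_BASE);
              if (shm == MAP_FAILED) { perror("mmap"); return 1; }

              shm[1] = 0x12345678;   /* argument for the soft core       */
              shm[0] = 1;            /* doorbell: firmware polls on this */

              while (shm[2] == 0)    /* wait for the core's "done" flag  */
                  usleep(1000);

              printf("result from soft core: 0x%08x\n", (unsigned)shm[3]);

              munmap((void *)shm, SHARED_SIZE);
              close(fd);
              return 0;
          }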

  2. My question is when it is appropriate to use an FPGA vs a general purpose compute platform. The difference to me seems to be higher performance on the FPGA at the cost of higher programming complexity. Seems similar to the old RISC vs CISC debate, which was settled when CISC processors became fast enough for 99% of compute needs. Or am I completely wrong about the use cases?

    1. The traditional rule for CPU vs FPGA is that a CPU can have software developed for it 10x faster, but you are limited to the sequential speed of the CPU at processing data and instructions. With FPGA gateware you are literally designing circuits, so in an FPGA everything can happen at once, in parallel, multiple times over (depending on the available resources). And that difference is how a slow (few hundred MHz) FPGA can win a race against a much faster CPU (GHz) or multiple CPUs, while using much less power.

      If you are using an FPGA as a general purpose compute platform, then you are giving up most of the advantages of an FPGA. The key advantage of an FPGA is flexibility, but the major disadvantage is the development time.
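      To put rough numbers on the clock-speed point above (the figures are invented for the example), a 3 GHz core retiring one useful operation per cycle still loses to a 200 MHz fabric doing 64 operations every cycle:

          /* Invented numbers, just to put a scale on the argument:
           * a CPU core retiring one operation per cycle versus an
           * FPGA pipeline doing 64 operations per cycle at a much
           * lower clock. */
          #include <stdio.h>

          int main(void)
          {
              double cpu_hz  = 3.0e9;            /* 3 GHz, 1 op/cycle */
              double fpga_hz = 2.0e8;            /* 200 MHz fabric    */
              double fpga_ops_per_clk = 64.0;

              double cpu_ops  = cpu_hz;
              double fpga_ops = fpga_hz * fpga_ops_per_clk;

              printf("CPU : %.2e ops/s\n", cpu_ops);
              printf("FPGA: %.2e ops/s (%.1fx the CPU)\n",
                     fpga_ops, fpga_ops / cpu_ops);
              return 0;
          }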

        1. [citation needed]

          If HDL programming is easier than procedural programming, why is “the next big thing” in the FPGA world supposed to be high-level synthesis (HLS), where you can write sequential C/C++ code and have it translated into hardware?

          1. HLS is a big win, but not an easy win. A company can reduce the size of their FPGA development team from, say, 5 down to 1, but they’ll need to add someone who can handle the complexity of HLS tools. It’s not enough to know C or C++ to write code for high-level synthesis. One must have deep architectural knowledge of the target device, the bandwidth of the target device, and enough understanding of what the HLS compiler can and cannot do.

            FPGAs can get you multiple orders of magnitude better performance per watt over the best CPUs, but only for suitable problems and only if the developers know what they’re doing. Very tantalizing!
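            To make that concrete, here’s a toy example in Vivado/Vitis HLS-style C (the function name and sizes are just placeholders). The pragmas, not the C itself, are where that architectural knowledge ends up:

                /* Toy HLS kernel: multiply-accumulate over two vectors.
                 * The pragmas ask the tool to unroll the loop four ways
                 * and start a new iteration every clock, which is what
                 * turns sequential C into parallel hardware. */
                #define N 1024

                void vec_mac(const int a[N], const int b[N], int out[N])
                {
                    for (int i = 0; i < N; i++) {
                #pragma HLS PIPELINE II=1
                #pragma HLS UNROLL factor=4
                        out[i] += a[i] * b[i];
                    }
                }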

        2. The complexity of FPGA development is no myth. For large projects, there are many aspects to this. Getting the design to fit the device, achieving timing closure, working around bugs in 3rd party IP, working around vendor toolchain issues, troubleshooting PCB designs. The list goes on and on. It’s all the stuff they don’t have time to get into in school. It can even come down to hardware errors in the FPGA itself, like signal integrity issues between the stuff that monitors the FPGA configuration for soft errors and some high-speed transceiver.

          Many of these problems are not unique to FPGAs, but tend to be addressed upstream of typical application developers developing on established microprocessors.

      1. +1 and especially where there is hard real time, for example in mobile comms where timing constraints are very tight and we can’t put up with the vagaries of a CPU’s interrupt latency.

    2. As someone who’s worked with CPU/GPU/FPGA professionally for a few years now, here is the rule.

      Does the task require many instances of the same operation to happen at the same time? Use a GPU. These are ideal because of how they manage computations in a block fashion: data is provided, an operation is selected, and results are all calculated in parallel.

      Does the task require performing many different tasks all at the same time? Use an FPGA. In an ideal world you would do as much as possible this way. Think of it this way: if the CPU runs 10x faster than the FPGA but the FPGA is performing 100 jobs a cycle, the FPGA is 10x faster in the end. In reality FPGA development is very complex, as making sure all these parallel things line up and don’t run over each other is very difficult, and certain computations simply don’t lend themselves to parallel operation.

      Else: CPU. The CPU is the most general purpose solution and works when all else fails. It is by far the easiest to program, but I feel this is mostly because it’s the oldest and so the languages have become well fleshed out. I think languages like VHDL are very good once you understand them, but they are clunky at first and most lessons online do not help. Trying to learn FPGA programming by translating it to CPU equivalents is a BIG mistake. They are not the same and there is no equivalence; you are designing circuits, so you must think in terms of circuits. Trying to translate is kind of like learning a second language: the people I know who do it best say that they think in the second language, they do not try to translate to the first.

    3. FPGA programming complexity is overrated. It is just different, but not more complex.

      And it is not about performance primarily, it is about real time and predictability. Yes, you can bit bang, say, a VGA image out of an MCU. But it is hard. Bit banging on an FPGA is natural and easy.

  3. What tool chain are people planning on using with these? Vivado licenses aren’t cheap and the ‘free’ version doesn’t generate configuration bitstreams for the FPGA last I checked.

    1. You don’t exactly overclock an FPGA. There are trade-offs between complexity and speed. You set up constraints on how fast you want your design to run, and the place and route tools will *try* to meet the timing, assuming there are sufficient on-chip resources and the constraint isn’t out of whack with reality.

      As for thermals, the FPGA is only used to control the data flow of the mining and not the actual computation. There isn’t much going on, judging from the amount of I/O going to each mining card: just I2C and a single-ended Tx/Rx, i.e. they don’t even need high-speed differential pairs.

  4. I’ve been running a class on top of Artix-7 dev boards for a couple of years, so I have a WebPack-licensed Vivado install on the machine I’m sitting at. I just re-targeted a project to generate for a xc7z010clg400-1 (which I _think_ is what I read off the seller picture on the one I just impulse bought off AliExpress) and it looks like it’s fully supported.

  5. The best part is that this is Xilinx, which is now part of AMD, who are starting to release open source support for their FPGAs. Cross your fingers, and maybe you’ll see an open toolchain for it before long.

    1. Had quite a few people wondering why a GPU/CPU company would be buying an FPGA company. It doesn’t really fit into anything they do, and most likely won’t help the multi-GPU efforts, which have historically been a dead end. It doesn’t really work for CPU either (chiplets and high-speed fabric are the current trend).

      1. I can think of two reasons. One, because Intel owns Altera and has started building FPGAs into their data center chips. And two, because an FPGA can make for an interesting accelerator for certain tasks.

        Maybe they’ll eventually make an FPGA chiplet that can sit beside a Zen chiplet.

      2. Both FPGA companies have experience integrating chiplets into their products. Intel got EMIB from Altera. Xilinx announced its Virtex-7 2000T in or around 2011 with TSV (Through-Silicon Via) technology, when there was still much debate about the cost and commercial viability of TSVs. Xilinx also has a proposal for HBI (High Bandwidth Interconnect), a general standard for die-to-die (d2d) communication. OpenHBI looks to offer over 1.5 Tb/s (aggregating Tx and Rx) with an energy budget of under 0.8 pJ/bit.

        There are other datacenter areas that AMD can get a foot into with Xilinx in its portfolio.

        1. This would be an astonishingly expensive way of getting better FPGAs. ASIC emulation platforms aren’t cheap, but they’re many orders of magnitude less expensive than buying what was then one of the top two FPGA companies.

          There are some good reasons to buy a successful FPGA company. Better performance per watt for compute acceleration. Better high speed interfaces and networking. Access to customers. Access to a large library of performance-optimized debugged functions (that the industry confusingly refers to as IP). Nowadays, add AI to the list. Security. Altera was almost as good as Xilinx and Xilinx has extraordinarily good security features. I’m probably forgetting a few.

          [Apologies for accidentally clicking “Report comment” rather than “Reply”.]

      3. Very possibly, AMD bought Xilinx mainly to get Solarflare, whose NICs are used throughout the fintech world and which (not coincidentally) carry Xilinx chips. Xilinx bought Solarflare quite recently.

        NVidia recently picked up Mellanox, a previous fintech darling.
