A Xilinx Zynq Linux FPGA Board For Under $20? The Windfall Of Decommissioned Crypto Mining

One of the exciting trends in hardware availability is the inexorable move of FPGA boards and modules towards affordability. What was once an eye-watering price is now merely an expensive one, and no doubt in years to come will become a commodity. There’s still an affordability gap at the bottom of the market though, so spotting sub-$20 Xilinx Zynq boards on AliExpress that combine a Linux-capable ARM core and an FPGA on the same silicon is definitely something of great interest. A hackerspace community friend of mine ordered one, and yesterday it arrived in the usual anonymous package from China.

There’s a Catch, But It’s Only A Small One

The heftier of the two boards, in all its glory.

There are two boards to be found for sale, one featuring the single-core Zynq 7007S and the other the dual-core 7010, both of which the Xilinx product selector tells us share the same ARM Cortex-A9 CPU and Artix-7 FPGA technology. The 7007S has a single core and 23k logic cells, while the 7010 pairs two cores with 28k. It was the latter that my friend had ordered.

So there’s the good news, but there has to be a catch, right? True, but it’s not an insurmountable one. These aren’t new products; instead they’re the controller boards from an older generation of AntMiner cryptocurrency mining rigs. The components have 2017 date codes, so they’ve spent the last three years hooked up to a brace of ASIC boards in a mining data centre somewhere. The relentless pace of cryptocurrency tech means that they’re now redundant, and we’re the lucky beneficiaries via the surplus market.
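
Part of the attraction of the Zynq is that the FPGA fabric can be reprogrammed from the Linux side at runtime. As a minimal sketch of what that looks like, assuming a kernel built with the FPGA manager framework and a bitstream already copied into /lib/firmware (the filename here is purely hypothetical), a few lines of Python are all it takes:

    # Minimal sketch: load a bitstream into the Zynq's FPGA fabric from
    # Linux via the kernel's fpga_manager sysfs interface. Assumes the
    # driver is enabled and the file is in /lib/firmware; the bitstream
    # name is hypothetical.
    BITSTREAM = "blinky.bin"

    with open("/sys/class/fpga_manager/fpga0/firmware", "w") as f:
        f.write(BITSTREAM)

    with open("/sys/class/fpga_manager/fpga0/state") as f:
        print(f.read().strip())  # should report "operating" after a good load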

Continue reading “A Xilinx Zynq Linux FPGA Board For Under $20? The Windfall Of Decommissioned Crypto Mining”

Exploring Custom Firmware On Xiaomi Thermometers

If we’ve learned anything over the years, it’s that hackers love to know what the temperature is. Seriously. A stroll through the archives here at Hackaday uncovers an overwhelming number of bespoke gadgets for recording, displaying, and transmitting the current conditions. From outdoor weather stations to an ESP8266 with a DHT11 soldered on, there’s no shortage of prior art should you want to start collecting your own environmental data.

Now obviously we’re big fans of doing it yourself here; that’s sort of the point of the whole website. But there’s no denying that it can be hard to compete with the economies of scale, especially when dealing with imported goods. Even the most experienced hardware hacker would have trouble building something like the Xiaomi LYWSD03MMC. For as little as $4 USD each, you’ve got a slick, energy-efficient sensor with an integrated LCD that broadcasts the current temperature and humidity over Bluetooth Low Energy.

You could probably build your own…but why?

It’s pretty much the ideal platform for setting up a whole-house environmental monitoring system except for one detail: it’s designed to work as part of Xiaomi’s home automation system, and not necessarily the hacked-together setups that folks like us have going on at home. But that was before Aaron Christophel got on the case.

We first brought news of his ambitious project to create an open source firmware for these low-cost sensors last month, and unsurprisingly it generated quite a bit of interest. After all, folks taking existing pieces of hardware, making them better, and sharing how they did it with the world is a core tenet of this community.

Believing that such a well-crafted project deserved a second look, and frankly because I wanted to start monitoring the conditions in my own home on the cheap, I decided to order a pack of Xiaomi thermometers and dive in.
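
To give a sense of what the reflashed sensors offer, here’s a rough sketch of listening for their broadcasts on a PC using Python and the bleak library. The byte layout decoded below (temperature, humidity, and battery packed into the Environmental Sensing service data) follows the popular “ATC” convention, but treat the offsets as an assumption and verify them against whichever firmware build you flash:

    import asyncio
    from bleak import BleakScanner

    # 16-bit Environmental Sensing UUID in its expanded 128-bit form
    ENV_SENSE = "0000181a-0000-1000-8000-00805f9b34fb"

    def on_advertisement(device, adv):
        data = adv.service_data.get(ENV_SENSE)
        if not data or len(data) < 13:
            return
        # Assumed ATC-style layout: MAC (6 bytes), temperature as a signed
        # big-endian 16-bit value in 0.1 C steps, humidity %, battery %,
        # battery mV, frame counter -- check against your firmware build
        temp_c = int.from_bytes(data[6:8], "big", signed=True) / 10
        humidity = data[8]
        battery = data[9]
        print(f"{device.address}: {temp_c:.1f} C, {humidity}% RH, {battery}% battery")

    async def main():
        scanner = BleakScanner(on_advertisement)
        await scanner.start()
        await asyncio.sleep(30)  # listen for half a minute
        await scanner.stop()

    asyncio.run(main())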

Continue reading “Exploring Custom Firmware On Xiaomi Thermometers”

How To Get Into Lost Wax Casting (with A Dash Of 3D Printing)

I’ve always thought that there are three things you can do with metal: cut it, bend it, and join it. Sure, I knew you could melt it, but that was always something that happened in big foundries: you design something and ship it off to be cast in some large angular building churning out smoke. After all, melting most metals is hard. Silver melts at 1,763 °F. Copper at 1,983 °F. Not only do you need to create an environment that can hit those temperatures, but you need to build it from materials that can withstand them.
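
For readers who think in Celsius, a quick sanity check puts those figures at roughly 962 °C and 1,084 °C:

    def f_to_c(fahrenheit: float) -> float:
        """Convert a temperature from Fahrenheit to Celsius."""
        return (fahrenheit - 32) * 5 / 9

    for metal, melt_f in [("silver", 1763), ("copper", 1983)]:
        print(f"{metal} melts at {f_to_c(melt_f):.0f} C")
    # silver melts at 962 C
    # copper melts at 1084 C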

Turns out, melting metal is not so bad. Surprisingly, I’ve found that the hardest part of the process for an engineer like myself at least, is creating the pattern to be replicated in metal. That part is pure art, but thankfully I learned that we can use technology to cheat a bit.

When I decided to take up casting earlier this year, I knew pretty much nothing about it. Before we dive into the details here, let’s go through a quick rundown to save you the first day I spent researching the process. At its core, here are the steps involved in lost wax, or investment, casting:

  1. Make a pattern: a wax or plastic replica of the part you’d like to create in metal, sized up slightly for shrinkage (see the sketch after this list)
  2. Make a mold: pour plaster around the pattern, then burn out the wax to leave a hollow cavity
  3. Pour the metal: melt some metal and pour it into the cavity
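
On that first step: metal shrinks as it solidifies and cools, so patterns destined for casting are usually made slightly oversized. Here’s a hypothetical helper, with ballpark shrinkage fractions that you should replace with figures for your actual alloy:

    # Hypothetical helper: oversize a 3D-printed pattern to allow for the
    # metal shrinking as it cools. These fractions are rough ballpark
    # figures only -- look up the linear shrinkage of your actual alloy.
    SHRINKAGE = {"silver": 0.017, "bronze": 0.016, "aluminum": 0.013}

    def pattern_scale(metal: str) -> float:
        """Return the factor to scale a 3D model by before printing."""
        return 1.0 / (1.0 - SHRINKAGE[metal])

    print(f"{pattern_scale('silver'):.3f}")  # ~1.017, i.e. print ~1.7% oversize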

I had been kicking around the idea of trying this since last fall, but didn’t really know where to begin. There seemed to be a lot of equipment involved, and I’m no sculptor, so I knew that making patterns would be a challenge. I had heard that you could 3D-print wax patterns instead of carving them by hand, but the best machine for the job is an SLA printer, which is prohibitively expensive, or so I thought.

Continue reading “How To Get Into Lost Wax Casting (with A Dash Of 3D Printing)”

Vacuum Tube Logic Hack Chat

Join us on Wednesday, December 9th at noon Pacific for the Vacuum Tube Logic Hack Chat with David Lovett!

For most of us, circuits based on vacuum tubes are remnants of a technological history that is rapidly fading from our collective memory. To be sure, there are still applications for thermionic emission, especially in power electronics and specialized switching applications. But by and large, progress has left vacuum tubes in a cloud of silicon dust, leaving mainly audiophiles and antique radio enthusiasts to figure out the hows and whys of plates and grids and filaments.

But vacuum tubes aren’t just for the analog world. Some folks like making tubes do tricks they haven’t had to perform since the early days of the computer age. Vacuum tube digital electronics seems like a contradiction in terms, but David Lovett, aka Usagi Electric on YouTube, has fallen for it in a big way. His channel is dedicated to working through the analog building blocks of digital logic circuits using tubes almost exclusively. He has come up with unique circuits that don’t require the high bias voltages typically needed, making the circuits easy to work with using equipment likely to be found in any solid-state experimenter’s lab.

David will drop by the Hack Chat to share his enthusiasm for vacuum tube logic and his tips for exploring the sometimes strange world of flying electrons. Join us as we discuss how to set up your own vacuum tube experiments, learn what thermionic emission can teach us about solid-state electronics, and maybe even get a glimpse of what lies ahead in his lab.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, December 9 at 12:00 PM Pacific time. If time zones have you tied up, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.

Continue reading “Vacuum Tube Logic Hack Chat”


Hackaday Links: December 6, 2020

By now you’ve no doubt heard of the sudden but not unexpected demise of the iconic Arecibo radio telescope in Puerto Rico. We have been covering the agonizing end of Arecibo almost from the moment the first cable broke in August, through a eulogy for the great dish, and most recently its final catastrophic collapse this week. That last article contained amazing video of the collapse, including up-close and personal drone shots of the cables breaking. For a more in-depth analysis, it’s hard to beat Scott Manley’s frame-by-frame breakdown, which really goes into detail about what happened. Seeing the paint spalling off the cables as they stretch and distort under loads far greater than they were designed for is both terrifying and fascinating.

Exciting news from Australia as the sample return capsule from JAXA’s Hayabusa2 asteroid explorer returned safely to Earth Saturday. We covered Hayabusa2 in our roundup of extraterrestrial excavations a while back, describing how it used both a tantalum bullet and a shaped-charge penetrator to blast regolith from the surface of asteroid 162173 Ryugu. Samples of the debris were hoovered up and hermetically sealed for the long ride back to Earth, which culminated in the fiery re-entry and safe landing in the midst of the Australian outback. Planetary scientists are no doubt eager to get a look inside the capsule and analyze the precious milligrams of space dust. In the meantime, Hayabusa2, with 66 kilograms of propellant remaining, is off on an extended mission to visit more asteroids for the next eleven years or so.

The 2020 Remoticon has been wrapped up for most of a month now, but one thing we noticed was how much everyone seemed to like the Friday evening Bring-a-Hack event that was hosted on Remo. To kind of keep that meetup momentum going and to help everyone slide into the holiday season with a little more cheer, we’re putting together a “Holiday with Hackaday & Tindie” meetup on Tuesday, December 15 at noon Pacific time. The details haven’t been shared yet, but our guess is that this will certainly be a “bring-a-hack friendly” event. We’ll share more details when we get them this week, but for now, hop over to the Remo event page and reserve your spot.

On the Buzzword Bingo scorecard, “Artificial Intelligence” is a square that can almost be checked off by default these days, as companies rush to stretch the definition of the term to fit almost every product in the never-ending search for market share. But even those products that actually have machine learning built into them are only as good as the data sets used to train them. That can be a problem for voice-recognition systems; while there are massive databases of utterances in just about every language, the likes of Amazon and Google aren’t too willing to share what they’ve leveraged from their smart-speaker-using customer base. What’s the little person to do? Perhaps the People’s Speech database will help. Part of the MLCommons project, it has 86,000 hours of speech data, mostly derived from audiobooks, a clever source indeed since the speech and the text can be easily aligned. The database also pulls audio and the corresponding text from Wikipedia and other random sources around the web. It’s a drop in the bucket compared to what the tech giants are sitting on, but it’s a start.

And finally, divers in the Baltic Sea have dredged up a bit of treasure: a Nazi Enigma machine. Divers in Gelting Bay near the border of Germany and Denmark found what appeared to be an old typewriter caught in one of the abandoned fishing nets they were searching for. When they realized what it was (even under decades’ worth of corrosion and muck, some keys still look nearly brand new), they called in archaeologists to take over the recovery. Gelting Bay was the scene of a mass scuttling of U-boats in the final days of World War II, so this Enigma may have been pitched overboard by a U-boat commander before he pulled the plug on his boat. It’ll take years to restore, but it’ll be quite a museum piece when it’s done.

Sufficiently Advanced Technology And Justice

Imagine that you’re serving on a jury, and you’re given an image taken from a surveillance camera. It looks pretty much like the suspect, but the image has been “enhanced” by an AI from the original. Do you convict? How does this weigh out on the scales of reasonable doubt? Should you demand to see the original?

AI-enhanced, upscaled, or otherwise modified images are tremendously realistic. But what they’re showing you isn’t reality. When we wrote about this last week, [Denis Shiryaev], one of the authors of one of the methods we highlighted, weighed in via the comments to point out that these modifications aren’t “restorations” of the original. While they might add incredibly fine detail, they don’t recreate or restore reality. The neural net creates its own reality, out of the millions and millions of faces that it’s learned.

And for the purposes of identification, that’s exactly the problem: the facial features of millions of other people have been used to increase the resolution. Can you identify the person in the pixelized image? Can you identify that same person in the resulting up-sampling? If the question put before the jury was “is the defendant a former president of the USA?” you’d answer the question differently depending on which image you were presented. And you’d have a misleading level of confidence in your ability to judge the AI-retouched photo. Clearly, informed skepticism on the part of the jury is required.
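
If that argument feels abstract, a few lines of NumPy make it concrete: completely different originals can collapse to exactly the same low-resolution image, so any upscaler has to invent the differences. This toy demonstration (standing in for no particular enhancement model) builds two distinct 4×4 “images” with identical 2×2 downsamples:

    import numpy as np

    # Toy demonstration: downsampling is many-to-one, so no upscaler can
    # know which original produced a given low-resolution image.
    rng = np.random.default_rng(0)
    a = rng.integers(10, 246, (4, 4))

    b = a.copy()
    b[0, 0] += 4  # change two pixels within the same 2x2 block...
    b[0, 1] -= 4  # ...so that the block's average stays identical

    def downsample(img):
        """Average each 2x2 block down to a single pixel."""
        return img.reshape(2, 2, 2, 2).mean(axis=(1, 3))

    assert not np.array_equal(a, b)                   # different originals
    assert np.allclose(downsample(a), downsample(b))  # same low-res image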

Unfortunately, we’ve all seen countless examples of “zoom, enhance” in movies and TV shows being successfully used to nab the perps and nail their convictions. We haven’t seen nearly as much detailed analysis of how adversarial neural networks create faces out of a scant handful of pixels. This, combined with the almost magical resolution of the end product, would certainly sway a jury of normal folks. On the other hand, the popularity of intentionally misleading “deep fakes” might help educate the public to the dangers of believing what they see when AI is involved.

This is just one example, but it shows why keeping the public interested in, and educated on, the deep workings and limitations of the technology that’s running our world is more important than ever before. Some of the material is truly hard, though. How do we separate the science from the magic?

Remoticon Video: How To Use Machine Learning With Microcontrollers

Going from a microcontroller blinking an LED to one that blinks the LED in response to voice commands, recognized by a neural network you trained yourself, is a “now draw the rest of the owl” problem. Lucky for us, Shawn Hymel walks us through the entire process in his TinyML workshop from the 2020 Hackaday Remoticon. The video has just been published and can be viewed below.

This is truly an end-to-end Hello World for getting machine learning up and running on a microcontroller. Shawn covers the process of collecting and preparing the audio samples, training the model, and getting it all onto the microcontroller. At the end of two hours, he’s able to show the STM32 recognizing and responding to two different spoken words. Along the way he pauses to discuss the context of what’s happening in every step, which will help you go back and expand on those areas later to suit your own project needs.
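
For a taste of the overall shape before you commit two hours to the video, the final steps look something like the following generic Keras/TensorFlow Lite sketch. This is not Shawn’s exact pipeline: the file names are hypothetical, and it assumes the recordings have already been converted into MFCC feature arrays:

    import numpy as np
    import tensorflow as tf

    # Hypothetical preprocessed dataset: X holds MFCC frames with shape
    # (n_samples, 49, 13) and y holds 0/1 labels for the two keywords.
    X = np.load("mfccs.npy")
    y = np.load("labels.npy")

    # A deliberately tiny convolutional model, small enough for an STM32
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(49, 13, 1)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X[..., None], y, epochs=20, validation_split=0.2)

    # Convert to a TensorFlow Lite flatbuffer for TFLite Micro on the target
    tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
    with open("kws_model.tflite", "wb") as f:
        f.write(tflite_model)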

Continue reading “Remoticon Video: How To Use Machine Learning With Microcontrollers”