Mining And Refining: Lead, Silver, And Zinc

If you are in need of a lesson on just how much things have changed in the last 60 years, an anecdote from my childhood might suffice. My grandfather was a junk man, augmenting the income from his regular job by collecting scrap metal and selling it to metal recyclers. He knew the current scrap value of every common metal, and his garage and yard were stuffed with barrels of steel shavings, old brake drums and rotors, and miles of copper wire.

But his most valuable scrap was lead, specifically the weights used to balance car wheels, which he’d buy as waste from tire shops. The weights had spring steel clips that had to be removed before the scrap dealers would take them, which my grandfather did by melting them in a big cauldron over a propane burner in the garage. I clearly remember hanging out with him during his “melts,” fascinated by the flames and simmering pools of molten lead, completely unconcerned by the potential danger of the situation.

Fast-forward a few too many decades, and in an ironic twist I find myself living very close to the place where all that lead probably came from, a place that was also blissfully unconcerned by the toxic consequences of pulling this valuable industrial metal from tunnels burrowed deep into the Bitterroot Mountains. It didn’t help that the lead-bearing ores also happened to be especially rich in other metals, including zinc and copper. But the real prize was silver, present in such abundance that the most productive silver mine in the world was once located in a place known as “Silver Valley” to this day. Together, these three metals made fortunes for North Idaho, with unfortunate side effects from the mining and refining processes used to win them from the mountains.

Continue reading “Mining And Refining: Lead, Silver, And Zinc”

Fukushima Daiichi: Cleaning Up After A Nuclear Accident

On 11 March 2011, a massive magnitude 9.1 earthquake struck off the east coast of Japan, its hypocenter at a shallow depth of 32 km, a mere 72 km from the Oshika Peninsula in the Tōhoku region. The earthquake was followed by an equally massive tsunami that swept over Japan’s eastern shores, flooding many kilometers inland. Over 20,000 people were killed by the earthquake and tsunami, thousands of whom were dragged into the ocean when the tsunami retreated. This Tōhoku earthquake was among the most devastating in Japan’s history, not only in human and economic cost, but also in the effect it had on one of Japan’s nuclear power plants: the six-unit Fukushima Daiichi plant.

In its subsequent report, the investigation commission of the Japanese Diet noted a lack of safety culture at the plant’s owner, TEPCO, along with significant corruption and poor emergency preparation, all of which resulted in the preventable meltdown of three of the plant’s reactors and a botched evacuation. Although TEPCO was subsequently nationalized and a new nuclear regulatory body established, Japan was still left with the daunting task of cleaning up the damaged Fukushima Daiichi nuclear plant.

Removal of the damaged fuel is the biggest priority, as it constitutes the main radiation hazard. This year TEPCO has begun work on removing the melted fuel debris inside the reactor cores, the outcome of which will set the pace for the rest of the clean-up.

Continue reading “Fukushima Daiichi: Cleaning Up After A Nuclear Accident”

A Field Guide To The North American Substation

Drive along nearly any major road in the United States and it won’t be long before you see evidence of the electrical grid. Whether it’s wooden poles strung along the right of way or a line of transmission towers marching across the countryside in the distance, signs of the grid are never far from view but often go ignored, blending into the infrastructure background and becoming one with the noise of our built environment.

But there’s one part of the electrical grid that, despite being more widely distributed and often relegated to locations off the beaten path, is hard to ignore. It’s the electrical substation, more than 55,000 of which dot the landscape of the US alone. They’re part of a continent-spanning machine that operates as one to move electricity from where it’s produced to where it’s consumed, all within the same instant of time. These monuments of galvanized steel are filled with strange, humming equipment of inscrutable purpose, seemingly operating without direct human intervention. But if you look carefully, there’s a lot of fascinating engineering going on behind those chain-link fences with the forbidding signage, and the arrangement of equipment within them tells an interesting story about how the electrical grid works, and what the consequences are when it doesn’t.

Continue reading “A Field Guide To The North American Substation”

Undersea Cable Repair

The bottom of the sea is a mysterious and inaccessible place, and anything unfortunate enough to slip beneath the waves and into the briny depths might as well be on the Moon. But the bottom of the sea really isn’t all that far away. The average depth of the ocean is only about 3,600 meters, and even at its deepest, the bottom is only about 10 kilometers away, a distance almost anyone could walk in a couple of hours.

Of course, the problem is that the walk would be straight down into one of the most inhospitable environments our planet has to offer. Despite its harshness, that environment is home to hundreds of undersea cables, all of which are subject to wear and tear through accidents and natural causes. Fixing broken undersea cables quickly and efficiently is a highly specialized field, one that takes a lot of interesting engineering and some clever hacks to pull off.

Continue reading “Undersea Cable Repair”

AMD Returns To 1996 With Zen 5’s Two-Block Ahead Branch Predictor

An interesting finding in fields like computer science is that much of what is advertised as new and innovative was actually pilfered from old research papers submitted to the ACM and others. This is not necessarily a bad thing, as many such ideas were simply not practical at the time. Case in point: the new branch predictor in AMD’s Zen 5 CPU architecture, whose two-block ahead design is based on an idea first proposed a few decades ago. The details are laid out by [George Cozma] and [Camacho] in a recent article, which follows an interview that [George] did with AMD’s [Mike Clark].

The 1996 ACM paper by [André Seznec] and colleagues, titled “Multiple-block ahead branch predictors”, is a good starting point before diving into [George]’s article, as it helps make sense of many of the details. The reason for improving branch prediction in CPUs is fairly self-evident: today’s deeply pipelined, superscalar CPUs rely heavily on branch prediction and speculative execution to get around the glacial speed of system memory once past the CPU’s speediest caches. While predicting the next instruction block after a branch is already common practice, the two-block ahead approach also predicts the next instruction block after the first predicted one.

Perhaps unsurprisingly, the multi-block ahead prediction by itself isn’t the hard part; making it all fit in the hardware is. As described in the paper by [Seznec] et al., the relevant components become dual-ported, allowing for three prediction windows. In theory this should result in a significant boost in IPC, and could mean that more CPU manufacturers will be looking at adding such multi-block branch prediction to their designs. We will just have to see how Zen 5 performs once released into the wild.
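
For the curious, here’s a toy sketch of the core idea, purely illustrative: the block size, table structure, and names below are assumptions, not AMD’s or [Seznec]’s actual design. The point is that each predictor entry stores not just the next fetch block, but also the block expected to follow it, so a single lookup hands the front end two fetch targets.

```python
# Toy model of two-block ahead prediction. Real hardware uses indexed,
# dual-ported SRAM tables; this only illustrates the data flow.

class TwoBlockAheadPredictor:
    BLOCK_SIZE = 64  # assumed fetch-block size in bytes

    def __init__(self):
        # hypothetical table: block address -> (next block, block after next)
        self.table = {}

    def predict(self, block_addr):
        """One lookup returns up to two predicted fetch targets."""
        entry = self.table.get(block_addr)
        if entry is None:
            # no history yet: fall through sequentially
            return (block_addr + self.BLOCK_SIZE,
                    block_addr + 2 * self.BLOCK_SIZE)
        return entry

    def update(self, block_addr, actual_next, actual_after_next):
        """Train the entry once both following blocks have resolved."""
        self.table[block_addr] = (actual_next, actual_after_next)

pred = TwoBlockAheadPredictor()
pred.update(0x1000, 0x2000, 0x3040)
# One prediction now covers two fetch blocks, so the fetch unit can work
# on both while the first block is still being decoded:
print([hex(a) for a in pred.predict(0x1000)])  # ['0x2000', '0x3040']
```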

Carbon–Cement Supercapacitors Proposed As An Energy Storage Solution

Although most grid-level energy storage solutions focus on batteries, a group of researchers at MIT and Harvard University have proposed using supercapacitors instead, in a 2023 research article by [Nicolas Chanut] and colleagues published in the Proceedings of the National Academy of Sciences (PNAS). The twist is that rather than using any existing supercapacitor design, their proposal involves conductive concrete (courtesy of carbon black) on both sides of an electrolyte-infused insulating membrane. They foresee this technology being used alongside green concrete to become part of a renewable energy transition, as per a presentation given at the American Concrete Institute (ACI).

Functional carbon-cement supercapacitors (connected in series) (Credit: Damian Stefaniuk et al.)

Putting aside the hairy issue of a massive expansion of grid-level storage, could a carbon-cement supercapacitor perhaps provide a way to turn the concrete foundation of a house into a whole-house energy storage cell for use with rooftop PV solar? While the current prototype isn’t quite building-sized yet, in the research article the authors provide some educated guesstimates, arriving at a very rough 20 – 220 Wh/m³, which would make this solution either not very great or somewhat interesting.
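
To put those density figures in perspective, a quick back-of-the-envelope calculation helps; note that the 30 m³ foundation volume below is an assumption for illustration, not a number from the paper:

```python
# Rough scaling check for a hypothetical house foundation.
foundation_volume_m3 = 30.0   # assumed slab-plus-footing volume
density_low_wh_m3 = 20.0      # lower bound from the article
density_high_wh_m3 = 220.0    # upper bound from the article

low_kwh = foundation_volume_m3 * density_low_wh_m3 / 1000.0
high_kwh = foundation_volume_m3 * density_high_wh_m3 / 1000.0
print(f"{low_kwh:.1f} to {high_kwh:.1f} kWh")  # 0.6 to 6.6 kWh
```

For comparison, a typical residential battery pack stores on the order of 10 kWh, so only the optimistic end of that range starts to look useful.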

The primary benefit of this technology would be its low cost, with cement and concrete already extremely prevalent in construction due to their affordability. As the researchers note, however, adding carbon black does compromise the concrete somewhat, and there are many open questions regarding longevity. A short within the carbon-cement capacitor due to moisture intrusion and rust jacking around rebar, for example, would surely make short work of these capacitors.

Swapping out the concrete foundation of a building to fix a short is no small feat, but maybe some lessons could be learned from self-healing Roman concrete.

The Flash Memory Lifespan Question: Why QLC May Be NAND Flash’s Swan Song

The late 1990s saw the widespread introduction of solid-state storage based around NAND Flash. With products ranging from memory cards for portable devices to storage for desktops and laptops, the data storage future was prophesied to rid us of the shackles of magnetic storage that had held us down until then. As solid-state drives (SSDs) took off in the consumer market, there were those who confidently predicted that before long everyone would be using SSDs, with hard-disk drives (HDDs) relegated to the dustbin of history, as the price per gigabyte and general performance of SSDs would simply be too competitive.

Fast-forward a number of years, and we are now in a timeline where people are modifying SSDs to have less storage space, just so that their performance and lifespan are less terrible. The reason is that NAND Flash has by now hit a number of limits that prevent further density scaling, mostly in terms of its feature size. Workarounds include stacking more layers on top of each other (3D NAND) and increasing the number of voltage levels – and thus bits – within an individual cell. Although this has boosted storage capacity, the transition from single-level cell (SLC) to multi-level cell (MLC) and on to today’s TLC and QLC NAND Flash has come at severe penalties, mostly in the form of limited write cycles and much-reduced transfer speeds.
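
A quick sketch makes the scaling problem concrete; the fixed voltage window below is an assumed round number for illustration, not a datasheet value:

```python
# Each extra bit per cell doubles the number of charge levels that must
# fit in the same cell voltage window, shrinking the margin between
# adjacent levels and making reads slower and errors more likely.

VOLTAGE_WINDOW_MV = 6400  # assumed usable window, illustration only

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    levels = 2 ** bits
    margin_mv = VOLTAGE_WINDOW_MV / (levels - 1)
    print(f"{name}: {bits} bit(s)/cell, {levels:2d} levels, "
          f"~{margin_mv:4.0f} mV between levels")
```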

So how did we get here, and is there life beyond QLC NAND Flash?

Continue reading “The Flash Memory Lifespan Question: Why QLC May Be NAND Flash’s Swan Song”