Fukushima Daiichi at night

A Tritium Story: How Afraid Should You Be Of Hydrogen’s Big Brother?

Despite being present in everything that contains water, tritium is not an isotope that many people are all that familiar with outside of select (geeky) channels, such as the tritium-containing DEF CON badge, the always excellent NurdRage’s assembly of a tritium-based atomic battery, or the creation of a tritium-phosphor-based glow-in-the-dark tesseract cube.

Tritium is a hydrogen isotope that shares many characteristics with its two siblings, ¹H (protium) and ²H (deuterium). The main distinction is that tritium (³H) is not stable: with a half-life of ~12.32 years, it decays into ³He. Most naturally occurring tritium on Earth originates from interactions between fast neutrons (>4.0 MeV) from cosmic radiation and atmospheric nitrogen.
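That decay follows the usual exponential law; a minimal sketch using the half-life above:

```python
# Fraction of a tritium sample remaining after t years,
# given tritium's ~12.32-year half-life.
HALF_LIFE_YEARS = 12.32

def fraction_remaining(t_years: float) -> float:
    """Exponential decay: N(t)/N0 = 0.5 ** (t / half-life)."""
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

# After one half-life, half the tritium has become helium-3;
# after a century, well under 1% is left.
print(fraction_remaining(12.32))   # 0.5
print(fraction_remaining(100.0))   # ~0.004
```

This is also why tritium glow devices dim noticeably over a decade or two: the light output tracks the remaining tritium.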

Recently, tritium has become a politically hot topic on account of the announced release of treated water from the Japanese Fukushima Daiichi nuclear plant. This has raised for many the question of just how much tritium is ‘too much’, and what we’re likely to notice from this treated, but still tritium-containing, water being released into the ocean.

Continue reading “A Tritium Story: How Afraid Should You Be Of Hydrogen’s Big Brother?”

Creating Methane From Captured Carbon Dioxide And The Future Of Carbon Capture

There’s something intrinsically simple about the concept of carbon dioxide (CO2) capture: have the CO2 molecules absorbed or adsorbed by something, then separate the captured CO2 and put it somewhere safe. Unfortunately, in physics and chemistry what seems easy and straightforward tends to be anything but, let alone energy-efficient. While methods for carbon capture have been around for decades, making the process economically viable has always been a struggle.

This is true both for carbon capture and storage/sequestration (CCS) as well as carbon capture and utilization (CCU). Whereas the former seeks to store and ideally permanently remove (sequester) carbon from the atmosphere, the latter captures carbon dioxide for use in e.g. industrial processes.

Recently, Pacific Northwest National Laboratory (PNNL) announced a breakthrough CCU concept that uses a new amine-based solvent (2-EEMPA), which is claimed to be not only more efficient than the commonly used MEA (monoethanolamine), but also compatible with producing methane directly in the same process.
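The details of PNNL’s integrated process aren’t spelled out here, but the conventional route from CO2 to methane is the well-known Sabatier reaction, CO2 + 4 H2 → CH4 + 2 H2O. A back-of-the-envelope mass balance from molar masses alone (real-world yields will be lower, and the hydrogen has to come from somewhere):

```python
# Mass balance for Sabatier methanation: CO2 + 4 H2 -> CH4 + 2 H2O.
# Molar masses in g/mol.
M_CO2, M_H2, M_CH4 = 44.01, 2.016, 16.04

def methanation_per_tonne_co2():
    """Tonnes of H2 consumed and CH4 produced per tonne of captured CO2."""
    h2_needed = 4 * M_H2 / M_CO2    # ~0.18 t of hydrogen per tonne of CO2
    ch4_out = M_CH4 / M_CO2         # ~0.36 t of methane per tonne of CO2
    return h2_needed, ch4_out

h2, ch4 = methanation_per_tonne_co2()
print(f"{h2:.3f} t H2 in, {ch4:.3f} t CH4 out per tonne of CO2")
```

The hydrogen requirement is the crux: unless that hydrogen is produced with low-carbon energy, the resulting SNG is anything but carbon-neutral.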

Since methane forms the major component in natural gas, might this be a way for CCU to create a carbon-neutral source of synthetic natural gas (SNG)? Continue reading “Creating Methane From Captured Carbon Dioxide And The Future Of Carbon Capture”

The Coming Copper Shortage: Aluminium Or Carbon Nanotubes To The Rescue?

The use of aluminium in wiring is unlikely to bring a smile to the face of anyone who has had to deal with it in a 1960s or early 1970s-era house. The causes behind the fires and other accidents were myriad: failure to account for aluminium’s higher thermal expansion, the electrically insulating nature of aluminium oxide, and the general brittleness of aluminium when twisted.

Yet while copper is superior to aluminium in terms of electrical conductivity and ease of installation, copper prices have skyrocketed since the 1970s and are on the verge of taking off to the moon. A big part of the reason is the increased use of copper in everything from electronics and electric motors to generators, driven by the large-scale deployment of wind turbines and electric vehicles.
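To put that trade-off in numbers, here is a sketch using typical handbook room-temperature resistivities and densities (the values are standard reference figures, not from this article). Matching a copper conductor’s resistance takes roughly 58% more aluminium cross-section, yet the fatter aluminium wire still weighs less than half as much:

```python
# Sizing an aluminium conductor to match a copper one's resistance,
# using R = rho * L / A (equal length, equal resistance).
RHO_CU, RHO_AL = 1.68e-8, 2.65e-8   # resistivity, ohm*m, ~20 degrees C
DENS_CU, DENS_AL = 8960.0, 2700.0   # density, kg/m^3

# Area must scale with resistivity to keep R constant.
area_ratio = RHO_AL / RHO_CU                  # ~1.58x the cross-section
mass_ratio = area_ratio * DENS_AL / DENS_CU   # ~0.48x the mass

print(f"area ratio {area_ratio:.2f}, mass ratio {mass_ratio:.2f}")
```

That weight (and cost) advantage is exactly why overhead transmission lines have used aluminium for a century; the trouble has always been the terminations, not the conductor itself.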

As the world moves to massively expand the use of electric cars and the installation of wind turbines, copper demand is predicted to outstrip current supply. With aluminium likely to make a big return as a result, it’s worth taking a look at modern-day aluminium-based wiring, including copper-clad aluminium and the use of carbon-based replacements. Continue reading “The Coming Copper Shortage: Aluminium Or Carbon Nanotubes To The Rescue?”

Image of CFS's SPARC reactor

Commonwealth Fusion’s 20 Tesla Magnet: A Bright SPARC Towards Fusion’s Future

After decades of nuclear fusion power being perpetually ten years away, we are suddenly looking at a handful of endeavours striving to be the first to reach Q > 1: the moment when a nuclear fusion reactor produces more power than is required to drive the fusion process in the first place. At this point the Joint European Torus (JET) reactor holds the world record with a Q of 0.67.
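For reference, Q is simply the ratio of fusion power out to heating power in; JET’s oft-cited 1997 record shot produced roughly 16 MW of fusion power for about 24 MW of heating (the exact figures here are approximate):

```python
# Fusion gain factor: Q = fusion power out / heating power in.
# Q > 1 ("scientific breakeven") means the plasma releases more
# fusion power than was injected to heat it.
def fusion_q(p_fusion_mw: float, p_heating_mw: float) -> float:
    return p_fusion_mw / p_heating_mw

# JET's record shot: ~16.1 MW of fusion from ~24 MW of heating.
print(round(fusion_q(16.1, 24.0), 2))   # 0.67
```

Note that Q counts only power into the plasma; a commercially viable plant needs a far higher Q once the inefficiencies of the heating systems, magnets, and electricity generation are included.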

At the same time, a large international group is busily constructing the massive ITER tokamak test reactor in France, although it won’t begin fusion experiments until the mid-2030s. The idea is that ITER will provide the data required to construct the first DEMO reactors that might see viable commercial fusion as early as the 2040s, optimistically.

And then there’s Commonwealth Fusion Systems (CFS), a fusion energy startup. Where CFS differs is that they don’t seek to go big, but instead aim to make a tokamak system that’s affordable, compact, and robust. With their recent demonstration of a 20 tesla (T) high-temperature superconducting (HTS) rare-earth barium copper oxide (ReBCO) magnet field coil, they made a big leap towards their demonstration reactor: SPARC.
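Why the field strength matters so much: achievable fusion power density in a tokamak scales roughly with the fourth power of the magnetic field, while the coil structure has to contain a magnetic pressure that grows with the square of the field. A quick sketch of both (the ~12 T figure used as ITER’s peak coil field is an approximation for comparison):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m

def magnetic_pressure_mpa(b_tesla: float) -> float:
    """Magnetic pressure B^2 / (2 * mu0), in megapascals."""
    return b_tesla**2 / (2 * MU0) / 1e6

# At 20 T the coil must contain ~159 MPa, comparable to the
# yield strength of many structural steels.
pressure = magnetic_pressure_mpa(20.0)
print(round(pressure))   # ~159

# Power density ~ B^4: going from ~12 T to 20 T buys a large
# factor, which is how a compact device can compete with ITER.
gain = (20.0 / 12.0) ** 4
print(round(gain, 1))    # ~7.7
```

That B² pressure term is also why the magnet demonstration itself was the hard part: the structure holding the coil together is as much of an engineering feat as the superconductor.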

A Story of Tokamaks

CFS didn’t appear out of nowhere. Their roots lie in the nuclear fusion research performed at MIT since the 1960s, where a scientist called Bruno Coppi worked on the Alcator A (ALto CAmpo TORo, Italian for high-field torus) tokamak, which saw first plasma in 1972. After a brief period with a B-revision of Alcator, the Alcator C was constructed with a big power supply upgrade. Continue reading “Commonwealth Fusion’s 20 Tesla Magnet: A Bright SPARC Towards Fusion’s Future”

Mechanisms Behind Vaccine Side-Effects: The Science That Causes That Sore Arm

After receiving a vaccination shot, it’s likely that we’ll feel some side-effects. These can range from merely a sore arm to swollen lymph nodes and even a fever. Which side-effects to expect depends on the exact vaccine, with each type and variant coming with its own list of common side-effects. Each person’s immune system will also react differently, which makes it hard to say exactly what one can expect after receiving a vaccination.

What we can do is look closer at the underlying mechanisms that cause these side-effects, to try and understand why they occur and how to best deal with them. Most relevant here for the initial response is the body’s innate immune system, with dendritic cells generally being among the first to come into contact with the vaccine and to present the antigen to the body’s adaptive immune system.

Key to the redness, swelling, and fever are substances produced by the body, including various cytokines as well as prostaglandins, which produce the symptoms seen with inflammation and injury.

Continue reading “Mechanisms Behind Vaccine Side-Effects: The Science That Causes That Sore Arm”

Powering Up With USB: Untangling The USB Power Delivery Standards

Powering external devices directly from a PC’s I/O ports has been a thing since long before USB was even a twinkle in an engineer’s eye. Some of us may remember the all too common PS/2 pass-through leads that’d tap into the 275 mA available via these ports. When USB was first released, it provided a maximum of 500 mA, which USB 3.0 increased to 900 mA.

For the longest time, this power was meant only to let peripherals like keyboards, mice, and similarly trivial devices run without their own power adapters. As the number of computer-connected gadgets increased, USB became the primary way to not only power small devices directly, but also to charge battery-powered devices and, ultimately, to deliver power more generally.

Which brings us to the USB Power Delivery (USB-PD) protocol. Confusingly, USB-PD encompasses a number of different standards, ranging from fixed-voltage charging to Programmable Power Supply (PPS) and Adjustable Voltage Supply (AVS). What are the exact differences between these modes, and how does one go about using them? Continue reading “Powering Up With USB: Untangling The USB Power Delivery Standards”
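As a rough sketch of the fixed-voltage side of USB-PD (the standard power range), these are the nominal voltage/current pairs; note that drawing 5 A at 20 V requires an e-marked cable, while plain cables are limited to 3 A:

```python
# USB-PD standard-power-range fixed supply profiles: (volts, max amps).
# 5 A is only allowed at 20 V and only over an e-marked cable.
FIXED_PROFILES = [
    (5.0, 3.0),    # 15 W
    (9.0, 3.0),    # 27 W
    (15.0, 3.0),   # 45 W
    (20.0, 5.0),   # 100 W with an e-marked cable
]

for volts, amps in FIXED_PROFILES:
    print(f"{volts:>4} V @ {amps} A = {volts * amps:>5.0f} W")

max_watts = max(v * a for v, a in FIXED_PROFILES)
```

PPS instead lets the sink request the supply voltage in small steps within a range, which is what enables chargers to drive a battery’s charge curve directly.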

Lithium Mine To Battery Line: Tesla Battery Day And The Future Of EVs

After last year’s Tesla Battery Day presentation and the flurry of information that came out of it, [The Limiting Factor] spent many months researching the countless topics behind Tesla’s announced plans, the resource markets for everything from lithium to copper and cobalt, and what all of this means for electrical vehicles (EVs) as well as batteries for both battery-electric vehicles (BEVs) and power storage.

A number of these changes are immediate, such as the use of the battery pack as a structural element to save the weight of a supporting structure, while others are longer-term prospects, such as the shift away from cobalt in battery cathodes, along with Tesla’s plans to set up its own lithium clay mining operation in the US. Also impossible to pin down: when the famous ‘tabless’ 4680 cells that Tesla plans to use instead of the current 2170 cells will be mass-produced, and when they will enable the promised 16% increase in range.
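The cylindrical cell designations are simply dimensions: the first two digits give the diameter in millimetres and the next two the length. A quick sketch of what the jump in form factor means volumetrically (cell geometry only; electrode design and chemistry differences are not accounted for):

```python
import math

def cell_volume_cm3(diameter_mm: float, length_mm: float) -> float:
    """Volume of a cylindrical cell from its naming, e.g. 4680 = 46 mm x 80 mm."""
    radius_cm = diameter_mm / 20.0
    length_cm = length_mm / 10.0
    return math.pi * radius_cm**2 * length_cm

v_18650 = cell_volume_cm3(18, 65)   # ~16.5 cm^3
v_2170 = cell_volume_cm3(21, 70)    # ~24.3 cm^3
v_4680 = cell_volume_cm3(46, 80)    # ~133 cm^3

# One 4680 holds the volume of roughly five and a half 2170s,
# or about eight 18650s.
print(round(v_4680 / v_2170, 1), round(v_4680 / v_18650, 1))
```

Fewer, larger cells mean fewer welds, cans, and interconnects per pack, which is a large part of the claimed cost reduction, with the tabless design there to keep the bigger cell’s internal resistance and heat in check.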

Even so, in the over hour-long video (also linked below after the break), the overall perspective seems fairly optimistic, with LFP (lithium iron phosphate) batteries also getting a shout-out. One obvious indication of progress: the cobalt-free LFP battery is already used in Model 3 Teslas, most commonly in Chinese-built models.

Continue reading “Lithium Mine To Battery Line: Tesla Battery Day And The Future Of EVs”