The BiVACOR Total Artificial Heart: A Maglev Bridge To Life

The BiVACOR TAH hooked up, with CTO Daniel Timms in the background. (Credit: BiVACOR)

Outside of the brain, the heart is probably the organ that you miss the most when it ceases to function correctly. Unfortunately, as we cannot grow custom replacement hearts yet, we have to keep heart patients alive long enough for them to receive a donor heart. Yet despite the heart being essentially a blood pump, engineering even a short-term artificial replacement has been a struggle for many decades. A new contender has now arrived in the BiVACOR TAH (total artificial heart), which just had the first prototype implanted in a human patient.

Unlike the typical membrane-based pumps, the BiVACOR TAH is a rotary pump that uses an impeller-based design, with magnetic levitation replacing bearings and theoretically minimizing damage to the blood. This design should also mean a significant flow rate, enough even for an exercising adult. Naturally, this TAH is only being tested as a bridge-to-transplant solution, for patients with a failing heart who do not qualify for a ventricular assist device. This may give more heart patients a chance at that donor heart transplant, even if a TAH suitable for destination therapy could save so many more lives.

The harsh reality is that the number of donor hearts decreases each year while demand increases, leading to unconventional approaches like xenotransplantation using specially bred pigs as donors, as well as therapeutic cloning to grow a new heart from the patient’s own cells. Having a universal TAH that could be left in place for decades (destination therapy) would offer a solid option next to the latter, but remains elusive, as shown by the lack of progress with TAHs like the ReinHeart despite a promising 2014 paper on a bovine model.

Hopefully before long we’ll figure out a reliable way to fix this ‘just a blood pump’ in our bodies, regardless of whether it’s a biological or mechanical solution.

AMD Returns To 1996 With Zen 5’s Two-Block Ahead Branch Predictor

An interesting finding in fields like computer science is that much of what is advertised as new and innovative was actually pilfered from old research papers submitted to ACM and others. Which is not to say that this is necessarily a bad thing, as many such ideas were simply not practical at the time. Case in point: the new branch predictor in AMD’s Zen 5 CPU architecture, whose two-block ahead design is based on an idea first put forward a few decades ago. The details are laid out by [George Cozma] and [Camacho] in a recent article, which follows an interview that [George] did with AMD’s [Mike Clark].

The 1996 ACM paper by [André Seznec] and colleagues titled “Multiple-block ahead branch predictors” is a good starting point before diving into [George]’s article, as it helps to make sense of many of the details. The reason for improving branch prediction in CPUs is fairly self-evident: today’s heavily pipelined, superscalar CPUs rely on branch prediction and speculative execution to get around the glacial speeds of system memory once past the CPU’s speediest caches. While predicting the next instruction block after a branch is commonly done already, the two-block ahead approach also predicts the instruction block that follows the first predicted one.
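
As a rough mental model of what “two-block ahead” buys you (a toy software sketch under our own assumptions, not a description of AMD’s or [Seznec]’s actual hardware), imagine a branch target buffer whose entries store two predicted successors per fetch block, so a single lookup hands the front end two fetch addresses to work on:

```c
/* Toy model of the two-block ahead idea: each predictor entry stores the
 * predicted next fetch block AND the block after that, so one lookup can
 * feed two fetch windows. This is a conceptual sketch only. */
#include <stdint.h>
#include <stdio.h>

#define ENTRIES 256
#define BLOCK_SHIFT 6            /* assume 64-byte fetch blocks */

typedef struct {
    uint64_t tag;                /* address of the fetch block */
    uint64_t next_block;         /* predicted successor */
    uint64_t next_next_block;    /* predicted successor of the successor */
} predictor_entry;

static predictor_entry table[ENTRIES];

static void train(uint64_t block, uint64_t next, uint64_t next_next)
{
    predictor_entry *e = &table[(block >> BLOCK_SHIFT) % ENTRIES];
    e->tag = block;
    e->next_block = next;
    e->next_next_block = next_next;
}

static const predictor_entry *predict(uint64_t block)
{
    const predictor_entry *e = &table[(block >> BLOCK_SHIFT) % ENTRIES];
    return (e->tag == block) ? e : NULL;
}

int main(void)
{
    /* pretend block A jumps to block B, which in turn jumps to block C */
    train(0x1000, 0x2040, 0x3080);

    const predictor_entry *e = predict(0x1000);
    if (e)
        printf("one lookup, two windows: fetch 0x%llx, then 0x%llx\n",
               (unsigned long long)e->next_block,
               (unsigned long long)e->next_next_block);
    return 0;
}
```

The hard part, as the article goes into, is doing this without blowing up the predictor’s area and latency, which is where the dual-ported structures mentioned below come in.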

Perhaps unsurprisingly, the multi-block ahead branch predictor by itself isn’t the hard part; making it all fit in the hardware is. As described in the paper by [Seznec] et al., the relevant components are now dual-ported, allowing for three prediction windows. Theoretically this should result in a significant boost in IPC, and could mean that more CPU manufacturers will look at adding such multi-block branch prediction to their designs. We will just have to see how Zen 5 performs once released into the wild.

Analyzing Feature Learning In Artificial Neural Networks And Neural Collapse

Artificial Neural Networks (ANNs) are commonly used for machine vision purposes, where they are tasked with object recognition. This is accomplished by taking a multi-layer network and using a training data set to configure the weights associated with each ‘neuron’. Due to the complexity of these ANNs for non-trivial data sets, it’s often hard to make heads or tails of what the network is actually matching in a given (non-training) input. In a March 2024 study in Science (preprint available), [A. Radhakrishnan] and colleagues provide an approach to elucidate and diagnose this mystery somewhat, using what they call the average gradient outer product (AGOP).

Defined as the uncentered covariance matrix of the ANN’s input-output gradients averaged over the training dataset, this property can provide information on which features of the data set are used for predictions. These turn out to be strongly correlated with repetitive information, such as the presence of eyes when recognizing whether lipstick is being worn, or star patterns in a car and truck data set rather than anything to do with the (highly variable) vehicles themselves. None of this was perhaps too surprising, but a number of the same researchers have also used the AGOP to elucidate the mechanism behind neural collapse (NC) in ANNs.
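
Written out, with f the trained network’s input-output map and x_1 … x_n the training samples, the definition quoted above comes down to (for a scalar output; a vector-valued output swaps the gradient for the Jacobian):

\[ \mathrm{AGOP}(f) \;=\; \frac{1}{n}\sum_{i=1}^{n} \nabla_x f(x_i)\,\nabla_x f(x_i)^{\top} \]

Roughly speaking, the large entries and top eigenvectors of this matrix single out the input directions, and thus the pixels or features, that the network is actually sensitive to, which is what makes it usable as a diagnostic.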

NC occurs when an overparametrized ANN is trained well past the point where it fits its training data. In the preprint paper by [D. Beaglehole] et al., the AGOP is used to provide evidence for the mechanism behind NC during feature learning. Perhaps the biggest take-away from these papers is that while ANNs can be useful, they’re also incredibly complex and poorly understood. The more we learn about their properties, the more appropriately we can use them.

Pnut: A Self-Compiling C Transpiler Targeting Human-Readable POSIX Shell

Shell scripting is one of those skills that is absolutely invaluable on UNIX-like systems, whether that’s the BSDs, the two zillion Linux distributions, or MacOS. Yet not every shell is the same, and not everybody can be bothered to learn the differences between sh, bash, ksh, zsh, dash, fish and other shells, which can make a project like Pnut seem rather tempting. Rather than dealing with shell scripting directly, the user writes their code in the lingua franca of computing, AKA C, which is then transpiled into a shell script that should run in any POSIX-compliant shell.

The transpiler can be used online via the main Pnut website, as well as locally using the (BSD 2-clause) open source code on GitHub. The main limitations are listed there too, and mostly concern C constructs that do not map nicely to a POSIX shell: no support for floating point numbers or unsigned integers, no goto or switch, and no taking the address of a variable with &. These and the preprocessor-related limitations and issues are largely to be expected, as POSIX shells are hardly a direct replacement for full-blown C.
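
To get a feel for that subset, here is a small, hypothetical example that stays within the documented limits; whether a particular library function such as printf is available in the transpiler’s runtime is something to verify against the project’s documentation, and is assumed here:

```c
/* gcd.c -- a small program that stays within Pnut's documented limits:
 * integer arithmetic only, no switch/goto, no floats or unsigned types,
 * and no taking the address of a variable with &.
 * Note: printf support is assumed; check the Pnut docs for the exact
 * set of supported library functions. */
#include <stdio.h>

int gcd(int a, int b)
{
    while (b != 0) {
        int t = a % b;
        a = b;
        b = t;
    }
    return a;
}

int main(void)
{
    printf("gcd(252, 105) = %d\n", gcd(252, 105));
    return 0;
}
```

The idea is that Pnut turns something like this into a plain POSIX shell script, with the integer math presumably mapped onto the shell’s own arithmetic expansion.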

As a self-professed research project, Pnut seems interesting, although if you are writing shell scripts for anything important, you probably just want to buckle down and learn the ins and outs of POSIX shell scripting and beyond. It’s a bit of a learning curve, but it’s absolutely worth it, if only because it makes overall shell usage, even beyond scripting, so much better.

Manually Computing Logarithms To Grok Calculators

Logarithms are everywhere in mathematics and derived fields, but we rarely think about how trigonometric functions, exponentials, square roots, and the like are calculated after we punch the numbers into a calculator of some description and hit ‘calculate’. How do we even know that the answer it returns is remotely correct? This was the basic question that [Zachary Chartrand] set out to answer for [3Blue1Brown]’s Summer of Math Exposition 3 (SoME-3). Inspired by learning to script Python, he dug into how such calculations are implemented by the scripting language, which naturally led to the standard C library. Here he found an interesting implementation of the natural logarithm and of the way its series convergence is sped up.

The short answer is that fundamental properties of these series are used to decrease the number of terms, and thus the number of calculations, required to get a result. One example provided in the article reduces the naïve approach from 36 terms down to 12 with some optimization, while the versions used in the standard C library are optimized even further. This not only reduces the time needed, but also the memory required, both of which make many types of calculations more feasible on less powerful systems.
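
As a concrete illustration of the kind of trick involved (a minimal sketch of argument reduction plus a fast series, not the actual C library algorithm and not the article’s exact derivation), a natural logarithm can be computed by first pulling powers of two out with frexp() and then summing a quickly converging series around 1:

```c
/* ln.c -- sketch of argument reduction plus a fast series for ln(x).
 * This is NOT the real libm implementation, just the general idea.
 * Compile with: cc ln.c -lm */
#include <math.h>
#include <stdio.h>

static double my_log(double x)
{
    if (x <= 0.0)
        return (x == 0.0) ? -HUGE_VAL : NAN;   /* crude edge-case handling */

    /* Reduction: x = m * 2^e with m in [0.5, 1), so ln(x) = ln(m) + e*ln(2) */
    int e;
    double m = frexp(x, &e);

    /* Fast series: ln(m) = 2*(z + z^3/3 + z^5/5 + ...), z = (m-1)/(m+1).
     * With m in [0.5, 1), |z| <= 1/3, so a handful of odd terms is plenty. */
    double z = (m - 1.0) / (m + 1.0);
    double z2 = z * z;
    double term = z, sum = 0.0;
    for (int k = 1; k <= 19; k += 2) {
        sum += term / k;
        term *= z2;
    }

    return 2.0 * sum + e * 0.69314718055994531;  /* ln(2) */
}

int main(void)
{
    printf("my_log(10) = %.15f\n", my_log(10.0));
    printf("   log(10) = %.15f\n", log(10.0));
    return 0;
}
```

Compared to naïvely summing a series for the full, unreduced argument, the reduction step plus the faster series cuts the term count dramatically, which is the same general idea the article walks through in far more detail.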

Even if most of us are probably more than happy to just keep mashing that ‘calculate’ button and (rightfully) assume that the answer is correct, such a glimpse at the internals of the calculations involved definitely provides a measure of confidence and understanding, if not the utmost appreciation for those who did the hard work to make all of this possible.

Could Carbon Fiber Be The New Asbestos?

Could carbon fiber inflict the same kind of damage on the human body as asbestos? That’s the question which [Nathan] found himself struggling with after taking a look at carbon fiber-reinforced filament under a microscope, revealing a sight that brings to mind fibrous asbestos samples. Considering the absolutely horrifying impact that asbestos exposure can have, this is a totally pertinent question to ask. Fortunately, scientific studies have already been performed on this topic.

Example SEM and TEM images of the released particles following the rupture of CFRP cables in the tensile strength test. (Credit: Jing Wang et al, Journal of Nanobiotechnology, 2017)

While [Nathan] demonstrated that the small lengths of carbon fiber (CF) contained in some FDM filaments love to get stuck in your skin and remain there even after repeated hand washing, the aspect that makes asbestos such a hazard is that the mineral fibers are easily respirable due to their size. It is this property that allows asbestos fibers to nestle deep inside the lungs, where they pierce cell membranes and cause sustained inflammation, DNA damage, and all too often lung cancer or worse.

Clearly, the 0.5 to 1 mm long CF strands in FDM filaments aren’t easily inhaled, but as described by [Jing Wang] and colleagues in a 2017 Journal of Nanobiotechnology paper, CF can easily shatter into smaller, sharper fragments through mechanical operations (cutting, sanding, etc.), and those fragments can be respirable. It is thus damaged carbon fiber, whether from CF-reinforced thermoplastics or other CF-containing materials, that poses a potential health risk. This is not unlike asbestos, which poses no risk when left stable in-situ, but can create respirable clouds of fibers when disturbed. When handling CF-containing materials, especially for processing, wearing an effective respirator (at least N95/P2) of the kind rated for filtering out asbestos fibers would thus seem to be a wise precaution.

The treacherous aspect of asbestos and kin is that diseases like lung cancer and mesothelioma are not immediately noticeable after exposure, but can take decades to develop. In the case of mesothelioma, this can be between 15 and 30 years after exposure, so protecting yourself today with a good respirator is the only way you can be relatively certain that you will not be cursing your overconfident young self by that time.


Brain Implant Uses Graphene Instead Of Metal Probes

Implantable electrodes for the (human) brain have been around for many decades in the form of Utah arrays and kin, but these tend to be made out of metal, which can cause issues when stimulating the surrounding neurons with an induced current. This is due to faradaic processes between the metal probe and an electrolyte (i.e. the cerebrospinal fluid). Over time these can result in insulating deposits forming on the probe’s surface, reducing its effectiveness.

Graphene-based, high-resolution cortical brain interface (Credit: Inbrain Neuroelectronics)

Now a company called InBrain claims to have cracked making electrodes out of graphene, following a series of tests on non-human test subjects. Unlike metal probes, these carbon-based probes should be significantly more biocompatible, even when used for brain stimulation, as with the company’s target goal of treating the symptoms associated with Parkinson’s.

During the upcoming first phase, human subjects would have these implants installed to monitor brain activity in Parkinson’s patients, in order to gauge how well their medication is helping with symptoms like tremors. Later, these devices would provide deep-brain stimulation, purportedly more efficiently than similar therapies in use today. The FDA was impressed enough to give it the ‘breakthrough device’ designation, though it is hard to wade through the marketing hype to get a clear picture of the technology in question.

In their most recently published paper in Nature Nanotechnology (preprint), [Calia] and colleagues describe flexible graphene depth neural probes (gDNP), which appear to be what is being talked about. These gDNPs were used to simultaneously record infraslow activity (<0.1 Hz) and higher frequencies, a feat that metal microelectrodes are claimed to struggle with.

Although few details are available right now, we welcome any brain microelectrode array improvements, as they are incredibly important for many types of medical therapies and research.