Pnut: A Self-Compiling C Transpiler Targeting Human-Readable POSIX Shell

Shell scripting is one of those skills that are invaluable on UNIX-like systems, whether that's the BSDs, the two zillion Linux distributions, or macOS. Yet not every shell is the same, and not everybody can be bothered to learn the differences between sh, bash, ksh, zsh, dash, fish, and the rest, which can make a project like Pnut seem rather tempting. Rather than dealing with shell scripting directly, the user writes their code in the lingua franca of computing, AKA C, which is then transpiled into a shell script that should run in any POSIX-compliant shell.

The transpiler can be used both online via the main Pnut website and locally using the (BSD 2-clause) open source code on GitHub. The main limitations are also listed there, and mostly concern C constructs that do not map nicely onto a POSIX shell: no support for floating-point numbers or unsigned integers, no goto or switch, and no taking the address of a variable with &. These limitations, along with a number of preprocessor-related issues, are largely to be expected, as a POSIX shell is hardly a direct replacement for full-blown C.
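To give a feel for what does translate, here is a minimal, hypothetical sketch of Pnut-friendly C that stays within the limitations listed above: integer-only arithmetic, plain if/while control flow, and no floats, switch, goto, or address-of. The function name is ours, and whether a particular library call such as printf is supported is something to verify against the project's documentation.

#include <stdio.h>

/* Hypothetical example that avoids the unsupported constructs:
   integer math only, no switch/goto, no & address-of, no floats. */
int gcd(int a, int b) {
    while (b != 0) {
        int t = a % b;
        a = b;
        b = t;
    }
    return a;
}

int main(void) {
    printf("gcd(84, 36) = %d\n", gcd(84, 36));
    return 0;
}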

As a self-professed research project, Pnut is an interesting exercise, although if you are writing shell scripts for anything important, you probably just want to buckle down and learn the ins and outs of POSIX shell scripting and beyond. Although there is a bit of a learning curve, it's totally worth it, if only because it makes overall shell usage, even beyond scripting, so much better.

Manually Computing Logarithms To Grok Calculators

Logarithms are everywhere in mathematics and derived fields, but we rarely think about how trigonometric functions, exponentials, square roots, and the like are actually calculated after we punch the numbers into a calculator of some description and hit ‘calculate’. How do we even know that the answer it returns is remotely correct? This was the basic question that [Zachary Chartrand] set out to answer for [3Blue1Brown]’s Summer of Math Exposition 3 (SoME-3). Inspired by learning to script Python, he dug into how such calculations are implemented by the scripting language, which naturally led him to the standard C library. There he found an interesting implementation of the natural logarithm and the way geometric series convergence is sped up.

The short answer is that fundamental properties of these series are used to decrease the number of terms, and thus the number of calculations, required to get a result. One example provided in the article reduces the naïve approach from 36 terms down to 12 with some optimization, while the versions used in the standard C library are optimized even further. This not only reduces the time needed, but also the memory required, both of which make many types of calculations more feasible on less powerful systems.
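As an illustration of the general principle, and not necessarily the exact optimization the article or the C library uses, the sketch below compares the naive Maclaurin series for ln(1+y) with the artanh-based rewrite ln(x) = 2*artanh((x-1)/(x+1)), whose series contains only odd powers and converges far faster. For x = 2, twelve terms of the naive series are still off in the second decimal place, while twelve terms of the rewritten series are accurate to roughly a dozen digits.

#include <stdio.h>
#include <math.h>

/* Naive series: ln(1+y) = y - y^2/2 + y^3/3 - ... (slow when y is near 1) */
static double ln_naive(double x, int terms) {
    double y = x - 1.0, sum = 0.0, power = y;
    for (int n = 1; n <= terms; n++) {
        sum += (n % 2 == 1 ? power : -power) / n;
        power *= y;
    }
    return sum;
}

/* Faster series: ln(x) = 2*(u + u^3/3 + u^5/5 + ...), with u = (x-1)/(x+1) */
static double ln_artanh(double x, int terms) {
    double u = (x - 1.0) / (x + 1.0), u2 = u * u, sum = 0.0, power = u;
    for (int n = 0; n < terms; n++) {
        sum += power / (2 * n + 1);
        power *= u2;
    }
    return 2.0 * sum;
}

int main(void) {
    double x = 2.0;
    printf("naive series, 12 terms : %.12f\n", ln_naive(x, 12));
    printf("artanh series, 12 terms: %.12f\n", ln_artanh(x, 12));
    printf("libm log(x)            : %.12f\n", log(x));
    return 0;
}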

Even if most of us are probably more than happy to just keep mashing that ‘calculate’ button and (rightfully) assume that the answer is correct, such a glimpse at the internals of the calculations involved definitely provides a measure of confidence and understanding, not to mention an appreciation for those who did the hard work to make all of this possible.

Could Carbon Fiber Be The New Asbestos?

Could carbon fiber inflict the same kind of damage on the human body as asbestos? That’s the question which [Nathan] found himself struggling with after taking a look at carbon fiber-reinforced filament under a microscope, revealing a sight that brings to mind fibrous asbestos samples. Considering the absolutely horrifying impact that asbestos exposure can have, this is a totally pertinent question to ask. Fortunately, scientific studies have already been performed on this topic.

Example SEM and TEM images of the released particles following the rupture of CFRP cables in the tensile strength test. (Credit: Jing Wang et al, Journal of Nanobiotechnology, 2017)

While [Nathan] demonstrated that the short lengths of carbon fiber (CF) contained in some FDM filaments love to get stuck in your skin and remain there even after repeated hand washing, the aspect that makes asbestos such a hazard is that its mineral fibers are easily respirable due to their size. It is this property which allows asbestos fibers to nestle deep inside the lungs, where they pierce cell membranes and cause sustained inflammation, DNA damage, and all too often lung cancer or worse.

Clearly, the 0.5 to 1 mm long CF strands in FDM filaments aren’t easily inhaled, but as described by [Jing Wang] and colleagues in a 2017 Journal of Nanobiotechnology paper, CF can easily shatter into smaller, sharper fragments during mechanical operations (cutting, sanding, etc.), and those fragments can be respirable. It is thus damaged carbon fiber, whether from CF-reinforced polymers or other CF-containing materials, that poses a potential health risk. This is not unlike asbestos, which poses no risk while left undisturbed in situ, but can create respirable clouds of fibers when disturbed. When handling CF-containing materials, especially for processing, wearing an effective respirator (at least N95/P2) that is rated for filtering out asbestos fibers would thus seem to be a wise precaution.

The treacherous aspect of asbestos and kin is that diseases like lung cancer and mesothelioma are not immediately noticeable after exposure, but can take decades to develop. In the case of mesothelioma, this can be between 15 and 30 years after exposure, so protecting yourself today with a good respirator is the only way you can be relatively certain that you will not be cursing your overconfident young self by that time.


Brain Implant Uses Graphene Instead Of Metal Probes

Implantable electrodes for the (human) brain have been around for many decades in the form of Utah arrays and kin, but these tend to be made out of metal, which can cause issues when stimulating the surrounding neurons with an induced current. This is due to faradaic processes between the metal probe and an electrolyte (i.e. the cerebrospinal fluid). Over time these can result in insulating deposits forming on the probe’s surface, reducing its effectiveness.

Graphene-based, high-resolution cortical brain interface (Credit: Inbrain Neuroelectronics)

Now a company called InBrain claims to have cracked making electrodes out of graphene, following a series of tests on non-human subjects. Unlike metal probes, these carbon-based probes should be significantly more biocompatible, even when used for brain stimulation, as with the target application of treating the symptoms associated with Alzheimer’s.

During the upcoming first phase, human subjects would have these implants installed to monitor brain activity in Alzheimer’s patients and gauge how well their medication is helping with symptoms like tremors. Later, these devices would provide deep-brain stimulation, purportedly more efficiently than similar therapies in use today. The FDA was at least impressed enough to grant the ‘breakthrough device’ designation, though it is hard to wade through the marketing hype to get a clear picture of the technology in question.

In their most recently published paper (preprint) in Nature Nanotechnology, [Calia] and colleagues describe flexible graphene depth neural probes (gDNPs), which appear to be the devices in question. In the experiments, these gDNPs are used to simultaneously record infraslow activity (<0.1 Hz) and higher frequencies, a feat which metal microelectrodes are said to struggle with.

Although few details are available right now, we welcome any brain microelectrode array improvements, as they are incredibly important for many types of medical therapies and research.

Credit: Daniel Baxter

Mechanical Intelligence And Counterfeit Humanity

It would seem fair to say that the second half of the last century up to the present day has been firmly shaped by our relationship with technology, and with computers in particular. From the hulking behemoths at universities, to microcomputers at home, to today’s smartphones, smart homes, and the ever-looming compute cloud, we all have a relationship with computers in some form. One increasingly underappreciated aspect of computers, however, is that the less we see them as physical objects, the more we seem inclined to accept them as humans. This is the point which [Harry R. Lewis] argues in a recent article in Harvard Magazine.

Born in 1947, [Harry R. Lewis] found himself at the forefront of what would become computer science and related disciplines, with some of his students, such as [Bill Gates] and [Mark Zuckerberg], being well-known to the average Hackaday reader. Suffice it to say, he has seen every attempt to ‘humanize’ computers, ranging from ELIZA to today’s ChatGPT. During this time, the line between humans and computers has become blurred, with computer systems becoming increasingly competent at imitating human interactions even as they vanish into the background of daily life.

These counterfeit ‘humans’ are not capable of learning, feeling, and experiencing the way that humans can, being at most a facsimile of a human that lacks what is often referred to as ‘the human experience’. More and more of us communicate these days via smartphone and computer screens, with little idea of, or regard for, whether we are talking to a real person or not. Ironically, by anthropomorphizing these counterfeit humans we risk becoming less human in the process, while also opening the floodgates for blaming AI when the blame lies squarely with the humans behind it, as with the recent Air Canada chatbot case. Equally ridiculous, [Lewis] argues, is the notion that we could create a ‘superintelligence’ by training an ‘AI’ on nothing but data scraped off the internet, as there are many things in life which cannot be understood simply by reading about them.

Ultimately, the argument is made that humanistic learning should be the focal point of artificial intelligence, as only in this way could we create AIs that might truly be seen as our equals, and as beneficial to the future of all.

Reviewing Nuclear Accidents: Separating Fact From Fiction

Few types of accidents capture the imagination as much as those involving nuclear fission. From the unimaginable horrors of the nuclear bombs dropped on Hiroshima and Nagasaki, to the fever-pitch reporting about the accidents at Three Mile Island, Chernobyl, and Fukushima, all of these have resulted in many descriptions and visualizations that are merely imaginative flights of fancy, with no connection to physical reality. Because radiation is invisible to the naked eye, and the interpretation of radiation measurements in popular media is generally restricted to the harrowing noise from a Geiger counter, the reality of nuclear power accidents in said media has become diluted and often replaced with half-truths and outright lies that feed strongly into fear, uncertainty, and doubt.

Why is it that people are drawn more to nuclear accidents than to a disaster like the one at Bhopal? What is it that makes the single nuclear bomb on Hiroshima so much more interesting than the firebombing of Tokyo or the flattening of Dresden? Why do we fear nuclear power more than dam failures and the heavy toll of air pollution? If we look honestly at nuclear accidents, it’s clear that the panic afterwards invariably did more damage than the event itself. One might postulate that this is partially due to the sensationalist vibe created around these events, and largely due to a public that is poorly informed when it comes to topics like nuclear fission and radiation, a situation worsened by harmful government policies pertaining to things like disaster response, often inspired by scientifically discredited theories like the Linear No-Threshold (LNT) model, which cost so many lives in the USSR and Japan.

In light of a likely restart of Unit 1 of the Three Mile Island nuclear plant in the near future, it might behoove us to consider what we can learn from the world’s worst commercial nuclear power disasters, all from the difficult perspective of a world where ideology and hidden agendas play no role, as we ask ourselves whether we really should fear the atom.


Sketch of the UED setup at EPFL, 1) Electron gun, 2) High-Voltage connector, 3) Photo-cathode, 4) Anode, 5) Collimating solenoid, 6) Steering plates, 7) Focusing solenoid, 8) RF cavity, 9) Sample holder, 10) Cryostat, 11) Electron detector, 12) Turbo pump, 13) Ion gauge. Credit: Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2316438121

Using Femtosecond Laser Pulses To Induce Metastable Hidden States In Magnetite

Hidden states are a fascinating aspect of matter: they cannot normally be reached via natural processes (i.e. they are non-ergodic), but they can be established using laser photoexcitation. Although these hidden states are generally very unstable and will often decay within a nanosecond, there is evidence for more persistent states in, for example, vanadates. As for practical uses of these states, electronics and related fields are often mentioned. This is also the focus of the press release by the École Polytechnique Fédérale de Lausanne (EPFL) reporting on the establishment of hidden states in magnetite (Fe3O4), with the study published in PNAS (arXiv preprint link).

[B. Truc] and colleagues used two laser wavelengths to make the magnetite either more conductive (800 nm) or a better insulator (400 nm). The transition takes on the order of 50 picoseconds, allowing for fairly rapid switching between these metastable states. Naturally, turning this into practical applications will require a lot more work, especially considering the need for femtosecond pulsed lasers to control the process, which makes it significantly more cumbersome than semiconductor technology. For now, its main value lies in being a fascinating demonstration of these hidden states of matter.