The experimental setup – a Commodore 64 is connected to a monitor through a composite video to HDMI converter, with the code cartridge inserted into the expansion port.

Trolling IBM’s Quantum Processor Advantage With A Commodore 64

The memory map of the implementation, as set within the address space of the Commodore 64 – about 15 kB of the accessible 64 kB of RAM is used. 8 kB of this is reserved for code, although most of that goes unused. The two bitstrings for each Pauli string are stored separately (labeled Pauli String X/Z) for more efficient addressing.

There’s been a lot of fuss about the ‘quantum advantage’ that would arise from the use of quantum processors and quantum systems in general. Yet in this high-noise, high-uncertainty era of quantum computing it seems fair to say that the ‘advantage’ part is a bit of a stretch. Most recently, an anonymous paper (PDF, starts at page 199) takes IBM’s claims for its 127-qubit Eagle quantum processor to their ludicrous conclusion by running the same Trotterized Ising model on the ~1 MHz MOS 6510 processor in a Commodore 64. (Worth noting: this paper was submitted to Sigbovik, the conference of the Association for Computational Heresy.)

We previously covered the same IBM claims getting walloped by another group of researchers (Tindall et al., 2024) using a tensor network on a classical computer. The anonymous submitter of the Sigbovik paper based their experiment on a January 2024 research paper by [Tomislav Begušić] and colleagues, published in Science Advances. These researchers also used a classical tensor network to run the IBM experiment many times faster and more accurately, which the anonymous researcher(s) took as the basis for a version that runs on the C64 in a mere 15 kB of RAM, with the code placed on an Atmel AT28C256 EEPROM inside a cartridge from which the C64 then ran it.

The anonymous author(s) implemented the same sparse Pauli dynamics algorithm used by [Tomislav Begušić] et al. in 6502 assembly, with some limitations due to the limited amount of RAM. Although the C64 is ~300,000x slower per datapoint than a modern laptop, it is still far more efficient than the quantum processor, and without the high error rate. Yes, that means a compute cluster of Commodore 64s can likely outperform a ‘please call us for a quote’ quantum system, depending on which linear algebra problem you’re trying to solve. Quantum computers may yet have their applications, but this isn’t one of them, yet.
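As the memory-map figure notes, each Pauli string is held as two separate bitstrings (X and Z) – the standard symplectic encoding that sparse Pauli dynamics relies on, since products of Pauli strings then reduce to bitwise XOR. A rough Python sketch of the idea (the function names are ours, not from the paper; the C64 version does this over byte arrays in 6502 assembly):

```python
def encode(pauli: str) -> tuple[int, int]:
    """Encode e.g. 'XZIY' as (x_bits, z_bits); qubit 0 is the leftmost character.
    X sets only the X bit, Z only the Z bit, Y sets both, I neither."""
    x = z = 0
    for i, p in enumerate(pauli):
        if p in 'XY':
            x |= 1 << i
        if p in 'ZY':
            z |= 1 << i
    return x, z

def decode(x: int, z: int, n: int) -> str:
    """Inverse of encode: rebuild the n-character Pauli string."""
    return ''.join('IXZY'[((x >> i) & 1) | (((z >> i) & 1) << 1)] for i in range(n))

def multiply(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """Product of two Pauli strings, up to phase: XOR both bitstrings."""
    return a[0] ^ b[0], a[1] ^ b[1]

# X·Z = -iY on qubit 0, Z·Z = I on qubit 1 (phase dropped):
print(decode(*multiply(encode('XZ'), encode('ZZ')), 2))  # prints YI
```

Tracking the phase takes a little extra bookkeeping, but the heavy lifting – composing and comparing Pauli strings – is exactly the kind of bitwise work a 6510 handles comfortably.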

Thanks to [Stephen Walters] and [Pio] for the tip.

Mechanisms of pulse current charging for stabilizing the cycling performance of commercial NMC/graphite LIBs. (Credit: Jia Guo et al., 2024)

Why Pulse Current Charging Lithium-Ion Batteries Extends Their Useful Lifespan

For as much capacity lithium-ion batteries have, their useful lifespan is generally measured in the hundreds of cycles. This degradation is caused by the electrodes themselves degrading, including the graphite anode in certain battery configurations fracturing. For a few years it’s been known that pulsed current (PC) charging can prevent much of this damage compared to constant current (CC) charging. The mechanism behind this was the subject of a recent research article by [Jia Guo] and colleagues as published in Advanced Energy Materials.

Raman spectra of a) as-cycled and b) surface-removed graphite anodes aged under CC and Pulse-2000 charging. FE-SEM images of the cross-sections of graphite electrodes aged with CC (c,d) and Pulse-2000 (e,f) charging. d,f) are edge-magnified images of (c,e). g) shows the micrograph and O and C element mapping of the surface of CC-aged graphite electrode. TEM images of h) fresh, i) CC, and j) Pulse-2000 aged graphite anodes. (Credit: Jia Guo et al., 2024)

The authors examined the damage to the electrodes after multiple CC and PC cycles using Raman and X-ray absorption spectroscopy, along with lifecycle measurements for CC and PC charging at 100 Hz (Pulse-100) and 2 kHz (Pulse-2000). Matching the results from the lifecycle measurements, the electrodes in the Pulse-2000 sample were in a much better state, indicating that the mechanical stress from pulse current charging is far lower than that from constant current charging. Higher PC frequencies showed further improvement, though as the authors note, it’s not yet known at which frequency diminishing returns set in.

The use of PC versus CC charging is not new, with the state of the art in electric vehicle battery charging technology covered in a 2020 review article by [Xinrong Huang] and colleagues as published in Energies. A big question with the many different EV PC charging modes is which charging method maximizes the useful lifespan of the battery pack. This also applies to lithium-metal batteries, with a 2017 research article by [Zi Li] and colleagues in Science Advances providing a molecular basis for how PC charging suppresses the formation of dendrites.

What this demonstrates quite well is that while the battery chemistry itself is an important part, the way that the cells are charged and discharged can be just as influential, with the 2 kHz PC charging in the research by [Jia Guo] and colleagues demonstrating a doubling of cycle life over CC charging. Considering the number of Li-ion batteries being installed in everything from smartphones and toys to cars, having these last twice as long would be very beneficial.

Thanks to [Thomas Yoon] for the tip.

Remembering Peter Higgs And The Gravity Of His Contributions To Physics

There are probably very few people on this globe who haven’t at some point heard the term ‘Higgs boson’ zip past, along with the term ‘God Particle’. During the 2010s, scientists at CERN were trying to find evidence for the existence of this scalar boson, and with it evidence for the Higgs field that, according to the Standard Model, gives mass to gauge bosons like the W and Z. This effort got communicated in the international media and elsewhere in a variety of ways.

Along with this media frenzy, the physicist after whom the boson was named also gained more fame, even though Peter Higgs had already been a well-known presence in the scientific community for decades, up to his retirement in 1996. With Peter Higgs’ recent death at the age of 94 after a brief illness, we are saying farewell to one of the big names in physics. Even if he was not a household name like Einstein or Stephen Hawking, the photogenic hunt for the Higgs boson ended up highlighting a story that began in the 1960s with a series of papers.

Continue reading “Remembering Peter Higgs And The Gravity Of His Contributions To Physics”

A Bend Sensor Developed With 3D Printer Filament

PhD students spend their time pursuing whatever general paths their supervisor has given them, and if they are lucky, that yields enough solid data to finally write a thesis without tearing their hair out. Sometimes, along the way, the work produces discoveries with immediate application outside academia, and so it was for [Paul Bupe Jr.], who ended up with a rather elegant and simple bend sensor.

The original discovery came while shining light along flexible media, including a piece of transparent 3D printer filament. He noticed that when the filament was bent at a point covered by a piece of electrical tape, there was a reduction in transmission, and from this he was able to reproduce the effect with a piece of pipe over a narrow air gap in the medium.

Putting these air gaps at regular intervals and measuring the transmission of light sent along the filament, he could then detect a bend. Take three filaments with the air-gap-pipe sensors spaced to form a Gray code, and he could digitally read the bend’s location.
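The three thresholded light readings form a Gray-code word, so the bend position falls out of the usual Gray-to-binary conversion. A quick sketch – the bit ordering and thresholding are our assumptions, not details from the project:

```python
def index_to_gray(n: int) -> int:
    """Position index -> Gray code word (what the spaced air gaps encode)."""
    return n ^ (n >> 1)

def gray_to_index(bits: tuple[int, ...]) -> int:
    """Per-filament readings (MSB first), thresholded to bits -> position index."""
    value = acc = 0
    for b in bits:
        acc ^= b                     # running XOR inverts n ^ (n >> 1)
        value = (value << 1) | acc
    return value

# With three filaments there are eight distinguishable zones, and adjacent
# zones differ in only one filament's reading, so a bend sitting right on a
# boundary can never be misread by more than one position.
for n in range(8):
    g = index_to_gray(n)
    assert gray_to_index(((g >> 2) & 1, (g >> 1) & 1, g & 1)) == n
```

That single-bit-change property between neighboring positions is the whole reason to space the sensors as a Gray code rather than plain binary.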

He appears to be developing this discovery into a product. We’re not sure which is likely to be more stress, writing up his thesis, or surviving a small start-up, so we wish him luck.

Beating IBM’s Eagle Quantum Processor On An Ising Model With A Classical Tensor Network

The central selling point of qubit-based quantum processors is that they can supposedly solve certain types of tasks much faster than a classical computer. However, this comes with the major complication that quantum computing is ‘noisy’, i.e. affected by outside influences. That this shouldn’t be a hindrance was the point of an article published last year by IBM researchers, in which they demonstrated a speed-up of a Trotterized time evolution of a 2D transverse-field Ising model on an IBM Eagle 127-qubit quantum processor, even with the error rate of today’s noisy quantum processors. Now, however, [Joseph Tindall] and colleagues have demonstrated with a recently published paper in Physics that they can beat the IBM quantum processor with a classical processor.

In the IBM paper by [Youngseok Kim] and colleagues, as published in Nature, the essential take is that even though noisy quantum computers require fault-tolerance heuristics, this does not mean there are no computing applications for such flawed quantum systems, especially as quantum processors scale up and speed up. This particular experiment concerns an Ising model – a statistical mechanical model built around phase transitions, with many applications in physics, neuroscience, and beyond.
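To make “Trotterized time evolution” concrete, here is a minimal first-order Trotter step for a transverse-field Ising chain on a dense statevector – a tiny 1D toy standing in for the 127-qubit heavy-hex lattice, and deliberately not the tensor-network or sparse-Pauli methods from the papers:

```python
import numpy as np

# Toy model: H = -J * sum_i Z_i Z_{i+1} - h * sum_i X_i on a short chain.
# One Trotter step applies exp(i*J*dt*ZZ) on every bond, then exp(i*h*dt*X)
# on every site, approximating exp(-iHt) for small dt.

def apply_1q(state, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [q]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_2q(state, gate, q1, q2, n):
    """Apply a 4x4 gate to qubits q1 < q2."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate.reshape(2, 2, 2, 2), psi, axes=([2, 3], [q1, q2]))
    return np.moveaxis(psi, [0, 1], [q1, q2]).reshape(-1)

def trotter_step(state, n, J, th_x, dt):
    """RZZ on each nearest-neighbor bond, then RX on each site."""
    rzz = np.diag(np.exp(1j * J * dt * np.array([1, -1, -1, 1])))
    c, s = np.cos(th_x * dt), np.sin(th_x * dt)
    rx = np.array([[c, 1j * s], [1j * s, c]])
    for q in range(n - 1):
        state = apply_2q(state, rzz, q, q + 1, n)
    for q in range(n):
        state = apply_1q(state, rx, q, n)
    return state

n = 4
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                     # start in |0000>
for _ in range(10):
    state = trotter_step(state, n, J=1.0, th_x=0.5, dt=0.05)
print(round(np.linalg.norm(state), 6))  # prints 1.0: unitary steps preserve the norm
```

A dense statevector like this doubles in size with every qubit, which is exactly why 127 qubits demands either a quantum processor or the cleverer classical compression (tensor networks, sparse Pauli dynamics) described above.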

Unlike the simulation running on the IBM system, the classical simulation only has to run once to get accurate results, which along with other optimizations still gives classical systems the lead. Until we develop quantum processors with built-in error-tolerance, of course.

Cryo-EM: Freezing Time To Take Snapshots Of Myosin And Other Molecular Systems

Using technologies like electron microscopy (EM) it is possible to capture molecular mechanisms in great detail, but not while those mechanisms are in motion. The field of cryomicroscopy circumvents this limitation by freezing the mechanism in place using cryogenic fluids. Although X-ray crystallography was initially the common choice, the much more versatile EM is now the standard approach in the form of cryo-EM, with recent advances giving us unprecedented looks at the mechanisms that quite literally make our bodies move.

Myosin-5 working stroke and walking on F-actin. (Credit: Klebl et al., 2024)

The past few years have seen many refinements in cryo-EM, with previously quite manual approaches shifting to microfluidics to increase the time resolution at which a molecular process can be frozen, enabling researchers to, for example, see myosin motor proteins go through their motions one step at a time. Research articles on this were published previously, such as one by [Ahmet Mentes] and colleagues in 2018 on how myosin force sensing adjusts to dynamic loads. More recently, [David P. Klebl] and colleagues published a research article this year on the myosin-5 powerstroke through ATP hydrolysis, using a modified (slower) version of myosin-5. Even so, the freezing has to be done with millisecond accuracy to capture the myosin in the act of priming (pre-powerstroke).

The most amazing thing about cryo-EM is that it allows us to examine processes that used to be the subject of theory and speculation, as we had no means to observe the motion and components involved directly. The more we can increase the time resolution of cryo-EM, the more details we can glimpse, whether it’s the functioning of myosins in muscle tissue or inside cells, the folding of proteins, or determining the proteins involved in a range of diseases, such as the role of TDP-43 in amyotrophic lateral sclerosis (ALS) in a 2021 study by [Diana Arseni] and colleagues.

As our methods of freezing these biomolecular moments in time improve, so too will our ability to validate theory with observations. Some of these methods combine cryogenic freezing with laser pulses to alternately freeze and resume processes, allowing them to be recorded in minute detail at sub-millisecond resolution. One big remaining issue is that although some of these researchers have even open sourced their cryo-EM methods, commercial vendors have not yet picked up the technology, limiting its reach as researchers have to cobble something together themselves.

Hopefully before long (time-resolved) cryo-EM will be as common as EM is today, to the point where even a hobby laboratory may have one lounging around.

Heating Mars On The Cheap

Mars is fairly attractive as a potential future home for humanity. It’s solid, with firm land underfoot. It’s able to hang on to a little atmosphere, which is more than you can say about the moon. It’s even got a day/night cycle remarkably close to our own. The only problem is it’s too darn cold, and there’s not a lot of oxygen to breathe, either.

Terraforming is the concept of fixing problems like these on a planet-wide scale. Forget living in domes—let’s just make the whole thing habitable!

That’s a huge task, so much current work involves exploring just what we could achieve with today’s technology. In the case of Mars, [Casey Handmer] doesn’t have a plan to terraform the whole planet. But he does suggest we could potentially achieve significant warming of the Red Planet for $10 billion in just 10 years. Continue reading “Heating Mars On The Cheap”