IBM’s Latest Quantum Supercomputer Idea: The Hybrid Classical-Quantum System

Although quantum processors exist today, they are still a long way from becoming practical replacements for classical computers. Many practical hurdles stand in the way, not least the need for cryogenic cooling and the susceptibility to external noise, which demands a level of error correction that does not yet exist. To work around these limitations, IBM has now pitched the idea of a hybrid quantum-classical computer (marketed as ‘quantum-centric supercomputing’), which, as the name suggests, combines the strengths of both to create a classical system with what is effectively a quantum co-processor.

IBM readily admits that nobody has yet demonstrated quantum advantage, i.e. that a quantum computer is actually better at a task than a classical computer. They figure, however, that by aiming for quantum utility (i.e. the co-processor level), a quantum system could conceivably accelerate certain tasks for a classical computer, much like how a graphics processing unit (GPU) is used to offload everything from rendering graphics to massively parallel computing tasks, courtesy of its beefy vector processing capacity. IBM’s System Two is purported to demonstrate this when it releases.

What the outcome here will be is hard to say, as the referenced 2023 quantum utility demonstration paper involving an Ising model was repeatedly beaten by classical computers and even trolled by a Commodore 64-based version. At the very least, IBM’s new quantum utility focus ought to keep providing us with popcorn moments like those, and maybe a usable quantum system will roll out by the 2030s, if IBM’s projected timeline holds up.

The experimental setup – a Commodore 64 is connected to a monitor through a composite video to HDMI converter, with the code cartridge inserted into the expansion port.

Trolling IBM’s Quantum Processor Advantage With A Commodore 64

The memory map of the implementation, as set within the address space of the Commodore 64 – about 15 kB of the accessible 64 kB RAM is used. 8 kB of this is reserved for code, although most of it is unused. The two bitstrings of each Pauli string are stored separately (labeled as Pauli String X/Z) for more efficient addressing.

There’s been a lot of fuss about the ‘quantum advantage’ that would arise from the use of quantum processors and quantum systems in general. Yet in this high-noise, high-uncertainty era of quantum computing it seems fair to say that the ‘advantage’ part is a bit of a stretch. Most recently an anonymous paper (PDF, starts at page 199) takes IBM’s claims for its 127-qubit Eagle quantum processor to their ludicrous conclusion by running the same Trotterized Ising model on the ~1 MHz MOS 6510 processor in a Commodore 64. (Worth noting: this paper was submitted to Sigbovik, the conference of the Association for Computational Heresy.)

We previously covered IBM’s claims already getting walloped by another group of researchers (Tindall et al., 2024) using a tensor network on a classical computer. The anonymous submitter of the Sigbovik paper based their experiment on a January 2024 research paper by [Tomislav Begušić] and colleagues, published in Science Advances. These researchers also used a classical tensor network to run the IBM experiment many times faster and more accurately, which the anonymous researcher(s) took as the basis for a version that runs on the C64 in a mere 15 kB of RAM, with the code placed on an Atmel AT28C256 EEPROM inside a cartridge from which the C64 then ran it.

The same sparse Pauli dynamics algorithm as used by [Tomislav Begušić] et al. was implemented in 6502 assembly, with some limitations due to the limited amount of RAM. Although the C64 is ~300,000x slower per datapoint than a modern laptop, it runs the computation far more efficiently than the quantum processor, and without the high error rate. Yes, that means that a compute cluster of Commodore 64s can likely outperform a ‘please call us for a quote’ quantum system, depending on which linear algebra problem you’re trying to solve. Quantum computers may yet have their application, but this isn’t it, yet.
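The sparse Pauli dynamics approach leans on the compact two-bitstring encoding of Pauli strings shown in the memory map above. As a rough illustration (the encoding below is the standard symplectic convention, not necessarily the paper’s exact byte layout), each Pauli operator maps to an (x, z) bit pair:

```python
# Sketch of the X/Z bitstring encoding of Pauli strings (assumed layout,
# standard symplectic convention): X sets the x-bit, Z sets the z-bit,
# Y sets both, I sets neither.

def encode_pauli(pauli: str) -> tuple[int, int]:
    """Encode e.g. 'XZIY' as (x_bits, z_bits); qubit 0 = leftmost char."""
    x_bits = z_bits = 0
    for i, p in enumerate(pauli):
        if p in "XY":
            x_bits |= 1 << i
        if p in "ZY":
            z_bits |= 1 << i
    return x_bits, z_bits

def decode_pauli(x_bits: int, z_bits: int, n: int) -> str:
    """Inverse of encode_pauli for an n-qubit string."""
    table = {(0, 0): "I", (1, 0): "X", (0, 1): "Z", (1, 1): "Y"}
    return "".join(
        table[((x_bits >> i) & 1, (z_bits >> i) & 1)] for i in range(n)
    )

x, z = encode_pauli("XZIY")
assert decode_pauli(x, z, 4) == "XZIY"
```

With this encoding, multiplying two Pauli strings reduces (up to a phase factor) to XOR-ing bitmasks, which is exactly the kind of operation a 6502 handles cheaply.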

Thanks to [Stephen Walters] and [Pio] for the tip.

Beating IBM’s Eagle Quantum Processor On An Ising Model With A Classical Tensor Network

The central selling point of qubit-based quantum processors is that they can supposedly solve certain types of tasks much faster than a classical computer. This comes, however, with the major complication that quantum computing is ‘noisy’, i.e. affected by outside influences. That this shouldn’t be a hindrance was the point of an article published last year by IBM researchers, in which they demonstrated the Trotterized time evolution of a 2D transverse-field Ising model on an IBM Eagle 127-qubit quantum processor, even with the error rate of today’s noisy quantum processors. Now, however, [Joseph Tindall] and colleagues have demonstrated with a recently published paper in Physics that they can beat the IBM quantum processor with a classical processor.

In the IBM paper by [Youngseok Kim] and colleagues, as published in Nature, the essential take is that although noisy quantum computers require fault-tolerance heuristics, this does not mean that there are no applications for such flawed quantum systems, especially as quantum processors scale up and speed up. This particular experiment concerns an Ising model: a statistical mechanical model of phase transitions with many applications in physics, neuroscience, and beyond.

Unlike the simulation running on the IBM system, the classical simulation only has to run once to get accurate results, which along with other optimizations still gives classical systems the lead. Until we develop quantum processors with built-in error-tolerance, of course.
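For a sense of what both camps are computing, here is a minimal, exact statevector sketch of one first-order Trotter step for a tiny transverse-field Ising chain. All parameters (n, J, h, dt) are illustrative toy values, not those of the IBM experiment, which used 127 qubits precisely because exact simulation at that scale is infeasible:

```python
import cmath
import math

# Toy first-order Trotter step for a transverse-field Ising chain
# H = -J * sum_i Z_i Z_{i+1}  -  h * sum_i X_i  (assumed toy parameters).
n = 3
J, h, dt = 1.0, 0.5, 0.1
state = [0j] * (2 ** n)
state[0] = 1 + 0j  # start in |000>

def apply_rx(state, q, theta):
    """Apply RX(theta) = exp(-i theta/2 X) to qubit q of the statevector."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    out = state[:]
    for i in range(len(state)):
        if (i >> q) & 1 == 0:        # handle each |0>/|1> pair once
            j = i | (1 << q)
            out[i] = c * state[i] - 1j * s * state[j]
            out[j] = -1j * s * state[i] + c * state[j]
    return out

def apply_rzz(state, q1, q2, theta):
    """Apply the diagonal RZZ(theta) = exp(-i theta/2 Z_q1 Z_q2)."""
    out = state[:]
    for i in range(len(state)):
        z = 1 if ((i >> q1) & 1) == ((i >> q2) & 1) else -1
        out[i] = state[i] * cmath.exp(-1j * theta / 2 * z)
    return out

# One Trotter step: exp(-i H dt) ~ (ZZ couplings) then (transverse field).
for q in range(n - 1):
    state = apply_rzz(state, q, q + 1, -2 * J * dt)
for q in range(n):
    state = apply_rx(state, q, -2 * h * dt)

norm = sum(abs(a) ** 2 for a in state)  # unitary evolution keeps norm = 1
```

A single step approximates exp(-iH·dt) by alternating the ZZ couplings and the transverse-field rotations; the Trotter error shrinks as dt does, while unitarity (norm 1) is preserved exactly.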

Impact Of Imperfect Timekeeping On Quantum Control And Computing

In classical control theory, both open-loop and closed-loop control systems are commonly used. These systems are well understood and rather straightforward, controlling everything from washing machines to industrial equipment to the classical computing devices that make today’s society work. When trying to transfer this knowledge to the world of quantum control theory, however, many issues arise. The most pertinent ones involve closed-loop quantum control and the clocking of quantum computations. With physical limitations on the accuracy and resolution of clocks, this would set hard limits on the accuracy and speed of quantum computing.

The entire argument is covered in two letters to Physical Review Letters: one by Florian Meier et al. titled Fundamental Accuracy-Resolution Trade-Off for Timekeeping Devices (Arxiv preprint), and one by Jake Xuereb et al. titled Impact of Imperfect Timekeeping on Quantum Control (Arxiv preprint). The short version is that as a clock’s resolution (tick rate) increases, its accuracy suffers, with dephasing and other issues becoming more frequent.

Solving the riddle of closed-loop quantum control theory is hard, as noted by Daoyi Dong and Ian R. Petersen in 2011. In their paper titled Quantum control theory and applications: A survey, they point out that the most fundamental problem with such a closed-loop quantum control system lies with aspects such as the uncertainty principle, which limits the accuracy with which the properties of the system can be known.

In this regard, an accurately clocked open-loop system could work better, except that here we run into other fundamental issues. Even though this shouldn’t phase us, as with time solutions may be found to the timekeeping and other issues, it’s nonetheless part of the uncertainties that keep causing waves in quantum physics.

Top image: Impact of timekeeping error on quantum gate fidelity & independent clock dephasing (Xuereb et al., 2023)

Quantum Computing On A Commodore 64 In 200 Lines Of BASIC

The term ‘quantum computer’ usually gets tossed around in the context of hyper-advanced, state-of-the-art computing devices. But much as a 19th-century mechanical computer, a discrete computer built from individual transistors, and a human being are all computers, the important qualifier is how fast and accurate the system is at its task. This is demonstrated succinctly by [Davide ‘dakk’ Gessa] with 200 lines of BASIC code on a Commodore 64 (GitHub), implementing a range of quantum gates.

Much like the transistor in classical computing, the qubit forms the core of quantum computing, and we have known for a long time that a qubit can be simulated, even on something as mundane as an 8-bit MPU. Hence [Davide]’s simulations of various quantum gates on a C64 – Pauli-X, Pauli-Y, Pauli-Z, Hadamard, CNOT, and SWAP – all operating on a two-qubit system that runs on hardware which first saw the light of day in the early 1980s.
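As a rough sketch of the same idea in Python (not [Davide]’s BASIC, but the same arithmetic): a two-qubit state is just four complex amplitudes, and each gate is a small matrix-vector multiply, which is exactly what the C64 program grinds through:

```python
import math

# Minimal two-qubit statevector simulator sketch: four complex amplitudes,
# gates applied as small matrix-vector multiplies.

def apply_1q(state, gate, qubit):
    """Apply a 2x2 gate to one qubit of a 2-qubit statevector."""
    out = [0j] * 4
    for i in range(4):
        b = (i >> qubit) & 1                      # current bit value
        for nb in (0, 1):                         # new bit value
            j = (i & ~(1 << qubit)) | (nb << qubit)
            out[j] += gate[nb][b] * state[i]
    return out

def cnot(state, control, target):
    """Swap amplitude pairs that differ in the target bit, where control=1."""
    out = state[:]
    for i in range(4):
        if (i >> control) & 1:
            j = i ^ (1 << target)
            if i < j:                             # swap each pair once
                out[i], out[j] = state[j], state[i]
    return out

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]       # Hadamard gate

# Prepare a Bell state: H on qubit 0, then CNOT(control=0, target=1).
state = [1 + 0j, 0j, 0j, 0j]                      # |00>
state = apply_1q(state, H, 0)
state = cnot(state, 0, 1)
# state is now (|00> + |11>)/sqrt(2)
```

Running H on one qubit followed by CNOT yields the Bell state (|00⟩ + |11⟩)/√2, a handy smoke test for any gate simulator, whether it’s written in Python or BASIC.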

Naturally, the practical use of simulating a two-qubit system on a general-purpose MPU running at a blistering ~1 MHz is quite limited, but as a teaching tool it’s incredibly accessible and a fun way to introduce people to the world of quantum computing.

Intel To Ship Quantum Chip

In a world of 32-bit and 64-bit processors, it might surprise you to learn that Intel is releasing a 12-bit chip. Oh, wait, we mean 12-qubit. That makes more sense. Code named Tunnel Falls, the chip uses tiny silicon spin quantum bits, which Intel says are more advantageous than other schemes for encoding qubits. There’s a video about the device below.

It is a “research chip” and will be available to universities that might not be able to produce their own hardware. You probably aren’t going to find them listed on your favorite online reseller. Besides, the chip isn’t going to be usable on a breadboard. It is still going to take a lot of support to get it running.

Intel claims the silicon qubit technology is a million times smaller than other qubit types. The size is on the order of a device transistor — 50 nanometers square — simplifying things and allowing denser devices. In silicon spin qubits, information resides in the up or down spin of a single electron.

Of course, even Intel isn’t suggesting that 12 qubits are enough for a game-changing quantum computer, but you do have to start somewhere. This chip may enable more researchers to test the technology and will undoubtedly help Intel accelerate its research to the next step.

There is a lot of talk that silicon is the way to go for scalable quantum computing. It makes you wonder: is there anything silicon can’t do? You can access today’s limited quantum computers in the proverbial cloud.

Continue reading “Intel To Ship Quantum Chip”

Quantum Interconnects Get Faster

If you are a retrocomputer fan, you might remember when serial ports were a few hundred baud and busses ran at a few megahertz at the most. Today, of course, we have buses and fabric that can run at tremendous speeds. Quantum computing, though, has to start from scratch. One major problem is that jockeying quantum states around for any distance is difficult and slow. Part of it is that qubits decay rapidly, so you don’t have much time. They are also generally susceptible to noise and perturbation by outside forces. So many quantum machines today are limited by how much they can cram on one chip since there isn’t a good way to connect to another chip. The University of Sussex thinks it has improved the outlook for quantum interconnects with a technique they claim can move qubits around at nearly 2,500 links per second.

The technique, called UQ Connect, uses electric field links to connect multiple chips using trapped ions for qubits. If you want to read the actual paper, you can find it in Nature Communications.

Continue reading “Quantum Interconnects Get Faster”