Scientific Honesty And Quantum Computing’s Latest Theoretical Hurdle

Quantum computers are really in their infancy. If you had built a few logic gates out of vacuum tubes back in the 1930s, it would have been hard to predict all the ways we use computers today. You could probably have guessed, though, where at least some of the future problems would lie. One thing we are pretty sure will limit quantum computer development is error correction.

As far as we know, every qubit we’ve come up with so far is very fragile and prone to random errors. That’s why every practical design today incorporates some sort of QEC (quantum error correction). Of course, error correction isn’t news. We use it all the time on unreliable storage media, noisy communication channels, and high-reliability memory. The problem is that you can’t clone a qubit (a quantum bit), so it is hard to apply traditional error correction techniques to them.
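To see what qubits are missing out on, here is a minimal sketch of the classical version of the trick: triple redundancy with majority voting. This is our own illustration, not anything from the research discussed below, and the 10% flip probability is just an assumed example figure:

```python
import random

def send_with_redundancy(bit, flip_prob=0.1):
    """Classical trick: send three copies over a noisy channel, majority-vote."""
    copies = [bit ^ (random.random() < flip_prob) for _ in range(3)]
    return int(sum(copies) >= 2)

# A single bit fails 10% of the time; the voted triple fails only when
# two or more copies flip: 3p^2(1-p) + p^3 = 2.8% for p = 0.1.
trials = 100_000
errors = sum(send_with_redundancy(1) != 1 for _ in range(trials))
print(f"residual error rate: {errors / trials:.4f}")
```

The whole scheme hinges on making copies of the data, and copying is exactly the step that quantum mechanics forbids for qubits.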

After all, the whole point of a qubit is that we don’t measure it until the end of the computation, which, like opening the box on Schrödinger’s cat, seals its fate. So if you were to “read” a bunch of qubits to form a checksum or a CRC, you’d destroy their quantum nature in the process, making your computer not very useful. You can’t copy a qubit to get something like triple redundancy, either: the no-cloning theorem says there is no way to duplicate an unknown quantum state.
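Here is a toy NumPy illustration of why reading a qubit is so destructive (again, our own sketch, not from the research discussed below). One measurement of a simulated qubit collapses it, and the original amplitudes are unrecoverable afterward:

```python
import numpy as np

rng = np.random.default_rng()

# A qubit in an equal superposition: (|0> + |1>) / sqrt(2)
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

# "Reading" it picks |0> or |1> with probability |amplitude|^2 ...
outcome = rng.choice(2, p=np.abs(state) ** 2)

# ... and the state collapses to whatever we saw. The superposition is gone.
state = np.zeros(2, dtype=complex)
state[outcome] = 1.0

print("measured:", outcome, "state afterward:", state)
# Re-measuring now gives the same answer every time; nothing about the
# original amplitudes survives in this one qubit.
```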

Peter Shor came up with an answer. Instead of copying a qubit directly, the computer can spread a logical qubit across nine physical qubits. It is then possible to detect and correct an error on any single physical qubit, not by reading the data itself, but by measuring parity-like “syndrome” information about it. Later research dropped the number of physical qubits required down to five, which appears to be the theoretical minimum.
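Shor’s full nine-qubit code won’t fit in a snippet, but one of its building blocks, the three-qubit bit-flip code, shows the trick. The sketch below is our own toy simulation with a hand-rolled state vector (no real quantum SDK). The two parity checks locate a flip without ever revealing the encoded amplitudes:

```python
import numpy as np

# Three-qubit bit-flip code, simulated as an 8-amplitude state vector.
# Basis index b = 4*q0 + 2*q1 + q2 (qubit 0 is the leftmost bit).

def encode(alpha, beta):
    """Logical alpha|0>_L + beta|1>_L with |0>_L = |000>, |1>_L = |111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = alpha
    state[0b111] = beta
    return state

def apply_x(state, qubit):
    """Bit-flip (Pauli X) on one qubit: swap amplitudes differing in that bit."""
    mask = 1 << (2 - qubit)
    out = np.empty_like(state)
    for b in range(8):
        out[b ^ mask] = state[b]
    return out

def parity(state, i, j):
    """Expectation of Z_i Z_j: +1 if qubits i and j agree, -1 if they differ.
    Deterministic here because a single flip leaves the state in one sector."""
    val = 0.0
    for b in range(8):
        bits = [(b >> 2) & 1, (b >> 1) & 1, b & 1]
        val += abs(state[b]) ** 2 * (1 if bits[i] == bits[j] else -1)
    return round(val)

alpha, beta = 0.6, 0.8                 # any normalized pair works
state = encode(alpha, beta)
state = apply_x(state, 1)              # sneak in an error on qubit 1

s01, s12 = parity(state, 0, 1), parity(state, 1, 2)
# Syndrome table: (+1,+1) no error, (-1,+1) qubit 0, (-1,-1) qubit 1, (+1,-1) qubit 2
flipped = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(s01, s12)]
if flipped is not None:
    state = apply_x(state, flipped)    # undo the flip

print("syndrome:", (s01, s12), "-> corrected qubit", flipped)
print("recovered amplitudes:", state[0b000], state[0b111])  # alpha, beta intact
```

Note that the encoded amplitudes come back untouched even though we never measured them. Only the parities, which are the same for both halves of the superposition, were ever read.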

Imagine if your 32-bit CPU could only handle six bits because the rest were eaten by error correction overhead. That’s less than an old 8080. So imagine the excitement in 2018 when scientists announced evidence of Majorana zero-mode (MZM) quasiparticles, which behave like fermions that are their own antiparticles, and which are the basis for a proposed class of topological qubits. Many experts feel that topological qubits are the future of practical quantum computers because they store information in global topological properties rather than fragile local quantum states, making them inherently resistant to the random errors that plague current machines.

The Majorana Announcement

Delft University of Technology announced they had generated MZMs in indium antimonide nanowires. The next year, Microsoft, a company backing the topological approach to quantum computing, opened a research center on the school’s campus.

Sounds great, right? A researcher from the University of Pittsburgh read about the advance in Nature, a well-respected scientific journal. He and a collaborator in Australia had been doing similar work, and they asked the Delft group for its raw data.

What they found was surprising. Parts of the Delft paper didn’t seem right, and it appeared that some graphs might have been manipulated. Data that didn’t support the conclusion had been excluded for no apparent reason, and processing the full data set told a different story. The head of the Delft project looked at the data again and, in 2021, asked Nature to retract the paper and published an apology.

According to an article in Quanta, an independent committee concluded that the paper wasn’t deliberately fraudulent, but noted: “The authors had simply fooled themselves by zooming in only on the results that showed them what they hoped to see.”

The Review Process is Itself Quite Challenging

You’d like to think peer review would catch things like this, but the truth is, there aren’t many peers at this depth of research. Peer review isn’t always reliable, anyway. There have been several famous cases of people submitting random or nonsense papers to journals and having them published. Even Nature has accepted falsified papers before, and not just once. On the opposite side of the scale, Enrico Fermi’s breakthrough paper on beta decay was rejected, along with several other papers that would, in retrospect, turn out to be significant and even lead to Nobel prizes.

Even medical journals may contain as much as 25% false information, at least according to papers that could, of course, themselves be false. So how can a journal know whether ground-breaking work is accurate? How can we know whether what a journal prints is accurate? Or whether to believe anything a random person says, considering you no longer need a journal to reach the world? In an age when we increasingly depend on scientific results we lack the knowledge and equipment to verify ourselves, these are very important questions.

A Matter of Trust

If you think about it, society depends on, among other things, trust. I trust that my employer will pay me, and when I spend money at the store, the store trusts that the government backs that money so it can buy more supplies and pay its workers. Imagine if checking out at the grocery store involved someone testing your gold to make sure it was authentic and weighing it to see if their scale agreed with yours.

But even if that were the case, those verifications would be relatively simple. Quantum computing is on the bleeding edge of several disciplines, and the domain knowledge needed to confirm new findings is vanishingly rare. How do you verify error-correcting qubit techniques? How can a prototype quantum computer’s performance be independently benchmarked? Many forecast that quantum computing is the next big thing. In the run-up to that possibility, it’s important to look at each new announcement with a critical eye and to learn about individual researchers and research groups so you know where the trustworthy findings and verifications are coming from.

Need some help getting up to speed on how computing is expanding into the quantum world? We know of at least one effort to homebrew a trapped-ion quantum computer (which is not topological). If you want a 90-minute intro to the field, have a look at this Microsoft video. You can also take the Hackaday U class taught by Dr. Kitty Yeung, who incidentally is now a Senior Quantum Architect at Microsoft. The first video is below.

25 thoughts on “Scientific Honesty And Quantum Computing’s Latest Theoretical Hurdle”

  1. It depends on the problem/solution?
    One example that is often discussed is factoring large numbers and possibly breaking encryption.
    If the QC generates only a few possible solutions that can be easily checked then QEC is less important.
    But, if the QC generates many false positives (and/or negatives?) then we are screwed.

    1. I think the issue is, at least in part, that the gate model of QC won’t scale well enough to run larger problems without QEC. In your highlighted example, all of the answers would be gibberish, not related to the input in any meaningful way; it wouldn’t be a matter of checking a handful of answers for correctness. I am most certainly no expert though!

      1. It was my understanding that some physicists have shown the coherence-length property will fundamentally constrain how large these implementations can scale. i.e. the Blockchain treasure is safe for a while yet. ;-)

  2. Breaking encryption is the killer app for QC, all else is window dressing. One of the reasons you see so much money being pumped into QC despite the fact that it has never actually done anything practically useful is that the first entity to get it working at a level that can break modern non-QC encryption will own the world. QC is the new cold fusion. People are highly motivated to see positive progress so they can show it to their investors. So the sloppy work and outright charlatans are going to be thick in the field.

    1. It seems that Wikipedia edits are too often motivated entirely by the propensity to correct people who don’t agree with a particular viewpoint – independent of boring things like evidence or lack of it. I am getting so bored with phrases like “the consensus is that ….. “.

      I have no evidence for this of course

    1. I’ve been paying loose attention for over 15 years. I have yet to see any big claimed breakthrough in QC – touted as just 2-5 years away – ever come to fruition.

      I see there is a QC Twitter BS detector – https://www.wired.com/story/revolt-scientists-say-theyre-sick-of-quantum-computings-hype/

      Scott Locklin’s article is one I’ve been going back to for the last 2 years. The comment section on that article just gets better and better with age. https://scottlocklin.wordpress.com/2019/01/15/quantum-computing-as-a-field-is-obvious-bullshit/

  3. I hope this isn’t seen as nit-picking because I have a very specific point to make. Namely, this isn’t an issue of science being dishonest, but of scientists being dishonest (or incompetent). It was in fact *science* that noticed and pointed out the problem, which led to a correction. There will always be a certain percentage of people who will cheat or manipulate to work towards a personal goal of wealth or power, but it is the scientific community that helps keep them in line, because the majority of people are in fact honest where it counts.

    1. Science itself does not do anything; things only happen when people look at the world and apply the scientific method.
      That method is not a way to avoid all errors and biases; it remains an attempt.
      *People* pointed out the error, and even that is not a final fact.

      Drawing conclusions from data is always difficult. But even collecting data is not free of error.

      “Science” as an abstract entity does not exist, just people applying the scientific method and hopefully finding insights. But science is never free of the subjective element. To believe that it is would be unscientific in itself.

  4. Here’s the part I’m not understanding… aren’t the quantum superpositions created by doing some sort of Spooky Quantum Stuff to encode a qubit with known /classical/ data? Sure, once you entangle things, all bets are off, but the no-cloning theorem is about copying an /unknown/ quantum state, while encoding a qubit with /known/ classical data should put the qubit into a, if not specifically known, then at least /reproducible/ state, shouldn’t it?

    Or am I completely off the mark, and superpositions shouldn’t be thought of as “an indexable array of known-starting data written into qubits”?

  5. I’m not saying that quantum computation is bullshit in its entirety… buuuuut for fun, try asking anyone who calls themself knowledgeable about QC what the ‘hello world’ of QC is. Like OpenGL is a spinning cube and Arduino is a blinky LED: what’s the simplest program I can run on my brand new quantum computer, so I know that my toolchain is linking, compiling, building, running, and returning an output? What’s the difference in running this qHelloWorld example on a real QC and a simulated one? Are the results repeatable? Deterministic? Stochastic?
