Has quantum advantage been achieved?

Recently, I gave a couple of perspective talks on quantum advantage, one at the annual retreat of the CIQC and one at a recent KITP programme. I started off by polling the audience on who believed quantum advantage had been achieved. Just this one, simple question.

The audience was mostly experimental and theoretical physicists with a few CS theory folks sprinkled in. I was sure that these audiences would be overwhelmingly convinced of the successful demonstration of quantum advantage. After all, more than half a decade has passed since the first experimental claim (G1) of “quantum supremacy”, as the patron of this blog’s institute called the idea “to perform tasks with controlled quantum systems going beyond what can be achieved with ordinary digital computers” (Preskill, p. 2) back in 2012. Yes, this first experiment by the Google team may have been classically simulated in the meantime, but it was only the first in an impressive series of similar demonstrations that became bigger and better with every passing year. Surely, I thought, a significant part of my audiences would have been convinced of quantum advantage even before Google’s claim, when so-called quantum simulation experiments claimed to have performed computations that no classical computer could do (e.g. (qSim)).

I could not have been more wrong.

In both talks, less than half of the people in the audience thought that quantum advantage had been achieved.

In the discussions that ensued, I came to understand what folks criticized about the experiments that have been performed and even the concept of quantum advantage to begin with. But more on that later. Most of all, it seemed to me, the community had dismissed Google’s advantage claim because of the classical simulation shortly after. It hadn’t quite kept track of all the advances—theoretical and experimental—since then.

In a mini-series of three posts, I want to remedy this and convince you that the existing quantum computers can perform tasks that no classical computer can do. Let me caution, though, that the experiments I am going to talk about solve a (nearly) useless task. Nothing of what I say implies that you should (yet) be worried about your bank accounts.

I will start off by recapping what quantum advantage is and how it has been demonstrated in a set of experiments over the past few years.

Part 1: What is quantum advantage and what has been done?

To state the obvious: we are now fairly convinced that noiseless quantum computers would be able to solve problems efficiently that no classical computer could solve. In fact, we have been convinced of that since the mid-1990s, when Lloyd and Shor discovered two basic quantum algorithms: simulating quantum systems and factoring large numbers. Both are tasks where we are as certain as we can be that no classical computer can solve them efficiently. So why talk about quantum advantage 20 or 30 years later?

The idea of a quantum advantage demonstration—even if on a completely useless task—emerged as a milestone for the field in the 2010s. Achieving quantum advantage would finally demonstrate that quantum computing was not just a random idea of a bunch of academics who took quantum mechanics too seriously. It would show that quantum speedups are real: we can actually build quantum devices, control their states and the noise in them, and use them to solve tasks which not even the largest classical supercomputers—and these are very large—could do.

What is quantum advantage?

But what exactly do we mean by “quantum advantage”? It is a vague concept, for sure. But some essential criteria that any convincing demonstration should satisfy are the following.

  1. The quantum device needs to solve a pre-specified computational task. This means that there needs to be an input to the quantum computer. Given the input, the quantum computer must then be programmed to solve the task for that input. This may sound trivial. But it is crucial because it distinguishes programmable computing devices from mere experiments on any odd physical system.
  2. There must be a scaling difference in the time it takes for a quantum computer to solve the task and the time it takes for a classical computer. As we make the problem or input size larger, the difference between the quantum and classical solution times should increase disproportionately, ideally exponentially.
  3. And finally: the actual task solved by the quantum computer should not be solvable by any classical machine (at the time).

Achieving this last criterion using imperfect, noisy quantum devices is the challenge the idea of quantum supremacy set for the field. After all, running any of our favourite quantum algorithms in a classically hard regime on these devices is completely out of the question: they are too small and too noisy. So the field had to come up with the smallest and most noise-robust conceivable quantum algorithm that still has a significant scaling advantage over classical computation.

Random circuits are really hard to simulate!

The idea is simple: we just run a random computation, constructed to be as favorable as we can make it to the quantum device while being as hard as possible classically. This may strike you as a pretty unfair way to come up with a computational task—it is just built to be hard for classical computers without any other purpose. But it is a perfectly fine computational task. There is an input: the description of the quantum circuit, drawn randomly. The device needs to be programmed to run this exact circuit. And there is a task: just return whatever this quantum computation would return. These are strings of 0s and 1s drawn from a certain distribution. Getting the distribution of the strings right for a given input circuit is the computational task.

This task, dubbed random circuit sampling, can be solved on a classical as well as a quantum computer, but there is a (presumably) exponential advantage for the quantum computer. More on that in Part 2.

For now, let me tell you about the experimental demonstrations of random circuit sampling. Allow me to be slightly more formal. The task solved in random circuit sampling is to produce bit strings $x \in \{0,1\}^n$ distributed according to the Born-rule outcome distribution

$$p_C(x) = |\bra{x} C \ket{0}|^2$$

of a sequence of elementary quantum operations (unitary rotations of one or two qubits at a time) which is drawn randomly according to certain rules. This circuit $C$ is applied to a reference state $\ket{0}$ on the quantum computer and then measured, giving the string $x$ as an outcome.
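To make this more tangible, here is a minimal numerical sketch of the task under simplifying assumptions: a toy brickwork circuit on a handful of qubits built from Haar-random single-qubit gates and CZ gates (an illustrative gate set, not the one used in any of the experiments), simulated by brute force to obtain $p_C(x)$ and to draw samples from it.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_single_qubit_gate():
    """Haar-random 2x2 unitary via QR decomposition of a random complex matrix."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix phases to make the distribution Haar

def apply_gate(state, gate, qubits, n):
    """Apply a gate (2^k x 2^k unitary) to the given qubits of an n-qubit state vector."""
    k = len(qubits)
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, qubits, list(range(k)))          # bring target qubits to the front
    psi = (gate @ psi.reshape(2**k, -1)).reshape([2] * n)   # act with the gate
    psi = np.moveaxis(psi, list(range(k)), qubits)          # restore the original qubit order
    return psi.reshape(-1)

CZ = np.diag([1, 1, 1, -1]).astype(complex)

def random_circuit_distribution(n=4, depth=8):
    """Born-rule distribution p_C(x) = |<x|C|0>|^2 of a toy random brickwork circuit."""
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                                           # reference state |0...0>
    for layer in range(depth):
        for q in range(n):                                   # random single-qubit rotations
            state = apply_gate(state, random_single_qubit_gate(), [q], n)
        for q in range(layer % 2, n - 1, 2):                 # entangling gates in a brickwork pattern
            state = apply_gate(state, CZ, [q, q + 1], n)
    p = np.abs(state) ** 2
    return p / p.sum()                                       # p_C(x) for every bit string x

p = random_circuit_distribution()
samples = rng.choice(2**4, size=10, p=p)                     # "measurement outcomes" x
print([format(x, "04b") for x in samples])
```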

The breakthrough: classically hard programmable quantum computations in the real world

In the first quantum supremacy experiment (G1) by the Google team, the quantum computer was built from 53 superconducting qubits arranged in a 2D grid. The operations were randomly chosen simple one-qubit gates ($\sqrt{X}$, $\sqrt{Y}$, $\sqrt{X+Y}$) and deterministic two-qubit gates called fSim applied in the 2D pattern, and repeated a certain number of times (the depth of the circuit). The limiting factor in these experiments was the quality of the two-qubit gates and the measurements, with error probabilities around 0.6 % and 4 %, respectively.

A very similar experiment was performed by the USTC team on 56 qubits (U1) and both experiments were repeated with better fidelities (0.4 % and 1 % for two-qubit gates and measurements) and slightly larger system sizes (70 and 83 qubits, respectively) in the past two years (G2,U2).

Using a trapped-ion architecture, the Quantinuum team also demonstrated random circuit sampling on 56 qubits but with arbitrary connectivity (random regular graphs) (Q). There, the two-qubit gates were $\pi/2$-rotations around $Z \otimes Z$, the single-qubit gates were uniformly random, and the error rates much better (0.15 % for both two-qubit gate and measurement errors).

All the experiments ran random circuits on varying system sizes and circuit depths, and collected thousands to millions of samples from a few random circuits at a given size. To assess the quality of the samples, the now widely accepted figure of merit is the linear cross-entropy benchmark (XEB), defined as

$$\chi = 2^n \, \mathbb{E}_C \, \mathbb{E}_{x} \, p_C(x) - 1,$$

for an $n$-qubit circuit. The expectation over $C$ is over the random choice of circuit and the expectation over $x$ is over the experimental distribution of the bit strings. In other words, to compute the XEB given a list of samples, you ‘just’ need to compute the ideal probability of obtaining each sample from the circuit $C$ and average the outcomes.
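As a hedged illustration of this estimator (using a synthetic stand-in for the ideal distribution rather than an actual circuit, since for a sufficiently random circuit the outcome probabilities follow an exponential, Porter–Thomas-like shape):

```python
import numpy as np

rng = np.random.default_rng(1)

def xeb(ideal_probs, samples, n):
    """Linear cross-entropy benchmark: 2^n * E_x[p_C(x)] - 1, averaged over the samples."""
    return 2**n * np.mean(ideal_probs[samples]) - 1

# Stand-in for the ideal output distribution of a sufficiently random circuit:
# exponentially distributed ("Porter-Thomas") probabilities, normalized.
n = 20
p = rng.exponential(size=2**n)
p /= p.sum()

ideal_samples = rng.choice(2**n, size=50_000, p=p)      # a perfect quantum device
uniform_samples = rng.integers(0, 2**n, size=50_000)    # a completely depolarized device

print(round(xeb(p, ideal_samples, n), 3))    # close to 1
print(round(xeb(p, uniform_samples, n), 3))  # close to 0
```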

The XEB is nice because it gives 1 for ideal samples from sufficiently random circuits and 0 for uniformly random samples, and it can be estimated accurately from just a few samples. Under the right conditions, it turns out to be a good proxy for the many-body fidelity of the quantum state prepared just before the measurement.

This tells us that we should expect an XEB score of $(1-\text{error per gate})^{\text{\# gates}} \sim c^{-nd}$ for some noise- and architecture-dependent constant $c$. All of the experiments achieved a value of the XEB that was significantly (in the statistical sense) far away from 0, as you can see in the plot below. This shows that something nontrivial is going on in the experiments, because the fidelity we expect for a maximally mixed or random state is $2^{-n}$, which is less than $10^{-14}$ % for all the experiments.
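To see that this lands in the right ballpark, here is a back-of-the-envelope estimate for a 53-qubit, depth-20 experiment. The gate counts and the single-qubit error rate below are rough assumptions of mine; only the two-qubit gate and measurement error rates are the ones quoted above.

```python
# Rough, illustrative numbers only (gate counts and single-qubit error rate are assumptions):
two_qubit_gates = 430          # assumed, order of magnitude for a 53-qubit, depth-20 circuit
single_qubit_gates = 1100      # assumed
n_qubits = 53

fidelity = ((1 - 0.006) ** two_qubit_gates        # two-qubit gate error ~0.6% (quoted above)
            * (1 - 0.0015) ** single_qubit_gates  # single-qubit gate error ~0.15% (assumed)
            * (1 - 0.04) ** n_qubits)             # measurement error ~4% per qubit (quoted above)
print(f"expected XEB ~ {fidelity:.4f}")           # ~0.002, i.e. a score of roughly 0.2%
```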

The complexity of simulating these experiments is roughly governed by an exponential in either the number of qubits or the maximum bipartite entanglement generated. Figure 5 of the Quantinuum paper has a nice comparison.

It is not easy to say how much leverage an XEB significantly lower than 1 gives a classical spoofer. But one can certainly use it to judiciously change the circuit a tiny bit to make it easier to simulate.

Even then, reproducing the experiments’ low XEB scores of between 0.05 % and 0.2 % is extremely hard on classical computers. To the best of my knowledge, producing samples that match the experimental XEB score has only been achieved for the first experiment from 2019 (PCZ). That simulation already exploited the relatively low XEB score to simplify the computation, and even for the slightly larger 56-qubit experiments these techniques may not be feasible to run. So the only one of the experiments that may actually have been simulated to date is the 2019 experiment by the Google team.

If there are better methods, or computers, or more willingness to spend money on simulating random circuits today, though, I would be very excited to hear about it!

Proxy of a proxy of a benchmark

Now, you may be wondering: “How do you even compute the XEB or fidelity in a quantum advantage experiment in the first place? Doesn’t it require computing outcome probabilities of the supposedly hard quantum circuits?” And that is indeed a very good question. After all, the quantum advantage of random circuit sampling is based on the hardness of computing these probabilities. This is why, to get an estimate of the XEB in the advantage regime, the experiments needed to use proxies and extrapolation from classically tractable regimes.

This will be important for Part 2 of this series, where I will discuss the evidence we have for quantum advantage, so let me give you some more detail. To extrapolate, one can run smaller circuits of increasing sizes and extrapolate to the size in the advantage regime. Alternatively, one can run circuits with the same number of gates but with added structure that makes them classically simulatable, and extrapolate from those to the advantage circuits. Either way, the extrapolation is based on samples from experiments other than the quantum advantage experiments themselves. All of the experiments did this.
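Here is a hedged sketch of the extrapolation idea with made-up numbers (this is not the experiments’ actual fitting procedure): fit an exponential decay of the XEB with system size in the classically verifiable regime and read off the predicted value at the advantage size.

```python
import numpy as np

# Hypothetical XEB scores measured on smaller, classically verifiable circuits
# (made-up numbers for illustration); the depth is fixed, only the qubit number grows.
qubits = np.array([12, 16, 20, 24, 28, 32])
xeb_scores = np.array([0.28, 0.17, 0.105, 0.064, 0.039, 0.024])

# Fit log(XEB) = intercept + slope * n, i.e. the exponential decay of the fidelity
# with qubit number that the formula (1 - error)^(# gates) predicts at fixed depth.
slope, intercept = np.polyfit(qubits, np.log(xeb_scores), 1)

# Extrapolate to the advantage-regime size, where the XEB cannot be computed directly.
n_advantage = 53
predicted_xeb = np.exp(intercept + slope * n_advantage)
print(f"predicted XEB at {n_advantage} qubits: {predicted_xeb:.4f}")   # ~0.002 with these numbers
```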

A separate estimate of the XEB score is based on proxies. An XEB proxy uses the samples from the advantage experiments, but computes a quantity different from the XEB, one that can actually be computed and for which one can collect independent numerical and theoretical evidence that it matches the XEB in the relevant regime. For example, the Google experiments averaged outcome probabilities of modified circuits that were related to the true circuits but easier to simulate.

The Quantinuum experiment did something entirely different, which is to estimate the fidelity of the advantage experiment by inverting the circuit on the quantum computer and measuring the probability of coming back to the initial state.

All of the methods used to estimate the XEB of the quantum advantage experiments required some independent verification based on numerics on smaller sizes and induction to larger sizes, as well as theoretical arguments.

In the end, the advantage claims are thus based on a proxy of a proxy of the quantum fidelity. This is not to say that the advantage claims do not hold. In fact, I will argue in my next post that this is just the way science works. I will also tell you more about the evidence that the experiments I described here actually demonstrate quantum advantage and discuss some skeptical arguments.


Let me close this first post with a few notes.

In describing the quantum supremacy experiments, I focused on random circuit sampling, which is run on programmable digital quantum computers. What I neglected to talk about is boson sampling and Gaussian boson sampling, which are run on photonic devices and have also been experimentally demonstrated. The reason is that I think random circuits are conceptually cleaner, since they are run on processors that are in principle capable of running an arbitrary quantum computation, while the photonic devices used in boson sampling are much more limited and bear more resemblance to analog simulators.

I want to continue my poll here, so feel free to write in the comments whether or not you believe that quantum advantage has been demonstrated (by these experiments) and if not, why.

References

[G1] Arute, F. et al. Quantum supremacy using a programmable superconducting processor. Nature 574, 505–510 (2019).

[Preskill] Preskill, J. Quantum computing and the entanglement frontier. arXiv:1203.5813 (2012).

[qSim] Choi, J. et al. Exploring the many-body localization transition in two dimensions. Science 352, 1547–1552 (2016).

[U1] Wu, Y. et al. Strong Quantum Computational Advantage Using a Superconducting Quantum Processor. Phys. Rev. Lett. 127, 180501 (2021).

[G2] Morvan, A. et al. Phase transitions in random circuit sampling. Nature 634, 328–333 (2024).

[U2] Gao, D. et al. Establishing a New Benchmark in Quantum Computational Advantage with 105-qubit Zuchongzhi 3.0 Processor. Phys. Rev. Lett. 134, 090601 (2025).

[Q] DeCross, M. et al. Computational Power of Random Quantum Circuits in Arbitrary Geometries. Phys. Rev. X 15, 021052 (2025).

[PCZ] Pan, F., Chen, K. & Zhang, P. Solving the sampling problem of the Sycamore quantum circuits. Phys. Rev. Lett. 129, 090502 (2022).

Quantum computing in the second quantum century

On December 10, I gave a keynote address at the Q2B 2025 Conference in Silicon Valley. This is a transcript of my remarks. The slides I presented are here.

The first century

We are nearing the end of the International Year of Quantum Science and Technology, so designated to commemorate the 100th anniversary of the discovery of quantum mechanics in 1925. The story goes that 23-year-old Werner Heisenberg, seeking relief from severe hay fever, sailed to the remote North Sea Island of Helgoland, where a crucial insight led to his first, and notoriously obscure, paper describing the framework of quantum mechanics.

In the years following, that framework was clarified and extended by Heisenberg and others. Notable among them was Paul Dirac, who emphasized that we have a theory of almost everything that matters in everyday life. It’s the Schrödinger equation, which captures the quantum behavior of many electrons interacting electromagnetically with one another and with atomic nuclei. That describes everything in chemistry and materials science and all that is built on those foundations. But, as Dirac lamented, in general the equation is too complicated to solve for more than a few electrons.

Somehow, over 50 years passed before Richard Feynman proposed that if we want a machine to help us solve quantum problems, it should be a quantum machine, not a classical machine. The quest for such a machine, he observed, is “a wonderful problem because it doesn’t look so easy,” a statement that still rings true.

I was drawn into that quest about 30 years ago. It was an exciting time. Efficient quantum algorithms for the factoring and discrete log problems were discovered, followed rapidly by the first quantum error-correcting codes and the foundations of fault-tolerant quantum computing. By late 1996, it was firmly established that a noisy quantum computer could simulate an ideal quantum computer efficiently if the noise is not too strong or strongly correlated. Many of us were then convinced that powerful fault-tolerant quantum computers could eventually be built and operated.

Three decades later, as we enter the second century of quantum mechanics, how far have we come? Today’s quantum devices can perform some tasks beyond the reach of the most powerful existing conventional supercomputers. Error correction had for decades been a playground for theorists; now informative demonstrations are achievable on quantum platforms. And the world is investing heavily in advancing the technology further.

Current NISQ machines can perform quantum computations with thousands of two-qubit gates, enabling early explorations of highly entangled quantum matter, but still with limited commercial value. To unlock a wide variety of scientific and commercial applications, we need machines capable of performing billions or trillions of two-qubit gates. Quantum error correction is the way to get there.

I’ll highlight some notable developments over the past year—among many others I won’t have time to discuss. (1) We’re seeing intriguing quantum simulations of quantum dynamics in regimes that are arguably beyond the reach of classical simulations. (2) Atomic processors, both ion traps and neutral atoms in optical tweezers, are advancing impressively. (3) We’re acquiring a deeper appreciation of the advantages of nonlocal connectivity in fault-tolerant protocols. (4) And resource estimates for cryptanalytically relevant quantum algorithms have dropped sharply.

Quantum machines for science

A few years ago, I was not particularly excited about running applications on the quantum platforms that were then available; now I’m more interested. We have superconducting devices from IBM and Google with over 100 qubits and two-qubit error rates approaching 10^{-3}. The Quantinuum ion trap device has even better fidelity as well as higher connectivity. Neutral-atom processors have many qubits; they lag behind now in fidelity, but are improving.

Users face tradeoffs: The high connectivity and fidelity of ion traps is an advantage, but their clock speeds are orders of magnitude slower than for superconducting processors. That limits the number of times you can run a given circuit, and therefore the attainable statistical accuracy when estimating expectations of observables.
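A hedged back-of-the-envelope makes the point; the shot rates below are illustrative assumptions, not any vendor’s specification.

```python
import math

# Illustrative assumptions: shot rates for a fast and a slow processor, and a fixed
# one-hour wall-clock budget for estimating the expectation value of a bounded observable.
budget_seconds = 3600
shots_fast = budget_seconds * 1000   # ~1000 circuit repetitions per second (assumed)
shots_slow = budget_seconds * 1      # ~1 repetition per second (assumed)

# The statistical error of the estimate shrinks only as 1/sqrt(shots).
print(1 / math.sqrt(shots_fast))     # ~5e-4
print(1 / math.sqrt(shots_slow))     # ~1.7e-2, about 30x worse for the same wall-clock time
```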

Verifiable quantum advantage

Much attention has been paid to sampling from the output of random quantum circuits, because this task is provably hard classically under reasonable assumptions. The trouble is that, in the high-complexity regime where a quantum computer can reach far beyond what classical computers can do, the accuracy of the quantum computation cannot be checked efficiently. Therefore, attention is now shifting toward verifiable quantum advantage — tasks where the answer can be checked. If we solved a factoring or discrete log problem, we could easily check the quantum computer’s output with a classical computation, but we’re not yet able to run these quantum algorithms in the classically hard regime. We might settle instead for quantum verification, meaning that we check the result by comparing two quantum computations and verifying the consistency of the results.

A type of classical verification of a quantum circuit was demonstrated recently by BlueQubit on a Quantinuum processor. In this scheme, a designer builds a family of so-called “peaked” quantum circuits such that, for each such circuit and for a specific input, one output string occurs with unusually high probability. An agent with a quantum computer who knows the circuit and the right input can easily identify the preferred output string by running the circuit a few times. But the quantum circuits are cleverly designed to hide the peaked output from a classical agent — one may argue heuristically that the classical agent, who has a description of the circuit and the right input, will find it hard to predict the preferred output. Thus quantum agents, but not classical agents, can convince the circuit designer that they have reliable quantum computers. This observation provides a convenient way to benchmark quantum computers that operate in the classically hard regime.
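As a toy illustration of why the quantum side of this check is easy (a sketch under the assumption that the hidden string carries a few percent of the probability mass; the actual constructions and their parameters are the designer’s, not shown here): with a handful of runs the peaked output repeats, while the chance that any other string repeats is roughly $2^{-n}$.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

n = 40               # number of qubits (illustrative)
peak_weight = 0.05   # assumed probability carried by the hidden peaked string

def run_peaked_circuit(shots):
    """Toy model of sampling a peaked circuit: the secret string appears with probability
    peak_weight; all other outcomes are effectively uniform over 2^n strings."""
    secret = int(rng.integers(0, 2**n))
    outcomes = [secret if rng.random() < peak_weight else int(rng.integers(0, 2**n))
                for _ in range(shots)]
    return secret, outcomes

secret, outcomes = run_peaked_circuit(shots=200)
guess, count = Counter(outcomes).most_common(1)[0]
print(guess == secret, count)   # with overwhelming probability: True, count around 10
```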

The notion of quantum verification was explored by the Google team using Willow. One can execute a quantum circuit acting on a specified input, and then measure a specified observable in the output. By repeating the procedure sufficiently many times, one obtains an accurate estimate of the expectation value of that output observable. This value can be checked by any other sufficiently capable quantum computer that runs the same circuit. If the circuit is strategically chosen, then the output value may be very sensitive to many-qubit interference phenomena, in which case one may argue heuristically that accurate estimation of that output observable is a hard task for classical computers. These experiments, too, provide a tool for validating quantum processors in the classically hard regime. The Google team even suggests that such experiments may have practical utility for inferring molecular structure from nuclear magnetic resonance data.

Correlated fermions in two dimensions

Quantum simulations of fermionic systems are especially compelling, since electronic structure underlies chemistry and materials science. These systems can be hard to simulate in more than one dimension, particularly in parameter regimes where fermions are strongly correlated, or in other words profoundly entangled. The two-dimensional Fermi-Hubbard model is a simplified caricature of two-dimensional materials that exhibit high-temperature superconductivity and hence has been much studied in recent decades. Large-scale tensor-network simulations are reasonably successful at capturing static properties of this model, but the dynamical properties are more elusive.

Dynamics in the Fermi-Hubbard model has been simulated recently on both Quantinuum (here and here) and Google processors. Only a 6 x 6 lattice of electrons was simulated, but this is already well beyond the scope of exact classical simulation. Comparing (error-mitigated) quantum circuits with over 4000 two-qubit gates to heuristic classical tensor-network and Majorana path methods, discrepancies were noted, and the Phasecraft team argues that the quantum simulation results are more trustworthy. The Harvard group also simulated models of fermionic dynamics, but were limited to relatively low circuit depths due to atom loss. It’s encouraging that today’s quantum processors have reached this interesting two-dimensional strongly correlated regime, and with improved gate fidelity and noise mitigation we can go somewhat further, but expanding system size substantially in digital quantum simulation will require moving toward fault-tolerant implementations. We should also note that there are analog Fermi-Hubbard simulators with thousands of lattice sites, but digital simulators provide greater flexibility in the initial states we can prepare, the observables we can access, and the Hamiltonians we can reach.

When it comes to many-particle quantum simulation, a nagging question is: “Will AI eat quantum’s lunch?” There is surging interest in using classical artificial intelligence to solve quantum problems, and that seems promising. How will AI impact our quest for quantum advantage in this problem space? This question is part of a broader issue: classical methods for quantum chemistry and materials have been improving rapidly, largely because of better algorithms, not just greater processing power. But for now classical AI applied to strongly correlated matter is hampered by a paucity of training data.  Data from quantum experiments and simulations will likely enhance the power of classical AI to predict properties of new molecules and materials. The practical impact of that predictive power is hard to clearly foresee.

The need for fundamental research

Today is December 10th, the anniversary of Alfred Nobel’s death. The Nobel Prize award ceremony in Stockholm concluded about an hour ago, and the Laureates are about to sit down for a well-deserved sumptuous banquet. That’s a fitting coda to this International Year of Quantum. It’s useful to be reminded that the foundations for today’s superconducting quantum processors were established by fundamental research 40 years ago into macroscopic quantum phenomena. No doubt fundamental curiosity-driven quantum research will continue to uncover unforeseen technological opportunities in the future, just as it has in the past.

I have emphasized superconducting, ion-trap, and neutral atom processors because those are most advanced today, but it’s vital to continue to pursue alternatives that could suddenly leap forward, and to be open to new hardware modalities that are not top-of-mind at present. It is striking that programmable, gate-based quantum circuits in neutral-atom optical-tweezer arrays were first demonstrated only a few years ago, yet that platform now appears especially promising for advancing fault-tolerant quantum computing. Policy makers should take note!

The joy of nonlocal connectivity

As the fault-tolerant era dawns, we increasingly recognize the potential advantages of the nonlocal connectivity resulting from atomic movement in ion traps and tweezer arrays, compared to geometrically local two-dimensional processing in solid-state devices. Over the past few years, many contributions from both industry and academia have clarified how this connectivity can reduce the overhead of fault-tolerant protocols.

Even when using the standard surface code, the ability to implement two-qubit logical gates transversally—rather than through lattice surgery—significantly reduces the number of syndrome-measurement rounds needed for reliable decoding, thereby lowering the time overhead of fault tolerance. Moreover, the global control and flexible qubit layout in tweezer arrays increase the parallelism available to logical circuits.

Nonlocal connectivity also enables the use of quantum low-density parity-check (qLDPC) codes with higher encoding rates, reducing the number of physical qubits needed per logical qubit for a target logical error rate. These codes now have acceptably high accuracy thresholds, practical decoders, and—thanks to rapid theoretical progress this year—emerging constructions for implementing universal logical gate sets. (See for example here, here, here, here.)

A serious drawback of tweezer arrays is their comparatively slow clock speed, limited by the timescales for atom transport and qubit readout. A millisecond-scale syndrome-measurement cycle is a major disadvantage relative to microsecond-scale cycles in some solid-state platforms. Nevertheless, the reductions in logical-gate overhead afforded by atomic movement can partially compensate for this limitation, and neutral-atom arrays with thousands of physical qubits already exist.

To realize the full potential of neutral-atom processors, further improvements are needed in gate fidelity and continuous atom loading to maintain large arrays during deep circuits. Encouragingly, active efforts on both fronts are making steady progress.

Approaching cryptanalytic relevance

Another noteworthy development this year was a significant improvement in the physical qubit count required to run a cryptanalytically relevant quantum algorithm, reduced by Gidney to less than 1 million physical qubits from the 20 million Gidney and Ekerå had estimated earlier. This applies under standard assumptions: a two-qubit error rate of 10^{-3} and 2D geometrically local processing. The improvement was achieved using three main tricks. One was using approximate residue arithmetic to reduce the number of logical qubits. (This also suppresses the success probability and therefore lengthens the time to solution by a factor of a few.) Another was using a more efficient scheme to reduce the number of physical qubits for each logical qubit in cold storage. And the third was a recently formulated scheme for reducing the spacetime cost of non-Clifford gates. Further cost reductions seem possible using advanced fault-tolerant constructions, highlighting the urgency of accelerating migration from vulnerable cryptosystems to post-quantum cryptography.

Looking forward

Over the next 5 years, we anticipate dramatic progress toward scalable fault-tolerant quantum computing, and scientific insights enabled by programmable quantum devices arriving at an accelerated pace. Looking further ahead, what might the future hold? I was intrigued by a 1945 letter from John von Neumann concerning the potential applications of fast electronic computers. After delineating some possible applications, von Neumann added: “Uses which are not, or not easily, predictable now, are likely to be the most important ones … they will … constitute the most surprising extension of our present sphere of action.” Not even a genius like von Neumann could foresee the digital revolution that lay ahead. Predicting the future course of quantum technology is even more hopeless because quantum information processing entails an even larger step beyond past experience.

As we contemplate the long-term trajectory of quantum science and technology, we are hampered by our limited imaginations. But one way to loosely characterize the difference between the past and the future of quantum science is this: For the first hundred years of quantum mechanics, we achieved great success at understanding the behavior of weakly correlated many-particle systems, leading for example to transformative semiconductor and laser technologies. The grand challenge and opportunity we face in the second quantum century is acquiring comparable insight into the complex behavior of highly entangled states of many particles, behavior well beyond the scope of current theory or computation. The wonders we encounter in the second century of quantum mechanics, and their implications for human civilization, may far surpass those of the first century. So we should gratefully acknowledge the quantum pioneers of the past century, and wish good fortune to the quantum explorers of the future.

Credit: Iseult-Line Delfosse LLC, QC Ware

John Preskill receives 2025 Quantum Leadership Award

The 2025 Quantum Leadership Awards were announced at the Quantum World Congress on 18 September 2025. Upon receiving the Academic Pioneer in Quantum Award, John Preskill made these remarks.

I’m enormously excited and honored to receive this Quantum Leadership Award, and especially thrilled to receive it during this, the International Year of Quantum. The 100th anniversary of the discovery of quantum mechanics is a cause for celebration because that theory provides our deepest and most accurate description of how the universe works, and because that deeper understanding has incalculable value to humanity. What we have learned about electrons, photons, atoms, and molecules in the past century has already transformed our lives in many ways, but what lies ahead, as we learn to build and precisely control more and more complex quantum systems, will be even more astonishing.

As a professor at a great university, I have been lucky in many ways. Lucky to have the freedom to pursue the scientific challenges that I find most compelling and promising. Lucky to be surrounded by remarkable, supportive colleagues. Lucky to have had many collaborators who enabled me to do things I could never have done on my own. And lucky to have the opportunity to teach and mentor young scientists who have a passion for advancing the frontiers of science. What I’m most proud of is the quantum community we’ve built at Caltech, and the many dozens of young people who imbibed the interdisciplinary spirit of Caltech and then moved onward to become leaders in quantum science at universities, labs, and companies all over the world.

Right now is a thrilling time for quantum science and technology, a time of rapid progress, but these are still the early days in a nascent second quantum revolution. In quantum computing, we face two fundamental questions: How can we scale up to quantum machines that can solve very hard computational problems? And once we do so, what will be the most important applications for science and for industry? We don’t have fully satisfying answers yet to either question and we won’t find the answers all at once – they will unfold gradually as our knowledge and technology advance. But 10 years from now we’ll have much better answers than we have today.

Companies are now pursuing ambitious plans to build the world’s most powerful quantum computers.  Let’s not forget how we got to this point. It was by allowing some of the world’s most brilliant people to follow their curiosity and dream about what the future could bring. To fulfill the potential of quantum technology, we need that spirit of bold adventure now more than ever before. This award honors one scientist, and I’m profoundly grateful for this recognition. But more importantly it serves as a reminder of the vital ongoing need to support the fundamental research that will build foundations for the science and technology of the future. Thank you very much!

Beyond NISQ: The Megaquop Machine

On December 11, I gave a keynote address at the Q2B 2024 Conference in Silicon Valley. This is a transcript of my remarks. The slides I presented are here. The video of the talk is here.

NISQ and beyond

I’m honored to be back at Q2B for the 8th year in a row.

The Q2B conference theme is “The Roadmap to Quantum Value,” so I’ll begin by showing a slide from last year’s talk. As best we currently understand, the path to economic impact is the road through fault-tolerant quantum computing. And that poses a daunting challenge for our field and for the quantum industry.

We are in the NISQ era. And NISQ technology already has noteworthy scientific value. But as of now there is no proposed application of NISQ computing with commercial value for which quantum advantage has been demonstrated when compared to the best classical hardware running the best algorithms for solving the same problems. Furthermore, currently there are no persuasive theoretical arguments indicating that commercially viable applications will be found that do not use quantum error-correcting codes and fault-tolerant quantum computing.

NISQ, meaning Noisy Intermediate-Scale Quantum, is a deliberately vague term. By design, it has no precise quantitative meaning, but it is intended to convey an idea: We now have quantum machines such that brute force simulation of what the quantum machine does is well beyond the reach of our most powerful existing conventional computers. But these machines are not error-corrected, and noise severely limits their computational power.

In the future we can envision FASQ* machines, Fault-Tolerant Application-Scale Quantum computers that can run a wide variety of useful applications, but that is still a rather distant goal. What term captures the path along the road from NISQ to FASQ? Various terms retaining the ISQ format of NISQ have been proposed [here, here, here], but I would prefer to leave ISQ behind as we move forward, so I’ll speak instead of a megaquop or gigaquop machine and so on, meaning one capable of executing a million or a billion quantum operations, but with the understanding that mega means not precisely a million but somewhere in the vicinity of a million.

Naively, a megaquop machine would have an error rate per logical gate of order 10^{-6}, which we don’t expect to achieve anytime soon without using error correction and fault-tolerant operation. Or maybe the logical error rate could be somewhat larger, as we expect to be able to boost the simulable circuit volume using various error mitigation techniques in the megaquop era just as we do in the NISQ era. Importantly, the megaquop machine would be capable of achieving some tasks beyond the reach of classical, NISQ, or analog quantum devices, for example by executing circuits with of order 100 logical qubits and circuit depth of order 10,000.
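A quick sanity check of those numbers, as a back-of-the-envelope only (the total error budget below is my own assumption):

```python
# Why "megaquop" corresponds to a logical error rate of order 1e-6 (rough arithmetic only).
logical_qubits = 100
circuit_depth = 10_000
logical_ops = logical_qubits * circuit_depth      # ~1e6 logical operations: a "megaquop"

total_error_budget = 1.0                          # assumed O(1) acceptable failure probability
required_logical_error_rate = total_error_budget / logical_ops
print(logical_ops, required_logical_error_rate)   # 1000000 1e-06
```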

What resources are needed to operate it? That depends on many things, but a rough guess is that tens of thousands of high-quality physical qubits could suffice. When will we have it? I don’t know, but if it happens in just a few years a likely modality is Rydberg atoms in optical tweezers, assuming they continue to advance in both scale and performance.

What will we do with it? I don’t know, but as a scientist I expect we can learn valuable lessons by simulating the dynamics of many-qubit systems on megaquop machines. Will there be applications that are commercially viable as well as scientifically instructive? That I can’t promise you.

The road to fault tolerance

To proceed along the road to fault tolerance, what must we achieve? We would like to see many successive rounds of accurate error syndrome measurement such that when the syndromes are decoded the error rate per measurement cycle drops sharply as the code increases in size. Furthermore, we want to decode rapidly, as will be needed to execute universal gates on protected quantum information. Indeed, we will want the logical gates to have much higher fidelity than physical gates, and for the logical gate fidelities to improve sharply as codes increase in size. We want to do all this at an acceptable overhead cost in both the number of physical qubits and the number of physical gates. And speed matters — the time on the wall clock for executing a logical gate should be as short as possible.

A snapshot of the state of the art comes from the Google Quantum AI team. Their recently introduced Willow superconducting processor has improved transmon lifetimes, measurement errors, and leakage correction compared to its predecessor Sycamore. With it they can perform millions of rounds of surface-code error syndrome measurement with good stability, each round lasting about a microsecond. Most notably, they find that the logical error rate per measurement round improves by a factor of 2 (a factor they call Lambda) when the code distance increases from 3 to 5 and again from 5 to 7, indicating that further improvements should be achievable by scaling the device further. They performed accurate real-time decoding for the distance 3 and 5 codes. To further explore the performance of the device they also studied the repetition code, which corrects only bit flips, out to a much larger code distance. As the hardware continues to advance we hope to see larger values of Lambda for the surface code, larger codes achieving much lower error rates, and eventually not just quantum memory but also logical two-qubit gates with much improved fidelity compared to the fidelity of physical gates.
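To get a feel for what a Lambda of 2 implies at larger code distances, here is a hedged extrapolation; the distance-3 starting value and the assumption that Lambda stays constant as the code grows are illustrative simplifications, not reported measurements.

```python
# Hedged extrapolation: logical error per syndrome-measurement round, assuming each
# increase of the code distance by 2 divides the error rate by a constant Lambda.
Lambda = 2.0
eps_d3 = 3e-3            # assumed logical error rate per round at distance 3 (illustrative)

for d in range(3, 16, 2):
    eps = eps_d3 / Lambda ** ((d - 3) / 2)
    print(f"distance {d:2d}: ~{eps:.1e} per round")
# With these assumptions, reaching ~1e-6 per round would take a distance of roughly 26,
# which is why pushing Lambda higher with better physical gates matters so much.
```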

Last year I expressed concern about the potential vulnerability of superconducting quantum processors to ionizing radiation such as cosmic ray muons. In these events, errors occur in many qubits at once, too many errors for the error-correcting code to fend off. I speculated that we might want to operate a superconducting processor deep underground to suppress the muon flux, or to use less efficient codes that protect against such error bursts.

The good news is that the Google team has demonstrated that so-called gap engineering of the qubits can reduce the frequency of such error bursts by orders of magnitude. In their studies of the repetition code they found that, in the gap-engineered Willow processor, error bursts occurred about once per hour, as opposed to once every ten seconds in their earlier hardware.  Whether suppression of error bursts via gap engineering will suffice for running deep quantum circuits in the future is not certain, but this progress is encouraging. And by the way, the origin of the error bursts seen every hour or so is not yet clearly understood, which reminds us that not only in superconducting processors but in other modalities as well we are likely to encounter mysterious and highly deleterious rare events that will need to be understood and mitigated.

Real-time decoding

Fast real-time decoding of error syndromes is important because when performing universal error-corrected computation we must frequently measure encoded blocks and then perform subsequent operations conditioned on the measurement outcomes. If it takes too long to decode the measurement outcomes, that will slow down the logical clock speed. That may be a more serious problem for superconducting circuits than for other hardware modalities where gates can be orders of magnitude slower.

For distance 5, Google achieves a latency, meaning the time from when data from the final round of syndrome measurement is received by the decoder until the decoder returns its result, of about 63 microseconds on average. In addition, it takes about another 10 microseconds for the data to be transmitted via Ethernet from the measurement device to the decoding workstation. That’s not bad, but considering that each round of syndrome measurement takes only a microsecond, faster would be preferable, and the decoding task becomes harder as the code grows in size.

Riverlane and Rigetti have demonstrated in small experiments that the decoding latency can be reduced by running the decoding algorithm on FPGAs rather than CPUs, and by integrating the decoder into the control stack to reduce communication time. Adopting such methods may become increasingly important as we scale further. Google DeepMind has shown that a decoder trained by reinforcement learning can achieve a lower logical error rate than a decoder constructed by humans, but it’s unclear whether that will work at scale because the cost of training rises steeply with code distance. Also, the Harvard / QuEra team has emphasized that performing correlated decoding across multiple code blocks can reduce the depth of fault-tolerant constructions, but this also increases the complexity of decoding, raising concern about whether such a scheme will be scalable.

Trading simplicity for performance

The Google processors use transmon qubits, as do superconducting processors from IBM and various other companies and research groups. Transmons are the simplest superconducting qubits and their quality has improved steadily; we can expect further improvement with advances in materials and fabrication. But a logical qubit with very low error rate surely will be a complicated object due to the hefty overhead cost of quantum error correction. Perhaps it is worthwhile to fashion a more complicated physical qubit if the resulting gain in performance might actually simplify the operation of a fault-tolerant quantum computer in the megaquop regime or well beyond. Several versions of this strategy are being pursued.

One approach uses cat qubits, in which the encoded 0 and 1 are coherent states of a microwave resonator, well separated in phase space, such that the noise afflicting the qubit is highly biased. Bit flips are exponentially suppressed as the mean photon number of the resonator increases, while the error rate for phase flips induced by loss from the resonator increases only linearly with the photon number. This year the AWS team built a repetition code to correct phase errors for cat qubits that are passively protected against bit flips, and showed that increasing the distance of the repetition code from 3 to 5 slightly improves the logical error rate. (See also here.)

Another helpful insight is that error correction can be more effective if we know when and where the errors occur in a quantum circuit. We can apply this idea using a dual-rail encoding of the qubits. With two microwave resonators, for example, we can encode a qubit by placing a single photon in either the first resonator (the 10 state) or the second resonator (the 01 state). The dominant error is loss of a photon, causing either the 01 or 10 state to decay to 00. One can check whether the state is 00, detecting whether the error occurred without disturbing a coherent superposition of 01 and 10. In a device built by the Yale / QCI team, loss errors are detected over 99% of the time and all undetected errors are relatively rare. Similar results were reported by the AWS team, encoding a dual-rail qubit in a pair of transmons instead of resonators.

Another idea is encoding a finite-dimensional quantum system in a state of a resonator that is highly squeezed in two complementary quadratures, a so-called GKP encoding. This year the Yale group used this scheme to encode 3-dimensional and 4-dimensional systems with decay rate better by a factor of 1.8 than the rate of photon loss from the resonator. (See also here.)

A fluxonium qubit is more complicated than a transmon in that it requires a large inductance which is achieved with an array of Josephson junctions, but it has the advantage of larger anharmonicity, which has enabled two-qubit gates with better than three 9s of fidelity, as the MIT team has shown.

Whether this trading of simplicity for performance in superconducting qubits will ultimately be advantageous for scaling to large systems is still unclear. But it’s appropriate to explore such alternatives which might pay off in the long run.

Error correction with atomic qubits

We have also seen progress on error correction this year with atomic qubits, both in ion traps and optical tweezer arrays. In these platforms qubits are movable, making it possible to apply two-qubit gates to any pair of qubits in the device. This opens the opportunity to use more efficient coding schemes, and in fact logical circuits are now being executed on these platforms. The Harvard / MIT / QuEra team sampled circuits with 48 logical qubits on a 280-qubit device; that big news broke during last year’s Q2B conference. Atom Computing and Microsoft ran an algorithm with 28 logical qubits on a 256-qubit device. Quantinuum and Microsoft prepared entangled states of 12 logical qubits on a 56-qubit device.

However, so far in these devices it has not been possible to perform more than a few rounds of error syndrome measurement, and the results rely on error detection and postselection. That is, circuit runs are discarded when errors are detected, a scheme that won’t scale to large circuits. Efforts to address these drawbacks are in progress. Another concern is that the atomic movement slows the logical cycle time. If all-to-all coupling enabled by atomic movement is to be used in much deeper circuits, it will be important to speed up the movement quite a lot.

Toward the megaquop machine

How can we reach the megaquop regime? More efficient quantum codes like those recently discovered by the IBM team might help. These require geometrically nonlocal connectivity and are therefore better suited for Rydberg optical tweezer arrays than superconducting processors, at least for now. Error mitigation strategies tailored for logical circuits, like those pursued by Qedma, might help by boosting the circuit volume that can be simulated beyond what one would naively expect based on the logical error rate. Recent advances from the Google team, which reduce the overhead cost of logical gates, might also be helpful.

What about applications? Impactful applications to chemistry typically require rather deep circuits so are likely to be out of reach for a while yet, but applications to materials science provide a more tempting target in the near term. Taking advantage of symmetries and various circuit optimizations like the ones Phasecraft has achieved, we might start seeing informative results in the megaquop regime or only slightly beyond.

As a scientist, I’m intrigued by what we might conceivably learn about quantum dynamics far from equilibrium by doing simulations on megaquop machines, particularly in two dimensions. But when seeking quantum advantage in that arena we should bear in mind that classical methods for such simulations are also advancing impressively, including in the past year (for example, here and here).

To summarize, advances in hardware, control, algorithms, error correction, error mitigation, etc. are bringing us closer to megaquop machines, raising a compelling question for our community: What are the potential uses for these machines? Progress will require innovation at all levels of the stack.  The capabilities of early fault-tolerant quantum processors will guide application development, and our vision of potential applications will guide technological progress. Advances in both basic science and systems engineering are needed. These are still the early days of quantum computing technology, but our experience with megaquop machines will guide the way to gigaquops, teraquops, and beyond and hence to widely impactful quantum value that benefits the world.

I thank Dorit Aharonov, Sergio Boixo, Earl Campbell, Roland Farrell, Ashley Montanaro, Mike Newman, Will Oliver, Chris Pattison, Rob Schoelkopf, and Qian Xu for helpful comments.

*The acronym FASQ was suggested to me by Andrew Landahl.

The megaquop machine (image generated by ChatGPT).

Now published: Building Quantum Computers

Building Quantum Computers: A Practical Introduction by Shayan Majidy, Christopher Wilson, and Raymond Laflamme has been published by Cambridge University Press and will be released in the US on September 30. The authors invited me to write a Foreword for the book, which I was happy to do. The publisher kindly granted permission for me to post the Foreword here on Quantum Frontiers.

Foreword

The principles of quantum mechanics, which as far as we know govern all natural phenomena, were discovered in 1925. For 99 years we have built on that achievement to reach a comprehensive understanding of much of the physical world, from molecules to materials to elementary particles and much more. No comparably revolutionary advance in fundamental science has occurred since 1925. But a new revolution is in the offing.

Up until now, most of what we have learned about the quantum world has resulted from considering the behavior of individual particles — for example a single electron propagating as a wave through a crystal, unfazed by barriers that seem to stand in its way. Understanding that single-particle physics has enabled us to explore nature in unprecedented ways, and to build information technologies that have profoundly transformed our lives.

What’s happening now is we’re learning how to instruct particles to evolve in coordinated ways that can’t be accurately described in terms of the behavior of one particle at a time. The particles, as we like to say, can become entangled. Many particles, like electrons or photons or atoms, when highly entangled, exhibit an extraordinary complexity that we can’t capture with the most powerful of today’s supercomputers, or with our current theories of how nature works. That opens extraordinary opportunities for new discoveries and new applications.

Most temptingly, we anticipate that by building and operating large-scale quantum computers, which control the evolution of very complex entangled quantum systems, we will be able to solve some computational problems that are far beyond the reach of today’s digital computers. The concept of a quantum computer was proposed over 40 years ago, and the task of building quantum computing hardware has been pursued in earnest since the 1990s. After decades of steady progress, quantum information processors with hundreds of qubits have become feasible and are scientifically valuable. But we may need quantum processors with millions of qubits to realize practical applications of broad interest. There is still a long way to go.

Why is it taking so long? A conventional computer processes bits, where each bit could be, say, a switch which is either on or off. To build highly complex entangled quantum states, the fundamental information-carrying component of a quantum computer must be what we call a “qubit” rather than a bit. The trouble is that qubits are much more fragile than bits — when a qubit interacts with its environment, the information it carries is irreversibly damaged, a process called decoherence. To perform reliable logical operations on qubits, we need to prevent decoherence by keeping the qubits nearly perfectly isolated from their environment. That’s very hard to do. And because a qubit, unlike a bit, can change continuously, precisely controlling a qubit is a further challenge, even when decoherence is in check.

While theorists may find it convenient to regard a qubit (or a bit) as an abstract object, in an actual processor a qubit needs to be encoded in a particular physical system. There are many options. It might, for example, be encoded in a single atom which can be in either one of two long-lived internal states. Or the spin of a single atomic nucleus or electron which points either up or down along some axis. Or a single photon that occupies either one of two possible optical modes. These are all remarkable encodings, because the qubit resides in a very simple single quantum system, yet, thanks to technical advances over several decades, we have learned to control such qubits reasonably well. Alternatively, the qubit could be encoded in a more complex system, like a circuit conducting electricity without resistance at very low temperature. This is also remarkable, because although the qubit involves the collective motion of billions of pairs of electrons, we have learned to make it behave as though it were a single atom.

To run a quantum computer, we need to manipulate individual qubits and perform entangling operations on pairs of qubits. Once we can perform such single-qubit and two-qubit “quantum gates” with sufficient accuracy, and measure and initialize the qubits as well, then in principle we can perform any conceivable quantum computation by assembling sufficiently many qubits and executing sufficiently many gates.

It’s a daunting engineering challenge to build and operate a quantum system of sufficient complexity to solve very hard computation problems. That systems engineering task, and the potential practical applications of such a machine, are both beyond the scope of Building Quantum Computers. Instead the focus is on the computer’s elementary constituents for four different qubit modalities: nuclear spins, photons, trapped atomic ions, and superconducting circuits. Each type of qubit has its own fascinating story, told here expertly and with admirable clarity.

For each modality a crucial question must be addressed: how to produce well-controlled entangling interactions between two qubits. Answers vary. Spins have interactions that are always on, and can be “refocused” by applying suitable pulses. Photons hardly interact with one another at all, but such interactions can be mocked up using appropriate measurements. Because of their Coulomb repulsion, trapped ions have shared normal modes of vibration that can be manipulated to generate entanglement. Couplings and frequencies of superconducting qubits can be tuned to turn interactions on and off. The physics underlying each scheme is instructive, with valuable lessons for the quantum informationists to heed.

Various proposed quantum information processing platforms have characteristic strengths and weaknesses, which are clearly delineated in this book. For now it is important to pursue a variety of hardware approaches in parallel, because we don’t know for sure which ones have the best long term prospects. Furthermore, different qubit technologies might be best suited for different applications, or a hybrid of different technologies might be the best choice in some settings. The truth is that we are still in the early stages of developing quantum computing systems, and there is plenty of potential for surprises that could dramatically alter the outlook.

Building large-scale quantum computers is a grand challenge facing 21st-century science and technology. And we’re just getting started. The qubits and quantum gates of the distant future may look very different from what is described in this book, but the authors have made wise choices in selecting material that is likely to have enduring value. Beyond that, the book is highly accessible and fun to read. As quantum technology grows ever more sophisticated, I expect the study and control of highly complex many-particle systems to become an increasingly central theme of physical science. If so, Building Quantum Computers will be treasured reading for years to come.

John Preskill
Pasadena, California


Building a Visceral Understanding of Quantum Phenomena

A great childhood memory of mine comes from first playing “The Incredible Machine” on PC in the early ’90s. For those not in the know, this is a physics-based puzzle game about building Rube Goldberg-style contraptions to achieve given tasks. What made this game a standout for me was the freedom it granted players. In many levels you were given a disparate set of components (e.g. strings, pulleys, rubber bands, scissors, conveyor belts, Pokie the Cat…) and it was entirely up to you to “MacGyver” your way to some kind of solution (incidentally, MacGyver was my favorite TV show from that time period). In other words, it was often a creative exercise in designing your own solution, rather than “connecting the dots” to find a single intended solution. Growing up with games like this undoubtedly had a significant influence in directing me to my profession as a research scientist: a job which is often about finding novel or creative solutions to a task given a limited set of tools.

From the late ’90s onwards, puzzle games like “The Incredible Machine” largely went out of fashion as developers focused more on 3D games that exploited the latest hardware advances. However, the genre saw a resurgence in the 2010s, spearheaded by the developer “Zachtronics”, which released a plethora of popular, and exceptionally challenging, logic- and programming-based puzzle games (some of my favorites include Opus Magnum and TIS-100). Zachtronics games similarly encouraged players to solve problems through creative designs, but also had the side-effect of helping players develop and practice tangible programming skills (e.g. design patterns, control flow, optimization). This is a really great way to learn, I thought to myself.

So, fast-forward several years: while teaching undergraduate and graduate quantum courses at Georgia Tech, I began thinking about whether it would be possible to incorporate quantum mechanics (and specifically quantum circuits) into a Zachtronics-style puzzle game. My thinking was that such a game might provide an opportunity for students to experiment with quantum mechanics through a hands-on approach, one that encouraged creativity and self-directed exploration. I was also hoping that representing quantum processes through a visual language that emphasized geometry, rather than mathematical formalism, could help students develop intuition in this setting. These thoughts ultimately led to the development of The Qubit Factory. At its core, this is a quantum circuit simulator with a graphical interface (not too dissimilar to the Quirk quantum circuit simulator), but one that provides a structured sequence of challenges, many based on tasks of real-life importance to quantum computing, that players must construct circuits to solve.

An example level of The Qubit Factory in action, showcasing a potential solution to a task involving quantum error correction. The column of “?” tiles represents a noisy channel that has a small chance of flipping any qubit that passes through. Players are challenged to send qubits from the input on the left to the output on the right while mitigating errors that occur due to this noisy channel. The solution shown here is based on a bit-flip code, although a more advanced strategy is required to earn a bonus star for the level!

Quantum Gamification and The Qubit Factory

My goal in designing The Qubit Factory was to provide an accurate simulation of quantum mechanics (although not necessarily a complete one), such that players could gain some authentic, working knowledge about quantum computers and how they differ from regular computers. However, I also wanted to make a game that was accessible to the layperson (i.e. someone without prior knowledge of quantum mechanics or of underlying mathematical foundations like linear algebra). These goals, which largely oppose one another, are not easy to balance!

A key step in achieving this balance was to find a suitable visual depiction of quantum states and processes; here the Bloch sphere, which provides a simple geometric representation of qubit states, was ideal. However, it is also here that I made my first major compromise to the scope of the physics within the game by restricting the game state to real-valued wave-functions (which in turn implies that only gates that transform qubits within the X-Z plane can be allowed). I feel that this compromise was ultimately the correct choice: it greatly enhanced the visual clarity by allowing qubits to be represented as arrows on a flat disk rather than on a sphere, and similarly allowed the action of single-qubit gates to be depicted clearly (i.e. as rotations and flips on the disk). Some purists may object to this limitation on the grounds that it prevents universal quantum computation, but my counterpoint would be that there are still many interesting quantum tasks and algorithms that can be performed within this restricted scope. In a similar spirit, I decided to forgo standard quantum circuit notation: instead I used stylized circuits to emphasize the geometric interpretation, as demonstrated in the example below. This choice was made with the intention of allowing players to infer the action of gates from the visual design alone.

A quantum circuit in conventional notation versus the same circuit depicted in The Qubit Factory.

Okay, so while the Bloch sphere provides a nice way to represent (unentangled) single qubit states, we also need a way to represent entangled states of multiple qubits. Here I made use of some creative license to show entangled states as blinking through the basis states. I found this visualization to work well for conveying simple states such as the singlet state presented below, but players are also able to view the complete list of wave-function amplitudes if necessary.

\textrm{Singlet: }\left| \psi \right\rangle = \tfrac{1}{\sqrt{2}} \left( \left| \uparrow \downarrow \right\rangle - \left| \downarrow \uparrow \right\rangle \right)

A singlet state is created by entangling a pair of qubits via a CNOT gate.

Although the blinking effect is not a perfect solution for displaying superpositions, I think that it is useful in conveying key aspects like uncertainty and correlation. The animation below shows an example of the entangled wave-function collapsing when one of the qubits is measured.

A single qubit from a singlet is measured. While each qubit has a 50/50 chance of giving ▲ or ▼ when measured individually, once one qubit is measured the other qubit collapses to the anti-aligned state.

So, thus far, I have described a quantum circuit simulator with some added visual cues and animations, but how can this be turned into a game? Here, I leaned heavily on the existing example of Zachtronics (and Zachtronics-like) games: each level in The Qubit Factory provides the player with some input bits/qubits and requires the player to perform some logical task in order to produce a set of desired outputs. Some of the levels within the game are highly structured, similar to textbook exercises. They aim to teach a specific concept and may only have a narrow set of potential solutions. An example of such a structured level is the first quantum level (lvl QI.A), which tasks the player with inverting a sequence of single-qubit gates. Of course, this problem would be trivial to those of you already familiar with quantum mechanics: you could use the linear algebra result (AB)^\dag = B^\dag A^\dag together with the knowledge that quantum gates are unitary, so the Hermitian conjugate of each gate doubles as its inverse. But what if you didn’t know quantum mechanics, or even linear algebra? Could this problem be solved through logical reasoning alone? This is where I think the visuals really help; players should be able to infer several key points from geometry alone:

  • the inverse of a flip (or mirroring about some axis) is another equal flip.
  • the inverse of a rotation is an equal rotation in the opposite direction.
  • the last transformation done on each qubit should be the first transformation to be inverted.

So I think it is plausible that, even without prior knowledge in quantum mechanics or linear algebra, a player could not only solve the level but also grasp some important concepts (i.e. that quantum gates are invertible and that the order in which they are applied matters).

An early level challenges the player to invert the action of the 3 gates on the left. A solution is given on the right, formed by composing the inverse of each gate in reverse order.
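For readers who do know a bit of linear algebra, the same reasoning can be checked in a few lines of numpy. The snippet below is my own illustrative sketch (it is not code from the game, and the particular gates are made up): a flip and a rotation in the X-Z plane are undone by taking the conjugate transpose of each gate and applying them in reverse order.

import numpy as np

# Two real-valued single-qubit gates acting in the X-Z plane, mirroring
# the restriction used in The Qubit Factory.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # a "flip" (its own inverse)

def rot(theta):
    """Rotation by angle theta within the X-Z plane of the Bloch sphere."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

R = rot(np.pi / 3)

# Forward circuit: first R, then H (matrices compose right to left).
U = H @ R

# Inverse circuit: conjugate-transpose each gate and reverse the order,
# using (AB)^dag = B^dag A^dag.
U_inv = R.conj().T @ H.conj().T

print(np.allclose(U_inv @ U, np.eye(2)))   # True: the sequence is undone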

Many of the levels in The Qubit Factory are also designed to be open-ended. Such levels, which often begin with a blank factory, have no single intended solution. The player is instead expected to use experimentation and creativity to design their own solution; this is the setting where I feel that the “game” format really shines. An example of an open-ended level is QIII.E, which gives the player 4 copies of a single-qubit state \left| \psi \right\rangle, guaranteed to be either the +Z or +X eigenstate, and tasks the player with determining which state they have been given. Those familiar with quantum computing will recognize this as a relatively simple problem in state tomography. There are many viable strategies for solving this task (and I am not even sure of the optimal one myself). However, by circumventing the need for a mathematical calculation, The Qubit Factory allows players to easily and quickly explore different approaches. Hopefully this could allow players to find effective strategies through trial-and-error, gaining some understanding of state tomography (and why it is challenging) in the process.

An example of a level in action! This level challenges the player to construct a circuit that can identify an unknown qubit state given several identical copies; a task in state tomography. The solution shown here uses a cascaded sequence of measurements, where the result of one measurement is used to control the axis of a subsequent measurement.
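For the curious, here is a rough numpy sketch of the most naive strategy for this level (my own toy simulation, not the game’s internals, and certainly not the optimal approach): measure every copy in the Z basis and guess +X only if at least one “down” outcome appears. With 4 copies and equal priors, this already succeeds about 97% of the time.

import numpy as np

rng = np.random.default_rng(0)

def measure_z(state, copies):
    """Measure `copies` independent preparations of `state` in the Z basis.

    Returns an array of outcomes (0 = up, 1 = down). `state` is either
    'plusZ' (always up) or 'plusX' (up or down with probability 1/2 each).
    """
    if state == "plusZ":
        return np.zeros(copies, dtype=int)
    return rng.integers(0, 2, size=copies)

def guess(outcomes):
    # Any 'down' outcome rules out +Z, so guess +X; otherwise guess +Z.
    return "plusX" if outcomes.any() else "plusZ"

trials = 100_000
correct = 0
for _ in range(trials):
    truth = rng.choice(["plusZ", "plusX"])
    correct += guess(measure_z(truth, copies=4)) == truth

print(correct / trials)   # about 0.97, i.e. 1 - (1/2) * (1/2)**4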

The Qubit Factory begins with levels covering the basics of qubits, gates and measurements. It later progresses to more advanced concepts like superpositions, basis changes and entangled states. Finally it culminates with levels based on introductory quantum protocols and algorithms (including quantum error correction, state tomography, super-dense coding, quantum repeaters, entanglement distillation and more). Even if you are familiar with the aforementioned material you should still be in for a substantial challenge, so please check it out if that sounds like your thing!

The Potential of Quantum Games

I believe that interactive games have great potential to provide new opportunities for people to better understand the quantum realm (a position shared by the IQIM, members of which have developed several projects in this area). As young children, playing is how we discover the world around us and build intuition for the rules that govern it. This is perhaps a significant reason why quantum mechanics is often a challenge for new students to learn; we don’t have direct experience or intuition with the quantum world in the same way that we do with the classical world. A quote from John Preskill puts it very succinctly:

“Perhaps kids who grow up playing quantum games will acquire a visceral understanding of quantum phenomena that our generation lacks.”


The Qubit Factory can be played at www.qubitfactory.io

A classical foreshadow of John Preskill’s Bell Prize

Editor’s Note: This post was co-authored by Hsin-Yuan Huang (Robert) and Richard Kueng.

John Preskill, Richard P. Feynman Professor of Theoretical Physics at Caltech, has been named the 2024 John Stewart Bell Prize recipient. The prize honors John’s contributions in “the developments at the interface of efficient learning and processing of quantum information in quantum computation, and following upon long standing intellectual leadership in near-term quantum computing.” The committee cited John’s seminal work defining the concept of the NISQ (noisy intermediate-scale quantum) era, our joint work “Predicting Many Properties of a Quantum System from Very Few Measurements” proposing the classical shadow formalism, along with subsequent research that builds on classical shadows to develop new machine learning algorithms for processing information in the quantum world.

We are truly honored that our joint work on classical shadows played a role in John winning this prize. But as the citation implies, this is also a much-deserved “lifetime achievement” award. For the past two and a half decades, first at IQI and now at IQIM, John has cultivated a wonderful, world-class research environment at Caltech that celebrates intellectual freedom, while fostering collaborations between diverse groups of physicists, computer scientists, chemists, and mathematicians. John has said that his job is to shield young researchers from bureaucratic issues, teaching duties and the like, so that we can focus on what we love doing best. This extraordinary generosity of spirit has been responsible for seeding the world with some of the best minds in the field of quantum information science and technology.

A cartoon depiction of John Preskill (Middle), Hsin-Yuan Huang (Left), and Richard Kueng (Right). [Credit: Chi-Yun Cheng]

It is in this environment that the two of us (Robert and Richard) met and first developed the rudimentary form of classical shadows — inspired by Scott Aaronson’s idea of shadow tomography. While the initial form of classical shadows is mathematically appealing and was appreciated by the theorists (it was a short plenary talk at the premier quantum information theory conference), it was deemed too abstract to be of practical use. As a result, when we submitted the initial version of classical shadows for publication, the paper was rejected. John not only recognized the conceptual beauty of our initial idea, but also pointed us towards a direction that blossomed into the classical shadows we know today. Applications range from enabling scientists to more efficiently understand engineered quantum devices, speeding up various near-term quantum algorithms, to teaching machines to learn and predict the behavior of quantum systems.

Congratulations John! Thank you for bringing this community together to do extraordinarily fun research and for guiding us throughout the journey.

What can you do in 48 hours?

Have you ever wondered what can be done in 48 hours? For instance, our heart beats around 200 000 times. One of the biggest supercomputers crunches petabytes (peta = 10^{15}) of numbers to simulate an experiment that took Google’s quantum processor only 300 seconds to run. In 48 hours, one can also participate in the Sciathon with almost 500 young researchers from more than 80 countries!

Two weeks ago I participated in a scientific marathon, the Sciathon. The structure of this event roughly resembled a hackathon. I am sure many readers are familiar with the idea of a hackathon from personal experience. For those unfamiliar — a hackathon is an intense collaborative event, usually organized over a weekend, during which people with different backgrounds work in groups to create prototypes of functioning software or hardware. For me, it was the very first time I had firsthand experience with a hackathon-like event!

The Sciathon was organized by the Lindau Nobel Laureate Meetings (more about the meetings with Nobel laureates, which happen annually in the lovely German town of Lindau, in another blogpost, I promise!) This year, unfortunately, the face-to-face meeting in Lindau was postponed until the summer of 2021. Instead, the Lindau Nobel Laureate Meetings alumni and this year’s would-be attendees had an opportunity to gather for the Sciathon, as well as the Online Science Days earlier this week, during which the best Sciathon projects were presented.

The participants of the Sciathon could choose to contribute new views, perspectives and solutions to three main topics: Lindau Guidelines, Communicating Climate Change and Capitalism After Corona. The first topic concerned an open, cooperative science community where data and knowledge are freely shared, the second — how scientists could show that the climate crisis is just as big a threat as the SARS-CoV-2 virus, and the last — how to remodel our current economic systems so that they are more robust to unexpected sudden crises. More detailed descriptions of each topic can be found on the official Sciathon webpage.

My group of ten eager scientists, mostly physicists, from master students to postdoctoral researchers, focused on the first topic. In particular, our goal was to develop a method of familiarizing high school students with the basics of quantum information and computation. We envisioned creating an online notebook, where an engaging story would be intertwined with interactive blocks of Python code utilizing the open-source quantum computing toolkit Qiskit. This hands-on approach would enable students to play with quantum systems described in the story-line by simply running the pre-programmed commands with a click of the mouse and then observe how “experiment” matches “the theory”. We decided to work with a system comprising one or two qubits and explain such fundamental concepts in quantum physics as superposition, entanglement and measurement. The last missing part was a captivating story.

The story we came up with involved two good friends from the lab, Miss Schrödinger and Miss Pauli, as well as their kittens, Alice and Bob. At first, Alice and Bob seemed to be ordinary cats; however, whenever they sipped quantum milk, they would turn into quantum cats, or as quantum physicists would say — kets. Do I have to remind the reader that a quantum cat, unlike an ordinary one, could be both awake and asleep at the same time?

Miss Schrödinger was a proud cat owner who not only loved her cat, but also would take hundreds of pictures of Alice and eagerly upload them on social media. Much to Miss Schrödinger’s surprise, none of the pictures showed Alice partly awake and partly asleep — the ket would always collapse to the cat awake or the cat asleep! Every now and then, Miss Pauli would come to visit Miss Schrödinger and bring her own cat Bob. While the good friends were chit-chatting over a cup of afternoon tea, the cats sipped a bit of quantum milk and started to play with a ball of wool, resulting in a cute mess of two kittens tangled up in wool. Every time after coming back home, Miss Pauli would take a picture of Bob and share it with Miss Schrödinger, who would obviously also take a picture of Alice. After a while, the young scientists started to notice some strange correlations between the states of their cats… 
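Those strange correlations are exactly what an entangled state produces, and they are easy to reproduce in a few lines of Qiskit. Here is a minimal sketch of my own (not the actual Sciathon notebook, and assuming a reasonably recent Qiskit installation) that entangles two qubits and samples the perfectly correlated outcomes.

# Entangle two "quantum cats" and sample correlated measurement outcomes.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

cats = QuantumCircuit(2)
cats.h(0)       # Alice sips quantum milk: a superposition of awake and asleep
cats.cx(0, 1)   # the cats get tangled up: Bob's state now mirrors Alice's

state = Statevector.from_instruction(cats)
print(state)                           # (|00> + |11>)/sqrt(2)
print(state.sample_counts(shots=100))  # only '00' and '11' ever appear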

The adventures of Miss Schrödinger and her cat continue! For those interested, you can watch a short video about our project! 

Overall, I can say that I had a lot of fun participating in the Sciathon. It was an intense yet extremely gratifying event. In addition to the obvious difficulty of racing against the clock, our group also had to struggle with coordinating video calls between group members scattered across three almost equidistant time zones — Eastern Australian, Central European and Central US! During the Sciathon I had a chance to interact with other science enthusiasts from different backgrounds and work on something from outside my area of expertise. I would strongly encourage anyone to participate in hackathon-like events to break the daily routine, particularly monotonous during the lockdown, and unleash one’s creative spirit. Such events can also be viewed as an opportunity to communicate science and scientific progress to the public. Lastly, I would like to thank other members of my team — collaborating with you during the Sciathon was a blast!

During the Sciathon, we had many brainstorming sessions. You can see most of the members of my group in this video call (from left to right, top to bottom): Shuang, myself, Martin, Kyle, Hadewijch, Saskia, Michael and Bartłomiej. The team also included Ahmed and Watcharaphol.

Quantum Error Correction with Molecules

In the previous blog post (titled, “On the Coattails of Quantum Supremacy“) we started with Google and ended up with molecules! I also mentioned a recent paper by John Preskill, Jake Covey, and myself (see also this videoed talk) where we assume that, somewhere in the (near?) future, experimentalists will be able to construct quantum superpositions of several orientations of molecules or other rigid bodies. Next, I’d like to cover a few more details on how to construct error-correcting codes for anything from classical bits in your phone to those future quantum computers, molecular or otherwise.

Classical error correction: the basics

Error correction is concerned with the design of an encoding that allows for protection against noise. Let’s say we want to protect one classical bit, which is in either “0” or “1”. If the bit is, say, in “0”, and the environment (say, the strong magnetic field from a magnet you forgot was lying next to your hard drive) flipped it to “1” without our knowledge, an error would result (e.g., making your phone think you swiped right!)

Now let’s encode our single logical bit into three physical bits, whose 2^3=8 possible states are represented by the eight corners of the cube below. Let’s encode the logical bit as “0” —> 000 and “1” —> 111, corresponding to the corners of the cube marked by the black and white ball, respectively. For our (local) noise model, we assume that flips of only one of the three physical bits are more likely to occur than flips of two or three at the same time.

Error correction is, like many Hollywood movies, an origin story. If, say, the first bit flips in our above code, the 000 state is mapped to 100, and 111 is mapped to 011. Since we have assumed that the most likely error is a flip of a single bit, upon observing 100 we know it must have come from the clean 000, and likewise 011 from 111. Thus, whether the logical bit is “0” or “1”, we can recover the information by simply observing which state the majority of the bits are in. The same thing happens when the second or third bit flips. In all three cases, the logical “0” state is mapped to one of its three neighboring points (above, in blue) while the logical “1” is mapped to its own three points, which, crucially, are distinct from the neighbors of “0”. The set of points \{000,100,010,001\} that are closer to 000 than to 111 is called a Voronoi tile.
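In code, the whole scheme fits in a few lines. Here is a small Python illustration (my own sketch): encode, flip a single bit, and decode by majority vote.

import random

def encode(logical_bit):
    """Repetition code: "0" -> 000 and "1" -> 111."""
    return [logical_bit] * 3

def flip_one_bit(codeword):
    """Our (local) noise model: at most one of the three physical bits flips."""
    noisy = list(codeword)
    noisy[random.randrange(3)] ^= 1
    return noisy

def decode(noisy):
    """Majority vote: report which Voronoi tile (000's or 111's) we landed in."""
    return int(sum(noisy) >= 2)

for logical in (0, 1):
    assert decode(flip_one_bit(encode(logical))) == logical
print("all single bit-flips corrected")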

Now, let’s adapt these ideas to molecules. Consider the rotational states of a dumb-bell molecule consisting of two different atoms. (Let’s assume that we have frozen this molecule to the point that the vibration of the inter-atomic bond is limited, essentially creating a fixed distance between the two atoms.) This molecule can orient itself in any direction, and each such orientation can be represented as a point \mathbf{v} on the surface of a sphere. Now let us encode a classical bit using the north and south poles of this sphere (represented in the picture below as a black and a white ball, respectively). The north pole of the sphere corresponds to the molecule being parallel to the z-axis, while the south pole corresponds to the molecule being anti-parallel.

This time, the noise consists of small shifts in the molecule’s orientation. Clearly, if such shifts are small, the molecule just wiggles a bit around the z-axis. Such wiggles still allow us to infer that the molecule is (mostly) parallel and anti-parallel to the axis, as long as they do not rotate the molecule all the way past the equator. Upon such correctable rotations, the logical “0” state — the north pole — is mapped to a point in the northern hemisphere, while logical “1” — the south pole — is mapped to a point in the southern hemisphere. The northern hemisphere forms a Voronoi tile of the logical “0” state (blue in the picture), which, along with the corresponding tile of the logical “1” state (the southern hemisphere), tiles the entire sphere.
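As a toy illustration (again my own sketch, with an arbitrary wiggle size), decoding here is literally just checking which hemisphere, i.e. which Voronoi tile, the molecule’s orientation landed in.

import numpy as np

north, south = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])

def wiggle(orientation, angle=0.4):
    """Tilt the orientation (a unit vector) by `angle` radians about the y-axis."""
    c, s = np.cos(angle), np.sin(angle)
    rotation = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return rotation @ orientation

def decode(orientation):
    """Which Voronoi tile did we land in? 0 = northern hemisphere, 1 = southern."""
    return 0 if orientation[2] > 0 else 1

# Any wiggle smaller than a quarter turn keeps each pole inside its own tile.
assert decode(wiggle(north)) == 0
assert decode(wiggle(south)) == 1
print("orientations decoded correctly")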

Quantum error correction

To upgrade these ideas to the quantum realm, recall that this time we have to protect superpositions. This means that, in addition to shifting our quantum logical state to other states as before, noise can also affect the terms in the superposition itself. Namely, if, say, the superposition is equal — with an amplitude of +1/\sqrt{2} in “0” and +1/\sqrt{2} in “1” — noise can change the relative sign of the superposition and map one of the amplitudes to -1/\sqrt{2}. We didn’t have to worry about such sign errors before, because our classical information would always be the definite state of “0” or “1”. Now, there are two effects of noise to worry about, so our task has become twice as hard!

Not to worry though. In order to protect against both sources of noise, all we need to do is effectively stagger the above constructions. Now we will need to design a logical “0” state which is itself a superposition of different points, with each point separated from all of the points that are superimposed to make the logical “1” state.

Diatomic molecules: For the diatomic molecule example, consider superpositions of all four corners of two antipodal tetrahedra for the two respective logical states.


The logical “0” state for the quantum code is now itself a quantum superposition of orientations of our diatomic molecule corresponding to the four black points on the sphere to the left (the sphere to the right is a top-down view). Similarly, the logical “1” quantum state is a superposition of all orientations corresponding to the white points.

Each orientation (black or white point) present in our logical states rotates under fluctuations in the molecule’s orientation. However, the entire set of orientations for, say, logical “0” — the tetrahedron — rotates rigidly under such rotations. Therefore, the region from which we can successfully recover after rotations is fully determined by the Voronoi tile of any one of the corners of the tetrahedron. (Above, we plot the tile for the point at the north pole.) This cell is clearly smaller than the one for the classical north-south-pole encoding we used before. However, the tetrahedral code now provides some protection against phase errors — the other type of noise that we need to worry about if we are to protect quantum information. This is an example of the trade-off we must make in order to protect against both types of noise; a licensed quantum mechanic has to live with such trade-offs every day.
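To make the Voronoi-tile picture concrete, here is a small numpy sketch (mine, not taken from the paper): the code points are the eight cube corners, and decoding simply returns the logical label of the nearest corner. Any rotation small enough to keep each corner inside its own tile is corrected.

import numpy as np
from scipy.spatial.transform import Rotation as R

# Corners of two antipodal tetrahedra (the 8 corners of a cube, normalized).
black = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
white = -black                       # the antipodal tetrahedron
corners = np.vstack([black, white])  # black corners = logical "0", white = "1"

def decode(orientation):
    """Nearest-corner (Voronoi) decoding: 0 for a black corner, 1 for white."""
    nearest = np.argmax(corners @ orientation)   # largest inner product
    return 0 if nearest < 4 else 1

# A small rigid rotation of the molecule moves all four black corners at once...
small = R.from_rotvec(0.3 * np.array([1.0, 2.0, 0.0]) / np.sqrt(5))
rotated_black = black @ small.as_matrix().T

# ...but each corner stays inside its own Voronoi tile, so every one of them
# still decodes to the logical "0".
print([decode(v) for v in rotated_black])   # [0, 0, 0, 0]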

Oscillators: Another example of a quantum encoding is the GKP encoding in the phase space of the harmonic oscillator. Here, we have at our disposal the entire two-dimensional plane indexing different values of position and momentum. In this case, we can use a checkerboard approach, superimposing all points at the centers of the black squares for the logical “0” state, and similarly all points at the centers of the white squares for the logical “1”. The region depicting correctable momentum and position shifts is then the Voronoi cell of the point at the origin: if a shift takes our central black point to somewhere inside the blue square, we know (most likely) where that point came from! In solid state circles, the blue square is none other than the primitive or unit cell of the lattice consisting of points making up both of the logical states.
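As a cartoon of the decoding step (geometry only, ignoring all of the genuinely quantum bookkeeping, and with an arbitrary lattice spacing), correcting a small shift amounts to rounding the shifted phase-space point back to the nearest lattice point:

import numpy as np

SPACING = 1.0   # arbitrary lattice spacing for this cartoon

def correct_shift(point):
    """Snap a shifted phase-space point back to the nearest lattice point.

    The set of points closer to a given lattice point than to any other is
    that point's Voronoi (unit) cell, so any shift smaller than half the
    spacing in both position and momentum is corrected perfectly.
    """
    return SPACING * np.round(np.asarray(point) / SPACING)

noisy = np.array([0.3, -0.4])   # a small (position, momentum) shift from the origin
print(correct_shift(noisy))     # [ 0. -0.]: back to the central black point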

Asymmetric molecules (a.k.a. rigid rotors): Now let’s briefly return to molecules. Above, we considered diatomic molecules that had a symmetry axis, i.e., that were left unchanged under rotations about the axis that connects the two atoms. There are of course more general molecules out there, including ones that are completely asymmetric under any possible (proper) 3D rotation (see figure below for an example).


BONUS: There is a subtle mistake relating to the geometry of the rotation group in the labeling of this figure. Let me know if you can find it in the comments!

All of the orientations of the asymmetric molecule, and more generally a rigid body, can no longer be parameterized by the sphere. They can be parameterized by the 3D rotation group \mathsf{SO}(3): each orientation of an asymmetric molecule is labeled by the 3D rotation necessary to obtain said orientation from a reference state. Such rotations, and in turn the orientations themselves, are parameterized by an axis \mathbf{v} (around which to rotate) and an angle \omega (by which one rotates). The rotation group \mathsf{SO}(3) luckily can still be viewed by humans on a sheet of paper. Namely, \mathsf{SO}(3) can be thought of as a ball of radius \pi with opposite points identified. The direction of each vector \omega\mathbf{v} lying inside the ball corresponds to the axis of rotation, while the length corresponds to the angle. This may take some time to digest, but it’s not crucial to the story.

So far we’ve looked at codes defined on cubes of bits, spheres, and phase-space lattices. Turns out that even \mathsf{SO}(3) can house similar encodings! In other words, \mathsf{SO}(3) can also be cut up into different Voronoi tiles, which in turn can be staggered to create logical “0” and “1” states consisting of different molecular orientations. There are many ways to pick such states, corresponding to various subgroups of \mathsf{SO}(3). Below, we sketch two sets of black/white points, along with the Voronoi tile corresponding to the rotations that are corrected by each encoding.

Voronoi tiles of the black point at the center of the ball representing the 3D rotation group, for two different molecular codes. Each tile, together with the Voronoi cells corresponding to the other points, makes up the entire ball. 3D printing all of these tiles would make for cool puzzles!
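For the curious, here is what Voronoi decoding on \mathsf{SO}(3) looks like in code. This is a rough sketch of mine with just two arbitrarily chosen code orientations (the codes above use larger subgroups of \mathsf{SO}(3)): the distance between two orientations is the angle of the relative rotation, and decoding returns the nearest code orientation.

import numpy as np
from scipy.spatial.transform import Rotation as R

def angle_between(r1, r2):
    """Distance on SO(3): the angle of the relative rotation r1^{-1} r2."""
    return (r1.inv() * r2).magnitude()

# A toy pair of code orientations: logical "0" is the reference orientation,
# logical "1" is a half-turn about the z-axis.
code = {0: R.from_rotvec([0.0, 0.0, 0.0]), 1: R.from_rotvec([0.0, 0.0, np.pi])}

def decode(orientation):
    """Voronoi decoding: the logical label of the nearest code rotation."""
    return min(code, key=lambda bit: angle_between(code[bit], orientation))

# A small wobble away from the reference orientation...
wobble = R.from_rotvec(0.4 * np.array([1.0, 1.0, 0.0]) / np.sqrt(2))
print(decode(wobble))   # 0: the wobble stays inside the "0" Voronoi tile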

In closing…

Achieving supremacy was a big first step towards making quantum computing a practical and universal tool. However, the largest obstacles still await, namely handling superposition-poisoning noise coming from the ever-curious environment. As quantum technologies advance, other possible routes for error correction include encoding qubits in harmonic oscillators and molecules, alongside the “traditional” approach of using arrays of physical qubits. Oscillator and molecular qubits possess their own mechanisms for error correction, and could prove useful (granted that the large high-energy space required for the procedures to work can be accessed and controlled). Even though molecular qubits are not yet mature enough to be used in quantum computers, we have at least outlined a blueprint for how some of the required pieces can be built. We are by no means done, however: besides the engineering barrier, we still need to work out how to run robust computations on these exotic spaces.

Author’s note: I’d like to acknowledge Jose Gonzalez for helping me immensely with the writing of this post, as well as for drawing the comic panels in the previous post. The figures above were made possible by Mathematica 12.

On the Coattails of Quantum Supremacy

Most readers have by now heard that Google has “achieved” quantum “supremacy”. Notice the only word not in quotes is “quantum”, because unlike previous proposals that have also made some waves, quantumness is mostly not under review here. (Well, neither really are the other two words, but that story has already been covered quite eloquently by John, Scott, and Toby.) The Google team has managed to engineer a device that, although noisy, can do the right thing a large-enough fraction of the time for people to be able to “quantify its quantumness”.

However, the Google device, while less so than previous incarnations, is still noisy. Future devices like it will continue to be noisy. Noise is what makes quantum computers so darn difficult to build; it is what destroys the fragile quantum superpositions that we are trying so hard to protect (remember, unlike a classical computer, we are not protecting things we actually observe, but their superposition).

Protecting quantum information is like taking your home-schooled date (who has lived their entire life in a bunker) to the prom for the first time. It is a fun and necessary part of a healthy relationship to spend time in public, but the price you pay is the possibility that your date will hit it off with someone else. This will leave you abandoned, dancing alone to Taylor Swift’s “You Belong With Me” while crying into your (spiked?) punch.

When the environment corrupts your quantum date.

The high school sweetheart/would-be dance partner in the above provocative example is the quantum superposition — the resource we need for a working quantum computer. You want it all to yourself, but your adversary — the environment — wants it too. No matter how much you try to protect it, you’ll have to observe it eventually (after all, you want to know the answer to your computation). And when you do (take your date out onto the crowded dance floor), you run the risk of the environment collapsing the information before you do, leaving you with nothing.

Protecting quantum information is also like (modern!) medicine. The fussy patient is the quantum information, stored in delicate superposition, while quantumists are the doctors aiming to prevent the patient from getting sick (or “corrupted”). If our patient incurs say “quasiparticle poisoning”, we first diagnose the patient’s syndromes, and, based on this diagnosis, apply procedures like “lattice surgery” and “state injection” to help our patient successfully recover.

The medical analogy to QEC, noticed first by Daniel Litinski. All terms are actually used in papers. Cartoon by Jose Gonzalez.

Error correction with qubits

Error correction sounds hard, and it should! Not to fear: plenty of very smart people have thought hard about this problem, and have come up with a plan — to redundantly encode the quantum superposition in a way that allows protection from errors caused by noise. Such quantum error-correction is an expansion of the techniques we currently use to protect classical bits in your phone and computer, but now the aim is to protect, not the definitive bit states 0 or 1, but their quantum superpositions. Things are even harder now, as the protection machinery has to do its magic without disturbing the superposition itself (after all, we want our quantum calculation to run to its conclusion and hack your bank).

For example, consider a qubit — the fundamental quantum unit represented by two shelves (which, e.g., could be the ground and excited states of an atom, the absence or presence of a photon in a box, or the zeroth and first quanta of a really cold LC circuit). This qubit can be in any quantum superposition of the two shelves, described by 2 probability amplitudes, one corresponding to each shelf. Observing this qubit will collapse its state onto either one of the shelves, changing the values of the 2 amplitudes. Since the resource we use for our computation is precisely this superposition, we definitely do not want to observe this qubit during our computation. However, we are not the only ones looking: the environment (other people at the prom: the trapping potential of our atom, the jiggling atoms of our metal box, nearby circuit elements) is also observing this system, thereby potentially manipulating the stored quantum state without our knowledge and ruining our computation.

Now consider 50 such qubits. Such a space allows for a superposition with 2^{50} different amplitudes (instead of just 2^1 for the case of a single qubit). We are once again plagued by noise coming from the environment. But what if we now, less ambitiously, want to store only one qubit’s worth of information in this 50-qubit system? Now there is room to play with! A clever choice of how to do this (a.k.a. the encoding) helps protect from the bad environment. 

The entire prospect of building a bona-fide quantum computer rests on this extra overhead or quantum redundancy of using a larger system to encode a smaller one. It sounds daunting at first: if we need 50 physical qubits for each robust logical qubit, then we’d need “I-love-you-3000” physical qubits for 60 logical ones? Yes, this is a fact we all have to live with. But granted we can scale up our devices to that many qubits, there is no fundamental obstacle that prevents us from then using error correction to make next-level computers.

To what extent do we need to protect our quantum superposition from the environment? It would be too ambitious to protect it from a meteor shower. Or a power outage (although that would be quite useful here in California). So what then can we protect against?

Our working answer is local noise — noise that affects only a few qubits that are located near each other in the device. We can never be truly certain if this type of noise is all that our quantum computers will encounter. However, our belief that this is the noise we should focus on is grounded in solid physical principles — that nature respects locality, that affecting things far away from you is harder than making an impact nearby. (So far Google has not reported otherwise, although much more work needs to be done to verify this intuition.)

The harmonic oscillator

In what other ways can we embed our two-shelf qubit into a larger space? Instead of scaling up using many physical qubits, we can utilize a fact that we have so far swept under the rug: in any physical system, our two shelves are already part of an entire bookcase! Atoms have more than one excited state, there can be more than one photon in a box, and there can be more than one quantum in a cold LC circuit. Why don’t we use some of that higher-energy space for our redundant encoding?

The noise in our bookcase will certainly be different, since the structure of the space, and therefore the notion of locality, is different. How to cope with this? The good news is that such a space — the space of the harmonic oscillator — also has a(t least one) natural notion of locality!

Whatever the incarnation, the oscillator has associated with it a position and momentum (different jargon for these quantities may be used, depending on the context, but you can just think of a child on a swing, just quantized). Anyone who knows the joke about Heisenberg getting pulled over, will know that these two quantities cannot be set simultaneously.

Cartoon by Jose Gonzalez.

Nevertheless, local errors can be thought of as small shifts in position or momentum, while nonlocal errors are ones that suddenly shift our bewildered swinging quantized child from one side of the swing to the other.

Armed with a local noise model, we can extend our know-how from multi-qubit land to the oscillator. One of the first such oscillator codes was developed by Gottesman, Kitaev, and Preskill (GKP). Proposed in 2001, GKP encodings posed a difficult engineering challenge: some believed that GKP states could never be realized, that they “did not exist”. In the past few years, however, GKP states have been realized nearly simultaneously in two experimental platforms. (Food for thought for the non-believers!)

Parallel to GKP codes, another promising oscillator encoding using cat states is also being developed. This encoding has historically been far easier to create experimentally. It is so far the only experimental procedure achieving the break-even point, at which the actively protected logical information has the same lifetime as the system’s best unprotected degree of freedom.

Can we mix and match all of these different systems? Why yes! While Google is currently trying to build the surface code out of qubits, using oscillators (instead of qubits) for the surface code and encoding said oscillators either in GKP (see related IBM post) [1,2,3] or cat [4,5] codes is something people are seriously considering. There is even more overhead, but the extra information one gets from the correction procedure might make for a more fault-tolerant machine. With all of these different options being explored, it’s an exciting time to be into quantum!

Molecules?

It turns out there are still other systems we can consider, although because they are sufficiently more “out there” at the moment, I should first say “bear with me!” as I explain. Forget about atoms, photons in a box, and really cold LC circuits. Instead, consider a rigid 3-dimensional object whose center of mass has been pinned in such a way that the object can rotate any way it wants. Now, “quantize” it! In other words, consider the possibility of having quantum superpositions of different orientations of this object. Just like superpositions of a dead and alive cat, of a photon and no photon, the object can be in quantum superposition of oriented up, sideways, and down, for example. Superpositions of all possible orientations then make up our new configuration space (read: playground), and we are lucky that it too inherits many of the properties we know and love from its multi-qubit and oscillator cousins.

Examples of rigid bodies include airplanes (which can roll, pitch and yaw, even while “fixed” on a particular trajectory vector) and robot arms (which can rotate about multiple joints). Given that we’re not quantizing those (yet?), what rigid body should we have in mind as a serious candidate? Well, in parallel to the impressive engineering successes of the multi-qubit and oscillator paradigms, physicists and chemists have made substantial progress in trapping and cooling molecules. If a trapped molecule is cold enough, its vibrational and electronic states can be neglected, and its rotational states form exactly the rigid body we are interested in. Such rotational states, as far as we can tell, are not in the realm of Avengers-style science fiction.

Superpositions of molecular orientations don’t violate the Deutsch proposition.

The idea to use molecules for quantum computing dates all the way back to a 2001 paper by Dave DeMille, but in a recent paper by Jacob Covey, John Preskill, and myself, we propose a framework of how to utilize the large space of molecular orientations to protect against (you guessed it!) a type of local noise. In the second part of the story, called “Quantum Error Correction with Molecules“, I will cover a particular concept that is not only useful for a proper error-correcting code (classical and quantum), but also one that is quite fun to try and understand. The concept is based on a certain kind of tiling, called Voronoi tiles or Thiessen polygons, which can be used to tile anything from your bathroom floor to the space of molecular orientations. Stay tuned!