
Quantum computing in the second quantum century

On December 10, I gave a keynote address at the Q2B 2025 Conference in Silicon Valley. This is a transcript of my remarks. The slides I presented are here.

The first century

We are nearing the end of the International Year of Quantum Science and Technology, so designated to commemorate the 100th anniversary of the discovery of quantum mechanics in 1925. The story goes that 23-year-old Werner Heisenberg, seeking relief from severe hay fever, sailed to the remote North Sea Island of Helgoland, where a crucial insight led to his first, and notoriously obscure, paper describing the framework of quantum mechanics.

In the years following, that framework was clarified and extended by Heisenberg and others. Notable among them was Paul Dirac, who emphasized that we have a theory of almost everything that matters in everyday life. It’s the Schrödinger equation, which captures the quantum behavior of many electrons interacting electromagnetically with one another and with atomic nuclei. That describes everything in chemistry and materials science and all that is built on those foundations. But, as Dirac lamented, in general the equation is too complicated to solve for more than a few electrons.

Somehow, over 50 years passed before Richard Feynman proposed that if we want a machine to help us solve quantum problems, it should be a quantum machine, not a classical machine. The quest for such a machine, he observed, is “a wonderful problem because it doesn’t look so easy,” a statement that still rings true.

I was drawn into that quest about 30 years ago. It was an exciting time. Efficient quantum algorithms for the factoring and discrete log problems were discovered, followed rapidly by the first quantum error-correcting codes and the foundations of fault-tolerant quantum computing. By late 1996, it was firmly established that a noisy quantum computer could simulate an ideal quantum computer efficiently if the noise is not too strong or strongly correlated. Many of us were then convinced that powerful fault-tolerant quantum computers could eventually be built and operated.

Three decades later, as we enter the second century of quantum mechanics, how far have we come? Today’s quantum devices can perform some tasks beyond the reach of the most powerful existing conventional supercomputers. Error correction had for decades been a playground for theorists; now informative demonstrations are achievable on quantum platforms. And the world is investing heavily in advancing the technology further.

Current NISQ machines can perform quantum computations with thousands of two-qubit gates, enabling early explorations of highly entangled quantum matter, but still with limited commercial value. To unlock a wide variety of scientific and commercial applications, we need machines capable of performing billions or trillions of two-qubit gates. Quantum error correction is the way to get there.

I’ll highlight some notable developments over the past year—among many others I won’t have time to discuss. (1) We’re seeing intriguing quantum simulations of quantum dynamics in regimes that are arguably beyond the reach of classical simulations. (2) Atomic processors, both ion traps and neutral atoms in optical tweezers, are advancing impressively. (3) We’re acquiring a deeper appreciation of the advantages of nonlocal connectivity in fault-tolerant protocols. (4) And resource estimates for cryptanalytically relevant quantum algorithms have dropped sharply.

Quantum machines for science

A few years ago, I was not particularly excited about running applications on the quantum platforms that were then available; now I’m more interested. We have superconducting devices from IBM and Google with over 100 qubits and two-qubit error rates approaching 10^{-3}. The Quantinuum ion trap device has even better fidelity as well as higher connectivity. Neutral-atom processors have many qubits; they lag behind now in fidelity, but are improving.

Users face tradeoffs: The high connectivity and fidelity of ion traps are advantages, but their clock speeds are orders of magnitude slower than those of superconducting processors. That limits the number of times you can run a given circuit, and therefore the attainable statistical accuracy when estimating expectation values of observables.
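
To make that tradeoff concrete, here is a minimal sketch of the shot-noise arithmetic, using assumed, illustrative times per circuit repetition (not vendor specifications): for a ±1-valued observable, the standard error of the estimated expectation value scales as one over the square root of the number of shots.

```python
import math

def std_error(shots: int) -> float:
    """Worst-case shot-noise standard error when estimating the expectation
    value of a +/-1-valued observable from independent circuit repetitions."""
    return 1.0 / math.sqrt(shots)

# Assumed, illustrative wall-clock times per circuit repetition.
wall_clock_seconds = 3600  # one hour of data taking per circuit
for name, seconds_per_shot in [("superconducting (~1 ms/shot)", 1e-3),
                               ("ion trap (~1 s/shot)", 1.0)]:
    shots = int(wall_clock_seconds / seconds_per_shot)
    print(f"{name}: {shots} shots, standard error ~ {std_error(shots):.4f}")
```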

Verifiable quantum advantage

Much attention has been paid to sampling from the output of random quantum circuits, because this task is provably hard classically under reasonable assumptions. The trouble is that, in the high-complexity regime where a quantum computer can reach far beyond what classical computers can do, the accuracy of the quantum computation cannot be checked efficiently. Therefore, attention is now shifting toward verifiable quantum advantage — tasks where the answer can be checked. If we solved a factoring or discrete log problem, we could easily check the quantum computer’s output with a classical computation, but we’re not yet able to run these quantum algorithms in the classically hard regime. We might settle instead for quantum verification, meaning that we check the result by comparing two quantum computations and verifying the consistency of the results.

A type of classical verification of a quantum circuit was demonstrated recently by BlueQubit on a Quantinuum processor. In this scheme, a designer builds a family of so-called “peaked” quantum circuits such that, for each such circuit and for a specific input, one output string occurs with unusually high probability. An agent with a quantum computer who knows the circuit and the right input can easily identify the preferred output string by running the circuit a few times. But the quantum circuits are cleverly designed to hide the peaked output from a classical agent — one may argue heuristically that the classical agent, who has a description of the circuit and the right input, will find it hard to predict the preferred output. Thus quantum agents, but not classical agents, can convince the circuit designer that they have reliable quantum computers. This observation provides a convenient way to benchmark quantum computers that operate in the classically hard regime.
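
From the quantum agent’s side, the verification step amounts to sampling the circuit a modest number of times and reporting the modal output string. Here is a minimal sketch; the 4-qubit circuit, its peak weight, and the bitstrings below are hypothetical placeholders, not data from the BlueQubit demonstration.

```python
from collections import Counter

def claimed_peak(samples: list[str]) -> tuple[str, float]:
    """Return the most frequent measured bitstring and its empirical frequency;
    a classical agent lacking the hidden structure should struggle to guess it."""
    counts = Counter(samples)
    bitstring, count = counts.most_common(1)[0]
    return bitstring, count / len(samples)

# Hypothetical 4-qubit example: the designed peak "1011" carries ~30% of the
# probability mass, far above the 1/16 weight of a typical output string.
samples = ["1011"] * 30 + ["0000", "0110", "1111", "0101"] * 17 + ["0011", "1100"]
peak, freq = claimed_peak(samples)
print(f"claimed peak: {peak} (observed frequency {freq:.2f})")
```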

The notion of quantum verification was explored by the Google team using Willow. One can execute a quantum circuit acting on a specified input, and then measure a specified observable in the output. By repeating the procedure sufficiently many times, one obtains an accurate estimate of the expectation value of that output observable. This value can be checked by any other sufficiently capable quantum computer that runs the same circuit. If the circuit is strategically chosen, then the output value may be very sensitive to many-qubit interference phenomena, in which case one may argue heuristically that accurate estimation of that output observable is a hard task for classical computers. These experiments, too, provide a tool for validating quantum processors in the classically hard regime. The Google team even suggests that such experiments may have practical utility for inferring molecular structure from nuclear magnetic resonance data.

Correlated fermions in two dimensions

Quantum simulations of fermionic systems are especially compelling, since electronic structure underlies chemistry and materials science. These systems can be hard to simulate in more than one dimension, particularly in parameter regimes where fermions are strongly correlated, or in other words profoundly entangled. The two-dimensional Fermi-Hubbard model is a simplified caricature of two-dimensional materials that exhibit high-temperature superconductivity and hence has been much studied in recent decades. Large-scale tensor-network simulations are reasonably successful at capturing static properties of this model, but the dynamical properties are more elusive.
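
For reference, the model being simulated is the standard 2D Fermi-Hubbard Hamiltonian, with hopping amplitude $t$ between nearest-neighbor sites and on-site repulsion $U$:

\[
H \;=\; -t \sum_{\langle i,j\rangle,\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right) \;+\; U \sum_{i} n_{i\uparrow} n_{i\downarrow},
\]

where $c^{\dagger}_{i\sigma}$ creates a fermion of spin $\sigma$ on site $i$ and $n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma}$. The strongly correlated regime sets in when $U$ is comparable to or larger than the bandwidth.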

Dynamics in the Fermi-Hubbard model has been simulated recently on both Quantinuum (here and here) and Google processors. Only a 6 x 6 lattice of electrons was simulated, but this is already well beyond the scope of exact classical simulation. Comparing (error-mitigated) quantum circuits with over 4000 two-qubit gates to heuristic classical tensor-network and Majorana path methods, discrepancies were noted, and the Phasecraft team argues that the quantum simulation results are more trustworthy. The Harvard group also simulated models of fermionic dynamics, but was limited to relatively low circuit depths due to atom loss. It’s encouraging that today’s quantum processors have reached this interesting two-dimensional strongly correlated regime, and with improved gate fidelity and noise mitigation we can go somewhat further, but expanding system size substantially in digital quantum simulation will require moving toward fault-tolerant implementations. We should also note that there are analog Fermi-Hubbard simulators with thousands of lattice sites, but digital simulators provide greater flexibility in the initial states we can prepare, the observables we can access, and the Hamiltonians we can reach.

When it comes to many-particle quantum simulation, a nagging question is: “Will AI eat quantum’s lunch?” There is surging interest in using classical artificial intelligence to solve quantum problems, and that seems promising. How will AI impact our quest for quantum advantage in this problem space? This question is part of a broader issue: classical methods for quantum chemistry and materials have been improving rapidly, largely because of better algorithms, not just greater processing power. But for now, classical AI applied to strongly correlated matter is hampered by a paucity of training data. Data from quantum experiments and simulations will likely enhance the power of classical AI to predict properties of new molecules and materials. The practical impact of that predictive power is hard to foresee clearly.

The need for fundamental research

Today is December 10th, the anniversary of Alfred Nobel’s death. The Nobel Prize award ceremony in Stockholm concluded about an hour ago, and the Laureates are about to sit down for a well-deserved sumptuous banquet. That’s a fitting coda to this International Year of Quantum. It’s useful to be reminded that the foundations for today’s superconducting quantum processors were established by fundamental research 40 years ago into macroscopic quantum phenomena. No doubt fundamental curiosity-driven quantum research will continue to uncover unforeseen technological opportunities in the future, just as it has in the past.

I have emphasized superconducting, ion-trap, and neutral atom processors because those are most advanced today, but it’s vital to continue to pursue alternatives that could suddenly leap forward, and to be open to new hardware modalities that are not top-of-mind at present. It is striking that programmable, gate-based quantum circuits in neutral-atom optical-tweezer arrays were first demonstrated only a few years ago, yet that platform now appears especially promising for advancing fault-tolerant quantum computing. Policy makers should take note!

The joy of nonlocal connectivity

As the fault-tolerant era dawns, we increasingly recognize the potential advantages of the nonlocal connectivity resulting from atomic movement in ion traps and tweezer arrays, compared to geometrically local two-dimensional processing in solid-state devices. Over the past few years, many contributions from both industry and academia have clarified how this connectivity can reduce the overhead of fault-tolerant protocols.

Even when using the standard surface code, the ability to implement two-qubit logical gates transversally—rather than through lattice surgery—significantly reduces the number of syndrome-measurement rounds needed for reliable decoding, thereby lowering the time overhead of fault tolerance. Moreover, the global control and flexible qubit layout in tweezer arrays increase the parallelism available to logical circuits.

Nonlocal connectivity also enables the use of quantum low-density parity-check (qLDPC) codes with higher encoding rates, reducing the number of physical qubits needed per logical qubit for a target logical error rate. These codes now have acceptably high accuracy thresholds, practical decoders, and—thanks to rapid theoretical progress this year—emerging constructions for implementing universal logical gate sets. (See for example here, here, here, here.)
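
As a rough illustration of why higher encoding rates matter, compare physical qubits per logical qubit for a rotated surface code patch with those for an [[n, k, d]] qLDPC block such as a [[144, 12, 12]] bivariate bicycle code. The accounting below is deliberately simplified and ignores routing overhead and, for the qLDPC case, the syndrome-measurement ancillas.

```python
def surface_code_qubits_per_logical(d: int) -> int:
    """Rotated distance-d surface code patch: d*d data qubits
    plus d*d - 1 measurement ancillas, all serving one logical qubit."""
    return 2 * d * d - 1

def qldpc_data_qubits_per_logical(n: int, k: int) -> float:
    """An [[n, k, d]] qLDPC block encodes k logical qubits in n data qubits
    (syndrome ancillas, roughly another factor of 2, are omitted here)."""
    return n / k

d = 12
print("surface code, d = 12 :", surface_code_qubits_per_logical(d))      # 287
print("[[144,12,12]] qLDPC  :", qldpc_data_qubits_per_logical(144, 12))  # 12.0
```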

A serious drawback of tweezer arrays is their comparatively slow clock speed, limited by the timescales for atom transport and qubit readout. A millisecond-scale syndrome-measurement cycle is a major disadvantage relative to microsecond-scale cycles in some solid-state platforms. Nevertheless, the reductions in logical-gate overhead afforded by atomic movement can partially compensate for this limitation, and neutral-atom arrays with thousands of physical qubits already exist.

To realize the full potential of neutral-atom processors, further improvements are needed in gate fidelity and continuous atom loading to maintain large arrays during deep circuits. Encouragingly, active efforts on both fronts are making steady progress.

Approaching cryptanalytic relevance

Another noteworthy development this year was a significant improvement in the physical qubit count required to run a cryptanalytically relevant quantum algorithm, reduced by Gidney to less than 1 million physical qubits from the 20 million Gidney and Ekerå had estimated earlier. This applies under standard assumptions: a two-qubit error rate of 10^{-3} and 2D geometrically local processing. The improvement was achieved using three main tricks. One was using approximate residue arithmetic to reduce the number of logical qubits. (This also suppresses the success probability and therefore lengthens the time to solution by a factor of a few.) Another was using a more efficient scheme to reduce the number of physical qubits for each logical qubit in cold storage. And the third was a recently formulated scheme for reducing the spacetime cost of non-Clifford gates. Further cost reductions seem possible using advanced fault-tolerant constructions, highlighting the urgency of accelerating migration from vulnerable cryptosystems to post-quantum cryptography.
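
This is not Gidney’s actual accounting, but a back-of-envelope sketch of how physical qubit counts follow from those standard assumptions, using the usual heuristic surface-code scaling $p_L \approx A\,(p/p_{\mathrm{th}})^{(d+1)/2}$. The threshold, prefactor, target logical error rate, and logical qubit count below are all assumed for illustration.

```python
def logical_error_per_round(p_phys: float, d: int,
                            p_th: float = 1e-2, A: float = 0.1) -> float:
    """Heuristic surface-code scaling p_L ~ A * (p/p_th)**((d+1)/2).
    Threshold p_th and prefactor A are assumed, illustrative values."""
    return A * (p_phys / p_th) ** ((d + 1) // 2)

def total_physical_qubits(n_logical: int, d: int) -> int:
    """Roughly 2*d*d physical qubits per surface-code logical qubit."""
    return n_logical * 2 * d * d

p_phys, target = 1e-3, 1e-10   # assumed per-round target logical error rate
d = 3
while logical_error_per_round(p_phys, d) > target:
    d += 2
n_logical = 1500               # assumed, illustrative logical-qubit count
print(f"distance {d}, ~{total_physical_qubits(n_logical, d):,} physical qubits")
```

With these assumed numbers the sketch lands below one million physical qubits, which is roughly the scale of the new estimate.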

Looking forward

Over the next 5 years, we anticipate dramatic progress toward scalable fault-tolerant quantum computing, and scientific insights enabled by programmable quantum devices arriving at an accelerated pace. Looking further ahead, what might the future hold? I was intrigued by a 1945 letter from John von Neumann concerning the potential applications of fast electronic computers. After delineating some possible applications, von Neumann added: “Uses which are not, or not easily, predictable now, are likely to be the most important ones … they will … constitute the most surprising extension of our present sphere of action.” Not even a genius like von Neumann could foresee the digital revolution that lay ahead. Predicting the future course of quantum technology is even more hopeless because quantum information processing entails an even larger step beyond past experience.

As we contemplate the long-term trajectory of quantum science and technology, we are hampered by our limited imaginations. But one way to loosely characterize the difference between the past and the future of quantum science is this: For the first hundred years of quantum mechanics, we achieved great success at understanding the behavior of weakly correlated many-particle systems, leading for example to transformative semiconductor and laser technologies. The grand challenge and opportunity we face in the second quantum century is acquiring comparable insight into the complex behavior of highly entangled states of many particles, behavior well beyond the scope of current theory or computation. The wonders we encounter in the second century of quantum mechanics, and their implications for human civilization, may far surpass those of the first century. So we should gratefully acknowledge the quantum pioneers of the past century, and wish good fortune to the quantum explorers of the future.

Credit: Iseult-Line Delfosse LLC, QC Ware

John Preskill receives 2025 Quantum Leadership Award

The 2025 Quantum Leadership Awards were announced at the Quantum World Congress on 18 September 2025. Upon receiving the Academic Pioneer in Quantum Award, John Preskill made these remarks.

I’m enormously excited and honored to receive this Quantum Leadership Award, and especially thrilled to receive it during this, the International Year of Quantum. The 100th anniversary of the discovery of quantum mechanics is a cause for celebration because that theory provides our deepest and most accurate description of how the universe works, and because that deeper understanding has incalculable value to humanity. What we have learned about electrons, photons, atoms, and molecules in the past century has already transformed our lives in many ways, but what lies ahead, as we learn to build and precisely control more and more complex quantum systems, will be even more astonishing.

As a professor at a great university, I have been lucky in many ways. Lucky to have the freedom to pursue the scientific challenges that I find most compelling and promising. Lucky to be surrounded by remarkable, supportive colleagues. Lucky to have had many collaborators who enabled me to do things I could never have done on my own. And lucky to have the opportunity to teach and mentor young scientists who have a passion for advancing the frontiers of science. What I’m most proud of is the quantum community we’ve built at Caltech, and the many dozens of young people who imbibed the interdisciplinary spirit of Caltech and then moved onward to become leaders in quantum science at universities, labs, and companies all over the world.

Right now is a thrilling time for quantum science and technology, a time of rapid progress, but these are still the early days in a nascent second quantum revolution. In quantum computing, we face two fundamental questions: How can we scale up to quantum machines that can solve very hard computational problems? And once we do so, what will be the most important applications for science and for industry? We don’t have fully satisfying answers yet to either question and we won’t find the answers all at once – they will unfold gradually as our knowledge and technology advance. But 10 years from now we’ll have much better answers than we have today.

Companies are now pursuing ambitious plans to build the world’s most powerful quantum computers.  Let’s not forget how we got to this point. It was by allowing some of the world’s most brilliant people to follow their curiosity and dream about what the future could bring. To fulfill the potential of quantum technology, we need that spirit of bold adventure now more than ever before. This award honors one scientist, and I’m profoundly grateful for this recognition. But more importantly it serves as a reminder of the vital ongoing need to support the fundamental research that will build foundations for the science and technology of the future. Thank you very much!

The first and second centuries of quantum mechanics

At this week’s American Physical Society Global Physics Summit in Anaheim, California, John Preskill spoke at an event celebrating 100 years of groundbreaking advances in quantum mechanics. Here are his remarks.

Welcome, everyone, to this celebration of 100 years of quantum mechanics hosted by the Physical Review Journals. I’m John Preskill and I’m honored by this opportunity to speak today. I was asked by our hosts to express some thoughts appropriate to this occasion and to feel free to share my own personal journey as a physicist. I’ll embrace that charge, including the second part of it, perhaps even more than they intended. But over the next 20 minutes I hope to distill from my own experience some lessons of broader interest.

I began graduate study in 1975, the midpoint of the first 100 years of quantum mechanics: 50 years ago now, and 50 years after the 1925 discovery we celebrate here. So I’ll seize this chance to look back at where quantum physics stood 50 years ago, how far we’ve come since then, and what we can anticipate in the years ahead.

As an undergraduate at Princeton, I had many memorable teachers; I’ll mention just one: John Wheeler, who taught a full-year course for sophomores that purported to cover all of physics. Wheeler, having worked with Niels Bohr on nuclear fission, seemed implausibly old, though he was actually 61. It was an idiosyncratic course, particularly because Wheeler did not refrain from sharing with the class his current research obsessions. Black holes were a topic he shared with particular relish, including the controversy at the time concerning whether evidence for black holes had been seen by astronomers. Especially notably, when covering the second law of thermodynamics, he challenged us to ponder what would happen to entropy lost behind a black hole horizon, something that had been addressed by Wheeler’s graduate student Jacob Bekenstein, who had finished his PhD that very year. Bekenstein’s remarkable conclusion that black holes have an intrinsic entropy proportional to the event horizon area delighted the class, and I’ve had many occasions to revisit that insight in the years since then. The lesson being that we should not underestimate the potential impact of sharing our research ideas with undergraduate students.

Stephen Hawking made that connection between entropy and area precise the very next year when he discovered that black holes radiate; his resulting formula for black hole entropy, a beautiful synthesis of relativity, quantum theory, and thermodynamics, ranks as one of the shining achievements in the first 100 years of quantum mechanics. And it raised a deep puzzle, pointed out by Hawking himself, with which we have wrestled ever since, still without complete success — what happens to information that disappears inside black holes?

Hawking’s puzzle ignited a titanic struggle between cherished principles. Quantum mechanics tells us that as quantum systems evolve, information encoded in a system can get scrambled into an unrecognizable form, but cannot be irreversibly destroyed. Relativistic causality tells us that information that falls into a black hole, which then evaporates, cannot possibly escape and therefore must be destroyed. Who wins – quantum theory or causality? A widely held view is that quantum mechanics is the victor, that causality should be discarded as a fundamental principle. This calls into question the whole notion of spacetime — is it fundamental, or an approximate property that emerges from a deeper description of how nature works? If emergent, how does it emerge and from what? Fully addressing that challenge we leave to the physicists of the next quantum century.

I made it to graduate school at Harvard and the second half century of quantum mechanics ensued. My generation came along just a little too late to take part in erecting the standard model of particle physics, but I was drawn to particle physics by that intoxicating experimental and theoretical success. And many new ideas were swirling around in the mid and late 70s of which I’ll mention only two. For one, appreciation was growing for the remarkable power of topology in quantum field theory and condensed matter, for example the theory of topological solitons. While theoretical physics and mathematics had diverged during the first 50 years of quantum mechanics, they have frequently crossed paths in the last 50 years, and topology continues to bring both insight and joy to physicists. The other compelling idea was to seek insight into fundamental physics at very short distances by searching for relics from the very early history of the universe. My first publication resulted from contemplating a question that connected topology and cosmology: Would magnetic monopoles be copiously produced in the early universe? To check whether my ideas held water, I consulted not a particle physicist or a cosmologist, but rather a condensed matter physicist (Bert Halperin) who provided helpful advice. The lesson being that scientific opportunities often emerge where different subfields intersect, a realization that has helped to guide my own research over the following decades.

Looking back at my 50 years as a working physicist, what discoveries can the quantumists point to with particular pride and delight?

I was an undergraduate when Phil Anderson proclaimed that More is Different, but as an arrogant would-be particle theorist at the time I did not appreciate how different more can be. In the past 50 years of quantum mechanics no example of emergence was more stunning than the fractional quantum Hall effect. We all know full well that electrons are indivisible particles. So how can it be that in a strongly interacting two-dimensional electron gas an electron can split into quasiparticles, each carrying a fraction of its charge? The lesson being: in a strongly correlated quantum world, miracles can happen. What other extraordinary quantum phases of matter await discovery in the next quantum century?

Another thing I did not adequately appreciate in my student days was atomic physics. Imagine how shocked those who elucidated atomic structure in the 1920s would be by the atomic physics of today. To them, a quantum measurement was an action performed on a large ensemble of similarly prepared systems. Now we routinely grab ahold of a single atom, move it, excite it, read it out, and induce pairs of atoms to interact in precisely controlled ways. When interest in quantum computing took off in the mid-90s, it was ion-trap clock technology that enabled the first quantum processors. Strong coupling between single photons and single atoms in optical and microwave cavities led to circuit quantum electrodynamics, the basis for today’s superconducting quantum computers. The lesson being that advancing our tools often leads to new capabilities we hadn’t anticipated. Now clocks are so accurate that we can detect the gravitational redshift when an atom moves up or down by a millimeter in the earth’s gravitational field. Where will the clocks of the second quantum century take us?
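
As a quick check of that clock claim, the fractional gravitational redshift for a height change $\Delta h$ is

\[
\frac{\Delta\nu}{\nu} \;=\; \frac{g\,\Delta h}{c^{2}}
\;\approx\; \frac{(9.8\ \mathrm{m/s^2})(10^{-3}\ \mathrm{m})}{(3\times 10^{8}\ \mathrm{m/s})^{2}}
\;\approx\; 1\times 10^{-19},
\]

a shift of about one part in $10^{19}$, right at the edge of what today’s best optical clocks can resolve.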

Surely one of the great scientific triumphs of recent decades has been the success of LIGO, the Laser Interferometer Gravitational-Wave Observatory. If you are a gravitational wave scientist now, your phone buzzes so often to announce another black hole merger that it’s become annoying. LIGO would not be possible without advanced laser technology, but aside from that what’s quantum about LIGO? When I came to Caltech in the early 1980s, I learned about a remarkable idea (from Carl Caves) that the sensitivity of an interferometer can be enhanced by a quantum strategy that did not seem at all obvious — injecting squeezed vacuum into the interferometer’s dark port. Now, over 40 years later, LIGO improves its detection rate by using that strategy. The lesson being that theoretical insights can enhance and transform our scientific and technological tools. But sometimes that takes a while.

What else has changed since 50 years ago? Let’s give thanks for the arXiv. When I was a student, few scientists would type their own technical papers. It took skill, training, and patience to operate the IBM typewriters of the era. And to communicate our results, we had no email or World Wide Web. Preprints arrived by snail mail in Manila envelopes, if you were lucky enough to be on the mailing list. The Internet and the arXiv made scientific communication far faster, more convenient, and more democratic, and LaTeX made producing our papers far easier as well. And the success of the arXiv raises vexing questions about the role of journal publication as the next quantum century unfolds.

I made a mid-career shift in research direction, and I’m often asked how that came about. Part of the answer is that, for my generation of particle physicists, the great challenge and opportunity was to clarify the physics beyond the standard model, which we expected to provide a deeper understanding of how nature works. We had great hopes for the new phenomenology that would be unveiled by the Superconducting Super Collider, which was under construction in Texas during the early 90s. The cancellation of that project in 1993 was a great disappointment. The lesson being that sometimes our scientific ambitions are thwarted because the required resources are beyond what society will support. In which case, we need to seek other ways to move forward.

And then the next year, Peter Shor discovered the algorithm for efficiently finding the factors of a large composite integer using a quantum computer. Though computational complexity had not been part of my scientific education, I was awestruck by this discovery. It meant that the difference between hard and easy problems — those we can never hope to solve, and those we can solve with advanced technologies — hinges on our world being quantum mechanical. That excited me because one could anticipate that observing nature through a computational lens would deepen our understanding of fundamental science. I needed to work hard to come up to speed in a field that was new to me — teaching a course helped me a lot.

Ironically, for 4 ½ years in the mid-1980s I sat on the same corridor as Richard Feynman, who had proposed the idea of simulating nature with quantum computers in 1981. And I never talked to Feynman about quantum computing because I had little interest in that topic at the time. But Feynman and I did talk about computation, and in particular we were both very interested in what one could learn about quantum chromodynamics from Euclidean Monte Carlo simulations on conventional computers, which were starting to ramp up in that era. Feynman correctly predicted that it would be a few decades before sufficient computational power would be available to make accurate quantitative predictions about nonperturbative QCD. But it did eventually happen — now lattice QCD is making crucial contributions to the particle physics and nuclear physics programs. The lesson being that as we contemplate quantum computers advancing our understanding of fundamental science, we should keep in mind a time scale of decades.

Where might the next quantum century take us? What will the quantum computers of the future look like, or the classical computers for that matter? Surely the qubits of 100 years from now will be much different and much better than what we have today, and the machine architecture will no doubt be radically different than what we can currently envision. And how will we be using those quantum computers? Will our quantum technology have transformed medicine and neuroscience and our understanding of living matter? Will we be building materials with astonishing properties by assembling matter atom by atom? Will our clocks be accurate enough to detect the stochastic gravitational wave background and so have reached the limit of accuracy beyond which no stable time standard can even be defined? Will quantum networks of telescopes be observing the universe with exquisite precision and what will that reveal? Will we be exploring the high energy frontier with advanced accelerators like muon colliders and what will they teach us? Will we have identified the dark matter and explained the dark energy? Will we have unambiguous evidence of the universe’s inflationary origin? Will we have computed the parameters of the standard model from first principles, or will we have convinced ourselves that’s a hopeless task? Will we have understood the fundamental constituents from which spacetime itself is composed?

There is an elephant in the room. Artificial intelligence is transforming how we do science at a blistering pace. What role will humans play in the advancement of science 100 years from now? Will artificial intelligence have melded with quantum intelligence? Will our instruments gather quantum data Nature provides, transduce it to quantum memories, and process it with quantum computers to discern features of the world that would otherwise have remained deeply hidden?

To a limited degree, in contemplating the future we are guided by the past. Were I asked to list the great ideas about physics to surface over the 50-year span of my career, there are three in particular I would nominate for inclusion on that list. (1) The holographic principle, our best clue about how gravity and quantum physics fit together. (2) Topological quantum order, providing ways to distinguish different phases of quantum matter when particles strongly interact with one another. (3) And quantum error correction, our basis for believing we can precisely control very complex quantum systems, including advanced quantum computers. It’s fascinating that these three ideas are actually quite closely related. The common thread connecting them is that all relate to the behavior of many-particle systems that are highly entangled.

Quantum error correction is the idea that we can protect quantum information from local noise by encoding the information in highly entangled states such that the protected information is inaccessible locally, when we look at just a few particles at a time. Topological quantum order is the idea that different quantum phases of matter can look the same when we observe them locally, but are distinguished by global properties hidden from local probes — in other words such states of matter are quantum memories protected by quantum error correction. The holographic principle is the idea that all the information in a gravitating three-dimensional region of space can be encoded by mapping it to a local quantum field theory on the two-dimensional boundary of the space. And that map is in fact the encoding map of a quantum error-correcting code. These ideas illustrate how as our knowledge advances, different fields of physics are converging on common principles. Will that convergence continue in the second century of quantum mechanics? We’ll see.

As we contemplate the long-term trajectory of quantum science and technology, we are hampered by our limited imaginations. But one way to loosely characterize the difference between the past and the future of quantum science is this: For the first hundred years of quantum mechanics, we achieved great success at understanding the behavior of weakly correlated many-particle systems relevant, for example, to electronic structure, atomic and molecular physics, and quantum optics. The insights gained regarding, for instance, how electrons are transported through semiconductors or how condensates of photons and atoms behave had invaluable scientific and technological impact. The grand challenge and opportunity we face in the second quantum century is acquiring comparable insight into the complex behavior of highly entangled states of many particles, which are well beyond the reach of current theory or computation. This entanglement frontier is vast, inviting, and still largely unexplored. The wonders we encounter in the second century of quantum mechanics, and their implications for human civilization, are bound to far surpass those of the first century. So let us gratefully acknowledge the quantum heroes of the past and present, and wish good fortune to the quantum explorers of the future.

Image credit: Jorge Cham

Beyond NISQ: The Megaquop Machine

On December 11, I gave a keynote address at the Q2B 2024 Conference in Silicon Valley. This is a transcript of my remarks. The slides I presented are here. The video of the talk is here.

NISQ and beyond

I’m honored to be back at Q2B for the 8th year in a row.

The Q2B conference theme is “The Roadmap to Quantum Value,” so I’ll begin by showing a slide from last year’s talk. As best we currently understand, the path to economic impact is the road through fault-tolerant quantum computing. And that poses a daunting challenge for our field and for the quantum industry.

We are in the NISQ era. And NISQ technology already has noteworthy scientific value. But as of now there is no proposed application of NISQ computing with commercial value for which quantum advantage has been demonstrated when compared to the best classical hardware running the best algorithms for solving the same problems. Furthermore, currently there are no persuasive theoretical arguments indicating that commercially viable applications will be found that do not use quantum error-correcting codes and fault-tolerant quantum computing.

NISQ, meaning Noisy Intermediate-Scale Quantum, is a deliberately vague term. By design, it has no precise quantitative meaning, but it is intended to convey an idea: We now have quantum machines such that brute force simulation of what the quantum machine does is well beyond the reach of our most powerful existing conventional computers. But these machines are not error-corrected, and noise severely limits their computational power.

In the future we can envision FASQ* machines, Fault-Tolerant Application-Scale Quantum computers that can run a wide variety of useful applications, but that is still a rather distant goal. What term captures the path along the road from NISQ to FASQ? Various terms retaining the ISQ format of NISQ have been proposed [here, here, here], but I would prefer to leave ISQ behind as we move forward. So I’ll speak instead of a megaquop or gigaquop machine (and so on), meaning one capable of executing a million or a billion quantum operations, with the understanding that “mega” means not precisely a million but somewhere in the vicinity of a million.

Naively, a megaquop machine would have an error rate per logical gate of order 10^{-6}, which we don’t expect to achieve anytime soon without using error correction and fault-tolerant operation. Or maybe the logical error rate could be somewhat larger, as we expect to be able to boost the simulable circuit volume using various error mitigation techniques in the megaquop era just as we do in the NISQ era. Importantly, the megaquop machine would be capable of achieving some tasks beyond the reach of classical, NISQ, or analog quantum devices, for example by executing circuits with of order 100 logical qubits and circuit depth of order 10,000.
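
The arithmetic behind that benchmark is simple: the circuit volume is roughly

\[
(\text{logical qubits}) \times (\text{circuit depth}) \;\approx\; 100 \times 10^{4} \;=\; 10^{6}\ \text{logical operations},
\]

so for the whole circuit to succeed with reasonable probability, the error rate per logical operation must be of order $10^{-6}$, hence the “megaquop” label.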

What resources are needed to operate it? That depends on many things, but a rough guess is that tens of thousands of high-quality physical qubits could suffice. When will we have it? I don’t know, but if it happens in just a few years a likely modality is Rydberg atoms in optical tweezers, assuming they continue to advance in both scale and performance.

What will we do with it? I don’t know, but as a scientist I expect we can learn valuable lessons by simulating the dynamics of many-qubit systems on megaquop machines. Will there be applications that are commercially viable as well as scientifically instructive? That I can’t promise you.

The road to fault tolerance

To proceed along the road to fault tolerance, what must we achieve? We would like to see many successive rounds of accurate error syndrome measurement such that when the syndromes are decoded the error rate per measurement cycle drops sharply as the code increases in size. Furthermore, we want to decode rapidly, as will be needed to execute universal gates on protected quantum information. Indeed, we will want the logical gates to have much higher fidelity than physical gates, and for the logical gate fidelities to improve sharply as codes increase in size. We want to do all this at an acceptable overhead cost in both the number of physical qubits and the number of physical gates. And speed matters — the time on the wall clock for executing a logical gate should be as short as possible.

A snapshot of the state of the art comes from the Google Quantum AI team. Their recently introduced Willow superconducting processor has improved transmon lifetimes, measurement errors, and leakage correction compared to its predecessor Sycamore. With it they can perform millions of rounds of surface-code error syndrome measurement with good stability, each round lasting about a microsecond. Most notably, they find that the logical error rate per measurement round improves by a factor of 2 (a factor they call Lambda) when the code distance increases from 3 to 5 and again from 5 to 7, indicating that further improvements should be achievable by scaling the device further. They performed accurate real-time decoding for the distance 3 and 5 codes. To further explore the performance of the device they also studied the repetition code, which corrects only bit flips, out to a much larger code distance. As the hardware continues to advance we hope to see larger values of Lambda for the surface code, larger codes achieving much lower error rates, and eventually not just quantum memory but also logical two-qubit gates with much improved fidelity compared to the fidelity of physical gates.
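
Here is a minimal sketch of what that Lambda factor implies, extrapolating from an assumed distance-3 error rate and assuming, optimistically, that Lambda stays fixed as the distance grows; the starting error rate and Lambda below are illustrative rather than measured values.

```python
def projected_error_per_round(eps_d3: float, lam: float, d: int) -> float:
    """Extrapolated surface-code logical error per measurement round,
    assuming each increase of the distance by 2 suppresses the error
    by the same factor lam (an optimistic assumption at large d)."""
    return eps_d3 / lam ** ((d - 3) // 2)

eps_d3, lam = 3e-3, 2.0   # assumed, illustrative distance-3 error rate and Lambda
for d in (3, 5, 7, 11, 15, 21, 27):
    print(f"d = {d:2d}: ~{projected_error_per_round(eps_d3, lam, d):.1e} per round")
```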

Last year I expressed concern about the potential vulnerability of superconducting quantum processors to ionizing radiation such as cosmic ray muons. In these events, errors occur in many qubits at once, too many errors for the error-correcting code to fend off. I speculated that we might want to operate a superconducting processor deep underground to suppress the muon flux, or to use less efficient codes that protect against such error bursts.

The good news is that the Google team has demonstrated that so-called gap engineering of the qubits can reduce the frequency of such error bursts by orders of magnitude. In their studies of the repetition code they found that, in the gap-engineered Willow processor, error bursts occurred about once per hour, as opposed to once every ten seconds in their earlier hardware.  Whether suppression of error bursts via gap engineering will suffice for running deep quantum circuits in the future is not certain, but this progress is encouraging. And by the way, the origin of the error bursts seen every hour or so is not yet clearly understood, which reminds us that not only in superconducting processors but in other modalities as well we are likely to encounter mysterious and highly deleterious rare events that will need to be understood and mitigated.

Real-time decoding

Fast real-time decoding of error syndromes is important because when performing universal error-corrected computation we must frequently measure encoded blocks and then perform subsequent operations conditioned on the measurement outcomes. If it takes too long to decode the measurement outcomes, that will slow down the logical clock speed. That may be a more serious problem for superconducting circuits than for other hardware modalities where gates can be orders of magnitude slower.

For distance 5, Google achieves a latency, meaning the time from when data from the final round of syndrome measurement is received by the decoder until the decoder returns its result, of about 63 microseconds on average. In addition, it takes about another 10 microseconds for the data to be transmitted via Ethernet from the measurement device to the decoding workstation. That’s not bad, but considering that each round of syndrome measurement takes only a microsecond, faster would be preferable, and the decoding task becomes harder as the code grows in size.

Riverlane and Rigetti have demonstrated in small experiments that the decoding latency can be reduced by running the decoding algorithm on FPGAs rather than CPUs, and by integrating the decoder into the control stack to reduce communication time. Adopting such methods may become increasingly important as we scale further. Google DeepMind has shown that a decoder trained by reinforcement learning can achieve a lower logical error rate than a decoder constructed by humans, but it’s unclear whether that will work at scale because the cost of training rises steeply with code distance. Also, the Harvard / QuEra team has emphasized that performing correlated decoding across multiple code blocks can reduce the depth of fault-tolerant constructions, but this also increases the complexity of decoding, raising concern about whether such a scheme will be scalable.

Trading simplicity for performance

The Google processors use transmon qubits, as do superconducting processors from IBM and various other companies and research groups. Transmons are the simplest superconducting qubits and their quality has improved steadily; we can expect further improvement with advances in materials and fabrication. But a logical qubit with very low error rate surely will be a complicated object due to the hefty overhead cost of quantum error correction. Perhaps it is worthwhile to fashion a more complicated physical qubit if the resulting gain in performance might actually simplify the operation of a fault-tolerant quantum computer in the megaquop regime or well beyond. Several versions of this strategy are being pursued.

One approach uses cat qubits, in which the encoded 0 and 1 are coherent states of a microwave resonator, well separated in phase space, such that the noise afflicting the qubit is highly biased. Bit flips are exponentially suppressed as the mean photon number of the resonator increases, while the error rate for phase flips induced by loss from the resonator increases only linearly with the photon number. This year the AWS team built a repetition code to correct phase errors for cat qubits that are passively protected against bit flips, and showed that increasing the distance of the repetition code from 3 to 5 slightly improves the logical error rate. (See also here.)
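
Schematically, with mean photon number $\bar n = |\alpha|^2$ and single-photon loss rate $\kappa$, the noise bias of a cat qubit scales roughly as

\[
\Gamma_{\text{bit-flip}} \;\sim\; e^{-c\,\bar n} \qquad\text{versus}\qquad \Gamma_{\text{phase-flip}} \;\sim\; \bar n\,\kappa ,
\]

where $c$ is a constant of order one that depends on the details of the stabilization scheme; this is the standard scaling argument, not a statement about any particular device.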

Another helpful insight is that error correction can be more effective if we know when and where the errors occur in a quantum circuit. We can apply this idea using a dual-rail encoding of the qubits. With two microwave resonators, for example, we can encode a qubit by placing a single photon in either the first resonator (the 10 state) or the second resonator (the 01 state). The dominant error is loss of a photon, causing either the 01 or 10 state to decay to 00. One can check whether the state is 00, detecting whether the error occurred without disturbing a coherent superposition of 01 and 10. In a device built by the Yale / QCI team, loss errors are detected over 99% of the time, and the errors that go undetected are relatively rare. Similar results were reported by the AWS team, encoding a dual-rail qubit in a pair of transmons instead of resonators.

Another idea is encoding a finite-dimensional quantum system in a state of a resonator that is highly squeezed in two complementary quadratures, a so-called GKP encoding. This year the Yale group used this scheme to encode 3-dimensional and 4-dimensional systems with a decay rate a factor of 1.8 lower than the rate of photon loss from the resonator. (See also here.)

A fluxonium qubit is more complicated than a transmon in that it requires a large inductance which is achieved with an array of Josephson junctions, but it has the advantage of larger anharmonicity, which has enabled two-qubit gates with better than three 9s of fidelity, as the MIT team has shown.

Whether this trading of simplicity for performance in superconducting qubits will ultimately be advantageous for scaling to large systems is still unclear. But it’s appropriate to explore such alternatives which might pay off in the long run.

Error correction with atomic qubits

We have also seen progress on error correction this year with atomic qubits, both in ion traps and optical tweezer arrays. In these platforms qubits are movable, making it possible to apply two-qubit gates to any pair of qubits in the device. This opens the opportunity to use more efficient coding schemes, and in fact logical circuits are now being executed on these platforms. The Harvard / MIT / QuEra team sampled circuits with 48 logical qubits on a 280-qubit device – that big news broke during last year’s Q2B conference. Atom Computing and Microsoft ran an algorithm with 28 logical qubits on a 256-qubit device. Quantinuum and Microsoft prepared entangled states of 12 logical qubits on a 56-qubit device.

However, so far in these devices it has not been possible to perform more than a few rounds of error syndrome measurement, and the results rely on error detection and postselection. That is, circuit runs are discarded when errors are detected, a scheme that won’t scale to large circuits. Efforts to address these drawbacks are in progress. Another concern is that the atomic movement slows the logical cycle time. If all-to-all coupling enabled by atomic movement is to be used in much deeper circuits, it will be important to speed up the movement quite a lot.

Toward the megaquop machine

How can we reach the megaquop regime? More efficient quantum codes like those recently discovered by the IBM team might help. These require geometrically nonlocal connectivity and are therefore better suited for Rydberg optical tweezer arrays than superconducting processors, at least for now. Error mitigation strategies tailored for logical circuits, like those pursued by Qedma, might help by boosting the circuit volume that can be simulated beyond what one would naively expect based on the logical error rate. Recent advances from the Google team, which reduce the overhead cost of logical gates, might also be helpful.

What about applications? Impactful applications to chemistry typically require rather deep circuits so are likely to be out of reach for a while yet, but applications to materials science provide a more tempting target in the near term. Taking advantage of symmetries and various circuit optimizations like the ones Phasecraft has achieved, we might start seeing informative results in the megaquop regime or only slightly beyond.

As a scientist, I’m intrigued by what we might conceivably learn about quantum dynamics far from equilibrium by doing simulations on megaquop machines, particularly in two dimensions. But when seeking quantum advantage in that arena we should bear in mind that classical methods for such simulations are also advancing impressively, including in the past year (for example, here and here).

To summarize, advances in hardware, control, algorithms, error correction, error mitigation, etc. are bringing us closer to megaquop machines, raising a compelling question for our community: What are the potential uses for these machines? Progress will require innovation at all levels of the stack.  The capabilities of early fault-tolerant quantum processors will guide application development, and our vision of potential applications will guide technological progress. Advances in both basic science and systems engineering are needed. These are still the early days of quantum computing technology, but our experience with megaquop machines will guide the way to gigaquops, teraquops, and beyond and hence to widely impactful quantum value that benefits the world.

I thank Dorit Aharonov, Sergio Boixo, Earl Campbell, Roland Farrell, Ashley Montanaro, Mike Newman, Will Oliver, Chris Pattison, Rob Schoelkopf, and Qian Xu for helpful comments.

*The acronym FASQ was suggested to me by Andrew Landahl.

The megaquop machine (image generated by ChatGPT).

Now published: Building Quantum Computers

Building Quantum Computers: A Practical Introduction by Shayan Majidy, Christopher Wilson, and Raymond Laflamme has been published by Cambridge University Press and will be released in the US on September 30. The authors invited me to write a Foreword for the book, which I was happy to do. The publisher kindly granted permission for me to post the Foreword here on Quantum Frontiers.

Foreword

The principles of quantum mechanics, which as far as we know govern all natural phenomena, were discovered in 1925. For 99 years we have built on that achievement to reach a comprehensive understanding of much of the physical world, from molecules to materials to elementary particles and much more. No comparably revolutionary advance in fundamental science has occurred since 1925. But a new revolution is in the offing.

Up until now, most of what we have learned about the quantum world has resulted from considering the behavior of individual particles — for example a single electron propagating as a wave through a crystal, unfazed by barriers that seem to stand in its way. Understanding that single-particle physics has enabled us to explore nature in unprecedented ways, and to build information technologies that have profoundly transformed our lives.

What’s happening now is we’re learning how to instruct particles to evolve in coordinated ways that can’t be accurately described in terms of the behavior of one particle at a time. The particles, as we like to say, can become entangled. Many particles, like electrons or photons or atoms, when highly entangled, exhibit an extraordinary complexity that we can’t capture with the most powerful of today’s supercomputers, or with our current theories of how nature works. That opens extraordinary opportunities for new discoveries and new applications.

Most temptingly, we anticipate that by building and operating large-scale quantum computers, which control the evolution of very complex entangled quantum systems, we will be able to solve some computational problems that are far beyond the reach of today’s digital computers. The concept of a quantum computer was proposed over 40 years ago, and the task of building quantum computing hardware has been pursued in earnest since the 1990s. After decades of steady progress, quantum information processors with hundreds of qubits have become feasible and are scientifically valuable. But we may need quantum processors with millions of qubits to realize practical applications of broad interest. There is still a long way to go.

Why is it taking so long? A conventional computer processes bits, where each bit could be, say, a switch which is either on or off. To build highly complex entangled quantum states, the fundamental information-carrying component of a quantum computer must be what we call a “qubit” rather than a bit. The trouble is that qubits are much more fragile than bits — when a qubit interacts with its environment, the information it carries is irreversibly damaged, a process called decoherence. To perform reliable logical operations on qubits, we need to prevent decoherence by keeping the qubits nearly perfectly isolated from their environment. That’s very hard to do. And because a qubit, unlike a bit, can change continuously, precisely controlling a qubit is a further challenge, even when decoherence is in check.

While theorists may find it convenient to regard a qubit (or a bit) as an abstract object, in an actual processor a qubit needs to be encoded in a particular physical system. There are many options. It might, for example, be encoded in a single atom which can be in either one of two long-lived internal states. Or the spin of a single atomic nucleus or electron which points either up or down along some axis. Or a single photon that occupies either one of two possible optical modes. These are all remarkable encodings, because the qubit resides in a very simple single quantum system, yet, thanks to technical advances over several decades, we have learned to control such qubits reasonably well. Alternatively, the qubit could be encoded in a more complex system, like a circuit conducting electricity without resistance at very low temperature. This is also remarkable, because although the qubit involves the collective motion of billions of pairs of electrons, we have learned to make it behave as though it were a single atom.

To run a quantum computer, we need to manipulate individual qubits and perform entangling operations on pairs of qubits. Once we can perform such single-qubit and two-qubit “quantum gates” with sufficient accuracy, and measure and initialize the qubits as well, then in principle we can perform any conceivable quantum computation by assembling sufficiently many qubits and executing sufficiently many gates.

It’s a daunting engineering challenge to build and operate a quantum system of sufficient complexity to solve very hard computational problems. That systems engineering task, and the potential practical applications of such a machine, are both beyond the scope of Building Quantum Computers. Instead the focus is on the computer’s elementary constituents for four different qubit modalities: nuclear spins, photons, trapped atomic ions, and superconducting circuits. Each type of qubit has its own fascinating story, told here expertly and with admirable clarity.

For each modality a crucial question must be addressed: how to produce well-controlled entangling interactions between two qubits. Answers vary. Spins have interactions that are always on, and can be “refocused” by applying suitable pulses. Photons hardly interact with one another at all, but such interactions can be mocked up using appropriate measurements. Because of their Coulomb repulsion, trapped ions have shared normal modes of vibration that can be manipulated to generate entanglement. Couplings and frequencies of superconducting qubits can be tuned to turn interactions on and off. The physics underlying each scheme is instructive, with valuable lessons for the quantum informationists to heed.

Various proposed quantum information processing platforms have characteristic strengths and weaknesses, which are clearly delineated in this book. For now it is important to pursue a variety of hardware approaches in parallel, because we don’t know for sure which ones have the best long term prospects. Furthermore, different qubit technologies might be best suited for different applications, or a hybrid of different technologies might be the best choice in some settings. The truth is that we are still in the early stages of developing quantum computing systems, and there is plenty of potential for surprises that could dramatically alter the outlook.

Building large-scale quantum computers is a grand challenge facing 21st-century science and technology. And we’re just getting started. The qubits and quantum gates of the distant future may look very different from what is described in this book, but the authors have made wise choices in selecting material that is likely to have enduring value. Beyond that, the book is highly accessible and fun to read. As quantum technology grows ever more sophisticated, I expect the study and control of highly complex many-particle systems to become an increasingly central theme of physical science. If so, Building Quantum Computers will be treasured reading for years to come.

John Preskill
Pasadena, California


Crossing the quantum chasm: From NISQ to fault tolerance

On December 6, I gave a keynote address at the Q2B 2023 Conference in Silicon Valley. Here is a transcript of my remarks. The slides I presented are here. A video of my presentation is here.

Toward quantum value

The theme of this year’s Q2B meeting is “The Roadmap to Quantum Value.” I interpret “quantum value” as meaning applications of quantum computing that have practical utility for end-users in business. So I’ll begin by reiterating a point I have made repeatedly in previous appearances at Q2B. As best we currently understand, the path to economic impact is the road through fault-tolerant quantum computing. And that poses daunting challenges for our field and for the quantum industry.

We are in the NISQ era. NISQ (rhymes with “risk”) is an acronym meaning “Noisy Intermediate-Scale Quantum.” Here “intermediate-scale” conveys that current quantum computing platforms with of order 100 qubits are difficult to simulate by brute force using the most powerful currently existing supercomputers. “Noisy” reminds us that today’s quantum processors are not error-corrected, and noise is a serious limitation on their computational power. NISQ technology already has noteworthy scientific value. But as of now there is no proposed application of NISQ computing with commercial value for which quantum advantage has been demonstrated when compared to the best classical hardware running the best algorithms for solving the same problems. Furthermore, currently there are no persuasive theoretical arguments indicating that commercially viable applications will be found that do not use quantum error-correcting codes and fault-tolerant quantum computing.

A useful survey of quantum computing applications, over 300 pages long, recently appeared, providing rough estimates of end-to-end run times for various quantum algorithms. This is hardly the last word on the subject — new applications are continually proposed, and better implementations of existing algorithms continually arise. But it is a valuable snapshot of what we understand today, and it is sobering.

There can be quantum advantage in some applications of quantum computing to optimization, finance, and machine learning. But in these application areas, the speedups are typically at best quadratic, meaning the quantum run time scales as the square root of the classical run time. So the advantage kicks in only for very large problem instances and deep circuits, which we won’t be able to execute without error correction.
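To get a feel for why that is, here is a toy break-even estimate of my own (every constant below is an illustrative placeholder, not a measured value): if a classical machine checks one candidate per nanosecond while a fault-tolerant quantum machine needs ten milliseconds per Grover-style iteration, the quadratic speedup only pays off at enormous problem sizes and correspondingly long run times.

```python
import math

# Toy break-even estimate for a quadratic (Grover-type) speedup.
# Every constant is an illustrative placeholder, not a measurement; the point is
# only that a slow fault-tolerant logical clock pushes the break-even problem
# size, and hence the wall-clock run time, to very large values.

t_classical = 1e-9   # assumed classical time to check one candidate (seconds)
t_quantum   = 1e-2   # assumed time per fault-tolerant Grover iteration (seconds)

# Classical cost ~ N * t_classical; quantum cost ~ sqrt(N) * t_quantum.
# Break-even when N * t_classical = sqrt(N) * t_quantum.
n_star = (t_quantum / t_classical) ** 2
t_star = math.sqrt(n_star) * t_quantum

print(f"break-even problem size: N ~ {n_star:.1e}")
print(f"run time at break-even:  ~ {t_star / 86400:.1f} days")
```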

Larger polynomial advantage and perhaps superpolynomial advantage are possible in applications to chemistry and materials science, but these may require at least hundreds of very well-protected logical qubits, and hundreds of millions of very high-fidelity logical gates, if not more. Quantum fault tolerance will be needed to run these applications, and fault tolerance has a hefty cost in both the number of physical qubits and the number of physical gates required. We should also bear in mind that the speed of logical gates is relevant, since the run time as measured by the wall clock will be an important determinant of the value of quantum algorithms.

Overcoming noise in quantum devices

Already in today’s quantum processors steps are taken to address limitations imposed by the noise — we use error mitigation methods like zero noise extrapolation or probabilistic error cancellation. These methods work effectively at extending the size of the circuits we can execute with useful fidelity. But the asymptotic cost scales exponentially with the size of the circuit, so error mitigation alone may not suffice to reach quantum value. Quantum error correction, on the other hand, scales much more favorably, like a power of a logarithm of the circuit size. But quantum error correction is not practical yet. To make use of it, we’ll need better two-qubit gate fidelities, many more physical qubits, robust systems to control those qubits, as well as the ability to perform fast and reliable mid-circuit measurements and qubit resets; all these are technically demanding goals.
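To make the contrast in scaling concrete, here is a rough sketch with placeholder constants (a simple illustration, not a careful analysis): under a toy model, the sampling overhead of probabilistic error cancellation grows exponentially with the number of gates, while the physical-qubit overhead of surface-code error correction grows only polylogarithmically with circuit size.

```python
import math

# Illustrative scaling comparison; all constants are placeholders, not measurements.

p = 1e-3            # assumed physical error rate per gate
p_threshold = 1e-2  # assumed surface-code threshold

def mitigation_log10_samples(n_gates, gamma=1 + 2 * p):
    """Toy model of probabilistic error cancellation: the sampling overhead grows
    like gamma**(2 * n_gates); return its log10 to avoid overflow."""
    return 2 * n_gates * math.log10(gamma)

def qec_physical_per_logical(n_gates):
    """Toy surface-code model: smallest odd distance keeping the total logical
    failure probability below ~1%, then roughly 2*d*d physical qubits per logical."""
    target = 0.01 / n_gates
    d = 3
    while 0.1 * (p / p_threshold) ** ((d + 1) / 2) > target:
        d += 2
    return 2 * d * d

for n in (10**3, 10**6, 10**9):
    print(f"{n:>10} gates: mitigation ~ 10^{mitigation_log10_samples(n):.0f} samples, "
          f"error correction ~ {qec_physical_per_logical(n)} physical qubits per logical qubit")
```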

To get a feel for the overhead cost of fault-tolerant quantum computing, consider the surface code — it’s presumed to be the best near-term prospect for achieving quantum error correction, because it has a high accuracy threshold and requires only geometrically local processing in two dimensions. Once the physical two-qubit error rate is below the threshold value of about 1%, the probability of a logical error per error correction cycle declines exponentially as we increase the code distance d:

P_logical = 0.1 × (P_physical / P_threshold)^((d+1)/2)

where the number of physical qubits in the code block (which encodes a single protected qubit) is the distance squared.

Suppose we wish to execute a circuit with 1000 qubits and 100 million time steps. Then we want the probability of a logical error per cycle to be 10^-11. Assuming the physical error rate is 10^-3, better than what is currently achieved in multi-qubit devices, from this formula we infer that we need a code distance of 19, and hence 361 physical qubits to encode each logical qubit, and a comparable number of ancilla qubits for syndrome measurement — hence over 700 physical qubits per logical qubit, or a total of nearly a million physical qubits. If the physical error rate improves to 10^-4 someday, that cost is reduced, but we’ll still need hundreds of thousands of physical qubits if we rely on the surface code to protect this circuit.
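As a sanity check on those numbers, here is a minimal sketch that applies the formula above; the factor of two for syndrome-measurement ancillas mirrors the rough accounting in the estimate, and the target is one logical error per qubit per cycle of 1/(1000 × 10^8).

```python
# Rough surface-code overhead estimate, following the formula
#   P_logical = 0.1 * (P_physical / P_threshold)**((d + 1) / 2)
# Numbers are the illustrative ones quoted above, not a careful resource count.

def logical_error_rate(p_physical, d, p_threshold=1e-2):
    """Logical error probability per code cycle at distance d."""
    return 0.1 * (p_physical / p_threshold) ** ((d + 1) / 2)

def required_distance(p_physical, target, p_threshold=1e-2, d_max=101):
    """Smallest odd distance whose logical error rate meets the target."""
    for d in range(3, d_max, 2):
        if logical_error_rate(p_physical, d, p_threshold) <= target:
            return d
    raise ValueError("no distance up to d_max meets the target")

logical_qubits = 1000
time_steps = 10**8
target = 1.0 / (logical_qubits * time_steps)   # ~1e-11 per logical qubit per cycle

for p_physical in (1e-3, 1e-4):
    d = required_distance(p_physical, target)
    data_qubits = d * d                 # one logical qubit per surface-code patch
    per_logical = 2 * data_qubits       # roughly double to include ancilla qubits
    total = per_logical * logical_qubits
    print(f"p_phys={p_physical:.0e}: d={d}, ~{per_logical} physical per logical, ~{total:,} total")
```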

Progress toward quantum error correction

The study of error correction is gathering momentum, and I’d like to highlight some recent experimental and theoretical progress. Specifically, I’ll remark on three promising directions, all with the potential to hasten the arrival of the fault-tolerant era: erasure conversion, biased noise, and more efficient quantum codes.

Erasure conversion

Error correction is more effective if we know when and where the errors occurred. To appreciate the idea, consider the case of a classical repetition code that protects against bit flips. If we don’t know which bits have errors we can decode successfully by majority voting, assuming that fewer than half the bits have errors. But if errors are heralded then we can decode successfully by just looking at any one of the undamaged bits. In quantum codes the details are more complicated but the same principle applies — we can recover more effectively if so-called erasure errors dominate; that is, if we know which qubits are damaged and in which time steps. “Erasure conversion” means fashioning a processor such that the dominant errors are erasure errors.
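To see the difference quantitatively, here is a toy simulation of my own (not part of the talk) of a classical length-5 repetition code with a 20% per-bit error rate, decoded by majority vote when errors are silent and by reading any surviving bit when they are heralded.

```python
import random

# Toy comparison: a length-n classical repetition code protecting one bit, with
# the same per-bit error probability, decoded with and without heralding.

def trial_unheralded(n, p, rng):
    """Bits flip silently with probability p; decode by majority vote."""
    flips = sum(rng.random() < p for _ in range(n))
    return flips <= n // 2          # decoding succeeds if flipped bits are a minority

def trial_heralded(n, p, rng):
    """Bits are erased (flagged) with probability p; decode from any survivor."""
    erased = sum(rng.random() < p for _ in range(n))
    return erased < n               # fails only if every single bit is erased

def failure_rate(trial, n, p, shots=200_000, seed=1):
    rng = random.Random(seed)
    return sum(not trial(n, p, rng) for _ in range(shots)) / shots

n, p = 5, 0.2
print("silent errors, majority vote :", failure_rate(trial_unheralded, n, p))
print("heralded erasures            :", failure_rate(trial_heralded, n, p))
```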

We can make use of this idea if the dominant errors exit the computational space of the qubit, so that an error can be detected without disturbing the coherence of undamaged qubits. One realization uses alkaline-earth Rydberg atoms in optical tweezers, where 0 is encoded as a low-energy state, and 1 is a highly excited Rydberg state. The dominant error is the spontaneous decay of the 1 to a lower energy state. But if the atomic level structure and the encoding allow, 1 usually decays not to a 0, but rather to another state g. We can check whether the g state is occupied, to detect whether or not the error occurred, without disturbing a coherent superposition of 0 and 1.

Erasure conversion can also be arranged in superconducting devices, by using a so-called dual-rail encoding of the qubit in a pair of transmons or a pair of microwave resonators. With two resonators, for example, we can encode a qubit by placing a single photon in one resonator or the other. The dominant error is loss of the photon, causing either the 01 state or the 10 state to decay to 00. One can check whether the state is 00, detecting whether the error occurred, without disturbing a coherent superposition of 01 and 10.

Erasure detection has been successfully demonstrated in recent months, for both atomic (here and here) and superconducting (here and here) qubit encodings.

Biased noise

Another setting in which the effectiveness of quantum error correction can be enhanced is when the noise is highly biased. Quantum error correction is more difficult than classical error correction partly because more types of errors can occur — a qubit can flip in the standard basis, or it can flip in the complementary basis, what we call a phase error. In suitably designed quantum hardware the bit flips are highly suppressed, so we can concentrate the error-correcting power of the code on protecting against phase errors. For this scheme to work, it is important that phase errors occurring during the execution of a quantum gate do not propagate to become bit-flip errors. And it was realized just a few years ago that such bias-preserving gates are possible for qubits encoded in continuous variable systems like microwave resonators.

Specifically, we may consider a cat code, in which the encoded 0 and encoded 1 are coherent states, well separated in phase space. Then bit flips are exponentially suppressed as the mean photon number in the resonator increases. The main source of error, then, is photon loss from the resonator, which induces a phase error for the cat qubit, with an error rate that increases only linearly with photon number. We can then strike a balance, choosing a photon number in the resonator large enough to provide physical protection against bit flips, and then use a classical code like the repetition code to build a logical qubit well protected against phase flips as well.
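The trade-off can be made concrete with a rough numerical sketch (the prefactors and loss rate below are made-up placeholders, not parameters of any real device): bit-flip rates fall off exponentially in the mean photon number while phase-flip rates grow only linearly, so a modest photon number already pushes bit flips far below phase flips.

```python
import math

# Rough scaling sketch for a cat qubit; the prefactors and the rate kappa are
# illustrative placeholders, not parameters of any real device.

kappa = 1e-4   # assumed photon-loss-induced phase-error rate per photon, per cycle

def bit_flip_rate(nbar):
    """Bit flips suppressed exponentially in the mean photon number nbar."""
    return math.exp(-2 * nbar)

def phase_flip_rate(nbar):
    """Phase flips grow only linearly with nbar (photon loss)."""
    return kappa * nbar

for nbar in (2, 4, 8, 16):
    print(f"nbar={nbar:>2}: bit-flip ~ {bit_flip_rate(nbar):.1e}, "
          f"phase-flip ~ {phase_flip_rate(nbar):.1e}")
```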

Work on such repetition cat codes is ongoing (see here, here, and here), and we can expect to hear about progress in that direction in the coming months.

More efficient codes

Another exciting development has been the recent discovery of quantum codes that are far more efficient than the surface code. These include constant-rate codes, in which the number of protected qubits scales linearly with the number of physical qubits in the code block, in contrast to the surface code, which protects just a single logical qubit per block. Furthermore, such codes can have constant relative distance, meaning that the distance of the code, a rough measure of how many errors can be corrected, scales linearly with the block size rather than the square root scaling attained by the surface code.

These new high-rate codes can have a relatively high accuracy threshold, can be efficiently decoded, and schemes for executing fault-tolerant logical gates are currently under development.

A drawback of the high-rate codes is that, to extract error syndromes, geometrically local processing in two dimensions is not sufficient — long-range operations are needed. Nonlocality can be achieved through movement of qubits in neutral atom tweezer arrays or ion traps, or one can use the native long-range coupling in an ion trap processor. Long-range coupling is more challenging to achieve in superconducting processors, but should be possible.

An example with potential near-term relevance is a recently discovered code with distance 12 and 144 physical qubits. In contrast to a surface code of comparable distance and block size, which encodes just a single logical qubit, this code protects 12 logical qubits, a significant improvement in encoding efficiency.
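A quick way to quantify that improvement (a simple illustration, counting data qubits only and ignoring ancillas on both sides) is to compare encoding rates at distance 12: twelve separate surface-code patches versus the single [[144, 12, 12]] block.

```python
# Encoding-rate comparison at distance 12, counting data qubits only
# (ancilla overhead for syndrome extraction is ignored in both cases).

def surface_code_qubits(k, d):
    """k separate surface-code patches, each with d*d data qubits for 1 logical qubit."""
    return k * d * d

def high_rate_code_qubits():
    """The [[144, 12, 12]] code: 144 data qubits protecting 12 logical qubits."""
    return 144

k, d = 12, 12
print("surface code :", surface_code_qubits(k, d), "data qubits for", k, "logical qubits")
print("high-rate code:", high_rate_code_qubits(), "data qubits for 12 logical qubits")
print("saving factor:", surface_code_qubits(k, d) / high_rate_code_qubits())
```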

The quest for practical quantum error correction offers numerous examples like these of co-design. Quantum error correction schemes are adapted to the features of the hardware, and ideas about quantum error correction guide the realization of new hardware capabilities. This fruitful interplay will surely continue.

An exciting time for Rydberg atom arrays

Turning to this year’s hardware news: this is a particularly exciting time for platforms based on Rydberg atoms trapped in optical tweezer arrays. We can anticipate that Rydberg platforms will lead the progress in quantum error correction for at least the next few years, if two-qubit gate fidelities continue to improve. Thousands of qubits can be controlled, and geometrically nonlocal operations can be achieved by reconfiguring the atomic positions. Further improvement in error correction performance might be possible by means of erasure conversion. Significant progress in error correction using Rydberg platforms is reported in a paper published today.

But there are caveats. So far, repeatable error syndrome measurement has not been demonstrated. For that purpose, continuous loading of fresh atoms needs to be developed. And both the readout and atomic movement are relatively slow, which limits the clock speed.

Movability of atomic qubits will be highly enabling in the short run. But in the longer run, movement imposes serious limitations on clock speed unless much faster movement can be achieved. As things currently stand, one can’t rapidly accelerate an atom without shaking it loose from an optical tweezer, or rapidly accelerate an ion without heating its motional state substantially. To attain practical quantum computing using Rydberg arrays, or ion traps, we’ll eventually need to make the clock speed much faster.

Cosmic rays!

To be fair, other platforms face serious threats as well. One is the vulnerability of superconducting circuits to ionizing radiation. Cosmic ray muons for example will occasionally deposit a large amount of energy in a superconducting circuit, creating many phonons which in turn break Cooper pairs and induce qubit errors in a large region of the chip, potentially overwhelming the error-correcting power of the quantum code. What can we do? We might go deep underground to reduce the muon flux, but that’s expensive and inconvenient. We could add an additional layer of coding to protect against an event that wipes out an entire surface code block; that would increase the overhead cost of error correction. Or maybe modifications to the hardware can strengthen robustness against ionizing radiation, but it is not clear how to do that.

Outlook

Our field and the quantum industry continue to face a pressing question: How will we scale up to quantum computing systems that can solve hard problems? The honest answer is: We don’t know yet. All proposed hardware platforms need to overcome serious challenges. Whatever technologies may seem to be in the lead over, say, the next 10 years might not be the best long-term solution. For that reason, it remains essential at this stage to develop a broad array of hardware platforms in parallel.

Today’s NISQ technology is already scientifically useful, and that scientific value will continue to rise as processors advance. The path to business value is longer, and progress will be gradual. Above all, we have good reason to believe that to attain quantum value, to realize the grand aspirations that we all share for quantum computing, we must follow the road to fault tolerance. That awareness should inform our thinking, our strategy, and our investments now and in the years ahead.

Crossing the quantum chasm (image generated using Midjourney)

Caltech’s Ginsburg Center

Editor’s note: On 10 August 2023, Caltech celebrated the groundbreaking for the Dr. Allen and Charlotte Ginsburg Center for Quantum Precision Measurement, which will open in 2025. At a lunch following the ceremony, John Preskill made these remarks.

Rendering of the facade of the Ginsburg Center

Hello everyone. I’m John Preskill, a professor of theoretical physics at Caltech, and I’m honored to have this opportunity to make some brief remarks on this exciting day.

In 2025, the Dr. Allen and Charlotte Ginsburg Center for Quantum Precision Measurement will open on the Caltech campus. That will certainly be a cause for celebration. Quite fittingly, in that same year, we’ll have something else to celebrate — the 100th anniversary of the formulation of quantum mechanics in 1925. In 1900, it had become clear that the physics of the 19th century had serious shortcomings that needed to be addressed, and for 25 years a great struggle unfolded to establish a firm foundation for the science of atoms, electrons, and light; the momentous achievements of 1925 brought that quest to a satisfying conclusion. No comparably revolutionary advance in fundamental science has occurred since then.

For 98 years now we’ve built on those achievements of 1925 to arrive at a comprehensive understanding of much of the physical world, from molecules to materials to atomic nuclei and exotic elementary particles, and much else besides. But a new revolution is in the offing. And the Ginsburg Center will arise at just the right time and at just the right place to drive that revolution forward.

Up until now, most of what we’ve learned about the quantum world has resulted from considering the behavior of individual particles. A single electron propagating as a wave through a crystal, unfazed by barriers that seem to stand in its way. Or a single photon, bouncing hundreds of times between mirrors positioned kilometers apart, dutifully tracking the response of those mirrors to gravitational waves from black holes that collided in a galaxy billions of light years away. Understanding that single-particle physics has enabled us to explore nature in unprecedented ways, and to build information technologies that have profoundly transformed our lives.

At the groundbreaking: Physics, Math and Astronomy Chair Fiona Harrison, California Assemblymember Chris Holden, President Tom Rosenbaum, Charlotte Ginsburg, Dr. Allen Ginsburg, Pasadena Mayor Victor Gordo, Provost Dave Tirrell.

What’s happening now is that we’re getting increasingly adept at instructing particles to move in coordinated ways that can’t be accurately described in terms of the behavior of one particle at a time. The particles, as we like to say, can become entangled. Many particles, like electrons or photons or atoms, when highly entangled, exhibit an extraordinary complexity that we can’t capture with the most powerful of today’s supercomputers, or with our current theories of how Nature works. That opens extraordinary opportunities for new discoveries and new applications.

We’re very proud of the role Caltech has played in setting the stage for the next quantum revolution. Richard Feynman envisioning quantum computers that far surpass the computers we have today. Kip Thorne proposing ways to use entangled photons to perform extraordinarily precise measurements. Jeff Kimble envisioning and executing ingenious methods for entangling atoms and photons. Jim Eisenstein creating and studying extraordinary phenomena in a soup of entangled electrons. And much more besides. But far greater things are yet to come.

How can we learn to understand and exploit the behavior of many entangled particles that work together? For that, we’ll need many scientists and engineers who work together. I joined the Caltech faculty in August 1983, almost exactly 40 years ago. These have been 40 good years, but I’m having more fun now than ever before. My training was in elementary particle physics. But as our ability to manipulate the quantum world advances, I find that I have more and more in common with my colleagues from different specialties. To fully realize my own potential as a researcher and a teacher, I need to stay in touch with atomic physics, condensed matter physics, materials science, chemistry, gravitational wave physics, computer science, electrical engineering, and much else. Even more important, that kind of interdisciplinary community is vital for broadening the vision of the students and postdocs in our research groups.

Nurturing that community — that’s what the Ginsburg Center is all about. That’s what will happen there every day. That sense of a shared mission, enhanced by colocation, will enable the Ginsburg Center to lead the way as quantum science and technology becomes increasingly central to Caltech’s research agenda in the years ahead, and increasingly important for science and engineering around the globe. And I just can’t wait for 2025.

Caltech is very fortunate to have generous and visionary donors like the Ginsburgs and the Sherman Fairchild Foundation to help us realize our quantum dreams.

Dr. Allen and Charlotte Ginsburg

It from Qubit: The Last Hurrah

Editor’s note: Since 2015, the Simons Foundation has supported the “It from Qubit” collaboration, a group of scientists drawing on ideas from quantum information theory to address deep issues in fundamental physics. The collaboration held its “Last Hurrah” event at Perimeter Institute last week. Here is a transcript of remarks by John Preskill at the conference dinner.

It from Qubit 2023 at Perimeter Institute

This meeting is forward-looking, as it should be, but it’s fun to look back as well, to assess and appreciate the progress we’ve made. So my remarks may meander back and forth through the years. Settle back — this may take a while.

We proposed the It from Qubit collaboration in March 2015, in the wake of several years of remarkable progress. Interestingly, that progress was largely provoked by an idea that most of us think is wrong: Black hole firewalls. Wrong perhaps, but challenging to grapple with.

This challenge accelerated a synthesis of quantum computing, quantum field theory, quantum matter, and quantum gravity as well. By 2015, we were already appreciating the relevance to quantum gravity of concepts like quantum error correction, quantum computational complexity, and quantum chaos. It was natural to assemble a collaboration in which computer scientists and information theorists would participate along with high-energy physicists.

We built our proposal around some deep questions where further progress seemed imminent, such as these:

Does spacetime emerge from entanglement?
Do black holes have interiors?
What is the information-theoretical structure of quantum field theory?
Can quantum computers simulate all physical phenomena?

On April 30, 2015 we presented our vision to the Simons Foundation, led by Patrick [Hayden] and Matt [Headrick], with Juan [Maldacena], Lenny [Susskind] and me tagging along. We all shared at that time a sense of great excitement; that feeling must have been infectious, because It from Qubit was successfully launched.

Some It from Qubit investigators at a 2015 meeting.

Since then ideas we talked about in 2015 have continued to mature, to ripen. Now our common language includes ideas like islands and quantum extremal surfaces, traversable wormholes, modular flow, the SYK model, quantum gravity in the lab, nonisometric codes, the breakdown of effective field theory when quantum complexity is high, and emergent geometry described by von Neumann algebras. In parallel, we’ve seen a surge of interest in quantum dynamics in condensed matter, focused on issues like how entanglement spreads, and how chaotic systems thermalize — progress driven in part by experimental advances in quantum simulators, both circuit-based and analog.

Why did we call ourselves “It from Qubit”? Patrick explained that in our presentation with a quote from John Wheeler in 1990. Wheeler said,

“It from bit” symbolizes the idea that every item of the physical world has at bottom—a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-or-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe.

As is often the case with Wheeler, you’re not quite sure what he’s getting at. But you can glean that Wheeler envisioned that progress in fundamental physics would be hastened by bringing in ideas from information theory. So we updated Wheeler’s vision by changing “it from bit” to “it from qubit.”

As you may know, Richard Feynman had been Wheeler’s student, and he once said this about Wheeler: “Some people think Wheeler’s gotten crazy in his later years, but he’s always been crazy.” So you can imagine how flattered I was when Graeme Smith said the exact same thing about me.

During the 1972-73 academic year, I took a full-year undergraduate course from Wheeler at Princeton that covered everything in physics, so I have a lot of Wheeler stories. I’ll just tell one, which will give you some feel for his teaching style. One day, Wheeler arrives in class dressed immaculately in a suit and tie, as always, and he says: “Everyone take out a sheet of paper, and write down all the equations of physics – don’t leave anything out.” We dutifully start writing equations. The Schrödinger equation, Newton’s laws, Maxwell’s equations, the definition of entropy and the laws of thermodynamics, Navier-Stokes … we had learned a lot. Wheeler collects all the papers, and puts them in a stack on a table at the front of the classroom. He gestures toward the stack and says imploringly “Fly!” [Long pause.] Nothing happens. He tries again, even louder this time: “Fly!” [Long pause.] Nothing happens. Then Wheeler concludes: “On good authority, this stack of papers contains all the equations of physics. But it doesn’t fly. Yet, the universe flies. Something must be missing.”

Channeling Wheeler at the banquet, I implore my equations to fly. Photo by Jonathan Oppenheim.

He was an odd man, but inspiring. And not just odd, but also old. We were 19 and could hardly believe he was still alive — after all, he had worked with Bohr on nuclear fission in the 1930s! He was 61. I’m wiser now, and know that’s not really so old.

Now let’s skip ahead to 1998. Just last week, Strings 2023 happened right here at PI. So it’s fitting to mention that a pivotal Strings meeting occurred 25 years ago, Strings 1998 in Santa Barbara. The participants were in a celebratory mood, so much so that Jeff Harvey led hundreds of physicists in a night of song and dance. It went like this [singing to the tune of “The Macarena”]:

You start with the brane
and the brane is BPS.
Then you go near the brane
and the space is AdS.
Who knows what it means?
I don’t, I confess.
Ehhhh! Maldacena!

You can’t blame them for wanting to celebrate. Admittedly I wasn’t there, so how did I know that hundreds of physicists were singing and dancing? I read about it in the New York Times!

It was significant that by 1998, the Strings meetings had already been held annually for 10 years. You might wonder how that came about. Let’s go back to 1984. Those of you who are too young to remember might not realize that in the late 70s and early 80s string theory was in eclipse. It had initially been proposed as a model of hadrons, but after the discovery of asymptotic freedom in 1973 quantum chromodynamics became accepted as the preferred theory of the strong interactions. (Maybe the QCD string will make a comeback someday – we’ll see.) The community pushing string theory forward shrunk to a handful of people around the world. That changed very abruptly in August 1984. I tried to capture that sudden change in a poem I wrote for John Schwarz’s 60th birthday in 2001. I’ll read it — think of this as a history lesson.

Thirty years ago or more
John saw what physics had in store.
He had a vision of a string
And focused on that one big thing.

But then in nineteen-seven-three
Most physicists had to agree
That hadrons blasted to debris
Were well described by QCD.

The string, it seemed, by then was dead.
But John said: “It’s space-time instead!
The string can be revived again.
Give masses twenty powers of ten!

Then Dr. Green and Dr. Black,
Writing papers by the stack,
Made One, Two-A, and Two-B glisten.
Why is it none of us would listen?

We said, “Who cares if super tricks
Bring D to ten from twenty-six?
Your theory must have fatal flaws.
Anomalies will doom your cause.”

If you weren’t there you couldn’t know
The impact of that mighty blow:
“The Green-Schwarz theory could be true —
It works for S-O-thirty-two!”

Then strings of course became the rage
And young folks of a certain age
Could not resist their siren call:
One theory that explains it all.

Because he never would give in,
Pursued his dream with discipline,
John Schwarz has been a hero to me.
So … please don’t spell it with a  “t”!

And 39 years after the revolutionary events of 1984, the intellectual feast launched by string theory still thrives.

In the late 1980s and early 1990s, many high-energy physicists got interested in the black hole information problem. Of course, the problem was 15 years old by then; it arose when Hawking radiation was discovered, as Hawking himself pointed out shortly thereafter. But many of us were drawn to this problem while we waited for the Superconducting Super Collider to turn on. As I have sometimes done when I wanted to learn something, in 1990 I taught a course on quantum field theory in curved spacetime, the main purpose of which was to explain the origin of Hawking radiation, and then for a few years I tried to understand whether information can escape from black holes and if so how, as did many others in those days. That led to a 1992 Aspen program co-organized by Andy Strominger and me on “Quantum Aspects of Black Holes.” Various luminaries were there, among them Hawking, Susskind, Sidney Coleman, Kip Thorne, Don Page, and others. Andy and I were asked to nominate someone from our program to give the Aspen Center colloquium, so of course we chose Lenny, and he gave an engaging talk on “The Puzzle of Black Hole Evaporation.”

At the end of the talk, Lenny reported on discussions he’d had with various physicists he respected about the information problem, and he summarized their views. Of course, Hawking said information is lost. ‘t Hooft said that the S-matrix must be unitary for profound reasons we needed to understand. Polchinski said in 1992 that information is lost and there is no way to retrieve it. Yakir Aharonov said that the information resides in a stable Planck-sized black hole remnant. Sidney Coleman said a black hole is a lump of coal — that was the code in 1992 for what we now call the central dogma of black hole physics, that as seen from the outside a black hole is a conventional quantum system. And – remember this was Lenny’s account of what he claimed people had told him – Frank Wilczek said this is a technical problem, I’ll soon have it solved, while Ed Witten said he did not find the problem interesting.

We talked a lot that summer about the no-cloning principle, and our discomfort with the notion that the quantum information encoded in an infalling encyclopedia could be in two places at once on the same time slice, seen inside the black hole by infalling observers and seen outside the black hole by observers who peruse the Hawking radiation. That potential for cloning shook the faith of the self-appointed defenders of unitarity. Andy and I wrote a report at the end of the workshop with a pessimistic tone:

There is an emerging consensus among the participants that Hawking is essentially right – that the information loss paradox portends a true revolution in fundamental physics. If so, then one must go further, and develop a sensible “phenomenological” theory of information loss. One must reconcile the fact of information loss with established principles of physics, such as locality and energy conservation. We expect that many people, stimulated by their participation in the workshop, will now focus attention on this challenge.

I posted a paper on the arXiv a month later with a similar outlook.

There was another memorable event a year later, in June 1993, a conference at the ITP in Santa Barbara (there was no “K” back then), also called “Quantum Aspects of Black Holes.” Among those attending were Susskind, Gibbons, Polchinski, Thorne, Wald, Israel, Bekenstein, and many others. By then our mood was brightening. Rather pointedly, Lenny said to me that week: “Why is this meeting so much better than the one you organized last year?” And I replied, “Because now you think you know the answer!”

That week we talked about “black hole complementarity,” our hope that quantum information being available both inside and outside the horizon could be somehow consistent with the linearity of quantum theory. Complementarity then was a less radical, less wildly nonlocal idea than it became later on. We envisioned that information in an infalling body could stick to the stretched horizon, but not, as I recall, that the black hole interior would be somehow encoded in Hawking radiation emitted long ago — that came later. But anyway, we felt encouraged.

Joe Polchinski organized a poll of the participants, where one could choose among four options.

  1. Information is lost (unitarity violated)
  2. Information escapes (causality violated)
  3. Planck-scale black hole remnants
  4. None of the above

The poll results favored unitarity over information loss by a 60-40 margin. Perhaps not coincidentally, the participants self-identified as 60% high energy physicists and 40% relativists.

The following summer in June 1994, there was a program called Geometry and Gravity at the Newton Institute in Cambridge. Hawking, Gibbons, Susskind, Strominger, Harvey, Sorkin, and (Herman) Verlinde were among the participants. I had more discussions with Lenny that month than any time before or since. I recall sending an email to Paul Ginsparg after one such long discussion in which I said, “When I hear Lenny Susskind speak, I truly believe that information can come out of a black hole.” Secretly, though, having learned about Shor’s algorithm shortly before that program began, I was spending my evenings struggling to understand Shor’s paper. After Cambridge, Lenny visited ‘t Hooft in Utrecht, and returned to Stanford all charged up to write his paper on “The world as a hologram,” in which he credits ‘t Hooft with the idea that “the world is in a sense two-dimensional.”

Important things happened in the next few years: D-branes, counting of black hole microstates, M-theory, and AdS/CFT. But I’ll skip ahead to the most memorable of my visits to Perimeter Institute. (Of course, I always like coming here, because in Canada you use the same electrical outlets we do …)

In June 2007, there was a month-long program at PI called “Taming the Quantum World.” I recall that Lucien Hardy objected to that title — he preferred “Let the Beast Loose” — which I guess is a different perspective on the same idea. I talked there about fault-tolerant quantum computing, but more importantly, I shared an office with Patrick Hayden. I already knew Patrick well — he had been a Caltech postdoc — but I was surprised and pleased that he was thinking about black holes. Patrick had already reached crucial insights concerning the behavior of a black hole that is profoundly entangled with its surroundings. That sparked intensive discussions resulting in a paper later that summer called “Black holes as mirrors.” In the acknowledgments you’ll find this passage:

We are grateful for the hospitality of the Perimeter Institute, where we had the good fortune to share an office, and JP thanks PH for letting him use the comfortable chair.

We intended for that paper to pique the interest of both the quantum information and quantum gravity communities, as it seemed to us that the time was ripe to widen the communication channel between the two. Since then, not only has that communication continued, but a deeper synthesis has occurred; most serious quantum gravity researchers are now well acquainted with the core concepts of quantum information science.

That John Schwarz poem I read earlier reminds me that I often used to write poems. I do it less often lately. Still, I feel that you are entitled to hear something that rhymes tonight. But I quickly noticed our field has many words that are quite hard to rhyme, like “chaos” and “dogma.” And perhaps the hardest of all: “Takayanagi.” So I decided to settle for some limericks — that’s easier for me than a full-fledged poem.

This first one captures how I felt when I first heard about AdS/CFT: excited but perplexed.

Spacetime is emergent they say.
But emergent in what sort of way?
It’s really quite cool,
The bulk has a dual!
I might understand that someday.

For a quantum information theorist, it was pleasing to learn later on that we can interpret the dictionary as an encoding map, such that the bulk degrees of freedom are protected when a portion of the boundary is erased.

Almheiri and Harlow and Dong
Said “you’re thinking about the map wrong.”
It’s really a code!
That’s the thing that they showed.
Should we have known that all along?

(It is easier to rhyme “Dong” than “Takayanagi”.) To see that connection one needed a good grasp of both AdS/CFT and quantum error-correcting codes. In 2014 few researchers knew both, but those guys did.

For all our progress, we still don’t have a complete answer to a key question that inspired IFQ. What’s inside a black hole?

Information loss has been denied.
Locality’s been cast aside.
When the black hole is gone
What fell in’s been withdrawn.
I’d still like to know: what’s inside?

We’re also still lacking an alternative nonperturbative formulation of the bulk; we can only say it’s something that’s dual to the boundary. Until we can define both sides of the correspondence, the claim that two descriptions are equivalent, however inspiring, will remain unsatisfying.

Duality I can embrace.
Complexity, too, has its place.
That’s all a good show
But I still want to know:
What are the atoms of space?

The question, “What are the atoms of space?” is stolen from Joe Polchinski, who framed it to explain to a popular audience what we’re trying to answer. I miss Joe. He was a founding member of It from Qubit, an inspiring scientific leader, and still an inspiration for all of us today.

The IFQ Simons collaboration may fade away, but the quest that has engaged us these past 8 years goes on. IFQ is the continuation of a long struggle, which took on great urgency with Hawking’s formulation of the information loss puzzle nearly 50 years ago. Understanding quantum gravity and its implications is a huge challenge and a grand quest that humanity is obligated to pursue. And it’s fun and it’s exciting, and I sincerely believe that we’ve made remarkable progress in recent years, thanks in large part to you, the IFQ community. We are privileged to live at a time when truths about the nature of space and time are being unveiled. And we are privileged to be part of this community, with so many like-minded colleagues pulling in the same direction, sharing the joy of facing this challenge.

Where is it all going? Coming back to our pitch to the Simons Foundation in 2015, I was very struck by Juan’s presentation that day, and in particular his final slide. I liked it so much that I stole it and used in my presentations for a while. Juan tried to explain what we’re doing by means of an analogy to biological science. How are the quantumists like the biologists?

Well, bulk quantum gravity is life. We all want to understand life. The boundary theory is chemistry, which underlies life. The quantum information theorists are chemists; they want to understand chemistry in detail. The quantum gravity theorists are biologists, they think chemistry is fine, if it can really help them to understand life. What we want is: molecular biology, the explanation for how life works in terms of the underlying chemistry. The black hole information problem is our fruit fly, the toy problem we need to solve before we’ll be ready to take on a much bigger challenge: finding the cure for cancer; that is, understanding the big bang.

How’s it going? We’ve made a lot of progress since 2015. We haven’t cured cancer. Not yet. But we’re having a lot of fun along the way there.

I’ll end with this hope, addressed especially to those who were not yet born when AdS/CFT was first proposed, or were still scampering around in your playpens. I’ll grant you a reprieve, you have another 8 years. By then: May you cure cancer!

So I propose this toast: To It from Qubit, to our colleagues and friends, to our quest, to curing cancer, to understanding the universe. I wish you all well. Cheers!

Quantum Information Meets Quantum Matter: Now Published!

Two things you should know about me are: (1) I have unbounded admiration for scientists who can actually finish writing a book, and (2) I’m a firm believer that exciting progress can be ignited when two fields fuse together. So I’m doubly thrilled that Quantum Information Meets Quantum Matter, by IQIM physicist Xie Chen and her colleagues Bei Zeng, Duan-Lu Zhou, and Xiao-Gang Wen, has now been published by Springer.

The authors kindly invited me to write a foreword for the book, which I was happy to contribute. That foreword is reproduced here, with the permission of the publisher.

Foreword

In 1989 I attended a workshop at the University of Minnesota. The organizers had hoped the workshop would spawn new ideas about the origin of high-temperature superconductivity, which had recently been discovered. But I was especially impressed by a talk about the fractional quantum Hall effect by a young physicist named Xiao-Gang Wen.

From Wen I heard for the first time about a concept called topological order. He explained that for some quantum phases of two-dimensional matter the ground state becomes degenerate when the system resides on a surface of nontrivial topology such as a torus, and that the degree of degeneracy provides a useful signature for distinguishing different phases. I was fascinated.

Up until then, studies of phases of matter and the transitions between them usually built on principles enunciated decades earlier by Lev Landau. Landau had emphasized the crucial role of symmetry, and of local order parameters that distinguish different symmetry realizations. Though much of what Wen said went over my head, I did manage to glean that he was proposing a way to distinguish quantum phases founded on principles quite different from Landau’s. As a particle physicist I deeply appreciated the power of Landau theory, but I was also keenly aware that the interface of topology and physics had already yielded many novel and fruitful insights.

Mulling over these ideas on the plane ride home, I scribbled a few lines of verse:

Now we are allowed
To disavow Landau.
Wow …

Without knowing where it might lead, one could sense the opening of a new chapter.

At around that same time, another new research direction was beginning to gather steam, the study of quantum information. Richard Feynman and Yuri Manin had suggested that a computer processing quantum information might perform tasks beyond the reach of ordinary digital computers. David Deutsch formalized the idea, which attracted the attention of computer scientists, and eventually led to Peter Shor’s discovery that a quantum computer can factor large numbers in polynomial time. Meanwhile, Alexander Holevo, Charles Bennett and others seized the opportunity to unify Claude Shannon’s information theory with quantum physics, erecting new schemes for quantifying quantum entanglement and characterizing processes in which quantum information is acquired, transmitted, and processed.

The discovery of Shor’s algorithm caused a burst of excitement and activity, but quantum information science remained outside the mainstream of physics, and few scientists at that time glimpsed the rich connections between quantum information and the study of quantum matter. One notable exception was Alexei Kitaev, who had two remarkable insights in the 1990s. He pointed out that finding the ground state energy of a quantum system defined by a “local” Hamiltonian, when suitably formalized, is as hard as any problem whose solution can be verified with a quantum computer. This idea launched the study of Hamiltonian complexity. Kitaev also discerned the relationship between Wen’s concept of topological order and the quantum error-correcting codes that can protect delicate quantum superpositions from the ravages of environmental decoherence. Kitaev’s notion of a topological quantum computer, a mere theorist’s fantasy when proposed in 1997, is by now pursued in experimental laboratories around the world (though the technology still has far to go before truly scalable quantum computers will be capable of addressing hard problems).

Thereafter progress accelerated, led by a burgeoning community of scientists working at the interface of quantum information and quantum matter. Guifre Vidal realized that many-particle quantum systems that are only slightly entangled can be succinctly described using tensor networks. This new method extended the reach of mean-field theory and provided an illuminating new perspective on the successes of the Density Matrix Renormalization Group (DMRG). By proving that the ground state of a local Hamiltonian with an energy gap has limited entanglement (the area law), Matthew Hastings showed that tensor network tools are widely applicable. These tools eventually led to a complete understanding of gapped quantum phases in one spatial dimension.

The experimental discovery of topological insulators focused attention on the interplay of symmetry and topology. The more general notion of a symmetry-protected topological (SPT) phase arose, in which a quantum system has an energy gap in the bulk but supports gapless excitations confined to its boundary which are protected by specified symmetries. (For topological insulators the symmetries are particle-number conservation and time-reversal invariance.) Again, tensor network methods proved to be well suited for establishing a complete classification of one-dimensional SPT phases, and guided progress toward understanding higher dimensions, though many open questions remain.

We now have a much deeper understanding of topological order than when I first heard about it from Wen nearly 30 years ago. A central new insight is that topologically ordered systems have long-range entanglement, and that the entanglement has universal properties, like topological entanglement entropy, which are insensitive to the microscopic details of the Hamiltonian. Indeed, topological order is an intrinsic property of a quantum state and can be identified without reference to any particular Hamiltonian at all. To understand the meaning of long-range entanglement, imagine a quantum computer which applies a sequence of geometrically local operations to an input quantum state, producing an output product state which is completely disentangled. If the time required to complete this disentangling computation is independent of the size of the system, then we say the input state is short-ranged entangled; otherwise it is long-range entangled. More generally (loosely speaking), two states are in different quantum phases if no constant-time quantum computation can convert one state to the other. This fundamental connection between quantum computation and quantum order has many ramifications which are explored in this book.

When is the right time for a book that summarizes the status of an ongoing research area? It’s a subtle question. The subject should be sufficiently mature that enduring concepts and results can be identified and clearly explained. If the pace of progress is sufficiently rapid, and the topics emphasized are not well chosen, then an ill-timed book might become obsolete quickly. On the other hand, the subject ought not to be too mature; only if there are many exciting open questions to attack will the book be likely to attract a sizable audience eager to master the material.

I feel confident that Quantum Information Meets Quantum Matter is appearing at an opportune time, and that the authors have made wise choices about what to include. They are world-class experts, and are themselves responsible for many of the scientific advances explained here. The student or senior scientist who studies this book closely will be well grounded in the tools and ideas at the forefront of current research at the confluence of quantum information science and quantum condensed matter physics.

Indeed, I expect that in the years ahead a steadily expanding community of scientists, including computer scientists, chemists, and high-energy physicists, will want to be well acquainted with the ideas at the heart of Quantum Information Meets Quantum Matter. In particular, growing evidence suggests that the quantum physics of spacetime itself is an emergent manifestation of long-range quantum entanglement in an underlying more fundamental quantum theory. More broadly, as quantum technology grows ever more sophisticated, I believe that the theoretical and experimental study of highly complex many-particle systems will be an increasingly central theme of 21st century physical science. If that’s true, Quantum Information Meets Quantum Matter is bound to hold an honored place on the bookshelves of many scientists for years to come.

John Preskill
Pasadena, California
September 2018

My QIP 2019 After-Dinner Speech

Scientists who work on theoretical aspects of quantum computation and information look forward each year to the Conference on Quantum Information Processing (QIP), an annual event since 1998. This year’s meeting, QIP 2019, was hosted this past week by the University of Colorado at Boulder. I attended and had a great time, as I always do.

But this year, in addition to catching up with old friends and talking with colleagues about the latest research advances, I also accepted a humbling assignment: I was the after-dinner speaker at the conference banquet. Here is (approximately) what I said.

QIP 2019 After-Dinner Speech
16 January 2019

Thanks, it’s a great honor to be here, and especially to be introduced by Graeme Smith, my former student. I’m very proud of your success, Graeme. Back in the day, who would have believed it?

And I’m especially glad to join you for these holiday festivities. You do know this is a holiday, don’t you? Yes, as we do every January, we are once again celebrating Gottesman’s birthday! Happy Birthday, Daniel!

Look, I’m kidding of course. Yes, it really is Daniel’s birthday — and I’m sure he appreciates 500 people celebrating in his honor — but I know you’re really here for QIP. We’ve been holding this annual celebration of Quantum Information Processing since 1998 — this is the 22nd QIP. If you are interested in the history of this conference, it’s very helpful that the QIP website includes links to the sites for all previous QIPs. I hope that continues; it conveys a sense of history. For each of those past meetings, you can see what people were talking about, who was there, what they looked like in the conference photo, etc.

Some of you were there the very first time – I was not. But among the attendees at the first QIP, in Aarhus in 1998, were a number of brilliant up-and-coming young scientists who have since then become luminaries of our field. Including: Dorit Aharonov, Wim van Dam, Peter Hoyer (who was an organizer), Michele Mosca, John Smolin, Barbara Terhal, and John Watrous. Also somewhat more senior people were there, like Harry Buhrman and Richard Cleve. And pioneers so eminent that we refer to them by their first names alone: Umesh … Gilles … Charlie. It’s nice to know those people are still around, but it validates the health of our field that so many new faces are here, that so many young people are still drawn to QIP, 21 years after it all began. Over 300 students and postdocs are here this year, among nearly 500 attendees.

QIP has changed since the early days. It was smaller and more informal then; the culture was more like a theoretical physics conference, where the organizing committee brainstorms and conjures up a list of invited speakers. The system changed in 2006, when for the first time there were submissions and a program committee. That more formal system opened up opportunities to speak to a broader community, and the quality of the accepted talks has stayed very high — only 18% of 349 submissions were accepted this year.

In fact it has become a badge of honor to speak here — people put it on their CVs: “I gave a QIP contributed talk, or plenary talk, or invited talk.” But what do you think is the highest honor that QIP can bestow? Well, it’s obvious isn’t it? It’s the after-dinner speech! That’s the talk to rule them all. So Graeme told me, when he invited me to do this. And I checked, Gottesman put it on his website, and everyone knows Daniel is a very serious guy. So it must be important. Look, we’re having a banquet in honor of his birthday, and he can hardly crack a smile!

I hear the snickers. I know what you’re thinking. “John, wake up. Don’t you see what Graeme was trying to tell you: You’re too washed up to get a talk accepted to QIP! This is the only way to get you on the program now!” But no, you’re wrong. Graeme told me this is a great honor. And I trust Graeme. He’s an honest man. What? Why are you laughing? It’s true.

I asked Graeme, what should I talk about? He said, “Well, you might try to be funny.” I said, “What do you mean funny? You mean funny Ha Ha? Or do you mean funny the way cheese smells when it’s been in the fridge for too long?” He said, “No I mean really, really funny. You know, like Scott.”

So there it was, the gauntlet had been thrown. Some of you are too young to remember this, but the most notorious QIP after-dinner speech of them all was Scott Aaronson’s in Paris in 2006. Were you there? He used props, and he skewered his more senior colleagues with razor sharp impressions. And remember, this was 2006, so everybody was Scott’s more senior colleague. He was 12 at the time, if memory serves.

He killed. Even I appreciated some of the jokes; for example, as a physicist I could understand this one: Scott said, “I don’t care about the fine structure constant, it’s just a constant.” Ba ding!  So Scott set the standard back then, and though many have aspired to clear the bar since then, few have come close.

But remember, this was Graeme I was talking to. And I guess many of you know that I’ve had a lot of students through the years, and I’m proud of all of them. But my memory isn’t what it once was; I need to use mnemonic tricks to keep track of them now. So I have a rating system;  I rate them according to how funny they are. And Graeme is practically off the chart, that’s how funny he is. But his is what I call stealth humor. You can’t always tell that he’s being funny, but you assume it.

So I said, “Graeme, What’s the secret? Teach me how to be funny.” I meant it sincerely, and he responded sympathetically. Graeme said, “Well, if you want to be funny, you have to believe you are funny. So when I want to be funny, I think of someone who is funny, and I pretend to be that person.” I said, “Aha, so you go out there and pretend to be Graeme Smith?” And Graeme said, “No, that wouldn’t work for me. I close my eyes and pretend I’m … John Smolin!” I said, “Graeme, you mean you want me to be indistinguishable from John Smolin to an audience of computationally bounded quantum adversaries?” He nodded. “But Graeme, I don’t know any plausible cryptographic assumptions under which that’s possible!”

Fortunately, I had another idea. “I write poems,” I said. “What if I recite a poem? This would set a great precedent. From now on, everyone would know: the QIP after-dinner speech will be a poetry slam!”

Graeme replied, “Well, that sounds [long pause] really [pause] boring. But how about a limerick? People love limericks.” I objected, “Graeme, I don’t do limericks. I’m not good at limericks.” But he wouldn’t back down. “Try a limerick,” Graeme said. “People like limericks. They’re so [pause] short.”

But I don’t do limericks. You see:

I was invited to speak here by Graeme.
He knows me well, just as I am.
He was really quite nice
When he gave this advice:
Please don’t do a poetry slam.

Well, like I said, I don’t do limericks.

So now I’m starting to wonder: Why did they invite me to do this anyway? And I think I figured that out. See, Graeme asked me to speak just a few days ago. This must be what happened. Like any smoothly functioning organizing committee, they lined up an after-dinner speaker months in advance, as is the usual practice.

But then, just a few days before the conference began, they began to worry. “We better comb through the speaker’s Twitter feed. Maybe, years ago, our speaker said something offensive, something disqualifying.” And guess what? They found something, something really bad. It turned out that the designated after-dinner speaker had once made a deeply offensive remark about something called “quantum supremacy” … No, wait … that can’t be it.

Can’t you picture the panicky meeting of the organizers? QIP is about to start, and there’s no after-dinner speaker! So they started throwing out suggestions, beginning with the usual suspects.

“How about Schroedinger’s Rat?”
“No, he’s booked.”
“Are you telling me Schroedinger’s Rat has another gig that same night?”
“No, no, I mean they booked him. A high-profile journal filed a complaint and he’s in the slammer.”
“Well, how about RogueQIPConference?”
“No, same problem.”

“I’ve got it,” someone says: “How about the hottest quantum Twitter account out there? Yes, I’m talking about Quantum Computing Memes for QMA-Complete Teens!”

Are you all following that account? You should be. That’s where I go for all the latest fast-breaking quantum news. And that’s where you can get advice about what a quantumist should wear on Halloween. Your costume should combine Sexy with your greatest fear. Right, I mean Sexy P = BQP.

Hey, does that worry you? That maybe P = BQP? Does it keep you up at night? It’s possible, isn’t it? But it doesn’t worry me much. If it turns out that P = BQP, I’m just going to make up another word. How about NISP? Noisy Intermediate-Scale Polynomial.

I guess they weren’t able to smoke out whoever is behind Quantum Computing Memes for QMA-Complete Teens. So here I am.

Aside from limericks, Graeme had another suggestion. He said, “You can reminisce. Tell us what QIP was like in the old days.” “The old days?” I said. “Yes, you know. You could be one of those stooped-over white-haired old men who tells interminable stories that nobody cares about.” I hesitated. “Yeah, I think I could do that.”

Okay, if that’s what you want, I’ll tell a story about my first QIP; that was QIP 2000, which was actually in Montreal in December 1999. It was back in the BPC era — Before Program Committee — and I was an invited speaker (I talked about decoding the toric code). Attending with me was Michael Nielsen, then a Caltech postdoc. Michael’s good friend Ike Chuang was also in the hotel, and they were in adjacent rooms. Both had brought laptops (not a given in 1999), and they wanted to share files. Well, hotels did not routinely offer Internet access back then, and certainly not wireless. But Ike had brought along a spool of Ethernet cable. So Ike and Mike both opened their windows, even though it was freezing cold. And Ike leaned out his window and made repeated attempts to toss the cable through Michael’s window before he finally succeeded, and they connected their computers.

I demanded to know: why the urgent need for a connection? And that was the day I found out what most of the rest of the quantum world already knew: Mike and Ike were writing a book! By then they were in the final stages of writing, after some four years of effort (they sent the final draft of the book off to Cambridge University Press the following June).

So, QIP really has changed. The Mike and Ike book is out now. And it’s no longer necessary to open your window on a frigid Montreal evening to share a file with your collaborator.

Boy, it was cold that week in Montreal. [How cold was it?] Well, we went to lunch one day during the conference, and were walking single file down a narrow sidewalk toward the restaurant, when Harry Buhrman, who was right behind me, said: “John, there’s an icicle on your backpack!” You see, I hadn’t screwed the cap all the way shut on my water bottle; water was leaking out of the bottle, soaking through the backpack, and immediately freezing on contact with the air; hence the icicle. And ever since then I’ve always been sure to screw my bottle cap shut tight. But over the years since then, lots of other things have spilled in my backpack just the same, and I’d love to tell you about that, but …

Well, my stories may be too lacking in drama to carry the evening…. Look, I don’t care what Graeme says, I’m gonna recite some poems!

I can’t remember how this got started, but some years ago I started writing a poem whenever I needed to introduce a speaker at the Caltech physics colloquium. I don’t do this so much anymore. Partly because I realized that my poetry might reveal my disturbing innermost thoughts, which are best kept private.

Actually, one of my colleagues, after hearing one of my poems, suggested throwing the poem into a black hole. And when we tried it … boom … it bounced right back, but in a highly scrambled form! And ever since then I’ve had that excuse. If someone says “That’s not such a great poem,” I can shoot back, “Yeah, but it was better before it got scrambled.”

But anyway, here’s one I wrote to honor Ben Schumacher, the pioneer of quantum information theory who named the qubit, and whose compression theorem you all know well.

Ben.
He rocks.
I remember
When
He showed me how to fit
A qubit
In a small box.

I wonder how it feels
To be compressed.
And then to pass
A fidelity test.
Or does it feel
At all, and if it does
Would I squeal
Or be just as I was?

If not undone
I’d become as I’d begun
And write a memorandum
On being random.
Had it felt like a belt
Of rum?

And might it be predicted
That I’d become addicted,
Longing for my session
Of compression?

I’d crawl
To Ben again.
And call,
“Put down your pen!
Don’t stall!
Make me small!”

[Silence]

Yeah that’s the response I usually get when I recite this poem — embarrassed silence, followed by a few nervous titters.

So, as you can see, as in Ben Schumacher’s case, I use poetry to acknowledge our debt to the guiding intellects of our discipline. It doesn’t always work, though. I once tried to write a poem about someone I admire very much, Daniel Gottesman, and it started like this:

When the weather’s hottest, then
I call for Daniel Gottesman.
My apples are less spotted when
Daniel eats the rottenest ten …

It just wasn’t working, so I stopped there. Someday, I’ll go back and finish it. But it’s tough to rhyme “Gottesman.”

More apropos of QIP, some of you may recall that about 12 years ago, one of the hot topics was quantum speedups for formula evaluation, a subject ignited by a brilliant paper by Eddie Farhi, Jeffrey Goldstone, and Sam Gutmann. They showed there’s a polynomial speedup if we use a quantum computer to, say, determine whether a two-player game has a winning strategy. That breakthrough inspired me to write an homage to Eddie, which went:

We’re very sorry, Eddie Farhi
Your algorithm’s quantum.
Can’t run it on those mean machines
Until we’ve actually got ‘em.

You’re not alone, so go on home,
Tell Jeffrey and tell Sam:
Come up with something classical
Or else it’s just a scam.

Unless … you think it’s on the brink
A quantum-cal device.
That solves a game and brings you fame.
Damn! That would be nice!

Now, one thing that Graeme explained to me is that the white-haired-old-man talk has a mandatory feature: It must go on too long. Maybe I have met that criterion by now. Except …

There’s one thing Graeme neglected to say. He never told me that I must not sing at QIP.

You see, there’s a problem: Tragically, though I like to sing, I don’t sing very well at all. And unfortunately, I am totally unaware of this fact. So sometimes I sing in public, despite strongly worded advice not to do so.

When I was about to leave home on my way to QIP, my wife Roberta asked me, “When are you going to prepare your after-dinner talk?” I said, “Well, I guess I’ll work on it on the plane.” She said, “LA to Denver, that’s not a long enough flight.” I said, “I know!”

What I didn’t say is that I was thinking of singing a song. If I had, Roberta would have tried to stop me from boarding the plane.

So I guess it’s up to you. What do you think? Should we stop here while I’m (sort of) ahead, or should we take the plunge? Song or no song? How many say song?

All right, that’s good enough for me! This is a song that I usually perform in front of a full orchestra, and I hoped the Denver Symphony Orchestra would be here to back me up. But it turns out they don’t exist anymore. So I’ll just have to do my best.

If you are a fan of Rodgers and Hammerstein, you’ll recognize the tune as a butchered version of “Some Enchanted Evening,” but the lyrics have changed. This song is called “One Entangled Evening.”

One entangled evening
We will see a qubit
And another qubit
Across a crowded lab.

And somehow we’ll know
We’ll know even then
This qubit’s entangled
Aligned with its friend.

One entangled evening
We’ll cool down a circuit
See if we can work it
At twenty milli-K.

A circuit that cold
Is worth more than gold
For qubits within it
Will do as they’re told.

Quantum’s inviting, just as Feynman knew.
The future’s exciting, if we see it through.

One entangled evening
Anyons will be braiding
And thereby evading
The noise that haunts the lab.

Then our quantum goods
Will work as they should
Solving the problems
No old gadget could!

Once we have dreamt it, we can make it so.
Once we have dreamt it, we can make it so!

The song lyrics are meant to be uplifting, and I admit they’re corny. No one can promise you that, in the words of another song, “the dreams that you dare to dream really do come true.” That’s not always the case.

At this time in the field of quantum information processing, there are very big dreams, and many of us worry about unrealistic expectations concerning the time scale for quantum computing to have a transformative impact on society. Progress will be incremental. New technology does not change the world all at once; it’s a gradual process.

But I do feel that from the perspective of the broad sweep of history, we (the QIP community and the broader quantum community) are very privileged to be working in this field at a pivotal time in the history of science and technology on earth. We should deeply cherish that good fortune, and the opportunities it affords. I’m confident that great discoveries lie ahead for us.

It’s been a great privilege for me to be a part of a thriving quantum community for more than 20 years. By now, QIP has become one of our venerable traditions, and I hope it continues to flourish for many years ahead. Now it’s up to all of you to make our quantum dreams come true. We are on a great intellectual adventure. Let’s savor it and enjoy it to the hilt!

Thanks for putting up with me tonight.

[And here’s proof that I really did sing.]