Has quantum advantage been achieved?

Recently, I gave a couple of perspective talks on quantum advantage, one at the annual retreat of the CIQC and one at a recent KITP programme. I started off by polling the audience on who believed quantum advantage had been achieved. Just this one, simple question.

The audience was mostly experimental and theoretical physicists, with a few CS theory folks sprinkled in. I was sure that these audiences would be overwhelmingly convinced of the successful demonstration of quantum advantage. After all, more than half a decade has passed since the first experimental claim (G1) of “quantum supremacy”, the name the patron of this blog’s institute gave, back in 2012, to the idea “to perform tasks with controlled quantum systems going beyond what can be achieved with ordinary digital computers” (Preskill, p. 2). Yes, this first experiment by the Google team may have been simulated classically in the meantime, but it was only the first in an impressive series of similar demonstrations that became bigger and better with every passing year. Surely, so I thought, a significant part of my audiences would have been convinced of quantum advantage even before Google’s claim, when so-called quantum simulation experiments claimed to have performed computations that no classical computer could do (e.g., (qSim)).

I could not have been more wrong.

In both talks, less than half of the people in the audience thought that quantum advantage had been achieved.

In the discussions that ensued, I came to understand what folks criticized about the experiments that have been performed and even the concept of quantum advantage to begin with. But more on that later. Most of all, it seemed to me, the community had dismissed Google’s advantage claim because of the classical simulation shortly after. It hadn’t quite kept track of all the advances—theoretical and experimental—since then.

In a mini-series of three posts, I want to remedy this and convince you that the existing quantum computers can perform tasks that no classical computer can do. Let me caution, though, that the experiments I am going to talk about solve a (nearly) useless task. Nothing of what I say implies that you should (yet) be worried about your bank accounts.

I will start off by recapping what quantum advantage is and how it has been demonstrated in a set of experiments over the past few years.

Part 1: What is quantum advantage and what has been done?

To state the obvious: we are now fairly convinced that noiseless quantum computers would be able to solve problems efficiently that no classical computer could solve. In fact, we have been convinced of this since the mid-1990s, when Lloyd and Shor discovered two basic quantum algorithms: simulating quantum systems and factoring large numbers. For both of these tasks, we are as certain as we can be that no classical computer can solve them efficiently. So why talk about quantum advantage 20 and 30 years later?

The idea of a quantum advantage demonstration—even on a completely useless task—emerged as a milestone for the field in the 2010s. Achieving quantum advantage would finally demonstrate that quantum computing was not just a random idea of a bunch of academics who took quantum mechanics too seriously. It would show that quantum speedups are real: we can actually build quantum devices, control their states and the noise in them, and use them to solve tasks that not even the largest classical supercomputers can handle—and those machines are very large indeed.

What is quantum advantage?

But what exactly do we mean by “quantum advantage”? It is a vague concept, for sure. But a demonstration should certainly satisfy some essential criteria, probably including the following.

  1. The quantum device needs to solve a pre-specified computational task. This means that there needs to be an input to the quantum computer. Given the input, the quantum computer must then be programmed to solve the task for the given input. This may sound trivial. But it is crucial because it delineates programmable computing devices from just experiments on any odd physical system.
  2. There must be a scaling difference in the time it takes for a quantum computer to solve the task and the time it takes for a classical computer. As we make the problem or input size larger, the difference between the quantum and classical solution times should increase disproportionately, ideally exponentially.
  3. And finally: the actual task solved by the quantum computer should not be solvable by any classical machine (at the time).

Achieving this last criterion using imperfect, noisy quantum devices is the challenge that the idea of quantum supremacy set for the field. After all, running any of our favourite quantum algorithms in a classically hard regime on these devices is completely out of the question. They are too small and too noisy. So the field had to come up with the smallest and most noise-robust quantum algorithm conceivable that still has a significant scaling advantage over classical computation.

Random circuits are really hard to simulate!

The idea is simple: we just run a random computation, constructed in a way that is as favorable as we can make it to the quantum device while being as hard as possible classically. This may strike you as a pretty unfair way to come up with a computational task—it is built purely to be hard for classical computers, without any other purpose. But it is a fine computational task. There is an input: the description of the quantum circuit, drawn randomly. The device needs to be programmed to run this exact circuit. And there is a task: just return whatever this quantum computation would return. The outputs are strings of 0s and 1s drawn from a certain distribution. Getting the distribution of the strings right for a given input circuit is the computational task.

This task, dubbed random circuit sampling, can be solved on a classical as well as a quantum computer, but there is a (presumably) exponential advantage for the quantum computer. More on that in Part 2.

For now, let me tell you about the experimental demonstrations of random circuit sampling. Allow me to be slightly more formal. The task solved in random circuit sampling is to produce bit strings $x \in \{0,1\}^n$ distributed according to the Born-rule outcome distribution

$p_C(x) = | \bra{x} C \ket{0} |^2$

of a sequence of elementary quantum operations (unitary rotations of one or two qubits at a time) which is drawn randomly according to certain rules. This circuit $C$ is applied to a reference state $\ket{0}$ on the quantum computer and then measured, giving the string $x$ as an outcome.
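
To make the task concrete, here is a minimal brute-force sketch in Python (my own toy illustration, not any team’s actual code; the gate set and the brickwork layout are assumptions for simplicity, and a statevector like this only works for a handful of qubits, nowhere near the experimental regime):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def haar_random_1q():
    """Draw a Haar-random single-qubit unitary (QR-decomposition trick)."""
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_1q(state, u, qubit, n):
    """Apply a single-qubit gate u to one qubit of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(u, psi, axes=([1], [qubit]))
    return np.moveaxis(psi, 0, qubit).reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    psi = state.reshape([2] * n).copy()
    index = [slice(None)] * n
    index[q1], index[q2] = 1, 1
    psi[tuple(index)] *= -1.0
    return psi.reshape(-1)

def born_distribution(n, depth):
    """Return p_C(x) = |<x|C|0>|^2 for one randomly drawn brickwork circuit C."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                  # reference state |0...0>
    for layer in range(depth):
        for q in range(n):                          # random single-qubit rotations
            state = apply_1q(state, haar_random_1q(), q, n)
        for q in range(layer % 2, n - 1, 2):        # entangling gates, brick pattern
            state = apply_cz(state, q, q + 1, n)
    return np.abs(state) ** 2

n, depth = 5, 8
p = born_distribution(n, depth)
p /= p.sum()                                        # guard against floating-point drift
samples = rng.choice(2 ** n, size=10, p=p)          # "solving" the sampling task exactly
print(samples)                                      # bit strings encoded as integers
```

The whole point of the task, of course, is that this brute-force enumeration of $p_C$ becomes impossible at 50 to 100 qubits, for anyone.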

The breakthrough: classically hard programmable quantum computations in the real world

In the first quantum supremacy experiment (G1) by the Google team, the quantum computer was built from 53 superconducting qubits arranged in a 2D grid. The operations were randomly chosen simple one-qubit gates ($\sqrt{X}$, $\sqrt{Y}$, $\sqrt{X+Y}$) and deterministic two-qubit gates called fSim applied in the 2D pattern, and repeated a certain number of times (the depth of the circuit). The limiting factor in these experiments was the quality of the two-qubit gates and the measurements, with error probabilities around 0.6 % and 4 %, respectively.

A very similar experiment was performed by the USTC team on 56 qubits (U1) and both experiments were repeated with better fidelities (0.4 % and 1 % for two-qubit gates and measurements) and slightly larger system sizes (70 and 83 qubits, respectively) in the past two years (G2,U2).

Using a trapped-ion architecture, the Quantinuum team also demonstrated random circuit sampling on 56 qubits, but with arbitrary connectivity (random regular graphs) (Q). There, the two-qubit gates were $\pi/2$-rotations around $Z \otimes Z$, the single-qubit gates were uniformly random, and the error rates were much better (0.15 % for both two-qubit gate and measurement errors).

All the experiments ran random circuits on varying system sizes and circuit depths, and collected thousands to millions of samples from a few random circuits at a given size. To benchmark the quality of the samples, the now widely accepted figure of merit is the linear cross-entropy benchmark (XEB), defined as

$\chi = 2^n \, \mathbb{E}_C \, \mathbb{E}_x \, p_C(x) - 1,$

for an $n$-qubit circuit. The expectation over $C$ is over the random choice of circuit, and the expectation over $x$ is over the experimental distribution of the bit strings. In other words, to compute the XEB given a list of samples, you ‘just’ need to compute the ideal probability of obtaining each sample from the circuit $C$ and average those probabilities.

The XEB is nice because it gives 1 for ideal samples from sufficiently random circuits and 0 for uniformly random samples, and it can be estimated accurately from just a few samples. Under the right conditions, it turns out to be a good proxy for the many-body fidelity of the quantum state prepared just before the measurement.
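
Continuing the toy sketch above (again my own illustration; it reuses the `n`, `p`, and `rng` defined there), estimating the XEB for a single circuit from a list of samples looks like this:

```python
def xeb(ideal_probs, sample_indices):
    """Linear cross-entropy benchmark for one circuit:
    2^n * (mean ideal probability of the observed strings) - 1."""
    n_qubits = int(np.log2(len(ideal_probs)))
    return 2 ** n_qubits * np.mean(ideal_probs[sample_indices]) - 1

# Ideal samples from a sufficiently random circuit give an XEB close to 1;
# uniformly random strings give an XEB close to 0.
ideal_samples = rng.choice(2 ** n, size=5000, p=p)
uniform_samples = rng.integers(2 ** n, size=5000)
print(xeb(p, ideal_samples), xeb(p, uniform_samples))
```

The catch, to which I return below, is that at the experimental sizes nobody can compute `ideal_probs` in the first place.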

This tells us that we should expect an XEB score of $(1-\text{error per gate})^{\#\text{ gates}} \sim c^{-nd}$ for some noise- and architecture-dependent constant $c$. All of the experiments achieved a value of the XEB that was significantly (in the statistical sense) far away from 0, as you can see in the plot below. This shows that something nontrivial is going on in the experiments, because the fidelity we expect for a maximally mixed or random state is $2^{-n}$, which is less than $10^{-14}$ % for all the experiments.
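
To get a feel for the numbers (a back-of-the-envelope estimate of my own; the assumed count of 400 two-qubit gates is illustrative, not a figure taken from the papers), the error rates quoted above already push the expected XEB down to the percent level for a 53-qubit circuit:

$(1 - 0.006)^{400} \times (1 - 0.04)^{53} \approx e^{-2.4} \times e^{-2.1} \approx 1\,\%,$

and single-qubit-gate errors push it further down, toward the 0.1 %-scale scores the experiments actually report.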

The complexity of simulating these experiments is roughly governed by an exponential in either the number of qubits or the maximum bipartite entanglement generated. Figure 5 of the Quantinuum paper has a nice comparison.

It is not easy to say how much leverage an XEB significantly lower than 1 gives a classical spoofer. But one can certainly exploit the lower target score, for instance by judiciously changing the circuit a tiny bit to make it easier to simulate.

Even then, reproducing the experiments’ low scores of between 0.05 % and 0.2 % is extremely hard on classical computers. To the best of my knowledge, producing samples that match the experimental XEB score has only been achieved for the first experiment from 2019 (PCZ). This simulation already exploited the relatively low XEB score to simplify the computation, but even for the slightly larger 56-qubit experiments these techniques may no longer be feasible to run. So, to the best of my knowledge, the only one of the experiments that may actually have been simulated is the 2019 experiment by the Google team.

If there are better methods, or computers, or more willingness to spend money on simulating random circuits today, though, I would be very excited to hear about it!

Proxy of a proxy of a benchmark

Now, you may be wondering: “How do you even compute the XEB or fidelity in a quantum advantage experiment in the first place? Doesn’t it require computing outcome probabilities of the supposedly hard quantum circuits?” And that is indeed a very good question. After all, the quantum advantage of random circuit sampling is based on the hardness of computing these probabilities. This is why, to get an estimate of the XEB in the advantage regime, the experiments needed to use proxies and extrapolation from classically tractable regimes.

This will be important for Part 2 of this series, where I will discuss the evidence we have for quantum advantage, so let me give you some more detail. To extrapolate, one can run smaller circuits of increasing sizes and extrapolate to the size in the advantage regime. Alternatively, one can run circuits with the same number of gates but with added structure that makes them classically simulatable, and extrapolate to the advantage circuits. Either way, extrapolation is based on samples from experiments other than the quantum advantage experiments themselves. All of the experiments did this.

A separate estimate of the XEB score is based on proxies. An XEB proxy uses the samples from the advantage experiments, but computes a different quantity than the XEB: one that can actually be computed, and for which one can collect independent numerical and theoretical evidence that it matches the XEB in the relevant regime. For example, the Google experiments averaged outcome probabilities of modified circuits that were related to the true circuits but easier to simulate.

The Quantinuum experiment did something entirely different, which is to estimate the fidelity of the advantage experiment by inverting the circuit on the quantum computer and measuring the probability of coming back to the initial state.

All of the methods used to estimate the XEB of the quantum advantage experiments required some independent verification based on numerics on smaller sizes and induction to larger sizes, as well as theoretical arguments.

In the end, the advantage claims are thus based on a proxy of a proxy of the quantum fidelity. This is not to say that the advantage claims do not hold. In fact, I will argue in my next post that this is just the way science works. I will also tell you more about the evidence that the experiments I described here actually demonstrate quantum advantage and discuss some skeptical arguments.


Let me close this first post with a few notes.

In describing the quantum supremacy experiments, I focused on random circuit sampling, which is run on programmable digital quantum computers. What I neglected to talk about is boson sampling and Gaussian boson sampling, which are run on photonic devices and have also been experimentally demonstrated. The reason is that I think random circuits are conceptually cleaner: they are run on processors that are, in principle, capable of running an arbitrary quantum computation, while the photonic devices used in boson sampling are much more limited and bear more resemblance to analog simulators.

I want to continue my poll here, so feel free to write in the comments whether or not you believe that quantum advantage has been demonstrated (by these experiments) and if not, why.

References

[G1] Arute, F. et al. Quantum supremacy using a programmable superconducting processor. Nature 574, 505–510 (2019).

[Preskill] Preskill, J. Quantum computing and the entanglement frontier. arXiv:1203.5813 (2012).

[qSim] Choi, J. et al. Exploring the many-body localization transition in two dimensions. Science 352, 1547–1552 (2016).

[U1] Wu, Y. et al. Strong Quantum Computational Advantage Using a Superconducting Quantum Processor. Phys. Rev. Lett. 127, 180501 (2021).

[G2] Morvan, A. et al. Phase transitions in random circuit sampling. Nature 634, 328–333 (2024).

[U2] Gao, D. et al. Establishing a New Benchmark in Quantum Computational Advantage with 105-qubit Zuchongzhi 3.0 Processor. Phys. Rev. Lett. 134, 090601 (2025).

[Q] DeCross, M. et al. Computational Power of Random Quantum Circuits in Arbitrary Geometries. Phys. Rev. X 15, 021052 (2025).

[PCZ] Pan, F., Chen, K. & Zhang, P. Solving the sampling problem of the Sycamore quantum circuits. Phys. Rev. Lett. 129, 090502 (2022).

Nicole’s guide to interviewing for faculty positions

Snow is haunting weather forecasts, homeowners are taking down Christmas lights, stores are discounting exercise equipment, and faculty-hiring committees are winnowing down applications. In-person interviews often take place between January and March but can extend from December to April. If you applied for faculty positions this past fall and you haven’t begun preparing for interviews, begin. This blog post relates my advice about in-person interviews. It most directly addresses assistant professorships in theoretical physics at R1 North American universities, but the advice generalizes to other contexts.

Top takeaway: Your interviewers aim to confirm that they’ll enjoy having you as a colleague. They’ll want to take pleasure in discussing a colloquium with you over coffee, consult you about your area of expertise, take pride in your research achievements, and understand you even if your specialty differs from theirs. You delight in learning and sharing about physics, right? Focus on that delight, and let it shine.

Anatomy of an interview: The typical interview lasts for one or two days. Expect each day to begin between 8:00 and 10:00 AM and to end between 7:00 and 8:30 PM. Yes, you’re justified in feeling exhausted just thinking about such a day. Everyone realizes that faculty interviews are draining, including the people who’ve packed your schedule. But fear not, even if you’re an introvert horrified at the thought of talking for 12 hours straight! Below, I share tips for maintaining your energy level. Your interview will probably involve many of the following components:

  • One-on-one meetings with faculty members: Vide infra for details and advice.
  • A meeting with students: Such meetings often happen over lunch or coffee.
  • Scientific talk: Vide infra.
  • Chalk talk: Vide infra.
  • Dinner: Faculty members will typically take you out to dinner. However, as an undergrad, I once joined a student dinner with a faculty candidate. Expect dinner to last a couple of hours, ending between 8:00 and 8:30 PM.
  • Breakfast: Interviews rarely extend to breakfast, in my experience. But I once underwent an interview whose itinerary was so packed, a faculty member squeezed himself onto the schedule by coming to my hotel’s restaurant for banana bread and yogurt.

After receiving the interview invitation, politely request that your schedule include breaks. First, of course, you’ll thank the search-committee chair (who probably issued the invitation), convey your enthusiasm, and opine about possible interview dates. After accomplishing those tasks, as a candidate, I asked that a 5-to-10-minute break separate consecutive meetings and that 30–45 minutes of quiet time precede my talk (or talks). Why? For two reasons.

First, the search committee was preparing to pack my interview day (or days) to the gills. I’d have to talk for about twelve hours straight. And—much as I adore the physics community, adore learning about physics from colleagues, and adore sharing physics—I’m an introvert. Such a schedule exhausts me. It would probably exhaust all but the world champions of extroversion, and few physicists could even qualify for that competition. After nearly every meeting, I’d find a bathroom, close my eyes, and breathe. (I might also peek at my notes about my next interviewee; vide infra.) The alone time replenished my energy.

Second, committees often schedule interviews back to back. Consecutive interviews might take place in different buildings, though, and walking between buildings doesn’t take zero minutes. Also, physicists love explaining their research. Interviewer #1 might therefore run ten minutes over their allotted time before realizing they had to shepherd me to another building in zero minutes. My lateness would disrespect Interviewer #2. Furthermore, many interviews last only 30 minutes each. Given $30 - 10 - (\gtrsim 0) \approx 15$ minutes, Interviewer #2 and I could scarcely make each other’s acquaintance. So I smuggled travel time into my schedule.

Feel awkward about requesting breaks? Don’t worry; everyone knows that interview days are draining. Explain honestly, simply, and respectfully that you’re excited about meeting everyone and that breaks will keep you energized throughout the long day.

Research your interviewers: A week before your interview, the hiring committee should have begun drafting a schedule for you. The schedule might continue to evolve until—and during—your interview. But request the schedule a week in advance, and research everyone on it.

When preparing for an interview, I’d create a Word/Pages document with one page per person. On Interviewer X’s page, I’d list relevant information culled from their research-group website, university faculty pages, arXiv page, and Google Scholar page. Does X undertake theoretical or experimental research? Which department do they belong to? Which experimental platform/mathematical toolkit do they specialize in? Which of their interests overlap with which of mine? Which papers of theirs intrigue me most? Could any of their insights inform my research or vice versa? Do we share any coauthors who might signal shared research goals? I aimed to be able to guide a conversation that both X and I would enjoy and benefit from.

Ask your advisors if they know anybody on your schedule or in the department you’re visiting. Advisors know and can contextualize many of their peers. For example, perhaps X grew famous for discovery Y, founded subfield Z, or harbors a covert affection for the foundations of quantum physics. An advisor of yours might even have roomed with X in college.

Prepare an elevator pitch for your research program: Cross my heart and hope to die, the following happened to me when I visited another institution (although not to interview). My host and I stepped into an elevator occupied by another faculty member. Our conversation could have served as the poster child for the term “elevator pitch”:

Host: Hi, Other Faculty Member; good to see you. By the way, this is Nicole from Maryland. She’s giving the talk today.

Other Faculty Member: Ah, good to meet you, Nicole. What do you work on?

Be able to answer that question—to synopsize your research program—before leaving the elevator. Feel free to start with your subfield: artificial active matter, the many-body physics of quantum information, dark-matter detection, etc. But the subfield doesn’t suffice. Oodles of bright-eyed, bushy-tailed young people study the many-body physics of quantum information. How does your research stand out? Do you apply a unique toolkit? Are you pursuing a unique goal? Can you couple together more qubits than any other experimentalist using the same platform? Make Other Faculty Member think, Ah. I’d like to attend that talk.

Dress neatly and academically: Interview clothing should demonstrate respect, while showing that you understand the department’s culture and belong there. Almost no North American physicists wear ties, even to present colloquia, so I advise against ties. Nor do I recommend suits. 

To those presenting as male, I’d recommend slacks; a button-down shirt; dark shoes (neither sneakers nor patent leather); and a corduroy or knit pullover, a corduroy or knit vest, or a sports jacket. If you prefer a skirt or dress, I’d recommend that it reach at least your knees. Wear comfortable shoes; you’ll stand and walk a great deal. Besides, many interviews take place during the winter, a season replete with snow and mud. I wore knee-height black leather boots that had short, thick heels.

Look the part. Act the part. Help your interviewers envision you in the position you want.

Pack snacks: A student group might whisk you off to lunch at 11:45, but dinner might not begin until 6:30. Don’t let your blood-sugar level drop too low. On my interview days, I packed apple slices and nuts: a balance of unprocessed sugar, protein, and fat.

One-on-one meetings: The hiring committee will cram these into your schedule like sardines into a tin. Typically, you’ll meet with each faculty member for approximately 30 minutes. The faculty member might work in your area of expertise, might belong to the committee (and so might subscribe to a random area of expertise), or might simply be curious about you. Prepare for these one-on-one meetings in advance, as described above. Review your notes on the morning of your interview. Be able to initiate and sustain a conversation of interest to you and your interlocutor, as well as to follow their lead. Your interlocutor might want to share their research, ask technical questions about your work, or hear a bird’s-eye overview of your research program. 

Other topics, such as teaching and faculty housing, might crop up. Feel free to address these subjects if your interlocutor introduces them. If you’re directing the conversation, though, I’d focus mostly on physics. You can ask about housing and other logistics if you receive an offer, and these topics often arise at faculty dinners.

The job talk: The interview will center on a scientific talk. You might present a seminar (perhaps billed as a “special seminar”) or a colloquium. The department will likely invite all its members to attend. Focus mostly on the research you’ve accomplished. Motivate your research program, to excite even attendees from outside your field. (This blog post describes what I look for in a research program when evaluating applications.) But also demonstrate your technical muscle; show how your problems qualify as difficult and how you’ve innovated solutions. Hammer home your research’s depth, but also dedicate a few minutes to its breadth, to demonstrate your research maturity. At the end, offer a glimpse of your research plans. The hiring committee might ask you to dwell more on those in a chalk talk (vide infra). 

Practice your talk alone many times, practice in front of an audience, revise the talk, practice it alone again many times, and practice it in front of another audience. And then—you guessed it—practice the talk again. Enlist listeners from multiple subfields of physics, including yours. Also, enlist grad students, postdocs, and faculty members. Different listeners can help ensure that you’re explaining concepts understandably, that you’ve brushed up on the technicalities, and that you’re motivating your research convincingly.

A faculty member once offered the following advice about questions asked during job talks: if you don’t know an answer, you can offer to look it up after the talk. But you can play this “get out of jail free” card only once. I’ll expand on the advice: if you promise to look up an answer, then follow through, and email the answer to the inquirer. Also, even if you don’t know an answer, you can answer a related question that’ll satisfy the inquirer partially. For example, suppose someone asks whether a particular experiment supports a prediction you’ve made. Maybe you haven’t checked—but maybe you have checked numerical simulations of similar experiments.

The chalk talk: The hiring committee might or might not request a chalk talk. I have the impression that experimentalists receive the request more than theorists do. Still, I presented a couple of chalk talks as a theorist. Only the hiring committee, or at least only faculty members, will attend such a talk. They’ll probably have attended your scientific talk, so don’t repeat much of it. 

The name “chalk talk” can deceive us in two ways. First, one committee requested that I prepare slides for my chalk talk. Another committee did limit me to chalk, though. Second, the chalk “talk” may end up a conversation, rather than a presentation.

The hiring-committee chair should stipulate in advance what they want from your chalk talk. If they don’t, ask for clarification. Common elements include the following:

  • Describe the research program you’ll pursue over the next five years.
  • Where will you apply for funding? Offer greater detail than “the NSF”: under which NSF programs does your research fall? Which types of NSF grants will you apply for at which times?
  • How will you grow your group? How many undergrads, master’s students, PhD students, and postdocs will you hire during each of the next five years? When will your group reach a steady state? How will the steady state look?
  • Describe the research project you’ll give your first PhD/master’s/undergraduate student.
  • What do you need in a startup package? (A startup package consists of university-sourced funding. It enables you to hire personnel, buy equipment, and pay other expenses before landing your first grants.)
  • Which experimental/computational equipment will you purchase first? How much will it cost?
  • Which courses do you want to teach? Identify undergraduate courses, core graduate-level courses, and one or two specialized seminars.

Sample interview questions: Sketch your answers to the following questions in bullet points. Writing the answers out will ensure that you think through them and will help you remember them. Using bullet points will help you pinpoint takeaways.

  • The questions under “The chalk talk”
  • What sort of research do you do?
  • What are you most excited about?
  • Where do you think your field is headed? How will it look in five, ten, or twenty years?
  • Which paper are you proudest of?
  • How will you distinguish your research program from your prior supervisors’ programs?
  • Do you envision opportunities for theory–experiment collaborations?
  • What teaching experience do you have? (Research mentorship counts as teaching. Some public outreach can count, too.)
  • Which mathematical tools do you use most?
  • How do you see yourself fitting into the department? (Does the department host an institute for your subfield? Does the institute have oodles of theorists whom you’ll counterbalance as an experimentalist? Will you bridge multiple research groups through your interdisciplinary work? Will you anchor a new research group that the department plans to build over the next decade?)

Own your achievements, but don’t boast: At a workshop late in my PhD, I heard a professor describe her career. She didn’t color her accomplishments artificially; she didn’t sound arrogant; she didn’t even sound as though she aimed to impress her audience. She sounded as though the workshop organizer had tasked her with describing her work and she was following his instructions straightforwardly, honestly, and simply. Her achievements spoke for themselves. They might as well have been reciting Shakespeare, they so impressed me. Perhaps we early-career researchers need another few decades before we can hope to emulate that professor’s poise and grace. But when compelled to describe what I’ve done, I lift my gaze mentally to her.

My schooling imprinted on me an appreciation for modesty. Therefore, the need to own my work publicly used to trouble me. But your interviewers need to know of your achievements: they need to respect you, to see that you deserve a position in their department. Don’t downplay your contributions to collaborations, and don’t shy away from claiming your proofs. But don’t brag or downplay your collaborators’ contributions. Describe your work straightforwardly; let it speak for itself.

Evaluators shouldn’t ask about your family: Their decision mustn’t depend on whether you’re a single adult who can move at the drop of a hat, whether you’re engaged to someone who’ll have to approve the move, or whether you have three children rooted in their school district. This webpage elaborates on the US’s anti-discrimination policy. What if an evaluator asks a forbidden question? One faculty member has recommended the response, “Does the position depend on that information?”

Follow up: Thank each of your interviewers individually, via email, within 24 hours of the conversation. Time is to faculty members as water is to Californians during wildfire season. As an interviewee, I felt grateful to all the faculty who dedicated time to me. (I mailed hand-written thank-you cards in addition to writing emails, but I’d expect almost nobody else to do that.)

How did I compose thank-you messages? I’d learned some nugget from every meeting, and I’d enjoyed some element of almost every meeting. I described what I learned and enjoyed, and I expressed the gratitude I felt.

Try to enjoy yourself: A committee chose your application from amongst hundreds. Cherish the compliment. Cherish the opportunity to talk physics with smart people. During my interviews, I learned about quantum information, thermodynamics, cosmology, biophysics,  and dark-matter detection. I connected with faculty members whom I still enjoy greeting at conferences; unknowingly recruited a PhD student into quantum thermodynamics during a job talk; and, for the first time, encountered a dessert shaped like sushi (at a faculty dinner. I stuck with a spicy tuna roll, but the dessert roll looked stunning). Retain an attitude of gratitude, and you won’t regret your visit.

Quantum computing in the second quantum century

On December 10, I gave a keynote address at the Q2B 2025 Conference in Silicon Valley. This is a transcript of my remarks. The slides I presented are here.

The first century

We are nearing the end of the International Year of Quantum Science and Technology, so designated to commemorate the 100th anniversary of the discovery of quantum mechanics in 1925. The story goes that 23-year-old Werner Heisenberg, seeking relief from severe hay fever, sailed to the remote North Sea Island of Helgoland, where a crucial insight led to his first, and notoriously obscure, paper describing the framework of quantum mechanics.

In the years following, that framework was clarified and extended by Heisenberg and others. Notable among them was Paul Dirac, who emphasized that we have a theory of almost everything that matters in everyday life. It’s the Schrödinger equation, which captures the quantum behavior of many electrons interacting electromagnetically with one another and with atomic nuclei. That describes everything in chemistry and materials science and all that is built on those foundations. But, as Dirac lamented, in general the equation is too complicated to solve for more than a few electrons.

Somehow, over 50 years passed before Richard Feynman proposed that if we want a machine to help us solve quantum problems, it should be a quantum machine, not a classical machine. The quest for such a machine, he observed, is “a wonderful problem because it doesn’t look so easy,” a statement that still rings true.

I was drawn into that quest about 30 years ago. It was an exciting time. Efficient quantum algorithms for the factoring and discrete log problems were discovered, followed rapidly by the first quantum error-correcting codes and the foundations of fault-tolerant quantum computing. By late 1996, it was firmly established that a noisy quantum computer could simulate an ideal quantum computer efficiently if the noise is not too strong or strongly correlated. Many of us were then convinced that powerful fault-tolerant quantum computers could eventually be built and operated.

Three decades later, as we enter the second century of quantum mechanics, how far have we come? Today’s quantum devices can perform some tasks beyond the reach of the most powerful existing conventional supercomputers. Error correction had for decades been a playground for theorists; now informative demonstrations are achievable on quantum platforms. And the world is investing heavily in advancing the technology further.

Current NISQ machines can perform quantum computations with thousands of two-qubit gates, enabling early explorations of highly entangled quantum matter, but still with limited commercial value. To unlock a wide variety of scientific and commercial applications, we need machines capable of performing billions or trillions of two-qubit gates. Quantum error correction is the way to get there.

I’ll highlight some notable developments over the past year—among many others I won’t have time to discuss. (1) We’re seeing intriguing quantum simulations of quantum dynamics in regimes that are arguably beyond the reach of classical simulations. (2) Atomic processors, both ion traps and neutral atoms in optical tweezers, are advancing impressively. (3) We’re acquiring a deeper appreciation of the advantages of nonlocal connectivity in fault-tolerant protocols. (4) And resource estimates for cryptanalytically relevant quantum algorithms have dropped sharply.

Quantum machines for science

A few years ago, I was not particularly excited about running applications on the quantum platforms that were then available; now I’m more interested. We have superconducting devices from IBM and Google with over 100 qubits and two-qubit error rates approaching 10^{-3}. The Quantinuum ion trap device has even better fidelity as well as higher connectivity. Neutral-atom processors have many qubits; they lag behind now in fidelity, but are improving.

Users face tradeoffs: The high connectivity and fidelity of ion traps is an advantage, but their clock speeds are orders of magnitude slower than for superconducting processors. That limits the number of times you can run a given circuit, and therefore the attainable statistical accuracy when estimating expectations of observables.

Verifiable quantum advantage

Much attention has been paid to sampling from the output of random quantum circuits, because this task is provably hard classically under reasonable assumptions. The trouble is that, in the high-complexity regime where a quantum computer can reach far beyond what classical computers can do, the accuracy of the quantum computation cannot be checked efficiently. Therefore, attention is now shifting toward verifiable quantum advantage — tasks where the answer can be checked. If we solved a factoring or discrete log problem, we could easily check the quantum computer’s output with a classical computation, but we’re not yet able to run these quantum algorithms in the classically hard regime. We might settle instead for quantum verification, meaning that we check the result by comparing two quantum computations and verifying the consistency of the results.

A type of classical verification of a quantum circuit was demonstrated recently by BlueQubit on a Quantinuum processor. In this scheme, a designer builds a family of so-called “peaked” quantum circuits such that, for each such circuit and for a specific input, one output string occurs with unusually high probability. An agent with a quantum computer who knows the circuit and the right input can easily identify the preferred output string by running the circuit a few times. But the quantum circuits are cleverly designed to hide the peaked output from a classical agent — one may argue heuristically that the classical agent, who has a description of the circuit and the right input, will find it hard to predict the preferred output. Thus quantum agents, but not classical agents, can convince the circuit designer that they have reliable quantum computers. This observation provides a convenient way to benchmark quantum computers that operate in the classically hard regime.

The notion of quantum verification was explored by the Google team using Willow. One can execute a quantum circuit acting on a specified input, and then measure a specified observable in the output. By repeating the procedure sufficiently many times, one obtains an accurate estimate of the expectation value of that output observable. This value can be checked by any other sufficiently capable quantum computer that runs the same circuit. If the circuit is strategically chosen, then the output value may be very sensitive to many-qubit interference phenomena, in which case one may argue heuristically that accurate estimation of that output observable is a hard task for classical computers. These experiments, too, provide a tool for validating quantum processors in the classically hard regime. The Google team even suggests that such experiments may have practical utility for inferring molecular structure from nuclear magnetic resonance data.
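
As a toy version of that consistency check (a sketch of my own, not Google’s analysis pipeline; the sample arrays below are purely hypothetical stand-ins for measurement records), one can compare two machines’ estimates of an output observable within their statistical error bars:

```python
import numpy as np

def expectation_and_error(bitstrings):
    """Estimate <Z_0> (Pauli Z on qubit 0) and its standard error from measured
    bit strings, given as a (shots, n_qubits) array of 0s and 1s."""
    z0 = 1.0 - 2.0 * bitstrings[:, 0]            # map bit 0/1 to eigenvalue +1/-1
    return z0.mean(), z0.std(ddof=1) / np.sqrt(len(z0))

def consistent(samples_a, samples_b, num_sigma=3.0):
    """Do two devices' estimates of <Z_0> agree within num_sigma standard errors?"""
    mean_a, err_a = expectation_and_error(samples_a)
    mean_b, err_b = expectation_and_error(samples_b)
    return abs(mean_a - mean_b) <= num_sigma * np.hypot(err_a, err_b)

# Hypothetical measurement records from two devices running the same circuit:
rng = np.random.default_rng(7)
device_a = rng.integers(0, 2, size=(100_000, 40))
device_b = rng.integers(0, 2, size=(100_000, 40))
print(consistent(device_a, device_b))
```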

Correlated fermions in two dimensions

Quantum simulations of fermionic systems are especially compelling, since electronic structure underlies chemistry and materials science. These systems can be hard to simulate in more than one dimension, particularly in parameter regimes where fermions are strongly correlated, or in other words profoundly entangled. The two-dimensional Fermi-Hubbard model is a simplified caricature of two-dimensional materials that exhibit high-temperature superconductivity and hence has been much studied in recent decades. Large-scale tensor-network simulations are reasonably successful at capturing static properties of this model, but the dynamical properties are more elusive.

Dynamics in the Fermi-Hubbard model has been simulated recently on both Quantinuum (here and here) and Google processors. Only a 6 x 6 lattice of electrons was simulated, but this is already well beyond the scope of exact classical simulation. Comparing (error-mitigated) quantum circuits with over 4000 two-qubit gates to heuristic classical tensor-network and Majorana path methods, discrepancies were noted, and the Phasecraft team argues that the quantum simulation results are more trustworthy. The Harvard group also simulated models of fermionic dynamics, but were limited to relatively low circuit depths due to atom loss. It’s encouraging that today’s quantum processors have reached this interesting two-dimensional strongly correlated regime, and with improved gate fidelity and noise mitigation we can go somewhat further, but expanding system size substantially in digital quantum simulation will require moving toward fault-tolerant implementations. We should also note that there are analog Fermi-Hubbard simulators with thousands of lattice sites, but digital simulators provide greater flexibility in the initial states we can prepare, the observables we can access, and the Hamiltonians we can reach.

When it comes to many-particle quantum simulation, a nagging question is: “Will AI eat quantum’s lunch?” There is surging interest in using classical artificial intelligence to solve quantum problems, and that seems promising. How will AI impact our quest for quantum advantage in this problem space? This question is part of a broader issue: classical methods for quantum chemistry and materials have been improving rapidly, largely because of better algorithms, not just greater processing power. But for now classical AI applied to strongly correlated matter is hampered by a paucity of training data.  Data from quantum experiments and simulations will likely enhance the power of classical AI to predict properties of new molecules and materials. The practical impact of that predictive power is hard to clearly foresee.

The need for fundamental research

Today is December 10th, the anniversary of Alfred Nobel’s death. The Nobel Prize award ceremony in Stockholm concluded about an hour ago, and the Laureates are about to sit down for a well-deserved sumptuous banquet. That’s a fitting coda to this International Year of Quantum. It’s useful to be reminded that the foundations for today’s superconducting quantum processors were established by fundamental research 40 years ago into macroscopic quantum phenomena. No doubt fundamental curiosity-driven quantum research will continue to uncover unforeseen technological opportunities in the future, just as it has in the past.

I have emphasized superconducting, ion-trap, and neutral atom processors because those are most advanced today, but it’s vital to continue to pursue alternatives that could suddenly leap forward, and to be open to new hardware modalities that are not top-of-mind at present. It is striking that programmable, gate-based quantum circuits in neutral-atom optical-tweezer arrays were first demonstrated only a few years ago, yet that platform now appears especially promising for advancing fault-tolerant quantum computing. Policy makers should take note!

The joy of nonlocal connectivity

As the fault-tolerant era dawns, we increasingly recognize the potential advantages of the nonlocal connectivity resulting from atomic movement in ion traps and tweezer arrays, compared to geometrically local two-dimensional processing in solid-state devices. Over the past few years, many contributions from both industry and academia have clarified how this connectivity can reduce the overhead of fault-tolerant protocols.

Even when using the standard surface code, the ability to implement two-qubit logical gates transversally—rather than through lattice surgery—significantly reduces the number of syndrome-measurement rounds needed for reliable decoding, thereby lowering the time overhead of fault tolerance. Moreover, the global control and flexible qubit layout in tweezer arrays increase the parallelism available to logical circuits.

Nonlocal connectivity also enables the use of quantum low-density parity-check (qLDPC) codes with higher encoding rates, reducing the number of physical qubits needed per logical qubit for a target logical error rate. These codes now have acceptably high accuracy thresholds, practical decoders, and—thanks to rapid theoretical progress this year—emerging constructions for implementing universal logical gate sets. (See for example here, here, here, here.)

A serious drawback of tweezer arrays is their comparatively slow clock speed, limited by the timescales for atom transport and qubit readout. A millisecond-scale syndrome-measurement cycle is a major disadvantage relative to microsecond-scale cycles in some solid-state platforms. Nevertheless, the reductions in logical-gate overhead afforded by atomic movement can partially compensate for this limitation, and neutral-atom arrays with thousands of physical qubits already exist.

To realize the full potential of neutral-atom processors, further improvements are needed in gate fidelity and continuous atom loading to maintain large arrays during deep circuits. Encouragingly, active efforts on both fronts are making steady progress.

Approaching cryptanalytic relevance

Another noteworthy development this year was a significant improvement in the physical qubit count required to run a cryptanalytically relevant quantum algorithm, reduced by Gidney to less than 1 million physical qubits from the 20 million Gidney and Ekerå had estimated earlier. This applies under standard assumptions: a two-qubit error rate of 10^{-3} and 2D geometrically local processing. The improvement was achieved using three main tricks. One was using approximate residue arithmetic to reduce the number of logical qubits. (This also suppresses the success probability and therefore lengthens the time to solution by a factor of a few.) Another was using a more efficient scheme to reduce the number of physical qubits for each logical qubit in cold storage. And the third was a recently formulated scheme for reducing the spacetime cost of non-Clifford gates. Further cost reductions seem possible using advanced fault-tolerant constructions, highlighting the urgency of accelerating migration from vulnerable cryptosystems to post-quantum cryptography.

Looking forward

Over the next 5 years, we anticipate dramatic progress toward scalable fault-tolerant quantum computing, and scientific insights enabled by programmable quantum devices arriving at an accelerated pace. Looking further ahead, what might the future hold? I was intrigued by a 1945 letter from John von Neumann concerning the potential applications of fast electronic computers. After delineating some possible applications, von Neumann added: “Uses which are not, or not easily, predictable now, are likely to be the most important ones … they will … constitute the most surprising extension of our present sphere of action.” Not even a genius like von Neumann could foresee the digital revolution that lay ahead. Predicting the future course of quantum technology is even more hopeless because quantum information processing entails an even larger step beyond past experience.

As we contemplate the long-term trajectory of quantum science and technology, we are hampered by our limited imaginations. But one way to loosely characterize the difference between the past and the future of quantum science is this: For the first hundred years of quantum mechanics, we achieved great success at understanding the behavior of weakly correlated many-particle systems, leading for example to transformative semiconductor and laser technologies. The grand challenge and opportunity we face in the second quantum century is acquiring comparable insight into the complex behavior of highly entangled states of many particles, behavior well beyond the scope of current theory or computation. The wonders we encounter in the second century of quantum mechanics, and their implications for human civilization, may far surpass those of the first century. So we should gratefully acknowledge the quantum pioneers of the past century, and wish good fortune to the quantum explorers of the future.


Nicole’s guide to writing research statements

Sunflowers are blooming, stores are trumpeting back-to-school sales, and professors are scrambling to chart out the courses they planned to develop in July. If you’re applying for an academic job this fall, now is the time to get your application ducks in a row. Seeking a postdoctoral or faculty position? Your applications will center on research statements. Often, a research statement describes your accomplishments and sketches your research plans. What do evaluators look for in such documents? Here’s my advice, which targets postdoctoral fellowships and faculty positions, especially for theoretical physicists.

  • Keep your audience in mind. Will a quantum information theorist, a quantum scientist, a general physicist, a general scientist, or a general academic evaluate your statement? What do they care about? What technical language do and don’t they understand?
  • What thread unites all the projects you’ve undertaken? Don’t walk through your research history chronologically, stepping from project to project. Cast the key projects in the form of a story—a research program. What vision underlies the program?
  • Here’s what I want to see when I read a description of a completed project.
    • The motivation for the project: This point ensures that the reader will care enough to read the rest of the description.
    • Crucial background information
    • The physical setup
    • A statement of the problem
    • Why the problem is difficult or, if relevant, how long the problem has remained open
    • Which mathematical toolkit you used to solve the problem or which conceptual insight unlocked the solution
    • Which technical or conceptual contribution you provided
    • Whom you collaborated with: Wide collaboration can signal a researcher’s maturity. If you collaborated with researchers at other institutions, name the institutions and, if relevant, their home countries. If you led the project, tell me that, too. If you collaborated with a well-known researcher, mentioning their name might help the reader situate your work within the research landscape they know. But avoid name-dropping, which lacks such a pedagogical purpose and which can come across as crude.
    • Your result’s significance/upshot/applications/impact: Has a lab based an experiment on your theoretical proposal? Does your simulation method outperform its competitors by X% in runtime? Has your mathematical toolkit found applications in three subfields of quantum physics? Consider mentioning whether a competitive conference or journal has accepted your results: QIP, STOC, Physical Review Letters, Nature Physics, etc. But such references shouldn’t serve as a crutch in conveying your results’ significance. You’ll impress me most by dazzling me with your physics; name-dropping venues instead can convey arrogance.
  • Not all past projects deserve the same amount of space. Tell a cohesive story. For example, you might detail one project, then synopsize two follow-up projects in two sentences.
  • A research statement must be high-level, because you don’t have space to provide details. Use mostly prose; and communicate intuition, including with simple examples. But sprinkle in math, such as notation that encapsulates a phrase in one concise symbol.

  • Be concrete, and illustrate with examples. Many physicists—especially theorists—lean toward general, abstract statements. The more general a statement is, we reason, the more systems it describes, so the more powerful it is. But humans can’t visualize and intuit about abstractions easily. Imagine a reader who has four minutes to digest your research statement before proceeding to the next 50 applications. As that reader flies through your writing, vague statements won’t leave much of an impression. So draw, in words, a picture that readers can visualize. For instance, don’t describe only systems, subsystems, and control; invoke atoms, cavities, and lasers. After hooking your reader with an image, you can generalize from it.
  • A research statement not only describes past projects, but also sketches research plans. Since research covers terra incognita, though, plans might sound impossible. How can you predict the unknown—especially the next five years of the unknown (as required if you’re applying for a faculty position), especially if you’re a theorist? Show that you’ve developed a map and a compass. Sketch the large-scale steps that you anticipate taking. Which mathematical toolkits will you leverage? What major challenge do you anticipate, and how do you hope to overcome it? Let me know if you’ve undertaken preliminary studies. Do numerical experiments support a theorem you conjecture?
  • When I was applying for faculty positions, a mentor told me the following: many a faculty member can identify a result (or constellation of results) that secured them an offer, as well as a result that earned them tenure. Help faculty-hiring committees identify the offer result and the tenure result.
  • Introduce notation before using it. If you use notation and introduce it afterward, the reader will encounter the notation; stop to puzzle over it; tentatively continue; read the introduction of the notation; return to the earlier use of the notation, to understand it; and then continue forward, including by rereading the introduction of the notation. This back-and-forth breaks up the reading process, which should flow smoothly.
  • Avoid verbs that fail to relate that you accomplished anything: “studied,” “investigated,” “worked on,” etc. What did you prove, show, demonstrate, solve, calculate, compute, etc.?

  • Tailor a version of your research statement to every position. Is Fellowship Committee X seeking biophysicists, statistical physicists, mathematical physicists, or interdisciplinary scientists? Also, respect every application’s guidelines about length.
  • If you have room, end the statement with a recap and a statement of significance. Yes, you’ll be repeating ideas mentioned earlier. But your reader’s takeaway hinges on the last text they read. End on a strong note, presenting a coherent vision.

  • Read examples. Which friends and colleagues, when applying for positions, have achieved success that you’d like to emulate? Ask if those individuals would share their research statements. Don’t take offense if they refuse; research statements are personal.

  • Writing is rewriting, a saying goes. Draft your research statement early, solicit feedback from a couple of mentors, edit the draft, and solicit more feedback.

What does it mean to create a topological qubit?

I’ve worked on topological quantum computation, one of Alexei Kitaev’s brilliant innovations, for around 15 years now.  It’s hard to find a more beautiful physics problem, combining spectacular quantum phenomena (non-Abelian anyons) with the promise of transformative technological advances (inherently fault-tolerant quantum computing hardware).  Problems offering that sort of combination originally inspired me to explore quantum matter as a graduate student. 

Non-Abelian anyons are emergent particles born within certain exotic phases of matter.  Their utility for quantum information descends from three deeply related defining features:

  • Nucleating a collection of well-separated non-Abelian anyons within a host platform generates a set of quantum states with the same energy (at least to an excellent approximation).  Local measurements give one essentially no information about which of those quantum states the system populates—i.e., any evidence of what the system is doing is hidden from the observer and, crucially, the environment.  In turn, qubits encoded in that space enjoy intrinsic resilience against local environmental perturbations. 
  • Swapping the positions of non-Abelian anyons manipulates the state of the qubits.  Swaps can be enacted either by moving anyons around each other as in a shell game, or by performing a sequence of measurements that yields the same effect.  Exquisitely precise qubit operations follow depending only on which pairs the user swaps and in what order.  Properties (1) and (2) together imply that non-Abelian anyons offer a pathway both to fault-tolerant storage and manipulation of quantum information. 
  • A pair of non-Abelian anyons brought together can “fuse” into multiple different kinds of particles, for instance a boson or a fermion.  Detecting the outcome of such a fusion process provides a method for reading out the qubit states that are otherwise hidden when all the anyons are mutually well-separated.  Alternatively, non-local measurements (e.g., interferometry) can effectively fuse even well-separated anyons, thus also enabling qubit readout.  (A concrete worked example of fusion rules appears just after this list.)
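
To make the fusion point concrete, here is the standard textbook example (my addition, using the generic Ising-anyon theory rather than anything specific to a particular platform).  Ising anyons $\sigma$, together with the vacuum $1$ and the fermion $\psi$, obey the fusion rules

$\sigma \times \sigma = 1 + \psi, \qquad \sigma \times \psi = \sigma, \qquad \psi \times \psi = 1,$

so a pair of well-separated $\sigma$ anyons can fuse either to the vacuum or to a fermion.  Four $\sigma$ anyons with fixed total charge therefore span a two-dimensional state space, i.e., one protected qubit, which is read out by fusing (or interferometrically measuring) a chosen pair of anyons.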

I entered the field back in 2009 during the last year of my postdoc.  Topological quantum computing—once confined largely to the quantum Hall realm—was then in the early stages of a renaissance driven by an explosion of new candidate platforms as well as measurement and manipulation schemes that promised to deliver long-sought control over non-Abelian anyons.  The years that followed were phenomenally exciting, with broadly held, palpable enthusiasm for near-term prospects not yet tempered by the practical challenges that would eventually rear their heads. 

A PhD comics cartoon on non-Abelian anyons from 2014.

In 2018, near the height of my optimism, I gave an informal blackboard talk in which I speculated on a new kind of forthcoming NISQ era defined by the birth of a Noisy Individual Semi-topological Qubit.  To less blatantly rip off John Preskill’s famous acronym, I also—jokingly of course—proposed the alternative nomenclature POST-Q (Piece Of S*** Topological Qubit) era to describe the advent of such a device.  The rationale behind those playfully sardonic labels is that the inaugural topological qubit would almost certainly be far from ideal, just as the original transistor appears shockingly crude when compared to modern electronics.  You always have to start somewhere.  But what does it mean to actually create a topological qubit, and how do you tell that you’ve succeeded—especially given likely POST-Q-era performance?

To my knowledge those questions admit no widely accepted answers, despite implications for both quantum science and society.  I would like to propose defining an elementary topological qubit as follows:

A device that leverages non-Abelian anyons to demonstrably encode and manipulate a single qubit in a topologically protected fashion. 

Some of the above words warrant elaboration.  As alluded to above, non-Abelian anyons can passively encode quantum information—a capability that by itself furnishes a quantum memory.  That’s the “encode” part.  The “manipulate” criterion additionally entails exploiting another aspect of what makes non-Abelian anyons special—their behavior under swaps—to enact gate operations.  Both the encoding and manipulation should benefit from intrinsic fault-tolerance, hence the “topologically protected fashion” qualifier.  And very importantly, these features should be “demonstrably” verified.  For instance, creating a device hosting the requisite number of anyons needed to define a qubit does not guarantee the all-important property of topological protection.  Hurdles can still arise, among them: if the anyons are not sufficiently well-separated, then the qubit states will lack the coveted immunity from environmental perturbations; thermal and/or non-equilibrium effects might still induce significant errors (e.g., by exciting the system into other unwanted states); and measurements—for readout and possibly also manipulation—may lack the fidelity required to fruitfully exploit topological protection even if present in the qubit states themselves. 

The preceding discussion raises a natural follow-up question: How do you verify topological protection in practice?  One way forward involves probing qubit lifetimes, and fidelities of gates resulting from anyon swaps, upon varying some global control knob like magnetic field or gate voltage.  As the system moves deeper into the phase of matter hosting non-Abelian anyons, both the lifetime and gate fidelities ought to improve dramatically—reflecting the onset of bona fide topological protection.  First-generation “semi-topological” devices will probably fare modestly at best, though one can at least hope to recover general trends in line with this expectation. 

By the above proposed definition, which I contend is stringent yet reasonable, realization of a topological qubit remains an ongoing effort.  Fortunately the journey to that end offers many significant science and engineering milestones worth celebrating in their own right.  Examples include:

Platform verification.  This most indirect milestone evidences the formation of a non-Abelian phase of matter through (thermal or charge) Hall conductance measurements, detection of some anticipated quantum phase transition, etc. 

Detection of non-Abelian anyons. This step could involve conductance, heat capacity, magnetization, or other types of measurements designed to support the emergence of either individual anyons or a collection of anyons.  Notably, such techniques need not reveal the precise quantum state encoded by the anyons—which presents a subtler challenge. 

Establishing readout capabilities. Here one would demonstrate experimental techniques, interferometry for example, that in principle can address the key challenge of quantum state readout, even if not directly applied yet to a system hosting non-Abelian anyons. 

Fusion protocols.  Readout capabilities open the door to more direct tests of the hallmark behavior predicted for a putative topological qubit.  One fascinating experiment involves protocols that directly test non-Abelian anyon fusion properties.  Successful implementation would solidify readout capabilities applied to an actual candidate topological qubit device. 

Probing qubit lifetimes.  Fusion protocols further pave the way to measuring the qubit coherence times, e.g., T_1 and T_2—addressing directly the extent of topological protection of the states generated by non-Abelian anyons.  Behavior clearly conforming to the trends highlighted above could certify the device as a topological quantum memory.  (Personally, I most anxiously await this milestone.)

Fault-tolerant gates from anyon swaps.  Likely the most advanced milestone, successfully implementing anyon swaps, again with appropriate trends in gate fidelity, would establish the final component of an elementary topological qubit. 

Most experiments to date focus on the first two items above, platform verification and anyon detection.  Microsoft’s recent Nature paper, together with the simultaneous announcement of supplementary new results, combines efforts in those areas with experiments aiming to establish interferometric readout capabilities needed for a topological qubit.  Fusion, (idle) qubit lifetime measurements, and anyon swaps have yet to be demonstrated in any candidate topological quantum computing platform, but at least partially feature in Microsoft’s future roadmap.  It will be fascinating to see how that effort evolves, especially given the aggressive timescales predicted by Microsoft for useful topological quantum hardware.  Public reactions so far range from cautious optimism to ardent skepticism; data will hopefully settle the situation one way or another in the near future.  My own take is that while Microsoft’s progress towards qubit readout is a welcome advance that has value regardless of the nature of the system to which those techniques are currently applied, convincing evidence of topological protection may still be far off. 

In the meantime, I maintain the steadfast conviction that topological qubits are most certainly worth pursuing—in a broad range of platforms.  Non-Abelian quantum Hall states seem to be resurgent candidates and should not be discounted.  Moreover, the advent of ultra-pure, highly tunable 2D materials provides new settings in which one can envision engineering non-Abelian anyon devices with complementary advantages (and disadvantages) compared to previously explored settings.  Other less obvious contenders may also rise at some point.  The prospect of discovering new emergent phenomena mitigating the need for quantum error correction warrants continued effort with an open mind.

Beyond NISQ: The Megaquop Machine

On December 11, I gave a keynote address at the Q2B 2024 Conference in Silicon Valley. This is a transcript of my remarks. The slides I presented are here. The video of the talk is here.

NISQ and beyond

I’m honored to be back at Q2B for the 8th year in a row.

The Q2B conference theme is “The Roadmap to Quantum Value,” so I’ll begin by showing a slide from last year’s talk. As best we currently understand, the path to economic impact is the road through fault-tolerant quantum computing. And that poses a daunting challenge for our field and for the quantum industry.

We are in the NISQ era. And NISQ technology already has noteworthy scientific value. But as of now there is no proposed application of NISQ computing with commercial value for which quantum advantage has been demonstrated when compared to the best classical hardware running the best algorithms for solving the same problems. Furthermore, currently there are no persuasive theoretical arguments indicating that commercially viable applications will be found that do not use quantum error-correcting codes and fault-tolerant quantum computing.

NISQ, meaning Noisy Intermediate-Scale Quantum, is a deliberately vague term. By design, it has no precise quantitative meaning, but it is intended to convey an idea: We now have quantum machines such that brute force simulation of what the quantum machine does is well beyond the reach of our most powerful existing conventional computers. But these machines are not error-corrected, and noise severely limits their computational power.

In the future we can envision FASQ* machines, Fault-Tolerant Application-Scale Quantum computers that can run a wide variety of useful applications, but that is still a rather distant goal. What term captures the path along the road from NISQ to FASQ? Various terms retaining the ISQ format of NISQ have been proposed [here, here, here], but I would prefer to leave ISQ behind as we move forward, so I’ll speak instead of a megaquop or gigaquop machine and so on, meaning one capable of executing a million or a billion quantum operations, but with the understanding that “mega” means not precisely a million but somewhere in the vicinity of a million.

Naively, a megaquop machine would have an error rate per logical gate of order 10^{-6}, which we don’t expect to achieve anytime soon without using error correction and fault-tolerant operation. Or maybe the logical error rate could be somewhat larger, as we expect to be able to boost the simulable circuit volume using various error mitigation techniques in the megaquop era just as we do in the NISQ era. Importantly, the megaquop machine would be capable of achieving some tasks beyond the reach of classical, NISQ, or analog quantum devices, for example by executing circuits with of order 100 logical qubits and circuit depth of order 10,000.
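To make the arithmetic behind that 10^{-6} figure concrete, here is a minimal back-of-the-envelope sketch in Python; the circuit shape, the independence assumption, and the target success probability are illustrative assumptions, not numbers from the talk.

```python
import math

# Back-of-the-envelope megaquop budget: a circuit with ~100 logical qubits and
# depth ~10,000 contains ~10^6 logical operations ("quops").  For the whole
# run to succeed with decent probability, each logical operation must fail
# with probability of order one over the number of operations.
logical_qubits = 100            # illustrative circuit width
circuit_depth = 10_000          # illustrative circuit depth
operations = logical_qubits * circuit_depth

target_success = 0.9            # assumed probability that the full run is error-free
# If each operation fails independently with probability p, then
# success = (1 - p)**operations, so p = -ln(target_success) / operations.
p_per_op = -math.log(target_success) / operations

print(f"{operations:.1e} logical operations")
print(f"error budget per logical operation ~ {p_per_op:.1e}")
# Roughly 1e-7 for a 90% success rate; "of order 1e-6" if one tolerates a fair
# failure rate per run or leans on error mitigation, as discussed above.
```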

What resources are needed to operate it? That depends on many things, but a rough guess is that tens of thousands of high-quality physical qubits could suffice. When will we have it? I don’t know, but if it happens in just a few years a likely modality is Rydberg atoms in optical tweezers, assuming they continue to advance in both scale and performance.

What will we do with it? I don’t know, but as a scientist I expect we can learn valuable lessons by simulating the dynamics of many-qubit systems on megaquop machines. Will there be applications that are commercially viable as well as scientifically instructive? That I can’t promise you.

The road to fault tolerance

To proceed along the road to fault tolerance, what must we achieve? We would like to see many successive rounds of accurate error syndrome measurement such that when the syndromes are decoded the error rate per measurement cycle drops sharply as the code increases in size. Furthermore, we want to decode rapidly, as will be needed to execute universal gates on protected quantum information. Indeed, we will want the logical gates to have much higher fidelity than physical gates, and for the logical gate fidelities to improve sharply as codes increase in size. We want to do all this at an acceptable overhead cost in both the number of physical qubits and the number of physical gates. And speed matters — the time on the wall clock for executing a logical gate should be as short as possible.

A snapshot of the state of the art comes from the Google Quantum AI team. Their recently introduced Willow superconducting processor has improved transmon lifetimes, measurement errors, and leakage correction compared to its predecessor Sycamore. With it they can perform millions of rounds of surface-code error syndrome measurement with good stability, each round lasting about a microsecond. Most notably, they find that the logical error rate per measurement round improves by a factor of 2 (a factor they call Lambda) when the code distance increases from 3 to 5 and again from 5 to 7, indicating that further improvements should be achievable by scaling the device further. They performed accurate real-time decoding for the distance 3 and 5 codes. To further explore the performance of the device they also studied the repetition code, which corrects only bit flips, out to a much larger code distance. As the hardware continues to advance we hope to see larger values of Lambda for the surface code, larger codes achieving much lower error rates, and eventually not just quantum memory but also logical two-qubit gates with much improved fidelity compared to the fidelity of physical gates.
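A toy extrapolation shows why Lambda matters. The sketch below assumes the simple exponential-suppression model in which the logical error rate per round drops by a factor of Lambda each time the code distance grows by 2; the starting error rate and the Lambda value are illustrative placeholders, not the measured Willow numbers.

```python
# Toy model: logical error rate per syndrome-measurement round for a surface
# code of odd distance d, assuming eps_d = eps_3 / Lambda**((d - 3) / 2).
# Both constants below are illustrative placeholders, not measured values.
eps_3 = 3e-3        # assumed logical error per round at distance 3
Lambda = 2.0        # assumed suppression factor per step of 2 in distance

def logical_error_per_round(d):
    return eps_3 / Lambda ** ((d - 3) / 2)

for d in (3, 5, 7, 11, 15, 21):
    print(f"d = {d:2d}: error per round ~ {logical_error_per_round(d):.1e}")
# With Lambda ~ 2 the error rate only halves with each distance step, so very
# low logical error rates demand large distances; pushing Lambda up is what
# makes scaling pay off quickly.
```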

Last year I expressed concern about the potential vulnerability of superconducting quantum processors to ionizing radiation such as cosmic ray muons. In these events, errors occur in many qubits at once, too many errors for the error-correcting code to fend off. I speculated that we might want to operate a superconducting processor deep underground to suppress the muon flux, or to use less efficient codes that protect against such error bursts.

The good news is that the Google team has demonstrated that so-called gap engineering of the qubits can reduce the frequency of such error bursts by orders of magnitude. In their studies of the repetition code they found that, in the gap-engineered Willow processor, error bursts occurred about once per hour, as opposed to once every ten seconds in their earlier hardware.  Whether suppression of error bursts via gap engineering will suffice for running deep quantum circuits in the future is not certain, but this progress is encouraging. And by the way, the origin of the error bursts seen every hour or so is not yet clearly understood, which reminds us that not only in superconducting processors but in other modalities as well we are likely to encounter mysterious and highly deleterious rare events that will need to be understood and mitigated.

Real-time decoding

Fast real-time decoding of error syndromes is important because when performing universal error-corrected computation we must frequently measure encoded blocks and then perform subsequent operations conditioned on the measurement outcomes. If it takes too long to decode the measurement outcomes, that will slow down the logical clock speed. That may be a more serious problem for superconducting circuits than for other hardware modalities where gates can be orders of magnitude slower.

For distance 5, Google achieves a latency, meaning the time from when data from the final round of syndrome measurement is received by the decoder until the decoder returns its result, of about 63 microseconds on average. In addition, it takes about another 10 microseconds for the data to be transmitted via Ethernet from the measurement device to the decoding workstation. That’s not bad, but considering that each round of syndrome measurement takes only a microsecond, faster would be preferable, and the decoding task becomes harder as the code grows in size.

Riverlane and Rigetti have demonstrated in small experiments that the decoding latency can be reduced by running the decoding algorithm on FPGAs rather than CPUs, and by integrating the decoder into the control stack to reduce communication time. Adopting such methods may become increasingly important as we scale further. Google DeepMind has shown that a decoder trained by reinforcement learning can achieve a lower logical error rate than a decoder constructed by humans, but it’s unclear whether that will work at scale because the cost of training rises steeply with code distance. Also, the Harvard / QuEra team has emphasized that performing correlated decoding across multiple code blocks can reduce the depth of fault-tolerant constructions, but this also increases the complexity of decoding, raising concern about whether such a scheme will be scalable.

Trading simplicity for performance

The Google processors use transmon qubits, as do superconducting processors from IBM and various other companies and research groups. Transmons are the simplest superconducting qubits and their quality has improved steadily; we can expect further improvement with advances in materials and fabrication. But a logical qubit with very low error rate surely will be a complicated object due to the hefty overhead cost of quantum error correction. Perhaps it is worthwhile to fashion a more complicated physical qubit if the resulting gain in performance might actually simplify the operation of a fault-tolerant quantum computer in the megaquop regime or well beyond. Several versions of this strategy are being pursued.

One approach uses cat qubits, in which the encoded 0 and 1 are coherent states of a microwave resonator, well separated in phase space, such that the noise afflicting the qubit is highly biased. Bit flips are exponentially suppressed as the mean photon number of the resonator increases, while the error rate for phase flips induced by loss from the resonator increases only linearly with the photon number. This year the AWS team built a repetition code to correct phase errors for cat qubits that are passively protected against bit flips, and showed that increasing the distance of the repetition code from 3 to 5 slightly improves the logical error rate. (See also here.)

Another helpful insight is that error correction can be more effective if we know when and where the errors occur in a quantum circuit. We can apply this idea using a dual rail encoding of the qubits. With two microwave resonators, for example, we can encode a qubit by placing a single photon in either the first resonator (the 10 state) or the second resonator (the 01 state). The dominant error is loss of a photon, causing either the 01 or 10 state to decay to 00. One can check whether the state is 00, detecting whether the error occurred without disturbing a coherent superposition of 01 and 10. In a device built by the Yale / QCI team, loss errors are detected over 99% of the time, and undetected errors are relatively rare. Similar results were reported by the AWS team, encoding a dual-rail qubit in a pair of transmons instead of resonators.

Another idea is encoding a finite-dimensional quantum system in a state of a resonator that is highly squeezed in two complementary quadratures, a so-called GKP encoding. This year the Yale group used this scheme to encode 3-dimensional and 4-dimensional systems with decay rate better by a factor of 1.8 than the rate of photon loss from the resonator. (See also here.)

A fluxonium qubit is more complicated than a transmon in that it requires a large inductance which is achieved with an array of Josephson junctions, but it has the advantage of larger anharmonicity, which has enabled two-qubit gates with better than three 9s of fidelity, as the MIT team has shown.

Whether this trading of simplicity for performance in superconducting qubits will ultimately be advantageous for scaling to large systems is still unclear. But it’s appropriate to explore such alternatives which might pay off in the long run.

Error correction with atomic qubits

We have also seen progress on error correction this year with atomic qubits, both in ion traps and optical tweezer arrays. In these platforms qubits are movable, making it possible to apply two-qubit gates to any pair of qubits in the device. This opens the opportunity to use more efficient coding schemes, and in fact logical circuits are now being executed on these platforms. The Harvard / MIT / QuEra team sampled circuits with 48 logical qubits on a 280-qubit device — that big news broke during last year’s Q2B conference. Atom Computing and Microsoft ran an algorithm with 28 logical qubits on a 256-qubit device. Quantinuum and Microsoft prepared entangled states of 12 logical qubits on a 56-qubit device.

However, so far in these devices it has not been possible to perform more than a few rounds of error syndrome measurement, and the results rely on error detection and postselection. That is, circuit runs are discarded when errors are detected, a scheme that won’t scale to large circuits. Efforts to address these drawbacks are in progress. Another concern is that the atomic movement slows the logical cycle time. If all-to-all coupling enabled by atomic movement is to be used in much deeper circuits, it will be important to speed up the movement quite a lot.
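A toy estimate shows why discarding flagged runs cannot scale; the per-check detection probability below is an arbitrary assumption, not a number from any of these experiments.

```python
# Postselection toy model: each of N checks flags an error with probability p,
# and a run is kept only if no check fires, so the retained fraction of runs
# is (1 - p)**N, which falls off exponentially with circuit size.
p_flag = 1e-3                        # assumed per-check detection probability
for checks in (10**2, 10**3, 10**4, 10**5):
    kept = (1 - p_flag) ** checks
    print(f"{checks:>7} checks: fraction of runs kept ~ {kept:.2e}")
# The surviving fraction collapses for deep circuits, which is why full error
# correction (fixing errors instead of discarding runs) is ultimately required.
```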

Toward the megaquop machine

How can we reach the megaquop regime? More efficient quantum codes like those recently discovered by the IBM team might help. These require geometrically nonlocal connectivity and are therefore better suited for Rydberg optical tweezer arrays than superconducting processors, at least for now. Error mitigation strategies tailored for logical circuits, like those pursued by Qedma, might help by boosting the circuit volume that can be simulated beyond what one would naively expect based on the logical error rate. Recent advances from the Google team, which reduce the overhead cost of logical gates, might also be helpful.

What about applications? Impactful applications to chemistry typically require rather deep circuits so are likely to be out of reach for a while yet, but applications to materials science provide a more tempting target in the near term. Taking advantage of symmetries and various circuit optimizations like the ones Phasecraft has achieved, we might start seeing informative results in the megaquop regime or only slightly beyond.

As a scientist, I’m intrigued by what we might conceivably learn about quantum dynamics far from equilibrium by doing simulations on megaquop machines, particularly in two dimensions. But when seeking quantum advantage in that arena we should bear in mind that classical methods for such simulations are also advancing impressively, including in the past year (for example, here and here).

To summarize, advances in hardware, control, algorithms, error correction, error mitigation, etc. are bringing us closer to megaquop machines, raising a compelling question for our community: What are the potential uses for these machines? Progress will require innovation at all levels of the stack.  The capabilities of early fault-tolerant quantum processors will guide application development, and our vision of potential applications will guide technological progress. Advances in both basic science and systems engineering are needed. These are still the early days of quantum computing technology, but our experience with megaquop machines will guide the way to gigaquops, teraquops, and beyond and hence to widely impactful quantum value that benefits the world.

I thank Dorit Aharonov, Sergio Boixo, Earl Campbell, Roland Farrell, Ashley Montanaro, Mike Newman, Will Oliver, Chris Pattison, Rob Schoelkopf, and Qian Xu for helpful comments.

*The acronym FASQ was suggested to me by Andrew Landahl.

The megaquop machine (image generated by ChatGPT).

Crossing the quantum chasm: From NISQ to fault tolerance

On December 6, I gave a keynote address at the Q2B 2023 Conference in Silicon Valley. Here is a transcript of my remarks. The slides I presented are here. A video of my presentation is here.

Toward quantum value

The theme of this year’s Q2B meeting is “The Roadmap to Quantum Value.” I interpret “quantum value” as meaning applications of quantum computing that have practical utility for end-users in business. So I’ll begin by reiterating a point I have made repeatedly in previous appearances at Q2B. As best we currently understand, the path to economic impact is the road through fault-tolerant quantum computing. And that poses daunting challenges for our field and for the quantum industry.

We are in the NISQ era. NISQ (rhymes with “risk”) is an acronym meaning “Noisy Intermediate-Scale Quantum.” Here “intermediate-scale” conveys that current quantum computing platforms with of order 100 qubits are difficult to simulate by brute force using the most powerful currently existing supercomputers. “Noisy” reminds us that today’s quantum processors are not error-corrected, and noise is a serious limitation on their computational power. NISQ technology already has noteworthy scientific value. But as of now there is no proposed application of NISQ computing with commercial value for which quantum advantage has been demonstrated when compared to the best classical hardware running the best algorithms for solving the same problems. Furthermore, currently there are no persuasive theoretical arguments indicating that commercially viable applications will be found that do not use quantum error-correcting codes and fault-tolerant quantum computing.

A useful survey of quantum computing applications, over 300 pages long, recently appeared, providing rough estimates of end-to-end run times for various quantum algorithms. This is hardly the last word on the subject — new applications are continually proposed, and better implementations of existing algorithms continually arise. But it is a valuable snapshot of what we understand today, and it is sobering.

There can be quantum advantage in some applications of quantum computing to optimization, finance, and machine learning. But in these application areas, the speedups are typically at best quadratic, meaning the quantum run time scales as the square root of the classical run time. So the advantage kicks in only for very large problem instances and deep circuits, which we won’t be able to execute without error correction.
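A crude model illustrates why quadratic speedups only pay off for enormous instances. Everything below is an assumption made for illustration: the step times, the neglect of fault-tolerance overhead beyond a slow logical clock, and the absence of classical parallelism.

```python
import math

# Crude crossover estimate for a quadratic speedup: the classical algorithm
# takes N steps of duration t_c; the quantum algorithm takes sqrt(N) steps of
# duration t_q.  Advantage requires sqrt(N) * t_q < N * t_c, i.e.
# N > (t_q / t_c)**2.  Both step times are illustrative assumptions.
t_c = 1e-9      # assumed classical step time: 1 ns
t_q = 1e-5      # assumed fault-tolerant logical step time: 10 microseconds

N_crossover = (t_q / t_c) ** 2
quantum_steps = math.sqrt(N_crossover)
runtime = quantum_steps * t_q

print(f"crossover instance size: N ~ {N_crossover:.0e} classical steps")
print(f"quantum circuit depth at crossover: ~ {quantum_steps:.0e} logical steps")
print(f"quantum run time at crossover: ~ {runtime:.1f} s")
# Even this optimistic toy puts break-even at a circuit far too deep for
# uncorrected hardware; accounting for error-correction overhead per step and
# for classical parallelism pushes the crossover out much further still.
```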

Larger polynomial advantage and perhaps superpolynomial advantage is possible in applications to chemistry and materials science, but these may require at least hundreds of very well-protected logical qubits, and hundreds of millions of very high-fidelity logical gates, if not more. Quantum fault tolerance will be needed to run these applications, and fault tolerance has a hefty cost in both the number of physical qubits and the number of physical gates required. We should also bear in mind that the speed of logical gates is relevant, since the run time as measured by the wall clock will be an important determinant of the value of quantum algorithms.

Overcoming noise in quantum devices

Already in today’s quantum processors steps are taken to address limitations imposed by the noise — we use error mitigation methods like zero noise extrapolation or probabilistic error cancellation. These methods work effectively at extending the size of the circuits we can execute with useful fidelity. But the asymptotic cost scales exponentially with the size of the circuit, so error mitigation alone may not suffice to reach quantum value. Quantum error correction, on the other hand, scales much more favorably, like a power of a logarithm of the circuit size. But quantum error correction is not practical yet. To make use of it, we’ll need better two-qubit gate fidelities, many more physical qubits, robust systems to control those qubits, as well as the ability to perform fast and reliable mid-circuit measurements and qubit resets; all these are technically demanding goals.
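To see the contrast in shapes, here is a rough sketch with made-up constants; the exponent and the distance heuristic are arbitrary illustrations of "exponential in circuit size" versus "power of a logarithm," not fits to any real scheme.

```python
import math

# Illustrative scaling comparison (all constants are arbitrary assumptions):
#  - error mitigation: sampling overhead grows roughly exponentially in the
#    circuit volume (number of noisy gate applications);
#  - error correction: the required code distance grows roughly like the
#    logarithm of the circuit volume, so qubit overhead grows polylogarithmically.
noise_per_gate = 1e-3                 # assumed physical error rate per gate

for volume in (10**3, 3 * 10**3, 10**4, 3 * 10**4):
    mitigation_samples = math.exp(2 * noise_per_gate * volume)  # ~ exp(c * volume)
    code_distance = max(3, math.ceil(2 * math.log10(volume)))   # ~ log(volume)
    qubits_per_logical = 2 * code_distance**2                   # surface-code style
    print(f"volume {volume:>6}: mitigation sampling x{mitigation_samples:9.2e}, "
          f"correction ~{qubits_per_logical} physical qubits per logical qubit")
# The sampling cost of mitigation explodes once the circuit volume approaches
# 1/noise_per_gate, while the error-correction overhead creeps up gently.
```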

To get a feel for the overhead cost of fault-tolerant quantum computing, consider the surface code — it’s presumed to be the best near-term prospect for achieving quantum error correction, because it has a high accuracy threshold and requires only geometrically local processing in two dimensions. Once the physical two-qubit error rate is below the threshold value of about 1%, the probability of a logical error per error correction cycle declines exponentially as we increase the code distance d:

P_logical = (0.1) (P_physical / P_threshold)^{(d+1)/2}

where the number of physical qubits in the code block (which encodes a single protected qubit) is the distance squared.

Suppose we wish to execute a circuit with 1000 qubits and 100 million time steps. Then we want the probability of a logical error per cycle to be 10^{-11}. Assuming the physical error rate is 10^{-3}, better than what is currently achieved in multi-qubit devices, from this formula we infer that we need a code distance of 19, and hence 361 physical qubits to encode each logical qubit, and a comparable number of ancilla qubits for syndrome measurement — hence over 700 physical qubits per logical qubit, or a total of nearly a million physical qubits.  If the physical error rate improves to 10^{-4} someday, that cost is reduced, but we’ll still need hundreds of thousands of physical qubits if we rely on the surface code to protect this circuit.
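In code, the same estimate looks like this; a minimal sketch that simply inverts the formula above, using the circuit size and error rates quoted in the text (the factor of two for ancilla qubits is the rough accounting used above).

```python
# Invert P_logical = (0.1) (P_physical / P_threshold)^{(d+1)/2} to find the
# smallest odd code distance d meeting a target logical error rate per cycle.

def required_distance(p_physical, p_target, p_threshold=1e-2):
    d = 3
    # Small tolerance absorbs floating-point rounding right at the boundary.
    while 0.1 * (p_physical / p_threshold) ** ((d + 1) / 2) > 1.000001 * p_target:
        d += 2                      # surface-code distances are odd
    return d

# A circuit with 1000 logical qubits and 1e8 time steps is ~1e11 qubit-cycles,
# so aim for a logical error probability of ~1e-11 per qubit per cycle.
p_target = 1e-11
for p_physical in (1e-3, 1e-4):
    d = required_distance(p_physical, p_target)
    per_logical = 2 * d ** 2        # d^2 data qubits plus roughly as many ancillas
    print(f"p_physical = {p_physical:.0e}: d = {d}, "
          f"~{per_logical} physical qubits per logical qubit, "
          f"~{1000 * per_logical:,} physical qubits in total")
```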

Progress toward quantum error correction

The study of error correction is gathering momentum, and I’d like to highlight some recent experimental and theoretical progress. Specifically, I’ll remark on three promising directions, all with the potential to hasten the arrival of the fault-tolerant era: erasure conversion, biased noise, and more efficient quantum codes.

Erasure conversion

Error correction is more effective if we know when and where the errors occurred. To appreciate the idea, consider the case of a classical repetition code that protects against bit flips. If we don’t know which bits have errors we can decode successfully by majority voting, assuming that fewer than half the bits have errors. But if errors are heralded then we can decode successfully by just looking at any one of the undamaged bits. In quantum codes the details are more complicated but the same principle applies — we can recover more effectively if so-called erasure errors dominate; that is, if we know which qubits are damaged and in which time steps. “Erasure conversion” means fashioning a processor such that the dominant errors are erasure errors.
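A tiny classical simulation of the repetition-code comparison just described; the bit count and error probability are arbitrary, and this is the classical toy model only, not a quantum decoder.

```python
import random

# Repetition-code toy model: decoding with unknown error locations (majority
# vote) versus heralded erasures (read any surviving bit).  Majority voting
# fails when more than half the bits flip; the erasure decoder fails only if
# every single bit is erased.
random.seed(0)
n_bits, p_error, trials = 5, 0.2, 100_000     # illustrative parameters

fail_majority = fail_erasure = 0
for _ in range(trials):
    damaged = [random.random() < p_error for _ in range(n_bits)]
    if sum(damaged) > n_bits // 2:     # majority vote returns the wrong bit
        fail_majority += 1
    if all(damaged):                   # no intact bit left to read
        fail_erasure += 1

print(f"unknown error locations: failure rate ~ {fail_majority / trials:.3f}")
print(f"heralded erasures:       failure rate ~ {fail_erasure / trials:.5f}")
# With these numbers majority voting fails about 6% of the time, while the
# erasure decoder fails only when all five bits are lost (~0.03%).
```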

We can make use of this idea if the dominant errors exit the computational space of the qubit, so that an error can be detected without disturbing the coherence of undamaged qubits. One realization is with alkaline-earth Rydberg atoms in optical tweezers, where 0 is encoded as a low energy state, and 1 is a highly excited Rydberg state. The dominant error is the spontaneous decay of the 1 to a lower energy state. But if the atomic level structure and the encoding allow, 1 usually decays not to a 0, but rather to another state g. We can check whether the g state is occupied, to detect whether or not the error occurred, without disturbing a coherent superposition of 0 and 1.

Erasure conversion can also be arranged in superconducting devices, by using a so-called dual-rail encoding of the qubit in a pair of transmons or a pair of microwave resonators. With two resonators, for example, we can encode a qubit by placing a single photon in one resonator or the other. The dominant error is loss of the photon, causing either the 01 state or the 10 state to decay to 00. One can check whether the state is 00, detecting whether the error occurred, without disturbing a coherent superposition of 01 and 10.
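A minimal numerical check of that last claim, in the three-dimensional space spanned by the states 00, 01, and 10; this is purely illustrative (photon loss modeled as nothing more than a possible jump to 00), not a model of the actual hardware.

```python
import numpy as np

# Dual-rail toy model in the basis {|00>, |01>, |10>}: asking "is the state
# |00>?" flags photon loss but leaves a coherent superposition of |01> and
# |10> untouched when no loss has occurred.
ket00 = np.array([1, 0, 0], dtype=complex)
ket01 = np.array([0, 1, 0], dtype=complex)
ket10 = np.array([0, 0, 1], dtype=complex)

alpha, beta = 0.6, 0.8j                       # arbitrary encoded qubit amplitudes
psi = alpha * ket01 + beta * ket10

P00 = np.outer(ket00, ket00.conj())           # "photon lost" outcome
P_ok = np.eye(3) - P00                        # "no loss detected" outcome

p_flagged = np.vdot(psi, P00 @ psi).real      # probability the check fires
post = P_ok @ psi
post = post / np.linalg.norm(post)            # post-measurement state if it doesn't

print("check fires on an intact state with probability:", p_flagged)
print("superposition unchanged by a passed check:", np.allclose(post, psi))
```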

Erasure detection has been successfully demonstrated in recent months, for both atomic (here and here) and superconducting (here and here) qubit encodings.

Biased noise

Another setting in which the effectiveness of quantum error correction can be enhanced is when the noise is highly biased. Quantum error correction is more difficult than classical error correction partly because more types of errors can occur — a qubit can flip in the standard basis, or it can flip in the complementary basis, what we call a phase error. In suitably designed quantum hardware the bit flips are highly suppressed, so we can concentrate the error-correcting power of the code on protecting against phase errors. For this scheme to work, it is important that phase errors occurring during the execution of a quantum gate do not propagate to become bit-flip errors. And it was realized just a few years ago that such bias-preserving gates are possible for qubits encoded in continuous variable systems like microwave resonators.

Specifically, we may consider a cat code, in which the encoded 0 and encoded 1 are coherent states, well separated in phase space. Then bit flips are exponentially suppressed as the mean photon number in the resonator increases. The main source of error, then, is photon loss from the resonator, which induces a phase error for the cat qubit, with an error rate that increases only linearly with photon number. We can then strike a balance, choosing a photon number in the resonator large enough to provide physical protection against bit flips, and then use a classical code like the repetition code to build a logical qubit well protected against phase flips as well.
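A toy version of that balancing act; the two rate constants below are arbitrary illustrative numbers chosen only to exhibit the exponential-versus-linear competition, not device parameters.

```python
import math

# Cat-qubit toy model: the bit-flip rate falls off exponentially with the mean
# photon number nbar, while the phase-flip rate grows only linearly with nbar.
# Both prefactors are arbitrary illustrative constants.
gamma_bit = 1e-1        # assumed bit-flip rate scale (arbitrary units)
gamma_phase = 1e-4      # assumed phase-flip rate per photon (arbitrary units)

def bit_flip_rate(nbar):
    return gamma_bit * math.exp(-2 * nbar)

def phase_flip_rate(nbar):
    return gamma_phase * nbar

for nbar in (1, 2, 4, 8, 16):
    print(f"nbar = {nbar:2d}: bit flips ~ {bit_flip_rate(nbar):.1e}, "
          f"phase flips ~ {phase_flip_rate(nbar):.1e}")
# A modest photon number already makes bit flips negligible; the repetition
# code then only has to fight the slowly growing phase flips.
```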

Work on such repetition cat codes is ongoing (see here, here, and here), and we can expect to hear about progress in that direction in the coming months.

More efficient codes

Another exciting development has been the recent discovery of quantum codes that are far more efficient than the surface code. These include constant-rate codes, in which the number of protected qubits scales linearly with the number of physical qubits in the code block, in contrast to the surface code, which protects just a single logical qubit per block. Furthermore, such codes can have constant relative distance, meaning that the distance of the code, a rough measure of how many errors can be corrected, scales linearly with the block size rather than the square root scaling attained by the surface code.

These new high-rate codes can have a relatively high accuracy threshold, can be efficiently decoded, and schemes for executing fault-tolerant logical gates are currently under development.

A drawback of the high-rate codes is that, to extract error syndromes, geometrically local processing in two dimensions is not sufficient — long-range operations are needed. Nonlocality can be achieved through movement of qubits in neutral atom tweezer arrays or ion traps, or one can use the native long-range coupling in an ion trap processor. Long-range coupling is more challenging to achieve in superconducting processors, but should be possible.

An example with potential near-term relevance is a recently discovered code with distance 12 and 144 physical qubits. In contrast to the surface code with similar distance and length which encodes just a single logical qubit, this code protects 12 logical qubits, a significant improvement in encoding efficiency.
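The gain in encoding efficiency is easy to quantify; the quick comparison below counts only data qubits and treats a distance-12 surface code as a 144-qubit block encoding one logical qubit, the same comparison made in the text.

```python
# Encoding-efficiency comparison at distance 12, counting data qubits only.
surface_block, surface_logical = 144, 1       # surface code: d^2 = 144, 1 logical qubit
high_rate_block, high_rate_logical = 144, 12  # the recently discovered [[144, 12, 12]] code

surface_cost = surface_block / surface_logical
high_rate_cost = high_rate_block / high_rate_logical

print(f"surface code:   {surface_cost:.0f} data qubits per logical qubit")
print(f"high-rate code: {high_rate_cost:.0f} data qubits per logical qubit")
print(f"improvement:    ~{surface_cost / high_rate_cost:.0f}x, before counting "
      f"ancilla qubits on either side")
```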

The quest for practical quantum error correction offers numerous examples like these of co-design. Quantum error correction schemes are adapted to the features of the hardware, and ideas about quantum error correction guide the realization of new hardware capabilities. This fruitful interplay will surely continue.

An exciting time for Rydberg atom arrays

In this year’s hardware news, now is a particularly exciting time for platforms based on Rydberg atoms trapped in optical tweezer arrays. We can anticipate that Rydberg platforms will lead the progress in quantum error correction for at least the next few years, if two-qubit gate fidelities continue to improve. Thousands of qubits can be controlled, and geometrically nonlocal operations can be achieved by reconfiguring the atomic positions. Further improvement in error correction performance might be possible by means of erasure conversion. Significant progress in error correction using Rydberg platforms is reported in a paper published today.

But there are caveats. So far, repeatable error syndrome measurement has not been demonstrated. For that purpose, continuous loading of fresh atoms needs to be developed. And both the readout and atomic movement are relatively slow, which limits the clock speed.

Movability of atomic qubits will be highly enabling in the short run. But in the longer run, movement imposes serious limitations on clock speed unless much faster movement can be achieved. As things currently stand, one can’t rapidly accelerate an atom without shaking it loose from an optical tweezer, or rapidly accelerate an ion without heating its motional state substantially. To attain practical quantum computing using Rydberg arrays, or ion traps, we’ll eventually need to make the clock speed much faster.

Cosmic rays!

To be fair, other platforms face serious threats as well. One is the vulnerability of superconducting circuits to ionizing radiation. Cosmic ray muons for example will occasionally deposit a large amount of energy in a superconducting circuit, creating many phonons which in turn break Cooper pairs and induce qubit errors in a large region of the chip, potentially overwhelming the error-correcting power of the quantum code. What can we do? We might go deep underground to reduce the muon flux, but that’s expensive and inconvenient. We could add an additional layer of coding to protect against an event that wipes out an entire surface code block; that would increase the overhead cost of error correction. Or maybe modifications to the hardware can strengthen robustness against ionizing radiation, but it is not clear how to do that.

Outlook

Our field and the quantum industry continue to face a pressing question: How will we scale up to quantum computing systems that can solve hard problems? The honest answer is: We don’t know yet. All proposed hardware platforms need to overcome serious challenges. Whatever technologies may seem to be in the lead over, say, the next 10 years might not be the best long-term solutions. For that reason, it remains essential at this stage to develop a broad array of hardware platforms in parallel.

Today’s NISQ technology is already scientifically useful, and that scientific value will continue to rise as processors advance. The path to business value is longer, and progress will be gradual. Above all, we have good reason to believe that to attain quantum value, to realize the grand aspirations that we all share for quantum computing, we must follow the road to fault tolerance. That awareness should inform our thinking, our strategy, and our investments now and in the years ahead.

Crossing the quantum chasm (image generated using Midjourney)

Quantum connections

We were seated in the open-air back of a boat, motoring around the Stockholm archipelago. The Swedish colors fluttered above our heads; the occasional speedboat zipped past, rocking us in its wake; and wildflowers dotted the bank on either side. Suddenly, a wood-trimmed boat glided by, and the captain waved from his perch.

The gesture surprised me. If I were in a vehicle of the sort most familiar to me—a car—I wouldn’t wave to other drivers. In a tram, I wouldn’t wave to passengers on a parallel track. Granted, trams and cars are closed, whereas boats can be open-air. But even as a pedestrian in a downtown crossing, I wouldn’t wave to everyone I passed. Yet, as boat after boat pulled alongside us, we received salutation after salutation.

The outing marked the midpoint of the Quantum Connections summer school. Physicists Frank Wilczek, Antti Niemi, and colleagues coordinate the school, which draws students and lecturers from across the globe. Although sponsored by Stockholm University, the school takes place at a century-old villa whose name I wish I could pronounce: Högberga Gård. The villa nestles atop a cliff on an island in the archipelago. We ventured off the island after a week of lectures.

Charlie Marcus lectured about materials formed from superconductors and semiconductors; John Martinis, about superconducting qubits; Jianwei Pan, about quantum advantages; and others, about symmetries, particle statistics, and more. Feeling like an ant among giants, I lectured about quantum thermodynamics. Two other lectures linked quantum physics with gravity—and in a way you might not expect. I appreciated the opportunity to reconnect with the lecturer: Igor Pikovski.

Cruising around Stockholm

Igor doesn’t know it, but he’s one of the reasons why I joined the Harvard-Smithsonian Institute for Theoretical Atomic, Molecular, and Optical Physics (ITAMP) as an ITAMP Postdoctoral Fellow in 2018. He’d held the fellowship beginning a few years before, and he’d earned a reputation for kindness and consideration. Also, his research struck me as some of the most fulfilling that one could undertake.

If you’ve heard about the intersection of quantum physics and gravity, you’ve probably heard of approaches other than Igor’s. For instance, physicists are trying to construct a theory of quantum gravity, which would describe black holes and the universe’s origin. Such a “theory of everything” would reduce to Einstein’s general theory of relativity when applied to planets and would reduce to quantum theory when applied to atoms. In another example, physicists leverage quantum technologies to observe properties of gravity. Such technologies enabled the observatory LIGO to register gravitational waves—ripples in space-time. 

Igor and his colleagues pursue a different goal: to observe phenomena whose explanations depend on quantum theory and on gravity.

In his lectures, Igor illustrated with an experiment first performed in 1975. The experiment relies on what happens if you jump: You gain energy associated with resisting the Earth’s gravitational pull—gravitational potential energy. A quantum object’s energy determines how the object’s quantum state changes in time. The experimentalists applied this fact to a beam of neutrons. 

They put the beam in a superposition of two locations: closer to the Earth’s surface and farther away. The closer component changed in time in one way, and the farther component changed another way. After a while, the scientists recombined the components. The two interfered with each other similarly to the waves created by two raindrops falling near each other on a puddle. The interference evidenced gravity’s effect on the neutrons’ quantum state.
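For a rough sense of scale, one can estimate the relative phase the two path components acquire. The little calculation below uses a simplified picture (one component sits higher by a fixed amount for a fixed time) and illustrative numbers, not the actual geometry of the 1975 experiment.

```python
# Rough estimate of the gravitational phase difference between two neutron-beam
# components separated in height: the upper component sits higher by delta_h
# for a time t, so it picks up an extra phase m * g * delta_h * t / hbar.
# The height and time below are illustrative assumptions.
m_neutron = 1.675e-27    # neutron mass, kg
g = 9.81                 # gravitational acceleration, m/s^2
hbar = 1.055e-34         # reduced Planck constant, J*s

delta_h = 2e-2           # assumed height difference: 2 cm
t = 2e-5                 # assumed traversal time: 20 microseconds

delta_phi = m_neutron * g * delta_h * t / hbar
print(f"phase difference ~ {delta_phi:.0f} radians")
# Tens of radians: easily enough to shift the interference pattern.
```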

Summer-school venue. I’d easily say it’s gorgeous but not easily pronounce its name.

The experimentalists approximated gravity as dominated by the Earth alone. But other masses can influence the gravitational field noticeably. What if you put a mass in a superposition of different locations? What would happen to space-time?

Or imagine two quantum particles too far apart to interact with each other significantly. Could a gravitational field entangle the particles by carrying quantum correlations from one to the other?

Physicists including Igor ponder these questions…and then ponder how experimentalists could test their predictions. The more an object influences gravity, the more massive the object tends to be, and the more easily the object tends to decohere—to spill the quantum information that it holds into its surroundings.

The “gravity-quantum interface,” as Igor entitled his lectures, epitomizes what I hoped to study in college, as a high-school student entranced by physics, math, and philosophy. What’s more curious and puzzling than superpositions, entanglement, and space-time? What’s more fundamental than quantum theory and gravity? Little wonder that connecting them inspires wonder.

But we humans are suckers for connections. I appreciated the opportunity to reconnect with a colleague during the summer school. Boaters on the Stockholm archipelago waved to our cohort as they passed. And who knows—gravitational influences may even have rippled between the boats, entangling us a little.

Requisite physicist-visiting-Stockholm photo

With thanks to the summer-school organizers, including Pouya Peighami and Elizabeth Yang, for their invitation and hospitality.

These are a few of my favorite steampunk books

As a physicist, one grows used to answering audience questions at the end of a talk one presents. As a quantum physicist, one grows used to answering questions about futuristic technologies. As a quantum-steampunk physicist, one grows used to the question “Which are your favorite steampunk books?”

Literary Hub has now published my answer.

According to its website, “Literary Hub is an organizing principle in the service of literary culture, a single, trusted, daily source for all the news, ideas and richness of contemporary literary life. There is more great literary content online than ever before, but it is scattered, easily lost—with the help of its editorial partners, Lit Hub is a site readers can rely on for smart, engaged, entertaining writing about all things books.”

My article, “Five best books about the romance of Victorian science,” appeared there last week. You’ll find fiction, nonfiction as imaginative as fiction, and crossings of the border between the two. 

My contribution to literature about the romance of Victorian science—my (mostly) nonfiction book, Quantum Steampunk: The Physics Of Yesterday’s Tomorrow—was published two weeks ago. Where’s a hot-air-balloon emoji when you need one?

One equation to rule them all?

In lieu of composing a blog post this month, I’m publishing an article in Quanta Magazine. The article provides an introduction to fluctuation relations, souped-up variations on the second law of thermodynamics, which helps us understand why time flows in only one direction. The earliest fluctuation relations described classical systems, such as single strands of DNA. Many quantum versions have been proved since. Their proliferation contrasts with the stereotype of physicists as obsessed with unification—with slimming down a cadre of equations into one über-equation. Will one quantum fluctuation relation emerge to rule them all? Maybe, and maybe not. Maybe the multiplicity of quantum fluctuation relations reflects the richness of quantum thermodynamics.

You can read more in Quanta Magazine here and yet more in chapter 9 of my book. For recent advances in fluctuation relations, as opposed to the broad introduction there, check out earlier Quantum Frontiers posts here, here, here, here, and here.