Astrobiology meets quantum computation?

The origin of life appears to share little with quantum computation, apart from the difficulty of achieving it and its potential for clickbait. Yet similar notions of complexity have recently garnered attention in both fields. Each topic’s researchers expect only special systems to generate high values of such complexity, or complexity at high rates: organisms, in one community, and quantum computers (and perhaps black holes), in the other. 

Each community appears fairly unaware of its counterpart. This article is intended to introduce the two. Below, I review assembly theory from origin-of-life studies, followed by quantum complexity. I’ll then compare and contrast the two concepts. Finally, I’ll suggest that origin-of-life scientists can quantize assembly theory using quantum complexity. The idea is a bit crazy, but, well, so what?

Assembly theory in origin-of-life studies

Imagine discovering evidence of extraterrestrial life. How could you tell that you’d found it? You’d have detected a bunch of matter—a bunch of particles, perhaps molecules. What about those particles could evidence life?

This question motivated Sara Imari Walker and Lee Cronin to develop assembly theory. (Most of my assembly-theory knowledge comes from Sara, about whom I wrote this blog post years ago and with whom I share a mentor.) Assembly theory governs physical objects, from proteins to self-driving cars. 

Imagine assembling a protein from its constituent atoms. First, you’d bind two atoms together. Then, you might bind another two atoms together. Eventually, you’d bind two pairs together. Your sequence of steps would form an algorithm for assembling the protein. Many algorithms can generate the same protein. One algorithm requires the fewest steps. That number of steps is called the protein’s assembly number.
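Reuse of intermediate products is what makes the assembly number interesting: once you’ve built a piece, you can join it to itself or to other pieces. Here’s a minimal brute-force sketch of a string version of the idea (strings standing in for molecules, characters for atoms; this toy model is mine, not Sara and Lee’s molecular formalism):

```python
from itertools import count

def assembly_index(target):
    """Brute-force assembly index of a string.

    Basic objects are single characters.  Each step joins two already-
    available objects, and every product stays available for reuse.
    Breadth-first search over sets of built substrings; feasible only
    for short strings.
    """
    if len(target) == 1:
        return 0
    alphabet = set(target)
    frontier = [frozenset()]
    seen = set(frontier)
    for steps in count():
        next_frontier = []
        for built in frontier:
            if target in built:
                return steps
            pool = alphabet | built
            for a in pool:
                for b in pool:
                    joined = a + b
                    # Prune: only substrings of the target can help.
                    if joined not in built and joined in target:
                        state = built | {joined}
                        if state not in seen:
                            seen.add(state)
                            next_frontier.append(state)
        frontier = next_frontier
```

For example, `assembly_index("abcabc")` returns 3 rather than 5: build “ab”, extend it to “abc”, then join “abc” to itself.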

Different natural processes tend to create objects that have different assembly numbers. Stars form low-assembly-number objects by fusing two hydrogen atoms together into helium. Similarly, random processes have high probabilities of forming low-assembly-number objects. For example, geological upheavals can bring a shard of iron near a lodestone. The iron will stick to the magnetized stone, forming a two-component object.

My laptop has an enormous assembly number. Why can such an object exist? Because of information, Sara and Lee emphasize. Human beings amassed information about materials science, Boolean logic, the principles of engineering, and more. That information—which exists only because organisms exist—helped engender my laptop.

If any object has a high enough assembly number, Sara and Lee posit, that object evidences life. Absent life, natural processes have too low a probability of randomly throwing together molecules into the shape of a computer. How high is “high enough”? Approximately fifteen, experiments by Lee’s group suggest. (Why do those experiments point to the number fifteen? Sara’s group is working on a theory for predicting the number.)

In summary, assembly number quantifies complexity in origin-of-life studies, according to Sara and Lee. The researchers propose that only living beings create high-assembly-number objects.

Quantum complexity in quantum computation

Quantum complexity defines a stage in the equilibration of many-particle quantum systems. Consider a clump of N quantum particles isolated from its environment. The clump will be in a pure quantum state | \psi(0) \rangle at a time t = 0. The particles will interact, evolving the clump’s state as a function | \psi(t) \rangle of time.

Quantum many-body equilibration is more complicated than the equilibration undergone by your afternoon pick-me-up as it cools.

The interactions will equilibrate the clump internally. One stage of equilibration centers on local observables O. They’ll come to have expectation values \langle \psi(t) | O | \psi(t) \rangle approximately equal to thermal expectation values {\rm Tr} ( O \, \rho_{\rm th} ), for a thermal state \rho_{\rm th} of the clump. During another stage of equilibration, the particles correlate through many-body entanglement. 

The longest known stage centers on the quantum complexity of | \psi(t) \rangle. The quantum complexity is the minimal number of basic operations needed to prepare | \psi(t) \rangle from a simple initial state. We can define “basic operations” in many ways. Examples include quantum logic gates that act on two particles. Another example is an evolution for one time step under a Hamiltonian that couples together at most k particles, for some k independent of N. Similarly, we can define “a simple initial state” in many ways. We could count as simple only the N-fold tensor product | 0 \rangle^{\otimes N} of our favorite single-particle state | 0 \rangle. Or we could call any N-fold tensor product simple, or any state that contains at-most-two-body entanglement, and so on. These choices don’t affect the quantum complexity’s qualitative behavior, according to string theorists Adam Brown and Lenny Susskind.

How quickly can the quantum complexity of | \psi(t) \rangle grow? Fast growth stems from many-body interactions, long-range interactions, and random coherent evolutions. (Random unitary circuits exemplify random coherent evolutions: each gate is chosen according to the Haar measure, which we can view roughly as uniformly random.) At most, quantum complexity can grow linearly in time. Random unitary circuits achieve this rate. Black holes may; they scramble information quickly. The greatest possible complexity of any N-particle state scales exponentially in N, according to a counting argument.
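The counting argument can be sketched in a few lines of arithmetic: circuits built from a finite gate set are far scarcer than quantum states, so most states must require enormously many gates. The constants below (one two-qubit gate per step, a crude epsilon-net estimate) are illustrative assumptions of mine, not a rigorous bound:

```python
import math

def min_gates_to_cover(n_qubits, gate_set_size=2, eps=0.1):
    """Counting-argument lower bound on the gate count G needed for
    circuits over a finite gate set to eps-approximate every pure
    state of n_qubits qubits.

    Circuits with G gates number at most (gate_set_size * n^2)^G
    (one gate from the set, applied to one of ~n^2 qubit pairs, per
    step), while covering state space requires an eps-net of roughly
    (1/eps)^(2 * 2^n) states.  Demanding that the first count exceed
    the second forces G to grow exponentially in n.
    """
    dim = 2 ** n_qubits                              # Hilbert-space dimension
    log_net_size = 2 * dim * math.log(1 / eps)       # log of the eps-net size
    log_choices_per_gate = math.log(gate_set_size * n_qubits ** 2)
    return log_net_size / log_choices_per_gate

bound_5 = min_gates_to_cover(5)
bound_10 = min_gates_to_cover(10)
```

The bound roughly doubles with each added particle, so the maximal complexity is exponential in N.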

A highly complex state | \psi(t) \rangle looks simple from one perspective and complicated from another. Human scientists can easily measure only local observables O. Such observables’ expectation values \langle \psi(t) | O | \psi(t) \rangle  tend to look thermal in highly complex states, \langle \psi(t) | O | \psi(t) \rangle \approx {\rm Tr} ( O \, \rho_{\rm th} ), as implied above. The thermal state has the greatest von Neumann entropy, - {\rm Tr} ( \rho \log \rho), of any quantum state \rho that obeys the same linear constraints as | \psi(t) \rangle (such as having the same energy expectation value). Probed through simple, local observables O, highly complex states look highly entropic—highly random—similarly to a flipped coin.
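The maximum-entropy property is easy to check on a toy spectrum: nudging a Gibbs distribution in any direction that preserves normalization and the energy expectation value lowers the entropy. A minimal numpy sketch (the eight-level spectrum and the inverse temperature are arbitrary choices of mine):

```python
import numpy as np

def entropy(p):
    """Von Neumann entropy -Tr(rho log rho) of a state with eigenvalues p."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
energies = np.sort(rng.uniform(0, 1, 8))   # toy spectrum of the clump
beta = 2.0                                 # inverse temperature
gibbs = np.exp(-beta * energies)
gibbs /= gibbs.sum()                       # thermal-state populations

# A perturbation direction v that preserves the linear constraints:
# sum(v) = 0 keeps normalisation, energies . v = 0 keeps <H>.
e0, e1, e2 = energies[:3]
v = np.zeros(8)
v[0], v[1], v[2] = e2 - e1, e0 - e2, e1 - e0

s_gibbs = entropy(gibbs)
s_perturbed = entropy(gibbs + 0.02 * v)    # same energy, less entropy
```

Any admissible perturbation lowers the entropy, which is what it means for the thermal state to maximize it.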

Yet complex states differ from flipped coins significantly, as revealed by subtler analyses. An example underlies the quantum-supremacy experiment published by Google’s quantum-computing group in 2019. Experimentalists initialized 53 qubits (quantum two-level systems) in a tensor product. The state underwent many gates, which prepared a highly complex state. Then, the experimentalists measured the z-component \sigma_z of each qubit’s spin, randomly obtaining a -1 or a 1. One trial yielded a 53-bit string. The experimentalists repeated this process many times, using the same gates in each trial. From all the trials’ bit strings, the group inferred the probability p(s) of obtaining a given string s in the next trial. The distribution \{ p(s) \} resembles the uniformly random distribution…but differs from it subtly, as revealed by a cross-entropy analysis. Classical computers can’t easily generate \{ p(s) \}; hence the Google group’s claim to have achieved quantum supremacy/advantage. Quantum complexity differs from simple randomness; that difference is difficult to detect; and the difference can evidence quantum computers’ power.
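A miniature of the cross-entropy analysis fits in a short script. The sketch below simulates a toy random circuit on 8 qubits (not Google’s 53; the layout and depth are my own arbitrary choices), then compares the linear cross-entropy fidelity of samples drawn from the ideal distribution against samples drawn uniformly, as a sampler ignorant of the circuit would produce:

```python
import numpy as np

rng = np.random.default_rng(1)
n, depth, shots = 8, 10, 20000          # 8 qubits here; Google used 53
dim = 2 ** n

def cz_diagonal(i, j):
    """Diagonal of a controlled-Z gate on qubits i and j (qubit 0 = LSB)."""
    s = np.arange(dim)
    both_one = ((s >> i) & 1) & ((s >> j) & 1)
    return np.where(both_one == 1, -1.0, 1.0)

# Toy random circuit: layers of Haar-random single-qubit unitaries,
# then CZ gates on alternating neighbouring pairs.
U = np.eye(dim, dtype=complex)
for layer in range(depth):
    step = np.eye(1, dtype=complex)
    for q in range(n):
        g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        q_unitary, _ = np.linalg.qr(g)  # Haar-random 2x2 unitary
        step = np.kron(q_unitary, step)
    U = step @ U
    for i in range(layer % 2, n - 1, 2):
        U = cz_diagonal(i, i + 1)[:, None] * U

psi = U[:, 0]                  # circuit applied to |00...0>
p = np.abs(psi) ** 2           # ideal output distribution over bitstrings

# Linear cross-entropy fidelity F = dim * mean_s p(s) - 1.  Sampling
# from the ideal distribution gives F near 1; a sampler ignorant of
# the circuit, drawing bitstrings uniformly, gives F near 0.
f_ideal = dim * p[rng.choice(dim, size=shots, p=p)].mean() - 1
f_uniform = dim * p[rng.integers(dim, size=shots)].mean() - 1
```

Telling the two fidelities apart is the job of the cross-entropy analysis: both sets of bitstrings look random to the eye, but only one set knows about \{ p(s) \}.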

A fridge that holds one of Google’s quantum computers.

Comparison and contrast

Assembly number and quantum complexity resemble each other as follows:

  1. Each function quantifies the fewest basic operations needed to prepare something.
  2. Only special systems (organisms) can generate high assembly numbers, according to Sara and Lee. Similarly, only special systems (such as quantum computers and perhaps black holes) can generate high complexity quickly, quantum physicists expect.
  3. Assembly number may distinguish products of life from products of abiotic systems. Similarly, quantum complexity helps distinguish quantum computers’ computational power from classical computers’.
  4. High-assembly-number objects are highly structured (think of my laptop). Similarly, high-complexity quantum states are highly structured in the sense of having much many-body entanglement.
  5. Organisms generate high assembly numbers, using information. Similarly, using information, organisms have created quantum computers, which can generate quantum complexity quickly.

Assembly number and quantum complexity differ as follows:

  1. Classical objects have assembly numbers, whereas quantum states have quantum complexities.
  2. In the absence of life, random natural processes have low probabilities of producing high-assembly-number objects. That is, randomness appears to keep assembly numbers low. In contrast, randomness can help quantum complexity grow quickly.
  3. Highly complex quantum states look very random, according to simple, local probes. High-assembly-number objects do not.
  4. Only organisms generate high assembly numbers, according to Sara and Lee. In contrast, abiotic black holes may generate quantum complexity quickly.

Another feature shared by assembly-number studies and quantum computation merits its own paragraph: the importance of robustness. Suppose that multiple copies of a high-assembly-number (or moderate-assembly-number) object exist. Not only does my laptop exist, for example, but so do many other laptops. To Sara, such multiplicity signals the existence of some stable mechanism for creating that object. The multiplicity may provide extra evidence for life (including life that’s discovered manufacturing), as opposed to an unlikely sequence of random forces. Similarly, quantum computing—the preparation of highly complex states—requires stability. Decoherence threatens quantum states, necessitating quantum error correction. Quantum error correction differs from Sara’s stable production mechanism, but both evidence the importance of robustness to their respective fields.

A modest proposal

One can generalize assembly number to quantum states, using quantum complexity. Imagine finding a clump of atoms while searching for extraterrestrial life. The atoms need not have formed molecules, so the clump can have a low classical assembly number. However, the clump can be in a highly complex quantum state. We could detect the state’s complexity only (as far as I know) using many copies of the state, so imagine finding many clumps of atoms. Preparing highly complex quantum states requires special conditions, such as a quantum computer. The clump might therefore evidence organisms who’ve discovered quantum physics. Using quantum complexity, one might extend the assembly number to identify quantum states that may evidence life. However, quantum complexity, or a high rate of complexity generation, alone may not evidence life—for example, if achievable by black holes. Fortunately, a black hole seems unlikely to generate many identical copies of a highly complex quantum state. So we seem to have a low probability of mistakenly attributing a highly complex quantum state, sourced by a black hole, to organisms (atop our low probability of detecting any complex quantum state prepared by anyone other than us).

Would I expect a quantum assembly number to greatly improve humanity’s search for extraterrestrial life? I’m no astrobiology expert (NASA videos notwithstanding), but I’d expect probably not. Still, astrobiology requires chemistry, which requires quantum physics. Quantum complexity seems likely to find applications in the assembly-number sphere. Besides, doesn’t juxtaposing the search for extraterrestrial life and the understanding of life’s origins with quantum computing sound like fun? And a sense of fun distinguishes certain living beings from inanimate matter about as straightforwardly as assembly number does.

With thanks to Jim Al-Khalili, Paul Davies, the From Physics to Life collaboration, and UCLA for hosting me at the workshop that spurred this article.

May I have this dance?

This July, I came upon a museum called the Haus der Musik in one of Vienna’s former palaces. The museum contains a room dedicated to Johann Strauss II, king of the waltz. The room, dimly lit, resembles a twilit gazebo. I could almost believe that a hidden orchestra was playing the rendition of “The Blue Danube” that filled the room. Glass cases displayed dance cards and accessories that dancers would bring to a nineteenth-century ball.

A ball. Who hasn’t read about one in a novel or seen one in a film? A throng of youngsters and their chaperones, rustling in silk. The glint of candles, the vigor of movement, the thrill of interaction, the anxiety of establishing one’s place in society.

Victoria and Albert at a ball in the film The Young Victoria

Another throng gathered a short walk from the Haus der Musik this summer. The Vienna University of Technology hosted the conference Quantum Thermodynamics (QTD) in the heart of the city. Don’t tell the other annual conferences, but QTD is my favorite. It spotlights the breed of quantum thermodynamics that’s surged throughout the past decade—the breed saturated with quantum information theory. I began attending QTD as a PhD student, and the conference shifts from city to city from year to year. I reveled in returning in person for the first time since the pandemic began.

Yet this QTD felt different. First, instead of being a PhD student, I brought a PhD student of my own. Second, granted, I enjoyed catching up with colleagues-cum-friends as much as ever. I especially relished seeing the “classmates” who belonged to my academic generation. Yet we were now congratulating each other on having founded research groups, and we were commiserating about the workload of principal investigators.

Third, I found myself a panelist in the annual discussion traditionally called “Quo vadis, quantum thermodynamics?” The panel presented bird’s-eye views on quantum thermodynamics, analyzing trends and opining on the direction our field was taking (or should take).1 Fourth, at the end of the conference, almost the last sentence spoken into any microphone was “See you in Maryland next year.” Colleagues and I will host QTD 2024.


One of my dearest quantum-thermodynamic “classmates,” Nelly Ng, participated in the panel discussion, too. We met as students (see these two blog posts), and she’s now an assistant professor at Nanyang Technological University. Photo credit: Jakub Czartowski.

The day after QTD ended, I boarded an Austrian Airlines flight. Waltzes composed by Strauss played over the loudspeakers. They flipped a switch in my mind: I’d come of age, I thought. I’d attended QTD 2017 as a debutante, presenting my first invited talk at the conference series. I’d danced through QTD 2018 in Santa Barbara, as well as the online iterations held during the pandemic. I’d reveled in the vigor of scientific argumentation, the thrill of learning, the glint of slides shining on projector screens (not really). Now, I was beginning to shoulder responsibilities like a ballgown-wearing chaperone.

As I came of age, so did QTD. The conference series budded around the time I started grad school and embarked upon quantum-thermodynamics research. In 2017, approximately 80 participants attended QTD. This year, 250 people registered to attend in person, and others attended online. Two hundred fifty! Quantum thermodynamics scarcely existed as a field of research fifteen years ago.

I’ve heard that organizers of another annual conference, Quantum Information Processing (QIP), reacted similarly to a 250-person registration list some years ago. Aram Harrow, a professor and quantum information theorist at MIT, has shared stories about co-organizing the first QIPs. As a PhD student, he’d sat in his advisor’s office, taking notes, while the local quantum-information theorists chose submissions to highlight. Nowadays, a small army of reviewers and subreviewers processes the hordes of submissions. And, from what I heard about this year’s attendance, navigating the QIP crowd is almost like navigating a Disney theme park on a holiday.

Will QTD continue to grow like QIP? Would such growth strengthen or fracture the community? Perhaps we’ll discuss those questions at a “Quo vadis?” session in Maryland next year. But I, at least, hope to continue always to grow—and to dance.2


Ludwig Boltzmann, a granddaddy of thermodynamics, worked in Vienna. I’ve waited for years to make a pilgrimage.

1My opinion: Now that quantum thermodynamics has showered us with fundamental insights, we should put it to work in practical applications. How? Collaborators and I suggest one path here.

2I confess to having danced the waltz step (gleaned during my 14 years of ballet training) around that Strauss room in the Haus der Musik. I didn’t waltz around the conference auditorium, though.

Can Thermodynamics Resolve the Measurement Problem?

At the recent Quantum Thermodynamics conference in Vienna (coming next year to the University of Maryland!), during an expert panel Q&A session, one member of the audience asked, “Can quantum thermodynamics address foundational problems in quantum theory?”

That stuck with me, because that’s exactly what my research is about. So naturally, I’d say the answer is yes! In fact, here in the group of Marcus Huber at the Technical University of Vienna, we think thermodynamics may have something to say about the biggest quantum foundations problem of all: the measurement problem.

It’s sort of the iconic mystery of quantum mechanics: we know that an electron can be in two places at once – in a ‘superposition’ – but when we measure it, it’s only ever seen to be in one place, picked seemingly at random from the two possibilities. We say the state has ‘collapsed’.

What’s going on here? Thanks to Bell’s legendary theorem, we know that the answer can’t just be that it was always actually in one place and we just didn’t know which option it was – it really was in two places at once until it was measured1. But also, we don’t see this effect for sufficiently large objects. So how can this ‘two-places-at-once’ thing happen at all, and why does it stop happening once an object gets big enough?

Here, we already see hints that thermodynamics is involved, because even classical thermodynamics says that big systems behave differently from small ones. And interestingly, thermodynamics also hints that the narrative so far can’t be right. Because when taken at face value, the ‘collapse’ model of measurement breaks all three laws of thermodynamics.

Imagine an electron in a superposition of two energy levels: a combination of being in its ground state and first excited state. If we measure it and it ‘collapses’ to being only in the ground state, then its energy has decreased: it went from having some average of the ground and excited energies to just having the ground energy. The first law of thermodynamics says (crudely) that energy is conserved, but the loss of energy is unaccounted for here.

Next, the second law says that entropy always increases. One form of entropy represents your lack of information about a system’s state. Before the measurement, the system was in one of two possible states, but afterwards it was in only one state. So speaking very broadly, our uncertainty about its state, and hence the entropy, is reduced. (The third law is problematic here, too.)
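The energy and entropy bookkeeping takes only a few lines to make concrete (the energy units and the even superposition are toy choices of mine):

```python
import numpy as np

# A qubit with ground-state energy 0 and excited-state energy 1 (toy units).
H = np.diag([0.0, 1.0])
psi = np.array([1.0, 1.0]) / np.sqrt(2)        # equal superposition

energy_before = float(np.real(psi @ H @ psi))  # average of 0 and 1: 0.5
energy_after = 0.0                             # after 'collapse' to ground state

def entropy(probs):
    """Shannon/von Neumann entropy of a list of outcome probabilities."""
    probs = np.array([q for q in probs if q > 0])
    return float(-np.sum(probs * np.log(probs)))

# To an observer who knows only the outcome statistics, the state
# before measurement is an even mixture of the two outcomes; after
# the collapse, it's a single definite state.
entropy_before = entropy([0.5, 0.5])           # log 2
entropy_after = entropy([1.0])                 # 0
```

Both the missing half-unit of energy and the lost log 2 of entropy have to be accounted for somewhere, which is where the detector and environment come in.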

There’s a clear explanation: the system on its own decreases its entropy and fails to conserve energy, but the system isn’t on its own. To measure something, we must couple the system to a measuring device, and that device’s energy and entropy changes must account for the system’s changes.

This is the spirit of our measurement model2. We explicitly include the detector as a quantum object in the record-keeping of energy and information flow. In fact, we also include the entire environment surrounding both system and device – all the lab’s stray air molecules, photons, etc. Then the idea is to describe a measurement process as propagating a record of a quantum system’s state into the surroundings without collapsing it.

A schematic representation of a system spreading information into an environment (from Schwarzhans et al., with permission)

But talking about quantum systems interacting with their environments is nothing new. The “decoherence” model from the 70s, which our work builds on, says quantum objects become less quantum when buffeted by a larger environment.

The problem, though, is that decoherence describes how information is lost into an environment, and so usually the environment’s dynamics aren’t explicitly calculated: this is called an open-system approach. By contrast, in the closed-system approach we use, you model the dynamics of the environment too, keeping track of all information. This is useful because conventional collapse dynamics seems to destroy information, but every other fundamental law of physics seems to say that information can’t be destroyed.

This all allows us to track how information flows from system to surroundings, using the “Quantum Darwinism” (QD) model of W.H. Żurek. Whereas decoherence describes how environments affect systems, QD describes how quantum systems impact their environments by spreading information into them. The QD model says that the most ‘classical’ information – the kind most consistent with classical notions of ‘being in one place’, etc. – is the sort most likely to ‘survive’ the decoherence process.

QD then further asserts that this is the information that’s most likely to be copied into the environment. If you look at some of a system’s surroundings, this is what you’d most likely see. (The ‘Darwinism’ name is because certain states are ‘selected for’ and ‘replicate’3.)

So we have a description of what we want the post-measurement state to look like: a decohered system, with its information redundantly copied into its surrounding environment. The last piece of the puzzle, then, is to ask how a measurement can create this state. Here, we finally get to the dynamics part of the thermodynamics, and introduce equilibration.

Earlier we said that even if the system’s entropy decreases, the detector’s entropy (or more broadly the environment’s) should go up to compensate. Well, equilibration maximizes entropy. In particular, equilibration describes how a system tends towards a particular ‘equilibrium’ state, because the system can always increase its entropy by getting closer to it.

It’s usually said that systems equilibrate if put in contact with an external environment (e.g. a can of beer cooling in a fridge), but we’re actually interested in a different type of equilibration called equilibration on average. There, we’re asking for the state that a system stays roughly close to, on average, over long enough times, with no outside contact. That means it never truly decoheres; it just looks like it does for certain observables. (This implies that nothing ever truly decoheres, since open systems are only an approximation you make when you don’t want to track all of the environment.)
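Equilibration on average is easy to exhibit numerically: evolve a pure state unitarily under a random Hamiltonian, with no outside contact, and an observable’s expectation value hovers around a fixed “diagonal ensemble” value even though the state never stops evolving. A sketch (the dimension, observable, and time window are arbitrary choices of mine, not taken from the MEH paper):

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 40

def random_hermitian():
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (a + a.conj().T) / 2

H = random_hermitian()                 # closed-system Hamiltonian
O = random_hermitian()                 # some observable

E, V = np.linalg.eigh(H)
psi0 = np.zeros(dim); psi0[0] = 1.0    # initial pure state
c = V.conj().T @ psi0                  # overlaps with energy eigenstates
O_eig = V.conj().T @ O @ V             # O in the energy eigenbasis

def expval(t):
    """<psi(t)|O|psi(t)> under unitary (never-decohering) evolution."""
    psi_t = np.exp(-1j * E * t) * c
    return float(np.real(psi_t.conj() @ (O_eig @ psi_t)))

# Long-time average of <O(t)> versus the 'diagonal ensemble' value
# sum_k |c_k|^2 <E_k|O|E_k> -- the equilibrium the state hovers near
# on average, despite never settling into any fixed state.
times = np.linspace(0.0, 2000.0, 2001)
time_avg = np.mean([expval(t) for t in times])
diag_ensemble = float(np.real(np.sum(np.abs(c) ** 2 * np.diag(O_eig))))
```

The instantaneous expectation value keeps fluctuating forever, but its long-time average pins down the equilibrium value: that is equilibration on average.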

Equilibration is the key to the model. In fact, we call our idea the Measurement-Equilibration Hypothesis (MEH): we’re asserting that measurement is an equilibration process. Which makes the final question: what does all this mean for the measurement problem?

In the MEH framework, when someone ‘measures’ a quantum system, they allow some measuring device, plus a chaotic surrounding environment, to interact with it. The quantum system then equilibrates ‘on average’ with the environment, and spreads information about its classical states into the surroundings. Since you are a macroscopically large human, any measurement you do will induce this sort of equilibration to happen, meaning you will only ever have access to the classical information in the environment, and never see superpositions. But no collapse is necessary, and no information is lost: rather some information is only much more difficult to access in all the environment noise, as happens all the time in the classical world.

It’s tempting to ask what ‘happens’ to the outcomes we don’t see, and how nature ‘decides’ which outcome to show to us. Those are great questions, but in our view, they’re best left to philosophers4. For the question we care about – why measurements look like a ‘collapse’ – we’re just getting started with our Measurement-Equilibration Hypothesis: there’s still lots to do in our explorations of it. We think the answers we’ll uncover in doing so will form an exciting step forward in our understanding of the weird and wonderful quantum world.

Members of the MEH team at a kick-off meeting for the project in Vienna in February 2023. Left to right: Alessandro Candeloro, Marcus Huber, Emanuel Schwarzhans, Tom Rivlin, Sophie Engineer, Veronika Baumann, Nicolai Friis, Felix C. Binder, Mehul Malik, Maximilian P.E. Lock, Pharnam Bakhshinezhad

Acknowledgements: Big thanks to the rest of the MEH team for all the help and support, in particular Dr. Emanuel Schwarzhans and Dr. Lock for reading over this piece!

Here are a few choice references (by no means meant to be comprehensive!)

Quantum Thermodynamics (QTD) Conference 2023: https://qtd2023.conf.tuwien.ac.at/
QTD 2024: https://qtd-hub.umd.edu/event/qtd-conference-2024/
Bell’s Theorem: https://plato.stanford.edu/entries/bell-theorem/
The first MEH paper: https://arxiv.org/abs/2302.11253
A review of decoherence: https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.75.715
Quantum Darwinism: https://www.nature.com/articles/nphys1202
Measurements violate the 3rd law: https://quantum-journal.org/papers/q-2020-01-13-222/
More on the 3rd and QM: https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.4.010332
Equilibration on average: https://iopscience.iop.org/article/10.1088/0034-4885/79/5/056001/meta
Objectivity: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.91.032122

  1. There is a perfectly valid alternative with other weird implications: that it was always just in one place, but the world is intrinsically non-local. Most physicists prefer to save locality over realism, though. ↩︎
  2. First proposed in this paper by Schwarzhans, Binder, Huber, and Lock: https://arxiv.org/abs/2302.11253 ↩︎
  3. In my opinion… it’s a brilliant theory with a terrible name! Sure, there’s something akin to ‘selection pressure’ and ‘reproduction’, but there aren’t really any notions of mutation, adaptation, fitness, generations… Alas, the name has stuck. ↩︎
  4. I actually love thinking about this question, and the interpretations of quantum mechanics more broadly, but it’s fairly orthogonal to the day-to-day research on this model. ↩︎

The Book of Mark, Chapter 2

Late in the summer of 2021, I visited a physics paradise in a physical paradise: the Kavli Institute for Theoretical Physics (KITP). The KITP sits at the edge of the University of California, Santa Barbara like a bougainvillea bush at the edge of a yard. I was eating lunch outside the KITP one afternoon, across the street from the beach. PhD student Arman Babakhani, whom a colleague had just introduced me to, had joined me.

The KITP’s Kohn Hall

What physics was I working on nowadays? Arman wanted to know.

Thermodynamic exchanges. 

The world consists of physical systems exchanging quantities with other systems. When a rose blooms outside the Santa Barbara mission, it exchanges pollen with the surrounding air. The total amount of pollen across the rose-and-air whole remains constant, so we call the amount a conserved quantity. Quantum physicists usually analyze conservation of particles, energy, and magnetization. But quantum systems can conserve quantities that participate in uncertainty relations. Such quantities are called incompatible, because you can’t measure them simultaneously. The x-, y-, and z-components of a qubit’s spin are incompatible.
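Incompatibility is a two-line check: the spin components’ commutators are nonzero, [\sigma_x, \sigma_y] = 2i \sigma_z and cyclic permutations, so no two components share an eigenbasis:

```python
import numpy as np

# Pauli matrices: the x-, y-, and z-components of a qubit's spin
# (up to a factor of hbar/2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def commutator(a, b):
    return a @ b - b @ a

# Nonzero commutators: the three components are pairwise incompatible,
# so they can't be measured simultaneously.
c_xy = commutator(sx, sy)   # equals 2i * sz
c_yz = commutator(sy, sz)   # equals 2i * sx
c_zx = commutator(sz, sx)   # equals 2i * sy
```

Each commutator is proportional to the third spin component, which is the algebraic root of the uncertainty relations mentioned above.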

The Santa Barbara mission…
…and its roses

Exchanging and conserving incompatible quantities, systems can violate thermodynamic expectations. If one system is much larger than the other, we expect the smaller system to thermalize; yet incompatibility invalidates derivations of the thermal state’s form. Incompatibility reduces the thermodynamic entropy produced by exchanges. And incompatibility can raise the average amount of entanglement in the pair of systems—the total system.

If the total system conserves incompatible quantities, what happens to the eigenstate thermalization hypothesis (ETH)? Last month’s blog post overviewed the ETH, a framework for understanding how quantum many-particle systems thermalize internally. That post labeled Mark Srednicki, a professor at the KITP, a high priest of the ETH. I want, I told Arman, to ask Mark what happens when you combine the ETH with incompatible conserved quantities.

I’ll do it, Arman said.

Soon after, I found myself in the fishbowl. High up in the KITP, a room filled with cushy seats overlooks the ocean. The circular windows lend the room its nickname. Arrayed on the armchairs and couches were Mark, Arman, Mark’s PhD student Fernando Iniguez, and Mark’s recent PhD student Chaitanya Murthy. The conversation went like this:

Mark was frustrated about not being able to answer the question. I was delighted to have stumped him. Over the next several weeks, the group continued meeting, and we emailed out notes for everyone to criticize. I particularly enjoyed watching Mark and Chaitanya interact. They’d grown so intellectually close throughout Chaitanya’s PhD studies that they reminded me of an old married couple. One of them had to express only half an idea for the other to realize what he’d meant and to continue the thread. Neither had any qualms about challenging the other, yet they trusted each other’s judgment.1

In vintage KITP fashion, we’d nearly completed a project by the time Chaitanya and I left Santa Barbara. Physical Review Letters published our paper this year, and I’m as proud of it as a gardener of the first buds from her garden. Here’s what we found.

Southern California spoiled me for roses.

Incompatible conserved quantities conflict with the ETH and the ETH’s prediction of internal thermalization. Why? For three reasons. First, when inferring thermalization from the ETH, we assume that the Hamiltonian lacks degeneracies (that no energy equals any other). But incompatible conserved quantities force degeneracies on the Hamiltonian.2 
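The degeneracy-forcing shows up already for three qubits. The Heisenberg chain below commutes with all three total-spin components S_x, S_y, and S_z (a non-Abelian symmetry), and its eight levels organize into SU(2) multiplets, so no eigenvalue stands alone. A minimal numpy check (the three-site chain is my toy example, not the paper’s general setting):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

n = 3   # three qubits

def embed(op, site):
    """op acting on one site of the n-qubit chain, identity elsewhere."""
    factors = [I2] * n
    factors[site] = op
    return reduce(np.kron, factors)

# Heisenberg chain: conserves all three total-spin components, which
# don't commute with one another -- a non-Abelian symmetry.
H = sum(embed(s, i) @ embed(s, i + 1)
        for i in range(n - 1) for s in (sx, sy, sz))
S_totals = [sum(embed(s, i) for i in range(n)) for s in (sx, sy, sz)]

conserved = [np.allclose(H @ S, S @ H) for S in S_totals]

# Schur's lemma then forces degeneracies: the 8 levels organise into
# SU(2) multiplets (one spin-3/2 quartet, two spin-1/2 doublets), so
# every eigenvalue is at least doubly degenerate.
eigenvalues = np.round(np.linalg.eigvalsh(H), 8)
levels, multiplicities = np.unique(eigenvalues, return_counts=True)
```

Every multiplicity comes out at least 2: the non-Abelian symmetry leaves the Hamiltonian no way to have a nondegenerate spectrum.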

Second, when inferring from the ETH that the system thermalizes, we assume that the system begins in a microcanonical subspace. That’s an eigenspace shared by the conserved quantities (other than the Hamiltonian)—usually, an eigenspace of the total particle number or the total spin’s z-component. But, if incompatible, the conserved quantities share no eigenbasis, so they might not share eigenspaces, so microcanonical subspaces won’t exist in abundance.

Third, let’s focus on a system of N qubits. Say that the Hamiltonian conserves the total spin components S_x, S_y, and S_z. The Hamiltonian obeys the Wigner–Eckart theorem, which sounds more complicated than it is. Suppose that the qubits begin in a state | s_\alpha, \, m \rangle labeled by a spin quantum number s_\alpha and a magnetic spin quantum number m. Let a particle hit the qubits, acting on them with an operator \mathcal{O} . With what probability (amplitude) do the qubits end up with quantum numbers s_{\alpha'} and m'? The answer is \langle s_{\alpha'}, \, m' | \mathcal{O} | s_\alpha, \, m \rangle. The Wigner–Eckart theorem dictates this probability amplitude’s form. 

| s_\alpha, \, m \rangle and | s_{\alpha'}, \, m' \rangle are Hamiltonian eigenstates, thanks to the conservation law. The ETH is an ansatz for the form of \langle s_{\alpha'}, \, m' | \mathcal{O} | s_\alpha, \, m \rangle—of the elements of matrices that represent operators \mathcal{O} relative to the energy eigenbasis. The ETH butts heads with the Wigner–Eckart theorem, which also predicts the matrix element’s form.

The Wigner–Eckart theorem wins, being a theorem—a proved claim. The ETH is, as the H in the acronym relates, only a hypothesis.

If conserved quantities are incompatible, we have to kiss the ETH and its thermalization predictions goodbye. But must we set ourselves adrift entirely? Or can we still cling to some buoy from physics’s best toolkit for quantum many-body thermalization?

No, and yes, respectively. Our clan proposed a non-Abelian ETH for Hamiltonians that conserve incompatible quantities—or, equivalently, that have non-Abelian symmetries. The non-Abelian ETH depends on s_\alpha and on Clebsch–Gordan coefficients—conversion factors between total-spin eigenstates | s_\alpha, \, m \rangle and product states | s_1, \, m_1 \rangle \otimes | s_2, \, m_2 \rangle.
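For reference, those Clebsch–Gordan coefficients implement the standard change of basis between total-spin eigenstates and product states (suppressing the extra labels needed when a total-spin value appears with multiplicity):

```latex
| s_\alpha, \, m \rangle
  \;=\;
  \sum_{m_1 + m_2 = m}
  \langle s_1, \, m_1; \, s_2, \, m_2 \, | \, s_\alpha, \, m \rangle
  \, | s_1, \, m_1 \rangle \otimes | s_2, \, m_2 \rangle .
```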

Using the non-Abelian ETH, we proved that many systems thermalize internally, despite conserving incompatible quantities. Yet the incompatibility complicates the proof enormously, extending it from half a page to several pages. Also, under certain conditions, incompatible quantities may alter thermalization. According to the conventional ETH, time-averaged expectation values \overline{ \langle \mathcal{O} \rangle }_t come to equal thermal expectation values \langle \mathcal{O} \rangle_{\rm th} to within O( N^{-1} ) corrections, as I explained last month. The correction can grow polynomially larger in the system size, to O( N^{-1/2} ), if conserved quantities are incompatible. Our conclusion holds under an assumption that we argue is physically reasonable.

So incompatible conserved quantities do alter the ETH, upending yet another thermodynamic expectation. Physicist Jae Dong Noh began checking the non-Abelian ETH numerically, and more testing is underway. And I’m looking forward to returning to the KITP this fall. Tales do say that paradise is a garden.

View through my office window at the KITP

1Not that married people always trust each other’s judgment.

2The reason is Schur’s lemma, a group-theoretic result. Appendix A of this paper explains the details.

Caltech’s Ginsburg Center

Editor’s note: On 10 August 2023, Caltech celebrated the groundbreaking for the Dr. Allen and Charlotte Ginsburg Center for Quantum Precision Measurement, which will open in 2025. At a lunch following the ceremony, John Preskill made these remarks.

Rendering of the facade of the Ginsburg Center

Hello everyone. I’m John Preskill, a professor of theoretical physics at Caltech, and I’m honored to have this opportunity to make some brief remarks on this exciting day.

In 2025, the Dr. Allen and Charlotte Ginsburg Center for Quantum Precision Measurement will open on the Caltech campus. That will certainly be a cause for celebration. Quite fittingly, in that same year, we’ll have something else to celebrate — the 100th anniversary of the formulation of quantum mechanics in 1925. In 1900, it had become clear that the physics of the 19th century had serious shortcomings that needed to be addressed, and for 25 years a great struggle unfolded to establish a firm foundation for the science of atoms, electrons, and light; the momentous achievements of 1925 brought that quest to a satisfying conclusion. No comparably revolutionary advance in fundamental science has occurred since then.

For 98 years now we’ve built on those achievements of 1925 to arrive at a comprehensive understanding of much of the physical world, from molecules to materials to atomic nuclei and exotic elementary particles, and much else besides. But a new revolution is in the offing. And the Ginsburg Center will arise at just the right time and at just the right place to drive that revolution forward.

Up until now, most of what we’ve learned about the quantum world has resulted from considering the behavior of individual particles. A single electron propagating as a wave through a crystal, unfazed by barriers that seem to stand in its way. Or a single photon, bouncing hundreds of times between mirrors positioned kilometers apart, dutifully tracking the response of those mirrors to gravitational waves from black holes that collided in a galaxy billions of light years away. Understanding that single-particle physics has enabled us to explore nature in unprecedented ways, and to build information technologies that have profoundly transformed our lives.

At the groundbreaking: Physics, Math and Astronomy Chair Fiona Harrison, California Assemblymember Chris Holden, President Tom Rosenbaum, Charlotte Ginsburg, Dr. Allen Ginsburg, Pasadena Mayor Victor Gordo, Provost Dave Tirrell.

What’s happening now is that we’re getting increasingly adept at instructing particles to move in coordinated ways that can’t be accurately described in terms of the behavior of one particle at a time. The particles, as we like to say, can become entangled. Many particles, like electrons or photons or atoms, when highly entangled, exhibit an extraordinary complexity that we can’t capture with the most powerful of today’s supercomputers, or with our current theories of how Nature works. That opens extraordinary opportunities for new discoveries and new applications.

We’re very proud of the role Caltech has played in setting the stage for the next quantum revolution. Richard Feynman envisioning quantum computers that far surpass the computers we have today. Kip Thorne proposing ways to use entangled photons to perform extraordinarily precise measurements. Jeff Kimble envisioning and executing ingenious methods for entangling atoms and photons. Jim Eisenstein creating and studying extraordinary phenomena in a soup of entangled electrons. And much more besides. But far greater things are yet to come.

How can we learn to understand and exploit the behavior of many entangled particles that work together? For that, we’ll need many scientists and engineers who work together. I joined the Caltech faculty in August 1983, almost exactly 40 years ago. These have been 40 good years, but I’m having more fun now than ever before. My training was in elementary particle physics. But as our ability to manipulate the quantum world advances, I find that I have more and more in common with my colleagues from different specialties. To fully realize my own potential as a researcher and a teacher, I need to stay in touch with atomic physics, condensed matter physics, materials science, chemistry, gravitational wave physics, computer science, electrical engineering, and much else. Even more important, that kind of interdisciplinary community is vital for broadening the vision of the students and postdocs in our research groups.

Nurturing that community — that’s what the Ginsburg Center is all about. That’s what will happen there every day. That sense of a shared mission, enhanced by colocation, will enable the Ginsburg Center to lead the way as quantum science and technology becomes increasingly central to Caltech’s research agenda in the years ahead, and increasingly important for science and engineering around the globe. And I just can’t wait for 2025.

Caltech is very fortunate to have generous and visionary donors like the Ginsburgs and the Sherman Fairchild Foundation to help us realize our quantum dreams.

Dr. Allen and Charlotte Ginsburg

It from Qubit: The Last Hurrah

Editor’s note: Since 2015, the Simons Foundation has supported the “It from Qubit” collaboration, a group of scientists drawing on ideas from quantum information theory to address deep issues in fundamental physics. The collaboration held its “Last Hurrah” event at Perimeter Institute last week. Here is a transcript of remarks by John Preskill at the conference dinner.

It from Qubit 2023 at Perimeter Institute

This meeting is forward-looking, as it should be, but it’s fun to look back as well, to assess and appreciate the progress we’ve made. So my remarks may meander back and forth through the years. Settle back — this may take a while.

We proposed the It from Qubit collaboration in March 2015, in the wake of several years of remarkable progress. Interestingly, that progress was largely provoked by an idea that most of us think is wrong: Black hole firewalls. Wrong perhaps, but challenging to grapple with.

This challenge accelerated a synthesis of quantum computing, quantum field theory, quantum matter, and quantum gravity as well. By 2015, we were already appreciating the relevance to quantum gravity of concepts like quantum error correction, quantum computational complexity, and quantum chaos. It was natural to assemble a collaboration in which computer scientists and information theorists would participate along with high-energy physicists.

We built our proposal around some deep questions where further progress seemed imminent, such as these:

Does spacetime emerge from entanglement?
Do black holes have interiors?
What is the information-theoretical structure of quantum field theory?
Can quantum computers simulate all physical phenomena?

On April 30, 2015 we presented our vision to the Simons Foundation, led by Patrick [Hayden] and Matt [Headrick], with Juan [Maldacena], Lenny [Susskind] and me tagging along. We all shared at that time a sense of great excitement; that feeling must have been infectious, because It from Qubit was successfully launched.

Some It from Qubit investigators at a 2015 meeting.

Since then ideas we talked about in 2015 have continued to mature, to ripen. Now our common language includes ideas like islands and quantum extremal surfaces, traversable wormholes, modular flow, the SYK model, quantum gravity in the lab, nonisometric codes, the breakdown of effective field theory when quantum complexity is high, and emergent geometry described by Von Neumann algebras. In parallel, we’ve seen a surge of interest in quantum dynamics in condensed matter, focused on issues like how entanglement spreads, and how chaotic systems thermalize — progress driven in part by experimental advances in quantum simulators, both circuit-based and analog.

Why did we call ourselves “It from Qubit”? Patrick explained that in our presentation with a quote from John Wheeler in 1990. Wheeler said,

“It from bit” symbolizes the idea that every item of the physical world has at bottom—a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-or-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe.

As is often the case with Wheeler, you’re not quite sure what he’s getting at. But you can glean that Wheeler envisioned that progress in fundamental physics would be hastened by bringing in ideas from information theory. So we updated Wheeler’s vision by changing “it from bit” to “it from qubit.”

As you may know, Richard Feynman had been Wheeler’s student, and he once said this about Wheeler: “Some people think Wheeler’s gotten crazy in his later years, but he’s always been crazy.” So you can imagine how flattered I was when Graeme Smith said the exact same thing about me.

During the 1972-73 academic year, I took a full-year undergraduate course from Wheeler at Princeton that covered everything in physics, so I have a lot of Wheeler stories. I’ll just tell one, which will give you some feel for his teaching style. One day, Wheeler arrives in class dressed immaculately in a suit and tie, as always, and he says: “Everyone take out a sheet of paper, and write down all the equations of physics – don’t leave anything out.” We dutifully start writing equations. The Schrödinger equation, Newton’s laws, Maxwell’s equations, the definition of entropy and the laws of thermodynamics, Navier-Stokes … we had learned a lot. Wheeler collects all the papers, and puts them in a stack on a table at the front of the classroom. He gestures toward the stack and says imploringly, “Fly!” [Long pause.] Nothing happens. He tries again, even louder this time: “Fly!” [Long pause.] Nothing happens. Then Wheeler concludes: “On good authority, this stack of papers contains all the equations of physics. But it doesn’t fly. Yet, the universe flies. Something must be missing.”

Channeling Wheeler at the banquet, I implore my equations to fly. Photo by Jonathan Oppenheim.

He was an odd man, but inspiring. And not just odd, but also old. We were 19 and could hardly believe he was still alive — after all, he had worked with Bohr on nuclear fission in the 1930s! He was 61. I’m wiser now, and know that’s not really so old.

Now let’s skip ahead to 1998. Just last week, Strings 2023 happened right here at PI. So it’s fitting to mention that a pivotal Strings meeting occurred 25 years ago, Strings 1998 in Santa Barbara. The participants were in a celebratory mood, so much so that Jeff Harvey led hundreds of physicists in a night of song and dance. It went like this [singing to the tune of “The Macarena”]:

You start with the brane
and the brane is BPS.
Then you go near the brane
and the space is AdS.
Who knows what it means?
I don’t, I confess.
Ehhhh! Maldacena!

You can’t blame them for wanting to celebrate. Admittedly I wasn’t there, so how did I know that hundreds of physicists were singing and dancing? I read about it in the New York Times!

It was significant that by 1998, the Strings meetings had already been held annually for 10 years. You might wonder how that came about. Let’s go back to 1984. Those of you who are too young to remember might not realize that in the late 70s and early 80s string theory was in eclipse. It had initially been proposed as a model of hadrons, but after the discovery of asymptotic freedom in 1973 quantum chromodynamics became accepted as the preferred theory of the strong interactions. (Maybe the QCD string will make a comeback someday – we’ll see.) The community pushing string theory forward shrank to a handful of people around the world. That changed very abruptly in August 1984. I tried to capture that sudden change in a poem I wrote for John Schwarz’s 60th birthday in 2001. I’ll read it — think of this as a history lesson.

Thirty years ago or more
John saw what physics had in store.
He had a vision of a string
And focused on that one big thing.

But then in nineteen-seven-three
Most physicists had to agree
That hadrons blasted to debris
Were well described by QCD.

The string, it seemed, by then was dead.
But John said: “It’s space-time instead!
The string can be revived again.
Give masses twenty powers of ten!

Then Dr. Green and Dr. Black,
Writing papers by the stack,
Made One, Two-A, and Two-B glisten.
Why is it none of us would listen?

We said, “Who cares if super tricks
Bring D to ten from twenty-six?
Your theory must have fatal flaws.
Anomalies will doom your cause.”

If you weren’t there you couldn’t know
The impact of that mighty blow:
“The Green-Schwarz theory could be true —
It works for S-O-thirty-two!”

Then strings of course became the rage
And young folks of a certain age
Could not resist their siren call:
One theory that explains it all.

Because he never would give in,
Pursued his dream with discipline,
John Schwarz has been a hero to me.
So … please don’t spell it with a “t”!

And 39 years after the revolutionary events of 1984, the intellectual feast launched by string theory still thrives.

In the late 1980s and early 1990s, many high-energy physicists got interested in the black hole information problem. Of course, the problem was 15 years old by then; it arose when Hawking radiation was discovered, as Hawking himself pointed out shortly thereafter. But many of us were drawn to this problem while we waited for the Superconducting Super Collider to turn on. As I have sometimes done when I wanted to learn something, in 1990 I taught a course on quantum field theory in curved spacetime, the main purpose of which was to explain the origin of Hawking radiation, and then for a few years I tried to understand whether information can escape from black holes and if so how, as did many others in those days. That led to a 1992 Aspen program co-organized by Andy Strominger and me on “Quantum Aspects of Black Holes.” Various luminaries were there, among them Hawking, Susskind, Sidney Coleman, Kip Thorne, Don Page, and others. Andy and I were asked to nominate someone from our program to give the Aspen Center colloquium, so of course we chose Lenny, and he gave an engaging talk on “The Puzzle of Black Hole Evaporation.”

At the end of the talk, Lenny reported on discussions he’d had with various physicists he respected about the information problem, and he summarized their views. Of course, Hawking said information is lost. ‘t Hooft said that the S-matrix must be unitary for profound reasons we needed to understand. Polchinski said in 1992 that information is lost and there is no way to retrieve it. Yakir Aharonov said that the information resides in a stable Planck-sized black hole remnant. Sidney Coleman said a black hole is a lump of coal — that was the code in 1992 for what we now call the central dogma of black hole physics, that as seen from the outside a black hole is a conventional quantum system. And – remember this was Lenny’s account of what he claimed people had told him – Frank Wilczek said this is a technical problem, I’ll soon have it solved, while Ed Witten said he did not find the problem interesting.

We talked a lot that summer about the no-cloning principle, and our discomfort with the notion that the quantum information encoded in an infalling encyclopedia could be in two places at once on the same time slice, seen inside the black hole by infalling observers and seen outside the black hole by observers who peruse the Hawking radiation. That potential for cloning shook the faith of the self-appointed defenders of unitarity. Andy and I wrote a report at the end of the workshop with a pessimistic tone:

There is an emerging consensus among the participants that Hawking is essentially right – that the information loss paradox portends a true revolution in fundamental physics. If so, then one must go further, and develop a sensible “phenomenological” theory of information loss. One must reconcile the fact of information loss with established principles of physics, such as locality and energy conservation. We expect that many people, stimulated by their participation in the workshop, will now focus attention on this challenge.

I posted a paper on the arXiv a month later with a similar outlook.

There was another memorable event a year later, in June 1993, a conference at the ITP in Santa Barbara (there was no “K” back then), also called “Quantum Aspects of Black Holes.” Among those attending were Susskind, Gibbons, Polchinski, Thorne, Wald, Israel, Bekenstein, and many others. By then our mood was brightening. Rather pointedly, Lenny said to me that week: “Why is this meeting so much better than the one you organized last year?” And I replied, “Because now you think you know the answer!”

That week we talked about “black hole complementarity,” our hope that quantum information being available both inside and outside the horizon could be somehow consistent with the linearity of quantum theory. Complementarity then was a less radical, less wildly nonlocal idea than it became later on. We envisioned that information in an infalling body could stick to the stretched horizon, but not, as I recall, that the black hole interior would be somehow encoded in Hawking radiation emitted long ago — that came later. But anyway, we felt encouraged.

Joe Polchinski organized a poll of the participants, where one could choose among four options.

  1. Information is lost (unitarity violated)
  2. Information escapes (causality violated)
  3. Planck-scale black hole remnants
  4. None of the above

The poll results favored unitarity over information loss by a 60-40 margin. Perhaps not coincidentally, the participants self-identified as 60% high energy physicists and 40% relativists.

The following summer in June 1994, there was a program called Geometry and Gravity at the Newton Institute in Cambridge. Hawking, Gibbons, Susskind, Strominger, Harvey, Sorkin, and (Herman) Verlinde were among the participants. I had more discussions with Lenny that month than any time before or since. I recall sending an email to Paul Ginsparg after one such long discussion in which I said, “When I hear Lenny Susskind speak, I truly believe that information can come out of a black hole.” Secretly, though, having learned about Shor’s algorithm shortly before that program began, I was spending my evenings struggling to understand Shor’s paper. After Cambridge, Lenny visited ‘t Hooft in Utrecht, and returned to Stanford all charged up to write his paper on “The world as a hologram,” in which he credits ‘t Hooft with the idea that “the world is in a sense two-dimensional.”

Important things happened in the next few years: D-branes, counting of black hole microstates, M-theory, and AdS/CFT. But I’ll skip ahead to the most memorable of my visits to Perimeter Institute. (Of course, I always like coming here, because in Canada you use the same electrical outlets we do …)

In June 2007, there was a month-long program at PI called “Taming the Quantum World.” I recall that Lucien Hardy objected to that title — he preferred “Let the Beast Loose” — which I guess is a different perspective on the same idea. I talked there about fault-tolerant quantum computing, but more importantly, I shared an office with Patrick Hayden. I already knew Patrick well — he had been a Caltech postdoc — but I was surprised and pleased that he was thinking about black holes. Patrick had already reached crucial insights concerning the behavior of a black hole that is profoundly entangled with its surroundings. That sparked intensive discussions resulting in a paper later that summer called “Black holes as mirrors.” In the acknowledgments you’ll find this passage:

We are grateful for the hospitality of the Perimeter Institute, where we had the good fortune to share an office, and JP thanks PH for letting him use the comfortable chair.

We intended for that paper to pique the interest of both the quantum information and quantum gravity communities, as it seemed to us that the time was ripe to widen the communication channel between the two. Since then, not only has that communication continued, but a deeper synthesis has occurred; most serious quantum gravity researchers are now well acquainted with the core concepts of quantum information science.

That John Schwarz poem I read earlier reminds me that I often used to write poems. I do it less often lately. Still, I feel that you are entitled to hear something that rhymes tonight. But I quickly noticed our field has many words that are quite hard to rhyme, like “chaos” and “dogma.” And perhaps the hardest of all: “Takayanagi.” So I decided to settle for some limericks — that’s easier for me than a full-fledged poem.

This first one captures how I felt when I first heard about AdS/CFT: excited but perplexed.

Spacetime is emergent they say.
But emergent in what sort of way?
It’s really quite cool,
The bulk has a dual!
I might understand that someday.

For a quantum information theorist, it was pleasing to learn later on that we can interpret the dictionary as an encoding map, such that the bulk degrees of freedom are protected when a portion of the boundary is erased.

Almheiri and Harlow and Dong
Said “you’re thinking about the map wrong.”
It’s really a code!
That’s the thing that they showed.
Should we have known that all along?

(It is easier to rhyme “Dong” than “Takayanagi”.) To see that connection one needed a good grasp of both AdS/CFT and quantum error-correcting codes. In 2014 few researchers knew both, but those guys did.

For all our progress, we still don’t have a complete answer to a key question that inspired IFQ. What’s inside a black hole?

Information loss has been denied.
Locality’s been cast aside.
When the black hole is gone
What fell in’s been withdrawn.
I’d still like to know: what’s inside?

We’re also still lacking an alternative nonperturbative formulation of the bulk; we can only say it’s something that’s dual to the boundary. Until we can define both sides of the correspondence, the claim that two descriptions are equivalent, however inspiring, will remain unsatisfying.

Duality I can embrace.
Complexity, too, has its place.
That’s all a good show
But I still want to know:
What are the atoms of space?

The question, “What are the atoms of space?” is stolen from Joe Polchinski, who framed it to explain to a popular audience what we’re trying to answer. I miss Joe. He was a founding member of It from Qubit, an inspiring scientific leader, and still an inspiration for all of us today.

The IFQ Simons collaboration may fade away, but the quest that has engaged us these past 8 years goes on. IFQ is the continuation of a long struggle, which took on great urgency with Hawking’s formulation of the information loss puzzle nearly 50 years ago. Understanding quantum gravity and its implications is a huge challenge and a grand quest that humanity is obligated to pursue. And it’s fun and it’s exciting, and I sincerely believe that we’ve made remarkable progress in recent years, thanks in large part to you, the IFQ community. We are privileged to live at a time when truths about the nature of space and time are being unveiled. And we are privileged to be part of this community, with so many like-minded colleagues pulling in the same direction, sharing the joy of facing this challenge.

Where is it all going? Coming back to our pitch to the Simons Foundation in 2015, I was very struck by Juan’s presentation that day, and in particular his final slide. I liked it so much that I stole it and used in my presentations for a while. Juan tried to explain what we’re doing by means of an analogy to biological science. How are the quantumists like the biologists?

Well, bulk quantum gravity is life. We all want to understand life. The boundary theory is chemistry, which underlies life. The quantum information theorists are chemists; they want to understand chemistry in detail. The quantum gravity theorists are biologists, they think chemistry is fine, if it can really help them to understand life. What we want is: molecular biology, the explanation for how life works in terms of the underlying chemistry. The black hole information problem is our fruit fly, the toy problem we need to solve before we’ll be ready to take on a much bigger challenge: finding the cure for cancer; that is, understanding the big bang.

How’s it going? We’ve made a lot of progress since 2015. We haven’t cured cancer. Not yet. But we’re having a lot of fun along the way there.

I’ll end with this hope, addressed especially to those who were not yet born when AdS/CFT was first proposed, or were still scampering around in your playpens. I’ll grant you a reprieve, you have another 8 years. By then: May you cure cancer!

So I propose this toast: To It from Qubit, to our colleagues and friends, to our quest, to curing cancer, to understanding the universe. I wish you all well. Cheers!

The Book of Mark

Mark Srednicki doesn’t look like a high priest. He’s a professor of physics at the University of California, Santa Barbara (UCSB); and you’ll sooner find him in khakis than in sacred vestments. Humor suits his round face better than channeling divine wrath would; and I’ve never heard him speak in tongues—although, when an idea excites him, his hands rise to shoulder height of their own accord, as though halfway toward a priestly blessing. Mark belongs less on a ziggurat than in front of a chalkboard. Nevertheless, he called himself a high priest.

Specifically, Mark jokingly called himself a high priest of the eigenstate thermalization hypothesis, a framework for understanding how quantum many-body systems thermalize internally. The eigenstate thermalization hypothesis has an unfortunate number of syllables, so I’ll call it the ETH. The ETH illuminates closed quantum many-body systems, such as a clump of N ultracold atoms. The clump can begin in a pure product state | \psi(0) \rangle, then evolve under a chaotic1 Hamiltonian H. The time-t state | \psi(t) \rangle will remain pure; its von Neumann entropy will always vanish. Yet entropy grows according to the second law of thermodynamics. Breaking the second law amounts almost to enacting a miracle, according to physicists. Does the clump of atoms deserve consideration for sainthood?

No—although the clump’s state remains pure, a small subsystem’s state does not. A subsystem consists of, for example, a few atoms. They’ll entangle with the other atoms, which serve as an effective environment. The entanglement will mix the few atoms’ state, whose von Neumann entropy will grow.

The ETH predicts this growth. The ETH is an ansatz about H and an operator O—say, an observable of the few-atom subsystem. We can represent O as a matrix relative to the energy eigenbasis. The matrix elements have a certain structure, if O and H satisfy the ETH. Suppose that the operators do and that H lacks degeneracies—that no two energy eigenvalues equal each other. We can prove that O thermalizes: Imagine measuring the expectation value \langle \psi(t) | O | \psi(t) \rangle at each of many instants t. Averaging over instants produces the time-averaged expectation value \overline{ \langle O \rangle_t }.

Another average is the thermal average—the expectation value of O in the appropriate thermal state. If H conserves just itself,2 the appropriate thermal state is the canonical state, \rho_{\rm can} := e^{-\beta H}/ Z. The average energy \langle \psi(0) | H | \psi(0) \rangle defines the inverse temperature \beta, and Z normalizes the state. Hence the thermal average is \langle O \rangle_{\rm th} := {\rm Tr} ( O \rho_{\rm can} ).

The time average approximately equals the thermal average, according to the ETH: \overline{ \langle O \rangle_t }  =  \langle O \rangle_{\rm th} + O \big( N^{-1} \big). The correction is small in the total number N of atoms. Through the lens of O, the atoms thermalize internally. Local observables tend to satisfy the ETH, and we can easily observe only local observables. We therefore usually observe thermalization, consistently with the second law of thermodynamics.
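As a rough numerical illustration (my own sketch, not from the post), we can model the chaotic Hamiltonian with a random matrix, whose nondegenerate spectrum and chaotic-looking eigenvectors mimic the ETH setting, and compare the time-averaged expectation value with the canonical average at the matched energy:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 400  # Hilbert-space dimension (a proxy for 2^N)

# A chaotic-Hamiltonian proxy: a GOE-like random matrix, which has a
# nondegenerate spectrum with probability 1.
H = rng.normal(size=(D, D))
H = (H + H.T) / np.sqrt(2 * D)
E, V = np.linalg.eigh(H)

# A Hermitian "observable" proxy, expressed in the energy eigenbasis.
O = rng.normal(size=(D, D))
O = (O + O.T) / 2
O_diag = np.diag(V.T @ O @ V)  # diagonal matrix elements <n|O|n>

# A random pure initial state |psi(0)>, decomposed in the energy eigenbasis.
psi = rng.normal(size=D)
psi /= np.linalg.norm(psi)
c = V.T @ psi

# Time average: for a nondegenerate spectrum, off-diagonal terms dephase,
# leaving the diagonal ensemble sum_n |c_n|^2 <n|O|n>.
time_avg = np.sum(np.abs(c) ** 2 * O_diag)

# Thermal average: choose beta so the canonical mean energy matches
# <psi(0)|H|psi(0)>, via bisection (mean energy decreases with beta).
E_psi = psi @ H @ psi

def mean_energy(beta):
    w = np.exp(-beta * (E - E.min()))
    return np.sum(w * E) / np.sum(w)

lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_energy(mid) > E_psi:
        lo = mid  # canonical energy too high: raise beta
    else:
        hi = mid
beta = (lo + hi) / 2

w = np.exp(-beta * (E - E.min()))
thermal_avg = np.sum((w / w.sum()) * O_diag)

# The two averages agree up to a correction small in D.
print(f"time average    = {time_avg:.3f}")
print(f"thermal average = {thermal_avg:.3f}")
```

As D grows, the discrepancy between the two averages shrinks, the toy-model analogue of the correction being small in the atom number N.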

I agree that Mark Srednicki deserves the title high priest of the ETH. He and Joshua Deutsch dreamed up the ETH independently, in 1994 and 1991, respectively. Since numericists reexamined it in 2008, studies and applications of the ETH have exploded like a desert religion. Yet Mark had never encountered the question I posed about it in 2021. Next month’s blog post will share the good news about that question.

1Nonintegrable.

2Apart from trivial quantities, such as projectors onto eigenspaces of H.

What is the logical gate speed of a photonic quantum computer?

Terry Rudolph, PsiQuantum & Imperial College London

During a recent visit to the wild western town of Pasadena, I got into a shootout at high noon trying to explain the nuances of this question to a colleague. Here is a more thorough (and less risky) attempt to recover!

tl;dr Photonic quantum computers can perform a useful computation orders of magnitude faster than a superconducting qubit machine. Surprisingly, this would still be true even if every physical timescale of the photonic machine were an order of magnitude longer (i.e. slower) than those of the superconducting one. But they won’t be.

SUMMARY

  • There is a misconception that the slow rate of entangled photon production from many current (“postselected”) experiments is somehow relevant to the logical speed of a photonic quantum computer. It isn’t, because those experiments don’t use an optical switch.
  • If we care about how fast we can solve useful problems then photonic quantum computers will eventually win that race. Not only because in principle their components can run faster, but because of fundamental architectural flexibilities which mean they need to do fewer things.
  • Unlike most quantum systems for which relevant physical timescales are determined by “constants of nature” like interaction strengths, the relevant photonic timescales are determined by “classical speeds” (optical switch speeds, electronic signal latencies etc). Surprisingly, even if these were slower – which there is no reason for them to be – the photonic machine can still compute faster.
  • In a simple world the speed of a photonic quantum computer would just be the speed at which it’s possible to make small (fixed sized) entangled states. GHz rates for such are plausible and correspond to the much slower MHz code-cycle rates of a superconducting machine. But we want to leverage two unique photonic features: Availability of long delays (e.g. optical fiber) and ease of nonlocal operations, and as such the overall story is much less simple.
  • If what floats your boat are really slow things, like cold atoms, ions etc., then the hybrid photonic/matter architecture outlined here is the way you can build a quantum computer with a faster logical gate speed than (say) a superconducting qubit machine. You should be all over it.
  • Magnifying the number of logical qubits in a photonic quantum computer by 100 could be done simply by making optical fiber 100 times less lossy. There are reasons to believe that such fiber is possible (though not easy!). This is just one example of the “photonics is different, photonics is different” mantra we should all chant every morning as we stagger out of bed.
  • The flexibility of photonic architectures means there is much more unexplored territory in quantum algorithms, compiling, error correction/fault tolerance, system architectural design and much more. If you’re a student you’d be mad to work on anything else!

Sorry, I realize that’s kind of an in-your-face list, some of which is obviously just my opinion! Let’s see if I can make it yours too 🙂

I am not going to reiterate all the standard stuff about how photonics is great because of how manufacturable it is, its high temperature operation, easy networking modularity blah blah blah. That story has been told many times elsewhere. But there are subtleties to understanding the eventual computational speed of a photonic quantum computer which have not been explained carefully before. This post is going to slowly lead you through them.

I will only be talking about useful, large-scale quantum computing – by which I mean machines capable of, at a minimum, implementing billions of logical quantum gates on hundreds of logical qubits.

PHYSICAL TIMESCALES

In a quantum computer built from matter – say superconducting qubits, ions, cold atoms, nuclear/electronic spins and so on, there is always at least one natural and inescapable timescale to point to. This typically manifests as some discrete energy levels in the system, the levels that make the two states of the qubit. Related timescales are determined by the interaction strengths of a qubit with its neighbors, or with external fields used to control it. One of the most important timescales is that of measurement – how fast can we determine the state of the qubit? This generally means interacting with the qubit via a sequence of electromagnetic fields and electronic amplification methods to turn quantum information classical.  Of course, measurements in quantum theory are a pernicious philosophical pit – some people claim they are instantaneous, others that they don’t even happen! Whatever. What we care about is: How long does it take for a readout signal to get to a computer that records the measurement outcome as classical bits, processes them, and potentially changes some future action (control field) interacting with the computer?

For building a quantum computer from optical frequency photons there are no energy levels to point to. The fundamental qubit states correspond to a single photon being either “here” or “there”, but we cannot trap and hold them at fixed locations, so unlike, say, trapped atoms these aren’t discrete energy eigenstates. The frequency of the photons does, in principle, set some kind of timescale (by energy-time uncertainty), but it is far too small to be constraining. The most basic relevant timescales are set by how fast we can produce, control (switch) or detect the photons. While these depend on the bandwidth of the photons used – itself a very flexible design choice – typical components operate in GHz regimes. Another relevant timescale is that we can store photons in a standard optical fiber for tens of microseconds before its probability of getting lost exceeds (say) 10%.

There is a long chain of things that need to be strung together to get from component-level physical timescales to the computational speed of a quantum computer built from them. The first step on the journey is to delve a little more into the world of fault tolerance.

TIMESCALES RELEVANT FOR FAULT TOLERANCE

The timescales of measurement are important because they determine the rate at which entropy can be removed from the system. All practical schemes for fault tolerance rely on performing repeated measurements during the computation to combat noise and imperfection. (Here I will only discuss surface-code fault tolerance, much of what I say though remains true more generally.) In fact, although at a high level one might think a quantum computer is doing some nice unitary logic gates, microscopically the machine is overwhelmingly just a device for performing repeated measurements on small subsets of qubits.

In matter-based quantum computers the overall story is relatively simple. There is a parameter d, the “code distance”, dependent primarily on the quality of your hardware, which is somewhere in the range of 20-40. It takes d^2 qubits to make up a logical qubit, so let’s say 1000 of them per logical qubit. (We need to make use of an equivalent number of ancillary qubits as well.) Very roughly speaking, we repeat twice the following: each physical qubit gets involved in a small number (say 4-8) of two-qubit gates with neighboring qubits, and then some subset of qubits undergo a single-qubit measurement. Most of these gates can happen simultaneously, so (again, roughly!) the time for this whole process is the time for a handful of two-qubit gates plus a measurement. It is known as a code cycle, and the time it takes we denote T_{cc}. For example, in superconducting qubits this timescale is expected to be about 1 microsecond; for ion-trap qubits, about 1 millisecond. Although variations exist, let’s stick to considering a basic architecture which requires repeating this whole process on the order of d times in order to complete one logical operation (i.e., a logical gate). So the time for a logical gate would be d\times T_{cc}; this sets the effective logical gate speed.
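As a sanity check, the arithmetic of that paragraph fits in a few lines. The specific d = 30 is just an illustrative pick from the quoted 20-40 range:

```python
# Back-of-the-envelope logical gate times for matter-based machines,
# using the numbers quoted in the text (d = 30 is illustrative).
d = 30
physical_per_logical = d**2      # ~1000 data qubits per logical qubit

T_cc_sc = 1e-6                   # superconducting code-cycle time (s)
T_cc_ion = 1e-3                  # ion-trap code-cycle time (s)

# One logical gate takes on the order of d code cycles.
T_logical_sc = d * T_cc_sc       # tens of microseconds
T_logical_ion = d * T_cc_ion     # tens of milliseconds
print(T_logical_sc, T_logical_ion)
```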

If you zoom out, each code cycle for a single logical qubit is therefore built up in a modular fashion from d^2 copies of the same simple quantum process – a process that involves a handful of physical qubits and gates over a handful of time steps, and which outputs a classical bit of information – a measurement outcome. I have ignored the issue of what happens to those measurement outcomes. Some of them will be sent to a classical computer and processed (decoded) then fed back to control systems and so on. That sets another relevant timescale (the reaction time) which can be of concern in some approaches, but early generations of photonic machines – for reasons outlined later – will use long delay lines, and it is not going to be constraining.

In a photonic quantum computer we also build up a single logical qubit code cycle from d^2 copies of some quantum stuff. In this case it is from d^2 copies of an entangled state of photons that we call a resource state. The number of entangled photons comprising one resource state depends a lot on how nice and clean they are; let’s fix it and say we need a 20-photon entangled state. (The noisier the method for preparing resource states, the larger they will need to be.)  No sequence of gates is performed on these photons. Rather, photons from adjacent resource states get interfered at a beamsplitter and immediately detected – a process we call fusion. You can see a toy version in this animation:

Highly schematic depiction of photonic fusion based quantum computing. An array of 25 resource state generators each repeatedly create resource states of 6 entangled photons, depicted as a hexagonal ring. Some of the photons in each ring are immediately fused (the yellow flashes) with photons from adjacent resource states, the fusion measurement outputs classical bits of information. One photon from each ring gets delayed for one clock cycle and fused with a photon from the next clock cycle.

Measurements destroy photons, so to ensure continuity from one time step to the next some photons in a resource state get delayed by one time step to fuse with a photon from the subsequent resource state – you can see the delayed photons depicted as lit up single blobs if you look carefully in the animation.

The upshot is that the zoomed out view of the photonic quantum computer is very similar to that of the matter-based one, we have just replaced the handful of physical qubits/gates of the latter with a 20-photon entangled state. (And in case it wasn’t obvious – building a bigger computer to do a larger computation means generating more of the resource states, it doesn’t mean using larger and larger resource states.)

If that was the end of the story it would be easy to compare the logical gate speeds for matter-based and photonic approaches. We would only need to answer the question “how fast can you spit out and measure resource states?”. Whatever the time for resource state generation, T_{RSG}, the time for a logical gate would be d\times T_{RSG} and the photonic equivalent of T_{cc} would simply be T_{RSG}. (Measurements on photons are fast and so the fusion time becomes effectively negligible compared to T_{RSG}.) An easy argument could then be made that resource state generation at GHz rates is possible, therefore photonic machines are going to be orders of magnitude faster, and this article would be done! And while I personally do think it’s obvious that one day this is where the story will end, in the present day and age….

… there are two distinct ways in which this picture is far too simple.

FUNKY FEATURES OF PHOTONICS, PART I

 The first over-simplification is based on facing up to the fact that building the hardware to generate a photonic resource state is difficult and expensive. We cannot afford to construct one resource state generator per resource state required at each time step. However, in photonics we are very fortunate that it is possible to store/delay photons in long lengths of optical fiber with very low error rates. This lets us use many resource states all produced by a single resource state generator in such a way that they can all be involved in the same code-cycle. So, for example, all d^2 resource states required for a single code cycle may come from a single resource state generator:

Here the 25 resource state generators of the previous figure are replaced by a single generator that “plays fusion games with itself” by sending some of its output photons into either a delay of length 5 or one of length 25 times the basic clock cycle. We achieve a massive amplification of photonic entanglement simply by increasing the length of optical fiber used. By mildly increasing the complexity of the switching network a photon goes through when it exits the delay, we can also utilize small amounts of (logarithmic) nonlocal connectivity in the network of fusions performed (not depicted), which is critical to doing active volume compiling (discussed later).  

You can see an animation of how this works in the figure – a single resource state generator spits out resource states (depicted again as a 6-qubit hexagonal ring), and you can see a kind of spacetime 3d-printing of entanglement being performed. We call this game interleaving. In the toy example of the figure we see some of the qubits get measured (fused) immediately, some go into a delay of length 5\times T_{RSG} and some go into a delay of length 25\times T_{RSG}.  

So now we have brought another timescale into the photonics picture, the length of time T_{DELAY} that some photons spend in the longest interleaving delay line. We would like to make this as long as possible, but the maximum time is limited by the loss in the delay (typically optical fiber) and the maximum loss our error correcting code can tolerate. A number to have in mind for this (in early machines) is a handful of microseconds – corresponding to a few Km of fiber.
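To see where the “handful of microseconds / few km” numbers come from, here is a quick estimate. The attenuation figure (~0.2 dB/km for standard telecom fiber at 1550 nm) and the group index (~1.47) are standard values I am assuming here, not numbers from the text:

```python
import math

# How long can a photon sit in fiber before the loss budget is blown?
# Assumed (standard, but not from the text): telecom fiber attenuation
# ~0.2 dB/km at 1550 nm, group index ~1.47.
atten_db_per_km = 0.2
group_index = 1.47
c_vacuum_km_per_s = 299_792.458

loss_budget = 0.10                               # tolerate 10% photon loss
budget_db = -10 * math.log10(1 - loss_budget)    # ~0.46 dB of allowed loss
max_length_km = budget_db / atten_db_per_km      # ~2.3 km of fiber
T_delay = max_length_km * group_index / c_vacuum_km_per_s
print(max_length_km, T_delay)                    # a few km, ~10 microseconds
```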

The upshot is that ultimately the temporal quantity that matters most to us in photonic quantum computing is:

What is the total number of resource states produced per second?

It’s important to appreciate we care only about the total rate of resource state production across the whole machine – so, if we take the total number of resource state generators we have built, and divide by T_{RSG}, we get this total rate of resource state generation that we denote \Gamma_{RSG}.  Note that this rate is distinct from any physical clock rate, as, e.g., 100 resource state generators running at 100MHz, or 10 resource state generators running at 1GHz, or 1 resource state generator running at 10GHz all yield the same total rate of resource state production \Gamma_{RSG}=10\mathrm{GHz.}

The second most important temporal quantity is T_{DELAY}, the time of the longest low-loss delay we can use.

We then have that the total number of logical qubits in the machine is:

N_{LOGICAL}=\frac{T_{DELAY}\times\Gamma_{RSG}}{d^2}

You can see this is proportional to T_{DELAY}\times\Gamma_{RSG} which is effectively the total number of resource states “alive” in the machine at any given instant of time, including all the ones stacked up in long delay lines. This is how we leverage optical fiber delays for a massive amplification of the entanglement our hardware has available to compute with.

The time it takes to perform a logical gate is determined both by \Gamma_{RSG} and by the total number of resource states that we need to consume for every logical qubit to undergo a gate. Even logical qubits that appear to not be part of a gate in that time step do, in fact, undergo a gate – the identity gate – because they need to be kept error free while they “idle”.  As such the total number of resource states consumed in a logical time step is just d^3\times N_{LOGICAL} and the logical gate time of the machine is

T_{LOGICAL}=\frac{d^3\times N_{LOGICAL}}{\Gamma_{RSG}} =d\times T_{DELAY}.

Because T_{DELAY} is expected to be about the same as T_{cc} for superconducting qubits (microseconds), the logical gate speeds are comparable.
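The bookkeeping of this section fits in a few lines; d = 30 and T_{DELAY} = 1 microsecond are my illustrative stand-ins, and the code also checks the point that only the total rate \Gamma_{RSG} matters, not how it is split across generators:

```python
# Photonic bookkeeping from the text: check that
# T_LOGICAL = d^3 * N_LOGICAL / Gamma_RSG reduces to d * T_DELAY.
d = 30
T_delay = 1e-6                   # seconds spent in the longest delay line

# 100 generators at 100 MHz, 10 at 1 GHz, or 1 at 10 GHz: same total rate.
gamma_rsg = 100 * 100e6
assert gamma_rsg == 10 * 1e9 == 1 * 10e9     # all equal 10 GHz

n_logical = T_delay * gamma_rsg / d**2       # logical qubits in the machine
t_logical = d**3 * n_logical / gamma_rsg     # time per logical gate
print(n_logical, t_logical, d * T_delay)
```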

At least they are, until…………

FUNKY FEATURES OF PHOTONICS, PART II

But wait! There’s more.

The second way in which unique features of photonics play havoc with the simple comparison to matter-based systems is in the exciting possibility of what we call an active-volume architecture.

A few moments ago I said:

Even logical qubits that seem to not be part of a gate in that time step undergo a gate – the identity gate – because they need to be kept error free while they “idle”.  As such the total number of resource states consumed is just d^3\times N_{LOGICAL}

and that was true. Until recently.

It turns out that there is a way of eliminating the majority of consumption of resources expended on idling qubits! This is done by some clever tricks that make use of the possibility of performing a limited number of non-nearest neighbor fusions between photons. It’s possible because photons are not stuck in one place anyway, and they can be passed around readily without interacting with other photons. (Their quantum crosstalk is exactly zero; they really do seem to despise each other.)

What was previously a large volume of resource states consumed for “thumb-twiddling” can instead all be put to good use doing non-trivial computational gates.  Here is a simple quantum circuit with what we mean by the active volume highlighted:

Now, for any given computation the amount of active volume will depend very much on what you are computing.  There are always many different circuits decomposing a given computation, some will use more active volume than others. This makes it impossible to talk about “what is the logical gate speed” completely independent of considerations about the computation actually being performed.

In this recent paper https://arxiv.org/abs/2306.08585 Daniel Litinski considers breaking elliptic curve cryptosystems on a quantum computer. In particular, he considers what it would take to run the relevant version of Shor’s algorithm on a superconducting qubit architecture with a T_{cc}=1 microsecond code cycle – the answer is roughly that with 10 million physical superconducting qubits it would take about 4 hours (with an equivalent ion trap computer the time balloons to more than 5 months).

He then compares solving the same problem on a machine with an active volume architecture. Here is a subset of his results:

Recall that T_{DELAY} is the photonics parameter which is roughly equivalent to the code cycle time. Thus taking T_{DELAY}=1 microsecond compares to the expected T_{cc} for superconducting qubits. Imagine we can produce resource states at  \Gamma_{RSG}=3.5\mathrm{THz}. This could be 6000 resource state generators each producing resource states at 1/T_{RSG}=580\mathrm{MHz} or 3500 generators producing them at 1GHz for example. Then the same computation would take 58 seconds, instead of four hours, a speedup by a factor of more than 200!
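A quick sketch confirming that the quoted numbers hang together:

```python
# Sanity-checking the numbers quoted from Litinski's comparison.
gamma_rsg = 3.5e12                # target total resource-state rate (Hz)

# 6000 generators at 580 MHz, or 3500 at 1 GHz, both give ~3.5 THz.
assert abs(6000 * 580e6 - gamma_rsg) / gamma_rsg < 0.01
assert 3500 * 1e9 == gamma_rsg

speedup = (4 * 3600) / 58         # 4 hours vs 58 seconds
print(speedup)                    # ~248, i.e. "more than 200"
```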

Now, this whole blog post is basically about addressing confusions out there regarding physical versus computational timescales. So, for the sake of illustration, let me push a purely theoretical envelope: What if we can’t do everything as fast as in the example just stated? What if our rate of total resource state generation was 10 times slower, i.e.  \Gamma_{RSG}=350\mathrm{GHz}? And what if our longest delay is ten times longer, i.e. T_{DELAY}=10 microseconds (so as to be much longer than T_{cc})?  Furthermore, for the sake of illustration, let’s consider a ridiculously slow machine that achieves \Gamma_{RSG}=350 \mathrm{GHz} by building 350 billion resource state generators that can each produce resource states at only 1Hz. Yes, you read that right.

The fastest device in this ridiculous machine would only need to be a (very large!) slow optical switch operating at 100kHz (due to the chosen T_{DELAY}).  And yet this ridiculous machine could still solve the problem that takes a superconducting qubit machine four hours, in less than 10 minutes.

To reiterate:

Despite all the “physical stuff going on” in this (hypothetical, active-volume) photonic machine running much slower than all the “physical stuff going on” in the (hypothetical, non-active-volume) superconducting qubit machine, we see the photonic machine can still do the desired computation 25 times faster!
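For the skeptical, the ridiculous machine’s arithmetic in a few lines (I take the photonic runtime as 10 minutes, the upper end of the quoted “less than 10 minutes”):

```python
# The "ridiculous machine": 350 billion generators, each emitting one
# resource state per second, still gives Gamma_RSG = 350 GHz in total.
gamma_rsg = 350e9 * 1.0          # 350 GHz total resource-state rate
T_delay = 10e-6                  # 10 microsecond delay lines
switch_rate = 1 / T_delay        # fastest component: a ~100 kHz switch

sc_time = 4 * 3600               # superconducting machine: 4 hours (s)
photonic_time = 10 * 60          # photonic machine: 10 minutes (s)
print(switch_rate, sc_time / photonic_time)
```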

Hopefully the fundamental murkiness of the titular question “what is the logical gate speed of a photonic quantum computer” is now clear! Put simply: Even if it did “fundamentally run slower” (it won’t), it would still be faster. Because it has less stuff to do.

It’s worth noting that the 25x increase in speed is clearly not based on physical timescales, but rather on the efficient parallelization achieved through long-range connections in the photonic active-volume device. If we were to scale up the hypothetical 10-million-superconducting-qubit device by a factor of 25, it could potentially also complete computations 25 times faster. However, this would require a staggering 250 million physical qubits or more.

Ultimately, the absolute speed limit of quantum computations is set by the reaction time, which refers to the time it takes to perform a layer of single-qubit measurements and some classical processing. Early-generation machines will not be limited by this reaction time, although eventually it will dictate the maximum speed of a quantum computation. But even in this distant-future scenario, the photonic approach remains advantageous. As classical computation and communication speed up beyond the microsecond range, slower physical measurements of matter-based qubits will hinder the reaction time, while fast single-photon detectors won’t face the same bottleneck.

In the standard photonic architecture we saw that T_{LOGICAL} would scale proportionally with  T_{DELAY} – that is, adding long delays would slow the logical gate speed (while giving us more logical qubits). But remarkably the active-volume architecture allows us to exploit the extra logical qubits without incurring a big negative tradeoff. I still find this unintuitive and miraculous, it just seems to so massively violate Conservation of Trouble.

With all this in mind it is also worth noting as an aside that optical fibers made from (expensive!) exotic glasses or with funky core structures are theoretically calculated to be possible with up to 100 times less loss than conventional fiber – therefore allowing for an equivalent scaling of T_{DELAY}. How many approaches to quantum computing can claim that perhaps one day, by simply swapping out some strands of glass, they could instantaneously multiply the number of logical qubits in the machine from (say) 100 to 10000? Even a (more realistic) factor of 10 would be incredible.

Obviously for pedagogical reasons the above discussion is based around the simplest approaches to logic in both standard and active-volume architectures, but more detailed analysis shows that conclusions regarding total computational time speedup persist even after known optimizations for both approaches.

Now the reason I called the example above a “ridiculous machine” is that even I am not cruel enough to ask our engineers to assemble 350 billion resource state generators. Fewer resource state generators running faster is desirable from the perspective of both sweat and dollars.

We have arrived then at a simple conclusion: what we really need to know is “how fast and at what scale can we generate resource states, with as large a machine as we can afford to build”.

HOW FAST COULD/SHOULD WE AIM TO DO RESOURCE STATE GENERATION?

In the world of classical photonics – such as that used for telecoms, LIDAR and so on – very high speeds are often thrown around: pulsed lasers and optical switches readily run at 100’s of GHz for example. On the quantum side, if we produce single photons via a probabilistic parametric process then similarly high repetition rates have been achieved. (This is because in such a process there are no timescale constraints set by atomic energy levels etc.) Off-the-shelf single photon avalanche photodiode detectors can count photons at multiple GHz.

Seems like we should be aiming to generate resource states at 10’s of GHz right?

Well, yes, one day – one of the main reasons I believe the long-term future of quantum computing is ultimately photonic is because of the obvious attainability of such timescales. [Two others: it’s the only sensible route to a large-scale room temperature machine; eventually there is only so much you can fit in a single cryostat, so ultimately any approach will converge to being a network of photonically linked machines].

In the real world of quantum engineering there are a couple of reasons to slow things down: (i) it relaxes hardware tolerances, since it makes it easier to get things like path lengths aligned, synchronization working, and electronics operating in easy regimes; (ii) in a similar way to how we use interleaving during a computation to drastically reduce the number of resource state generators we need to build, we can also use (shorter than T_{DELAY} length) delays to reduce the amount of hardware required to assemble the resource states in the first place; and (iii) we want to use multiplexing.

Multiplexing is often misunderstood. The way we produce the requisite photonic entanglement is probabilistic. Producing the whole 20-photon resource state in a single step, while possible, would have very low probability. The way to obviate this is to cascade a couple of higher probability, intermediate, steps – selecting out successes (more on this in the appendix). While it has been known since the seminal work of Knill, Laflamme and Milburn two decades ago that this is a sensible thing to do, the obstacle has always been the need for a high performance (fast, low loss) optical switch. Multiplexing introduces a new physical “timescale of convenience” – basically dictated by latencies of electronic processing and signal transmission.

The brief summary therefore is: Yeah, everything internal to making resource states can be done at GHz rates, but multiple design flexibilities mean the rate of resource state generation is itself a parameter that should be tuned/optimized in the context of the whole machine. It is not constrained by fundamental quantum things like interaction energies; rather, it is constrained by the speeds of a bunch of purely classical stuff.

I do not want to leave the impression that generation of entangled photons can only be done via the multistage probabilistic method just outlined. Using quantum dots, for example, people can already demonstrate generation of small photonic entangled states at GHz rates (see e.g. https://www.nature.com/articles/s41566-022-01152-2). Eventually, direct generation of photonic entanglement from matter-based systems will be how photonic quantum computers are built, and I should emphasize that it’s perfectly possible to use small resource states (say, 4 entangled photons) instead of the 20 proposed above, as long as they are extremely clean and pure.  In fact, as the discussion above has hopefully made clear: for quantum computing approaches based on fundamentally slow things like atoms and ions, transduction of matter-based entanglement into photonic entanglement allows – by simply scaling to more systems – evasion of the extremely slow logical gate speeds they will face if they do not do so.

Right now, however, approaches based on converting the entanglement of matter qubits into photonic entanglement are not nearly clean enough, nor manufacturable at large enough scales, to be compatible with utility-scale quantum computing. And our present method of state generation by multiplexing has the added benefit of decorrelating many error mechanisms that might otherwise be correlated if many photons originate from the same device.

So where does all this leave us?

I want to build a useful machine. Let’s back-of-the-envelope what that means photonically. Consider we target a machine comprising (say) at least 100 logical qubits capable of billions of logical gates. (From thinking about active volume architectures I learn that what I really want is to produce as many “logical blocks” as possible, which can then be divvied up into computational/memory/processing units in funky ways, so here I’m really just spitballing an estimate to give you an idea).

Staring at  

N_{LOGICAL}=\frac{T_{DELAY}\times\Gamma_{RSG}}{d^2}

and presuming d^2\approx1000 and T_{DELAY} is going to be about 10 microseconds, we need to be producing resource states at a total rate of at least \Gamma_{RSG}=10\mathrm{GHz}.  As I hope is clear by now, as a pure theoretician, I don’t give a damn if that means 10000 resource state generators running at 1MHz, 100 resource state generators running at 100MHz, or 10 resource state generators running at 1GHz. However, the fact this flexibility exists is very useful to my engineering colleagues – who, of course, aim to build the smallest and fastest possible machine they can, thereby shortening the time until we let them head off for a nice long vacation sipping mezcal margaritas on a warm tropical beach.
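That estimate, and the interchangeability of generator count and generator rate, in code form (same numbers as above):

```python
# Back-of-the-envelope machine sizing from the text: 100 logical qubits
# from d^2 ~ 1000, T_DELAY ~ 10 microseconds, Gamma_RSG = 10 GHz total.
d_squared = 1000
T_delay = 10e-6
gamma_rsg = 10e9

n_logical = T_delay * gamma_rsg / d_squared
print(n_logical)                 # ~100 logical qubits

# The total rate doesn't care how it's split across generators:
assert 10_000 * 1e6 == 100 * 100e6 == 10 * 1e9 == gamma_rsg
```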

None of these numbers should seem fundamentally indigestible, though I do not want to understate the challenge: all never-before-done large-scale engineering is extremely hard.

But regardless of the regime we operate in, logical gate speeds are not going to be the issue upon which photonics will be found wanting.

REAL-WORLD QUANTUM COMPUTING DESIGN

Now, I know this blog is read by lots of quantum physics students. If you want to impact the world, working in quantum computing really is a great way to do it. The foundation of everything around you in the modern world was laid in the ’40s and ’50s, when early mathematicians, computer scientists, physicists and engineers figured out how we can compute classically. Today you have a unique opportunity to be part of laying the foundation of humanity’s quantum computing future. Of course, I want the best of you to work on a photonic approach specifically (I’m also very happy to suggest places for the worst of you to go work). Please appreciate, therefore, that these final few paragraphs are my very biased – though fortunately totally correct – personal perspective!

The broad features of the photonic machine described above – a network of stuff to make resource states, stuff to fuse them, and some interleaving modules – have been fixed for several years now (see the references).

Once we go down even just one level of detail, a myriad of very-much-not-independent questions arise: What is the best resource state? What series of procedures is optimal for creating that state? What is the best underlying topological code to target? What fusion network can build that code? What other things (like active volume) can exploit the ability for photons to be easily nonlocally connected? What types of encoding of quantum information into photonic states is best? What interferometers generate the most robust small entangled states? What procedures for systematically growing resource states from smaller entangled states are most robust or use the least amount of hardware? How can we best use measurements and classical feedforward/control to mitigate error accumulation?

Those sorts of questions cannot be meaningfully addressed without going down to another level of detail, one in which we do considerable modelling of the imperfect devices from which everything will be built – modelling that starts by detailed parameterization of about 40 component specifications (ranging over things like roughness of silicon photonic waveguide walls, stability of integrated voltage drivers, precision of optical fiber cutting robots… the list goes on and on). We then model errors of subsystems built from those components, verify against data, and proceed.

The upshot is none of these questions have unique answers! There just isn’t “one obviously best code” etc. In fact the answers can change significantly with even small variations in performance of the hardware. This opens a very rich design space, where we can establish tradeoffs and choose solutions that optimize a wide variety of practical hardware metrics.

In photonics there is also considerably more flexibility and opportunity than with most approaches on the “quantum side” of things. That is, the quantum aspects of the sources, the quantum states we use for encoding even single qubits, the quantum states we should target for the most robust entanglement, the topological quantum logical states we target and so on, are all “on the table” so to speak.

Exploring the parameter space of possible machines to assemble, while staying fully connected to component level hardware performance, involves both having a very detailed simulation stack, and having smart people to help find new and better schemes to test in the simulations. It seems to me there are far more interesting avenues for impactful research than more established approaches can claim. Right now, on this planet, there are only around 30 people engaged seriously in that enterprise. It’s fun. Perhaps you should join in?

REFERENCES

A surface code quantum computer in silicon https://www.science.org/doi/10.1126/sciadv.1500707. Figure 4 is a clear depiction of the circuits for performing a code cycle appropriate to a generic 2d matter-based architecture.

Fusion-based quantum computation https://arxiv.org/abs/2101.09310

Interleaving: Modular architectures for fault-tolerant photonic quantum computing https://arxiv.org/abs/2103.08612

Active volume: An architecture for efficient fault-tolerant quantum computers with limited non-local connections https://arxiv.org/abs/2211.15465

How to compute a 256-bit elliptic curve private key with only 50 million Toffoli gates https://arxiv.org/abs/2306.08585

Conservation of Trouble https://arxiv.org/abs/quant-ph/9902010

APPENDIX – A COMMON MISCONCEPTION

Here is a common misconception: Current methods of producing ~20-photon entangled states succeed only a few times per second, so generating resource states for fusion-based quantum computing is many orders of magnitude away from where it needs to be.

This misconception arises from considering experiments which produce photonic entangled states via single-shot spontaneous processes and extrapolating them incorrectly as having relevance to how resource states for photonic quantum computing are assembled.

Such single-shot experiments are hit by a “double whammy”. The first whammy is that the experiments produce some very large and messy state that only has a tiny amplitude in the component of the desired entangled state. Thus, on each shot, even in ideal circumstances, the probability of getting the desired state is very, very small. Because billions of attempts can be made each second (as mentioned, running these devices at GHz speeds is easy) it does occasionally occur. But only a small number of times per second.

The second whammy is that if you are trying to produce a 20-photon state, but each photon gets lost with probability 20%, then the probability of your detecting all the photons – even if you live in a branch of the multiverse where they have been produced – is reduced by a factor of 0.8^20 ≈ 0.01. Loss cuts the rate of production by roughly a hundredfold.

Now, photonic fusion-based quantum computing could not be based on this type of entangled-photon generation anyway, because the production of the resource states needs to be heralded, while these experiments only postselect onto the very tiny part of the total wavefunction with the desired entanglement. But let us put that aside, because the two whammies could, in principle, be showstoppers for the production of heralded resource states, and it is useful to understand why they are not.

Imagine you can toss coins, and you need to generate 20 coins showing heads. If you repeatedly toss all 20 coins simultaneously until they all come up heads, you’d typically have to do so millions of times before you succeed. That’s even more true if each coin also has a 20% chance of rolling off the table (akin to photon loss). But if you can toss 20 coins, set aside (switch out!) the ones that came up heads, and re-toss the others, then after only a small number of rounds you will have 20 coins all showing heads. This large gap is fundamentally why the first whammy is not relevant: To generate a large photonic entangled state, we begin by probabilistically attempting to generate a bunch of small ones. We then select out the successes (multiplexing) and combine them to (again, probabilistically) generate a slightly larger entangled state. We repeat a few steps of this. This possibility has been appreciated for more than twenty years but hasn’t been exploited at scale, because nobody has had a good enough optical switch until now.
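The gap between the two strategies is easy to make concrete with a toy calculation (a sketch with the coin numbers above, not a model of any real hardware):

```python
import random

N_PHOTONS = 20        # stand-in: 20 coins for 20 photons
P_OK = 0.5 * 0.8      # heads AND the coin stays on the table (20% "loss")

# Strategy 1: demand that all 20 coins succeed in one simultaneous toss.
# The expected number of attempts is 1 / P_OK**20 -- around 10^8.
expected_single_shot = 1 / P_OK**N_PHOTONS

# Strategy 2 (multiplexing): keep the successes, re-toss only the failures.
def rounds_until_done(rng, n=N_PHOTONS, p=P_OK):
    rounds, remaining = 0, n
    while remaining:
        rounds += 1
        # each remaining coin fails with probability 1 - p
        remaining = sum(1 for _ in range(remaining) if rng.random() > p)
    return rounds

rng = random.Random(0)
avg_rounds = sum(rounds_until_done(rng) for _ in range(1000)) / 1000
print(f"single-shot: ~{expected_single_shot:.0f} attempts expected")
print(f"multiplexed: ~{avg_rounds:.1f} rounds on average")
```

The multiplexed strategy finishes in a handful of rounds, versus tens of millions of attempts for the single-shot strategy – a difference of about seven orders of magnitude for these made-up numbers.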

The second whammy is taken care of by the fact that, for fault-tolerant photonic fusion-based quantum computing, there is never any need to make the resource state such that all photons are guaranteed to be present! The per-photon loss rate can be high (in principle, tens of percent) – in fact, the larger the resource state being built, the higher the rate is allowed to be.

The upshot is that comparing this method of entangled-photon generation with the methods actually employed is somewhat like a creationist’s claiming that monkeys cannot have evolved from bacteria because all the suitable mutations are so unlikely to have happened simultaneously!

Acknowledgements

I’m very grateful to Mercedes Gimeno-Segovia, Daniel Litinski, Naomi Nickerson, Mike Nielsen and Pete Shadbolt for help and feedback.

Let the great world spin

I first heard the song “Fireflies,” by Owl City, shortly after my junior year of college. During the refrain, singer Adam Young almost whispers, “I’d like to make myself believe / that planet Earth turns slowly.” Goosebumps prickled along my neck. Yes, I thought, I’ve studied Foucault’s pendulum.

Léon Foucault practiced physics in France during the mid-1800s. During one of his best-known experiments, he hung a pendulum from high up in a building. Imagine drawing a wide circle on the floor, around the pendulum’s bob.1

Pendulum bob and encompassing circle, as viewed from above.

Imagine pulling the bob out to a point above the circle, then releasing the pendulum. The bob will swing back and forth, tracing out a straight line across the circle.

You might expect the bob to keep swinging back and forth along that line, and to do nothing more, forever (or until the pendulum has spent all its energy on pushing air molecules out of its way). After all, the only forces acting on the bob seem to be gravity and the tension in the pendulum’s wire. But the line rotates; its two tips trace out the circle.

How long the tips take to trace the circle depends on your latitude. At the North and South Poles, the tips take one day; closer to the equator, they take longer (at the equator, the line doesn’t rotate at all).
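The latitude dependence follows a standard formula not spelled out in the text: the swing plane’s rotation period equals one sidereal day divided by the sine of the latitude. A few lines make the numbers concrete:

```python
import math

SIDEREAL_DAY_HOURS = 23.934  # one full rotation of the Earth

def foucault_period_hours(latitude_deg):
    # Standard result: period = (sidereal day) / sin(latitude).
    # The period diverges at the equator, where the plane doesn't rotate.
    return SIDEREAL_DAY_HOURS / math.sin(math.radians(latitude_deg))

print(round(foucault_period_hours(90.0), 1))    # at the poles: ~23.9 hours
print(round(foucault_period_hours(48.85), 1))   # Paris's latitude: ~31.8 hours
```

So the pendulum Foucault hung in Paris takes roughly a day and a third to trace its circle.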

Why does the line rotate? Because the pendulum dangles from a building on the Earth’s surface. As the Earth rotates, so does the building, which pushes the pendulum. You’ve experienced such a pushing if you’ve ridden in a car. Suppose that the car is zipping along at a constant speed, in an unchanging direction, on a smooth road. With your eyes closed, you won’t feel like you’re moving. The only forces you can sense are gravity and the car seat’s preventing you from sinking into the ground (analogous to the wire tension that prevents the pendulum’s bob from crashing into the floor). If the car rounds a bend, you feel pushed sideways in your seat. This push is called a centrifugal force. The pendulum feels a centrifugal force because rotating with the Earth, like rounding a bend in a car, is a form of acceleration. The pendulum also feels another force—a Coriolis force—because it’s not merely sitting, but moving on the rotating Earth.
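How gentle are these forces? A back-of-the-envelope estimate (my own numbers; the bob speed is a made-up ballpark figure) shows why nobody noticed them before Foucault:

```python
import math

OMEGA = 7.292e-5    # Earth's angular speed, radians per second
R_EARTH = 6.371e6   # Earth's radius, meters
G = 9.81            # gravitational acceleration, m/s^2

# Centrifugal acceleration at the equator: Omega^2 * R
a_centrifugal = OMEGA**2 * R_EARTH

# Coriolis acceleration on a bob swinging at ~1 m/s: 2 * Omega * v
bob_speed = 1.0     # m/s -- an assumed, ballpark swing speed
a_coriolis = 2 * OMEGA * bob_speed

print(f"centrifugal: {a_centrifugal:.3f} m/s^2, about {a_centrifugal/G:.2%} of g")
print(f"Coriolis:    {a_coriolis:.1e} m/s^2")
```

The Coriolis push per swing is some ten-thousand times weaker than gravity; only because it nudges the bob in a consistent direction, swing after swing for hours, does the line visibly rotate.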

We can predict the rotation of Foucault’s pendulum by assuming that the Earth rotates, then calculating the centrifugal and Coriolis forces induced, and then calculating how those forces will influence the pendulum’s motion. No experiment had evidenced the Earth’s rotation so directly before the pendulum debuted in 1851. You can imagine the stir that the pendulum created when Foucault demonstrated it at the Observatoire de Paris and at the Panthéon monument. Copycat pendulums popped up across the world. One ended up next to my college’s physics building, as shown in this video. I reveled in understanding that pendulum’s motion, junior year.

My professor alluded to a grander Foucault pendulum in Paris. It hangs in what sounded like a temple to the Enlightenment—beautiful in form, steeped in history, and rich in scientific significance. I’m a romantic about the Enlightenment; I adore the idea of creating the first large-scale organizational system for knowledge. So I hungered to make a pilgrimage to Paris.

I made the pilgrimage this spring. I was attending a quantum-chaos workshop at the Institut Pascal, an interdisciplinary institute in a suburb of Paris. One quiet Saturday morning, I rode a train into the city center. The city houses a former priory—a gorgeous, 11th-century, white-stone affair of the sort for which I envy European cities. For over 200 years, the former priory has housed the Musée des Arts et Métiers, a museum of industry and technology. In the priory’s chapel hangs Foucault’s pendulum.2

A pendulum of Foucault’s own—the one he exhibited at the Panthéon—used to hang in the chapel. That pendulum broke in 2010; but still, the pendulum swinging today is all but a holy relic of scientific history. Foucault’s pendulum! Demonstrating that the Earth rotates! And in a jewel of a setting—flooded with light from stained-glass windows and surrounded by Gothic arches below a painted ceiling. I flitted around the little chapel like a pollen-happy bee for maybe 15 minutes, watching the pendulum swing, looking at other artifacts of Foucault’s, wending my way around the carved columns.

Almost alone. A handful of visitors trickled in and out. The quiet contrasted with my visit, the previous weekend, to the Louvre. There, I’d witnessed a Disney World–esque line of tourists waiting for a glimpse of the Mona Lisa, camera phones held high. Nobody was queueing up in the musée’s chapel. But this was Foucault’s pendulum! Demonstrating that the Earth rotates!

I confess to capitalizing on the lack of visitors to take a photo with Foucault’s pendulum and Foucault’s Pendulum, though.

Shortly before I’d left for Paris, a librarian friend had recommended Umberto Eco’s novel Foucault’s Pendulum. It occupied me during many a train ride to or from the center of Paris.

The rest of the museum could model in an advertisement for steampunk. I found automata, models of the steam engines that triggered the Industrial Revolution, and a phonograph of Thomas Edison’s. The gadgets, many formed from brass and dark wood, contrast with the priory’s light-toned majesty. Yet the priory shares its elegance with the inventions, many of which gleam and curve in decorative flutes. 

The grand finale at the Musée des Arts et Métiers.

I tore myself away from the Musée des Arts et Métiers after several hours. I returned home a week later and heard the song “Fireflies” again not long afterward. The goosebumps returned, stronger than before. Thanks to Foucault, I can make myself believe that planet Earth turns.

With thanks to Kristina Lynch for tolerating my many, many, many questions throughout her classical-mechanics course.

This story’s title refers to a translation of Goethe’s Faust. In the translation, the demon Mephistopheles tells the title character, “You let the great world spin and riot; / we’ll nest contented in our quiet” (to within punctuational and other minor errors, as I no longer have the text with me). A prize-winning 2009 novel is called Let the Great World Spin; I’ve long wondered whether Faust inspired its title.

1Why isn’t the bottom of the pendulum called the alice?

2. After visiting the musée, I learned that my classical-mechanics professor had been referring to the Foucault pendulum that hangs in the Panthéon, rather than to the pendulum in the musée. The musée still contains the pendulum used by Foucault in 1851, whereas the Panthéon has only a copy, so I’m content. Still, I wouldn’t mind making a pilgrimage to the Panthéon. Let me know if more thermodynamic workshops take place in Paris!

Quantum physics proposes a new way to study biology – and the results could revolutionize our understanding of how life works

By guest blogger Clarice D. Aiello, faculty at UCLA

Imagine using your cellphone to control the activity of your own cells to treat injuries and disease. It sounds like something from the imagination of an overly optimistic science fiction writer. But this may one day be a possibility through the emerging field of quantum biology.

Over the past few decades, scientists have made incredible progress in understanding and manipulating biological systems at increasingly small scales, from protein folding to genetic engineering. And yet, the extent to which quantum effects influence living systems remains barely understood.

Quantum effects are phenomena, occurring among atoms and molecules, that can’t be explained by classical physics. It has been known for more than a century that the rules of classical mechanics, like Newton’s laws of motion, break down at atomic scales. Instead, tiny objects behave according to a different set of laws known as quantum mechanics.

For humans, who can only perceive the macroscopic world, or what’s visible to the naked eye, quantum mechanics can seem counterintuitive and somewhat magical. Things you might not expect happen in the quantum world, like electrons “tunneling” through tiny energy barriers and appearing on the other side unscathed, or being in two different places at the same time in a phenomenon called superposition.

I am trained as a quantum engineer. Research in quantum mechanics is usually geared toward technology. However, and somewhat surprisingly, there is increasing evidence that nature – an engineer with billions of years of practice – has learned how to use quantum mechanics to function optimally. If this is indeed true, it means that our understanding of biology is radically incomplete. It also means that we could possibly control physiological processes by using the quantum properties of biological matter.

Quantumness in biology is probably real

Researchers can manipulate quantum phenomena to build better technology. In fact, you already live in a quantum-powered world: from laser pointers to GPS, magnetic resonance imaging and the transistors in your computer – all these technologies rely on quantum effects.

In general, quantum effects only manifest at very small length and mass scales, or when temperatures approach absolute zero. This is because quantum objects like atoms and molecules lose their “quantumness” when they uncontrollably interact with each other and their environment. In other words, a macroscopic collection of quantum objects is better described by the laws of classical mechanics. Everything that starts quantum dies classical. For example, an electron can be manipulated to be in two places at the same time, but it will end up in only one place after a short while – exactly what would be expected classically.
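The claim that “everything that starts quantum dies classical” can be caricatured in a few lines of code (a toy dephasing model of my own devising, not a model of any particular system): random phase kicks from an environment average away a superposition’s coherence.

```python
import cmath
import random

# A qubit in the superposition (|0> + |1>)/sqrt(2) has coherence 0.5
# (the off-diagonal density-matrix element). Each uncontrolled
# interaction with the environment kicks the relative phase a little;
# averaging over many noise histories washes the coherence out.
def coherence_after_kicks(n_kicks, kick_sigma=0.3, n_histories=2000, seed=1):
    rng = random.Random(seed)
    mean_phasor = sum(
        cmath.exp(1j * sum(rng.gauss(0, kick_sigma) for _ in range(n_kicks)))
        for _ in range(n_histories)
    ) / n_histories
    return 0.5 * abs(mean_phasor)

print(round(coherence_after_kicks(0), 3))    # no noise: fully quantum
print(round(coherence_after_kicks(200), 3))  # heavy noise: effectively classical
```

With no kicks the coherence stays at 0.5; after a couple hundred kicks it is essentially zero, and the qubit behaves like a classical coin.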

In a complicated, noisy biological system, it is thus expected that most quantum effects will rapidly disappear, washed out in what the physicist Erwin Schrödinger called the “warm, wet environment of the cell.” To most physicists, the fact that the living world operates at elevated temperatures and in complex environments implies that biology can be adequately and fully described by classical physics: no funky barrier crossing, no being in multiple locations simultaneously.

Chemists, however, have for a long time begged to differ. Research on basic chemical reactions at room temperature unambiguously shows that processes occurring within biomolecules like proteins and genetic material are the result of quantum effects. Importantly, such nanoscopic, short-lived quantum effects are consistent with driving some macroscopic physiological processes that biologists have measured in living cells and organisms. Research suggests that quantum effects influence biological functions, including regulating enzyme activity, sensing magnetic fields, cell metabolism and electron transport in biomolecules.

How to study quantum biology

The tantalizing possibility that subtle quantum effects can tweak biological processes presents both an exciting frontier and a challenge to scientists. Studying quantum mechanical effects in biology requires tools that can measure the short time scales, small length scales and subtle differences in quantum states that give rise to physiological changes – all integrated within a traditional wet lab environment.

In my work, I build instruments to study and control the quantum properties of small things like electrons. In the same way that electrons have mass and charge, they also have a quantum property called spin. Spin defines how the electrons interact with a magnetic field, in the same way that charge defines how electrons interact with an electric field. The quantum experiments I have been building since graduate school, and now in my own lab, aim to apply tailored magnetic fields to change the spins of particular electrons.

Research has demonstrated that many physiological processes are influenced by weak magnetic fields. These processes include stem cell development and maturation, cell proliferation rates, genetic material repair and countless others. These physiological responses to magnetic fields are consistent with chemical reactions that depend on the spin of particular electrons within molecules. Applying a weak magnetic field to change electron spins can thus effectively control a chemical reaction’s final products, with important physiological consequences.

Currently, a lack of understanding of how such processes work at the nanoscale level prevents researchers from determining exactly what strength and frequency of magnetic fields cause specific chemical reactions in cells. Current cellphone, wearable and miniaturization technologies are already sufficient to produce tailored, weak magnetic fields that change physiology, both for good and for bad. The missing piece of the puzzle is, hence, a “deterministic codebook” of how to map quantum causes to physiological outcomes.

In the future, fine-tuning nature’s quantum properties could enable researchers to develop therapeutic devices that are noninvasive, remotely controlled and accessible with a mobile phone. Electromagnetic treatments could potentially be used to prevent and treat disease, such as brain tumors, as well as in biomanufacturing, such as increasing lab-grown meat production.

A whole new way of doing science

Quantum biology is one of the most interdisciplinary fields to ever emerge. How do you build community and train scientists to work in this area?

Since the pandemic, my lab at the University of California, Los Angeles and the University of Surrey’s Quantum Biology Doctoral Training Centre have organized Big Quantum Biology meetings to provide an informal weekly forum for researchers to meet and share their expertise in fields like mainstream quantum physics, biophysics, medicine, chemistry and biology.

Research with potentially transformative implications for biology, medicine and the physical sciences will require working within an equally transformative model of collaboration. Working in one unified lab would allow scientists from disciplines that take very different approaches to research to conduct experiments that meet the breadth of quantum biology from the quantum to the molecular, the cellular and the organismal.

The existence of quantum biology as a discipline implies that traditional understanding of life processes is incomplete. Further research will lead to new insights into the age-old question of what life is, how it can be controlled and how to learn with nature to build better quantum technologies.

***

This article is republished from The Conversation under a Creative Commons license. Read the original article.

***

Clarice D. Aiello is a quantum engineer interested in how quantum physics informs biology at the nanoscale. She is an expert on nanosensors that harness room-temperature quantum effects in noisy environments. Aiello received a bachelor’s in physics from the Ecole Polytechnique, France; a master’s degree in physics from the University of Cambridge, Trinity College, UK; and a PhD in electrical engineering from the Massachusetts Institute of Technology. She held postdoctoral appointments in bioengineering at Stanford University and in chemistry at the University of California, Berkeley. Two months before the pandemic, she joined the University of California, Los Angeles, where she leads the Quantum Biology Tech (QuBiT) Lab.

***

The author thanks Nicole Yunger Halpern and Spyridon Michalakis for the opportunity to talk about quantum biology to the physics audience of this wonderful blog!