You can win Tic Tac Toe if you know quantum physics.

Note: Oliver Zheng is a senior at University High School in Irvine, CA. He has been working on AI players for quantum versions of Tic Tac Toe under the supervision of Dr. Spiros Michalakis.

Several years ago, while scrolling through YouTube, I came across a video of Paul Rudd playing something called “Quantum Chess.” I had no idea what it was, nor did I know that it would become one of the most gloriously nerdy rabbit holes I would ever fall into (see: 5D Chess with Multiverse Time Travel).

Over time, I tried to teach myself how to play these multi-layered, multi-dimensional games, but progress was slow. However, while taking a break during a piano lesson last year, I mentioned to my teacher my growing interest in unnecessarily stressful versions of chess. She told me that she happened to be friends with Dr. Xie Chen, a professor of theoretical physics at Caltech who was sponsoring a Quantum Gaming project. I immediately jumped at the opportunity to connect with her, and within days was able to have my first online meeting with Dr. Chen. Soon after, I was invited to join the project. Following my introduction to the team, I started reading “Quantum Computation and Quantum Information”, which helped me understand how the theory behind the games worked. When I felt ready, Dr. Chen referred me to Dr. Spiros Michalakis at Caltech, who, funnily enough, was the creator of the quantum chess video.

I would never have imagined being two degrees of separation from Paul Rudd, but nonetheless, I wanted to share some of the work I’ve been doing with Spiros on Quantum TiqTaqToe.

What is Quantum TiqTaqToe?

Evert van Nieuwenburg, the creator of Quantum TiqTaqToe whom I also collaborated with, goes in depth about how the game works here, but I will give a short rundown. The general idea is that there is now a split move, where you can put an ‘X’ in two different squares at once — a Schrödinger’s X, if you will. When the board has no more empty squares, the X randomly ‘collapses’ into one of the two squares with equal probability. The game ends when there are three real X’s or three real O’s in a row, just as in regular tic-tac-toe. Depending on the mode you are playing, you might also be able to entangle your X’s with your opponent’s O’s. You can get a better sense of all this by actually playing the game here.

My goal was to find out who wins when both players play optimally. For instance, in normal tic-tac-toe, it is well-known that the first X should go in the middle of the board, and if player O counters successfully, the game should end in a tie. Is the outcome of Quantum TiqTaqToe, too, predetermined to end in a tie if both players play optimally? And, if not, what is the best first move for player X? I sought to answer these questions through the power of computation.

The First Attempt

In the following section, I refer to a ‘game state’ as any unique arrangement of X’s and O’s on a board. The ‘empty game state’ simply means an empty board. ‘Traversing’ through a certain game state means that, at some point in the game, that game state occurs. So, for example, every game traverses through the empty game state, since every game starts with an empty board.

In order to solve the unsolved, one must first solve the solved. As such, my first attempt was to create an algorithm that would figure out the best move to play in regular tic-tac-toe. This first attempt was rather straightforward, and I will explain it here:

Essentially, I developed a model using what is known as “reinforcement learning” to determine the best next move given a certain game state. Here is how it works: To track which moves are best for player X and player O, respectively, every game state is assigned a value, initially 0. When a game ends, these values are updated to reflect who won. The more games are played, the better these values reflect the sequence of moves that X and O must make to win or tie. To train this model (machine learning parlance for the algorithm that updates the values/parameters mentioned above), I programmed the computer to play randomly chosen moves for X and O until the game ended. If, say, player X won, then the value of every game state traversed was increased by 1 to indicate that X was favored. On the other hand, if player O won, then the value of every game state traversed was decreased by 1 to indicate that O was favored. Here is an example:

X wins!

Let’s say that this is the first iteration that the model is trained on. Then, the next time the model sees this game state,

it will recognize that X has an advantage. In the same vein, the model now also thinks that the empty game state is favorable towards X, since, in the one game that was played, when the empty game state was traversed, X won.
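To make this concrete, here is a minimal Python sketch of the value-update loop described above (a simplified illustration of the approach, not the exact code behind the project):

```python
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if either player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(int)   # value of every game state seen so far, initially 0

def random_playout():
    """Play one game of random moves, then nudge the value of every game
    state traversed: +1 if X won, -1 if O won, 0 for a tie."""
    board = [' '] * 9
    traversed = [tuple(board)]          # every game starts from the empty state
    player = 'X'
    while winner(board) is None and ' ' in board:
        move = random.choice([i for i, c in enumerate(board) if c == ' '])
        board[move] = player
        traversed.append(tuple(board))
        player = 'O' if player == 'X' else 'X'
    reward = {'X': 1, 'O': -1, None: 0}[winner(board)]
    for state in traversed:
        values[state] += reward

for _ in range(100_000):                # number of random training games
    random_playout()

print(values[tuple([' '] * 9)])         # net value of the empty game state
```

After enough playouts, a positive value for a state suggests it favors X under random play, and a negative value suggests it favors O.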

If we run these randomized games enough times (I ran ten million iterations), every move in every game state has most likely been made, which means that the model is able to give a meaningful evaluation for any game state. However, there is one major problem with this approach: the model only indicates who is favored when both players make random moves, not when they make the best moves. To illustrate this, let’s examine the following game state:

(O’s turn)

Here, player O has two options: they can win the game by putting their O on the bottom center square, or lose the game by putting it on the right center square. Any seasoned tic-tac-toe player would make the right move in this scenario, and win the game. However, since the model trains on random moves, it thinks that player O will win half the time and lose half the time. Thus, to the model, this game state is not favorable to either player, when in reality it is absolutely favored towards O. 

During my first meeting with Spiros and Evert, they pointed out this flaw in my model. Evert suggested that I study up on something called a minimax algorithm, which circumvents this flaw, and apply it to tic-tac-toe. This set me on the next step of my journey.

Enter Minimax

The content of this section takes inspiration from this article.

In the minimax algorithm, the two players are known as the ‘maximizer’ and the ‘minimizer’. In the case of tic-tac-toe, X would be the maximizer and O the minimizer. The maximizer’s goal is to maximize their score, while the minimizer’s goal is to minimize their score. In tic-tac-toe, the minimax algorithm is implemented so that a win by X is a score of +1, a win by O is a score of -1, and a tie is simply 0. So X, seeking to maximize their score, would want to win, which makes sense.

Now, if X wanted to maximize their score through some move, they would have to consider the response of O, who would try to minimize the score. But before O makes their move, they would have to consider X’s next move. This creates a sort of back-and-forth, recursive dynamic in the minimax algorithm. In order for either player to make the best move, they would have to go through all possible moves they can make, and all possible moves their opponent can make after that, and so on and so forth. Here is a relatively simple example of the minimax algorithm at work:

Let’s start from the top. X has three possible moves they can make, and evaluates each of them. 

In the leftmost branch, the result is either -1 or 0, but which is the real score? Well, we expect O to make their best move, and since they are trying to minimize the score, we expect them to choose the ‘-1’ case. So we can say that this move results in a score of -1. 

In the middle branch, the result is either 1 or 0, and, following the same reasoning as before, O chooses the move corresponding to the minimal score, resulting in a score of 0.

Finally, the last branch results in X winning, so the score is +1.

Now, X can finally choose their best move, and in the interest of maximizing the score, places their X on the bottom right square. Intuitively, this makes sense because it is the only move that wins the game for X outright.
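For concreteness, here is a compact Python sketch of the minimax recursion for regular tic-tac-toe, using the scoring above (+1 for an X win, -1 for an O win, 0 for a tie). It is an illustrative sketch rather than a full game engine:

```python
def winner(board):
    """Return 'X' or 'O' if either player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, x_to_move):
    """Return (score, best_move): +1 if X can force a win, -1 if O can,
    0 if optimal play by both sides leads to a tie."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                          # full board: tie
    best_score, best_move = None, None
    for m in moves:
        board[m] = 'X' if x_to_move else 'O'
        score, _ = minimax(board, not x_to_move)
        board[m] = ' '                          # undo the trial move
        better = (best_score is None or
                  (x_to_move and score > best_score) or
                  (not x_to_move and score < best_score))
        if better:
            best_score, best_move = score, m
    return best_score, best_move

# Optimal play from the empty board is a tie, so this prints (0, <some move>).
print(minimax([' '] * 9, True))
```

Because minimax assumes both players respond optimally, it avoids the flaw of the random-playout model described earlier.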

Great, but what would a minimax algorithm look like in Quantum TiqTaqToe?

Enter Expecti-Minimax

Expectiminimax contains the same core idea as minimax, but something interesting happens when the game board collapses. The algorithm can’t know for sure what the board will look like after collapse, so all it can do is calculate an expected value of the result (hence the name). Let’s look at an example:

Here, collapse occurs, and one branch (top) results in a tie, while the other (bottom) results in O winning. Since a tie is equal to 0 and an O win is equal to -1, the algorithm treats the score as the expected value (0 + (-1))/2 = -½.

Note: the sum is divided by two because both outcomes have a ½ probability of occurring.
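In code, the only structural change from minimax is a chance node that averages over the collapse outcomes. In the sketch below, the state object and its helpers (is_terminal, triggers_collapse, collapse_outcomes, children, x_to_move) are placeholder names meant to show the shape of the recursion, not the actual TiqTaqToe implementation:

```python
def expectiminimax(state):
    """Expectiminimax recursion: max for X, min for O, and an average
    (expected value) wherever the board collapses."""
    if state.is_terminal():
        return state.score()              # +1 X win, -1 O win, 0 tie
    if state.triggers_collapse():
        # Chance node: each collapse outcome occurs with equal probability,
        # so the score is the average over the outcomes.
        outcomes = state.collapse_outcomes()
        return sum(expectiminimax(s) for s in outcomes) / len(outcomes)
    scores = [expectiminimax(child) for child in state.children()]
    return max(scores) if state.x_to_move() else min(scores)
```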

Solving the Game

Using the expectiminimax algorithm, I effectively ‘solved’ the minimal and moderate versions of Quantum TiqTaqToe. However, even though the algorithm will always show the best move, the outcome from game to game might not be the same due to the inherent element of randomness. The most interesting of all my discoveries was probably the first move that the algorithm suggests for X, which I was able to make sense of both intuitively and logically. I challenge you all to find it! (Hint: it is the same for both the minimal and moderate versions.)

It turns out that when X plays optimally, they will always win the minimal version no matter what O plays. Meanwhile, in the moderate version, X will win most of the time, but not all the time. The probability distribution is as follows:

  (Another challenge: why are the denominators powers of two?)

Having satisfied my curiosity (for now), I’m looking forward to creating a new game of my own: 4 by 4 quantum tic-tac-toe. Currently, I am working on an algorithm that will give the best move, but since a 4×4 board has almost twice as many squares as a 3×3 board, the game tree is vastly larger and the computational runtime of an expectiminimax algorithm would be far too long. As such, I am exploring the use of heuristics, which is sort of what the human mind uses to approach a game like tic-tac-toe. Because of this reliance on heuristics, there is no longer a guarantee that the algorithm will always make the best move, making this new adventure all the more mysterious and captivating.

Crossing the quantum chasm: From NISQ to fault tolerance

On December 6, I gave a keynote address at the Q2B 2023 Conference in Silicon Valley. Here is a transcript of my remarks. The slides I presented are here. A video of my presentation is here.

Toward quantum value

The theme of this year’s Q2B meeting is “The Roadmap to Quantum Value.” I interpret “quantum value” as meaning applications of quantum computing that have practical utility for end-users in business. So I’ll begin by reiterating a point I have made repeatedly in previous appearances at Q2B. As best we currently understand, the path to economic impact is the road through fault-tolerant quantum computing. And that poses daunting challenges for our field and for the quantum industry.

We are in the NISQ era. NISQ (rhymes with “risk”) is an acronym meaning “Noisy Intermediate-Scale Quantum.” Here “intermediate-scale” conveys that current quantum computing platforms with of order 100 qubits are difficult to simulate by brute force using the most powerful currently existing supercomputers. “Noisy” reminds us that today’s quantum processors are not error-corrected, and noise is a serious limitation on their computational power. NISQ technology already has noteworthy scientific value. But as of now there is no proposed application of NISQ computing with commercial value for which quantum advantage has been demonstrated when compared to the best classical hardware running the best algorithms for solving the same problems. Furthermore, currently there are no persuasive theoretical arguments indicating that commercially viable applications will be found that do not use quantum error-correcting codes and fault-tolerant quantum computing.

A useful survey of quantum computing applications, over 300 pages long, recently appeared, providing rough estimates of end-to-end run times for various quantum algorithms. This is hardly the last word on the subject — new applications are continually proposed, and better implementations of existing algorithms continually arise. But it is a valuable snapshot of what we understand today, and it is sobering.

There can be quantum advantage in some applications of quantum computing to optimization, finance, and machine learning. But in this application area, the speedups are typically at best quadratic, meaning the quantum run time scales as the square root of the classical run time. So the advantage kicks in only for very large problem instances and deep circuits, which we won’t be able to execute without error correction.

Larger polynomial advantage and perhaps superpolynomial advantage are possible in applications to chemistry and materials science, but these may require at least hundreds of very well-protected logical qubits, and hundreds of millions of very high-fidelity logical gates, if not more. Quantum fault tolerance will be needed to run these applications, and fault tolerance has a hefty cost in both the number of physical qubits and the number of physical gates required. We should also bear in mind that the speed of logical gates is relevant, since the run time as measured by the wall clock will be an important determinant of the value of quantum algorithms.

Overcoming noise in quantum devices

Already in today’s quantum processors steps are taken to address limitations imposed by the noise — we use error mitigation methods like zero noise extrapolation or probabilistic error cancellation. These methods work effectively at extending the size of the circuits we can execute with useful fidelity. But the asymptotic cost scales exponentially with the size of the circuit, so error mitigation alone may not suffice to reach quantum value. Quantum error correction, on the other hand, scales much more favorably, like a power of a logarithm of the circuit size. But quantum error correction is not practical yet. To make use of it, we’ll need better two-qubit gate fidelities, many more physical qubits, robust systems to control those qubits, as well as the ability to perform fast and reliable mid-circuit measurements and qubit resets; all these are technically demanding goals.

To get a feel for the overhead cost of fault-tolerant quantum computing, consider the surface code — it’s presumed to be the best near-term prospect for achieving quantum error correction, because it has a high accuracy threshold and requires only geometrically local processing in two dimensions. Once the physical two-qubit error rate is below the threshold value of about 1%, the probability of a logical error per error correction cycle declines exponentially as we increase the code distance d:

P_{\rm logical} = (0.1) \, (P_{\rm physical}/P_{\rm threshold})^{(d+1)/2}

where the number of physical qubits in the code block (which encodes a single protected qubit) is the distance squared.

Suppose we wish to execute a circuit with 1000 qubits and 100 million time steps. Then we want the probability of a logical error per cycle to be 10^{-11}. Assuming the physical error rate is 10^{-3}, better than what is currently achieved in multi-qubit devices, from this formula we infer that we need a code distance of 19, and hence 361 physical qubits to encode each logical qubit, and a comparable number of ancilla qubits for syndrome measurement — hence over 700 physical qubits per logical qubit, or a total of nearly a million physical qubits. If the physical error rate improves to 10^{-4} someday, that cost is reduced, but we’ll still need hundreds of thousands of physical qubits if we rely on the surface code to protect this circuit.
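As a quick sanity check on those numbers, here is a small Python sketch that applies the formula above to find the smallest code distance meeting a target logical error rate:

```python
def logical_error_rate(p_physical, d, p_threshold=1e-2):
    """Heuristic surface-code logical error rate per cycle at distance d."""
    return 0.1 * (p_physical / p_threshold) ** ((d + 1) / 2)

def required_distance(p_physical, target, p_threshold=1e-2):
    """Smallest odd code distance whose logical error rate meets the target."""
    d = 3
    while logical_error_rate(p_physical, d, p_threshold) > target:
        d += 2
    return d

d = required_distance(p_physical=1e-3, target=1e-11)
data_qubits = d ** 2                       # physical data qubits per logical qubit
print(d, data_qubits, 2 * data_qubits)     # 19, 361, ~722 including ancillas
```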

Progress toward quantum error correction

The study of error correction is gathering momentum, and I’d like to highlight some recent experimental and theoretical progress. Specifically, I’ll remark on three promising directions, all with the potential to hasten the arrival of the fault-tolerant era: erasure conversion, biased noise, and more efficient quantum codes.

Erasure conversion

Error correction is more effective if we know when and where the errors occurred. To appreciate the idea, consider the case of a classical repetition code that protects against bit flips. If we don’t know which bits have errors we can decode successfully by majority voting, assuming that fewer than half the bits have errors. But if errors are heralded then we can decode successfully by just looking at any one of the undamaged bits. In quantum codes the details are more complicated but the same principle applies — we can recover more effectively if so-called erasure errors dominate; that is, if we know which qubits are damaged and in which time steps. “Erasure conversion” means fashioning a processor such that the dominant errors are erasure errors.
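Here is a purely classical toy in Python that illustrates the contrast for a three-bit repetition code:

```python
def decode_majority(bits):
    """Without error locations: majority vote, which succeeds only if
    fewer than half of the bits were flipped."""
    return int(sum(bits) > len(bits) / 2)

def decode_with_erasures(bits, erased):
    """With heralded errors (erasures): any undamaged bit reveals the
    encoded value."""
    for bit, was_erased in zip(bits, erased):
        if not was_erased:
            return bit
    return None   # every bit erased: decoding fails

# Encode 1 as (1, 1, 1); two bits get damaged, but we know which ones.
print(decode_majority([1, 0, 0]))                             # majority vote fails: 0
print(decode_with_erasures([1, 0, 0], [False, True, True]))   # erasure decoding: 1
```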

We can make use of this idea if the dominant errors exit the computational space of the qubit, so that an error can be detected without disturbing the coherence of undamaged qubits. One realization is with Alkaline earth Rydberg atoms in optical tweezers, where 0 is encoded as a low energy state, and 1 is a highly excited Rydberg state. The dominant error is the spontaneous decay of the 1 to a lower energy state. But if the atomic level structure and the encoding allow, 1 usually decays not to a 0, but rather to another state g. We can check whether the g state is occupied, to detect whether or not the error occurred, without disturbing a coherent superposition of 0 and 1.

Erasure conversion can also be arranged in superconducting devices, by using a so-called dual-rail encoding of the qubit in a pair of transmons or a pair of microwave resonators. With two resonators, for example, we can encode a qubit by placing a single photon in one resonator or the other. The dominant error is loss of the photon, causing either the 01 state or the 10 state to decay to 00. One can check whether the state is 00, detecting whether the error occurred, without disturbing a coherent superposition of 01 and 10.

Erasure detection has been successfully demonstrated in recent months, for both atomic (here and here) and superconducting (here and here) qubit encodings.

Biased noise

Another setting in which the effectiveness of quantum error correction can be enhanced is when the noise is highly biased. Quantum error correction is more difficult than classical error correction partly because more types of errors can occur — a qubit can flip in the standard basis, or it can flip in the complementary basis, what we call a phase error. In suitably designed quantum hardware the bit flips are highly suppressed, so we can concentrate the error-correcting power of the code on protecting against phase errors. For this scheme to work, it is important that phase errors occurring during the execution of a quantum gate do not propagate to become bit-flip errors. And it was realized just a few years ago that such bias-preserving gates are possible for qubits encoded in continuous variable systems like microwave resonators.

Specifically, we may consider a cat code, in which the encoded 0 and encoded 1 are coherent states, well separated in phase space. Then bit flips are exponentially suppressed as the mean photon number in the resonator increases. The main source of error, then, is photon loss from the resonator, which induces a phase error for the cat qubit, with an error rate that increases only linearly with photon number. We can then strike a balance, choosing a photon number in the resonator large enough to provide physical protection against bit flips, and then use a classical code like the repetition code to build a logical qubit well protected against phase flips as well.
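Schematically, with \bar{n} the mean photon number in the resonator, \kappa the single-photon loss rate, and order-one constants suppressed (my shorthand for the scalings just described, not a formula from any particular paper), the trade-off reads

\Gamma_{\rm bit\,flip} \sim e^{-c \bar{n}}, \qquad \Gamma_{\rm phase\,flip} \sim \kappa \bar{n},

so increasing \bar{n} buys exponential suppression of bit flips at the price of a linear increase in the phase-flip rate, which the outer repetition code then handles.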

Work on such repetition cat codes is ongoing (see here, here, and here), and we can expect to hear about progress in that direction in the coming months.

More efficient codes

Another exciting development has been the recent discovery of quantum codes that are far more efficient than the surface code. These include constant-rate codes, in which the number of protected qubits scales linearly with the number of physical qubits in the code block, in contrast to the surface code, which protects just a single logical qubit per block. Furthermore, such codes can have constant relative distance, meaning that the distance of the code, a rough measure of how many errors can be corrected, scales linearly with the block size rather than the square root scaling attained by the surface code.

These new high-rate codes can have a relatively high accuracy threshold, can be efficiently decoded, and schemes for executing fault-tolerant logical gates are currently under development.

A drawback of the high-rate codes is that, to extract error syndromes, geometrically local processing in two dimensions is not sufficient — long-range operations are needed. Nonlocality can be achieved through movement of qubits in neutral atom tweezer arrays or ion traps, or one can use the native long-range coupling in an ion trap processor. Long-range coupling is more challenging to achieve in superconducting processors, but should be possible.

An example with potential near-term relevance is a recently discovered code with distance 12 and 144 physical qubits. In contrast to the surface code with similar distance and length which encodes just a single logical qubit, this code protects 12 logical qubits, a significant improvement in encoding efficiency.

The quest for practical quantum error correction offers numerous examples like these of co-design. Quantum error correction schemes are adapted to the features of the hardware, and ideas about quantum error correction guide the realization of new hardware capabilities. This fruitful interplay will surely continue.

An exciting time for Rydberg atom arrays

In this year’s hardware news, now is a particularly exciting time for platforms based on Rydberg atoms trapped in optical tweezer arrays. We can anticipate that Rydberg platforms will lead the progress in quantum error correction for at least the next few years, if two-qubit gate fidelities continue to improve. Thousands of qubits can be controlled, and geometrically nonlocal operations can be achieved by reconfiguring the atomic positions. Further improvement in error correction performance might be possible by means of erasure conversion. Significant progress in error correction using Rydberg platforms is reported in a paper published today.

But there are caveats. So far, repeatable error syndrome measurement has not been demonstrated. For that purpose, continuous loading of fresh atoms needs to be developed. And both the readout and atomic movement are relatively slow, which limits the clock speed.

Movability of atomic qubits will be highly enabling in the short run. But in the longer run, movement imposes serious limitations on clock speed unless much faster movement can be achieved. As things currently stand, one can’t rapidly accelerate an atom without shaking it loose from an optical tweezer, or rapidly accelerate an ion without heating its motional state substantially. To attain practical quantum computing using Rydberg arrays, or ion traps, we’ll eventually need to make the clock speed much faster.

Cosmic rays!

To be fair, other platforms face serious threats as well. One is the vulnerability of superconducting circuits to ionizing radiation. Cosmic ray muons for example will occasionally deposit a large amount of energy in a superconducting circuit, creating many phonons which in turn break Cooper pairs and induce qubit errors in a large region of the chip, potentially overwhelming the error-correcting power of the quantum code. What can we do? We might go deep underground to reduce the muon flux, but that’s expensive and inconvenient. We could add an additional layer of coding to protect against an event that wipes out an entire surface code block; that would increase the overhead cost of error correction. Or maybe modifications to the hardware can strengthen robustness against ionizing radiation, but it is not clear how to do that.

Outlook

Our field and the quantum industry continue to face a pressing question: How will we scale up to quantum computing systems that can solve hard problems? The honest answer is: We don’t know yet. All proposed hardware platforms need to overcome serious challenges. Whatever technologies may seem to be in the lead over, say, the next 10 years might not be the best long-term solution. For that reason, it remains essential at this stage to develop a broad array of hardware platforms in parallel.

Today’s NISQ technology is already scientifically useful, and that scientific value will continue to rise as processors advance. The path to business value is longer, and progress will be gradual. Above all, we have good reason to believe that to attain quantum value, to realize the grand aspirations that we all share for quantum computing, we must follow the road to fault tolerance. That awareness should inform our thinking, our strategy, and our investments now and in the years ahead.

Crossing the quantum chasm (image generated using Midjourney)

The power of awe

Mid-afternoon, one Saturday late in September, I forgot where I was. I forgot that I was visiting Seattle for the second time; I forgot that I’d just finished co-organizing a workshop partially about nuclear physics for the first time. I’d arrived at a crowded doorway in the Chihuly Garden and Glass museum, and a froth of blue was towering above the onlookers in front of me. Glass tentacles, ranging from ultramarine through turquoise to clear, extended from the froth. Golden conch shells, starfish, and mollusks rode the waves below. The vision drove everything else from my mind for an instant.

Much had been weighing on my mind that week. The previous day had marked the end of a workshop hosted by the Inqubator for Quantum Simulation (IQuS, pronounced eye-KWISS) at the University of Washington. I’d co-organized the workshop with IQuS member Niklas Mueller, NIST physicist Alexey Gorshkov, and nuclear theorist Raju Venugopalan (although Niklas deserves most of the credit). We’d entitled the workshop “Thermalization, from Cold Atoms to Hot Quantum Chromodynamics.” Quantum chromodynamics describes the strong force that binds together a nucleus’s constituents, so I call the workshop “Journey to the Center of the Atom” to myself.

We aimed to unite researchers studying thermal properties of quantum many-body systems from disparate perspectives. Theorists and experimentalists came; and quantum information scientists and nuclear physicists; and quantum thermodynamicists and many-body physicists; and atomic, molecular, and optical physicists. Everyone cared about entanglement, equilibration, and what else happens when many quantum particles crowd together and interact. 

We quantum physicists crowded together and interacted from morning till evening. We presented findings to each other, questioned each other, coagulated in the hallways, drank tea together, and cobbled together possible projects. The week electrified us like a chilly ocean wave but also wearied me like an undertow. Other work called for attention, and I’d be presenting four more talks at four more workshops and campus visits over the next three weeks. The day after the workshop, I worked in my hotel half the morning and then locked away my laptop. I needed refreshment, and little refreshes like art.

Strongly interacting physicists

Chihuly Garden and Glass, in downtown Seattle, succeeded beyond my dreams: the museum drew me into somebody else’s dreams. Dale Chihuly grew up in Washington state during the mid-twentieth century. He studied interior design and sculpture before winning a Fulbright Fellowship to learn glass-blowing techniques in Murano, Italy. After that, Chihuly transformed the world. I’ve encountered glass sculptures of his in Pittsburgh; Florida; Boston; Jerusalem; Washington, DC; and now Seattle—and his reach dwarfs my travels. 

Chihuly chandelier at the Renwick Gallery in Washington, DC

After the first few encounters, I began recognizing sculptures as Chihuly’s before checking their name plates. Every work by his team reflects his style. Tentacles, bulbs, gourds, spheres, and bowls evidence what I never expected glass to do but what, having now seen it, I’m glad it does.

This sentiment struck home a couple of galleries beyond the Seaforms. The exhibit Mille Fiori drew inspiration from the garden cultivated by Chihuly’s mother. The name means A Thousand Flowers, although I spied fewer flowers than what resembled grass, toadstools, and palm fronds. Visitors feel like grasshoppers amongst the red, green, and purple stalks that dwarfed some of us. The narrator of Jules Verne’s Journey to the Center of the Earth must have felt similarly, encountering mastodons and dinosaurs underground. I encircled the garden before registering how much my mind had lightened. Responsibilities and cares felt miles away—or, to a grasshopper, backyards away. Wonder does wonders.

Mille Fiori

Near the end of the path around the museum, a theater plays documentaries about Chihuly’s projects. The documentaries include interviews with the artist, and several quotes reminded me of the science I’d been trained to seek out: “I really wanted to take glass to its glorious height,” Chihuly said, “you know, really make something special.” “Things—pieces got bigger, pieces got taller, pieces got wider.” He felt driven to push art forms as large as the glass would permit his team. Similarly, my PhD advisor John Preskill encouraged me to “think big.” What physics is worth doing—what would create an impact?

How did a boy from Tacoma, Washington impact not only fellow blown-glass artists—not only artists—not only an exhibition here and there in his home country—but experiences across the globe, including that of a physicist one weekend in September?

One idea from the IQuS workshop caught my eye. Some particle colliders accelerate heavy ions to high energies and then smash the ions together. Examples include the lead ions studied at CERN in Geneva and the gold ions studied at Brookhaven National Laboratory. After a collision, the matter expands and cools. Nuclear physicists don’t understand how the matter cools; models predict cooling times longer than those observed. This mismatch has persisted across decades of experiments. The post-collision matter evades attempts at computer simulation; it’s literally a hot mess. Can recent advances in many-body physics help?

The exhibit Persian Ceiling at Chihuly Garden and Glass. Doesn’t it look like it could double as an artist’s rendering of a heavy-ion collision?

Martin Savage, the director of IQuS, hopes so. He hopes that IQuS will impact nuclear physics across the globe. Every university and its uncle boasts a quantum institute nowadays, but IQuS seems to me to have carved out a niche for itself. IQuS has grown up in the bosom of the Institute for Nuclear Theory at the University of Washington, which has guided nuclear theory for decades. IQuS is smashing that history together with the future of quantum simulators. IQuS doesn’t strike me as just another glass bowl in the kitchen of quantum science. A bowl worthy of Chihuly? I don’t know, but I’d like to hope so.

I left Chihuly Garden and Glass with respect for the past week and energy for the week ahead. Whether you find it in physics or in glass or in both—or in plunging into a dormant Icelandic volcano in search of the Earth’s core—I recommend the occasional dose of awe.

Participants in the final week of the workshop

With thanks to Martin Savage, IQuS, and the University of Washington for their hospitality.

Astrobiology meets quantum computation?

The origin of life appears to share little with quantum computation, apart from the difficulty of achieving it and its potential for clickbait. Yet similar notions of complexity have recently garnered attention in both fields. Each topic’s researchers expect only special systems to generate high values of such complexity, or complexity at high rates: organisms, in one community, and quantum computers (and perhaps black holes), in the other. 

Each community appears fairly unaware of its counterpart. This article is intended to introduce the two. Below, I review assembly theory from origin-of-life studies, followed by quantum complexity. I’ll then compare and contrast the two concepts. Finally, I’ll suggest that origin-of-life scientists can quantize assembly theory using quantum complexity. The idea is a bit crazy, but, well, so what?

Assembly theory in origin-of-life studies

Imagine discovering evidence of extraterrestrial life. How could you tell that you’d found it? You’d have detected a bunch of matter—a bunch of particles, perhaps molecules. What about those particles could evidence life?

This question motivated Sara Imari Walker and Lee Cronin to develop assembly theory. (Most of my assembly-theory knowledge comes from Sara, about whom I wrote this blog post years ago and with whom I share a mentor.) Assembly theory governs physical objects, from proteins to self-driving cars. 

Imagine assembling a protein from its constituent atoms. First, you’d bind two atoms together. Then, you might bind another two atoms together. Eventually, you’d bind two pairs together. Your sequence of steps would form an algorithm for assembling the protein. Many algorithms can generate the same protein. One algorithm requires the fewest steps; that number of steps is called the protein’s assembly number.
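As a toy illustration of the definition (using strings in place of molecules and concatenation in place of chemical bonding, which is emphatically not how Lee’s group computes assembly numbers for real molecules), here is a brute-force Python sketch that finds the minimal number of joins needed to build a target string, allowing intermediate fragments to be reused:

```python
from itertools import product

def assembly_index(target: str) -> int:
    """Minimum number of pairwise joins needed to build `target` from its
    single characters, reusing any fragment built along the way."""
    if len(target) <= 1:
        return 0
    basics = frozenset(target)            # individual characters come for free
    frontier = [frozenset()]              # sets of extra fragments built so far
    seen = {frozenset()}
    joins = 0
    while True:
        if any(target in pool for pool in frontier):
            return joins
        joins += 1
        next_frontier = []
        for pool in frontier:
            available = basics | pool
            for a, b in product(available, repeat=2):
                piece = a + b
                # Only keep fragments that can still appear in the target.
                if piece in target and piece not in available:
                    new_pool = pool | {piece}
                    if new_pool not in seen:
                        seen.add(new_pool)
                        next_frontier.append(new_pool)
        frontier = next_frontier

# 'abcabc' can be built in three joins: a+b, ab+c, abc+abc.
print(assembly_index('abcabc'))           # prints 3
```

Even for short strings the search space balloons, a hint of why computing assembly numbers for real objects is nontrivial.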

Different natural processes tend to create objects that have different assembly numbers. Stars form low-assembly-number objects by fusing two hydrogen atoms together into helium. Similarly, random processes have high probabilities of forming low-assembly-number objects. For example, geological upheavals can bring a shard of iron near a lodestone. The iron will stick to the magnetized stone, forming a two-component object.

My laptop has an enormous assembly number. Why can such an object exist? Because of information, Sara and Lee emphasize. Human beings amassed information about materials science, Boolean logic, the principles of engineering, and more. That information—which exists only because organisms exist—helped engender my laptop.

If any object has a high enough assembly number, Sara and Lee posit, that object evidences life. Absent life, natural processes have too low a probability of randomly throwing together molecules into the shape of a computer. How high is “high enough”? Approximately fifteen, experiments by Lee’s group suggest. (Why do those experiments point to the number fifteen? Sara’s group is working on a theory for predicting the number.)

In summary, assembly number quantifies complexity in origin-of-life studies, according to Sara and Lee. The researchers propose that only living beings create high-assembly-number objects.

Quantum complexity in quantum computation

Quantum complexity defines a stage in the equilibration of many-particle quantum systems. Consider a clump of N quantum particles isolated from its environment. The clump will be in a pure quantum state | \psi(0) \rangle at a time t = 0. The particles will interact, evolving the clump’s state as a function of time, | \psi(t) \rangle.

Quantum many-body equilibration is more complicated than the equilibration undergone by your afternoon pick-me-up as it cools.

The interactions will equilibrate the clump internally. One stage of equilibration centers on local observables O. They’ll come to have expectation values \langle \psi(t) | O | \psi(t) \rangle approximately equal to thermal expectation values {\rm Tr} ( O \, \rho_{\rm th} ), for a thermal state \rho_{\rm th} of the clump. During another stage of equilibration, the particles correlate through many-body entanglement. 

The longest known stage centers on the quantum complexity of | \psi(t) \rangle. The quantum complexity is the minimal number of basic operations needed to prepare | \psi(t) \rangle from a simple initial state. We can define “basic operations” in many ways. Examples include quantum logic gates that act on two particles. Another example is an evolution for one time step under a Hamiltonian that couples together at most k particles, for some k independent of N. Similarly, we can define “a simple initial state” in many ways. We could count as simple only the N-fold tensor product | 0 \rangle^{\otimes N} of our favorite single-particle state | 0 \rangle. Or we could call any N-fold tensor product simple, or any state that contains at-most-two-body entanglement, and so on. These choices don’t affect the quantum complexity’s qualitative behavior, according to string theorists Adam Brown and Lenny Susskind.

How quickly can the quantum complexity of | \psi(t) \rangle grow? Fast growth stems from many-body interactions, long-range interactions, and random coherent evolutions. (Random unitary circuits exemplify random coherent evolutions: each gate is chosen according to the Haar measure, which we can view roughly as uniformly random.) At most, quantum complexity can grow linearly in time. Random unitary circuits achieve this rate. Black holes may; they scramble information quickly. The greatest possible complexity of any N-particle state scales exponentially in N, according to a counting argument.

A highly complex state | \psi(t) \rangle looks simple from one perspective and complicated from another. Human scientists can easily measure only local observables O. Such observables’ expectation values \langle \psi(t) | O | \psi(t) \rangle  tend to look thermal in highly complex states, \langle \psi(t) | O | \psi(t) \rangle \approx {\rm Tr} ( O \, \rho_{\rm th} ), as implied above. The thermal state has the greatest von Neumann entropy, - {\rm Tr} ( \rho \log \rho), of any quantum state \rho that obeys the same linear constraints as | \psi(t) \rangle (such as having the same energy expectation value). Probed through simple, local observables O, highly complex states look highly entropic—highly random—similarly to a flipped coin.
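(For concreteness: the thermal state referred to above is the familiar Gibbs state,

\rho_{\rm th} = e^{-\beta H} / {\rm Tr} ( e^{-\beta H} ),

where H is the clump’s Hamiltonian and the inverse temperature \beta is fixed by the energy expectation value.)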

Yet complex states differ from flipped coins significantly, as revealed by subtler analyses. An example underlies the quantum-supremacy experiment published by Google’s quantum-computing group in 2019. Experimentalists initialized 53 qubits (quantum two-level systems) in a tensor product. The state underwent many gates, which prepared a highly complex state. Then, the experimentalists measured the z-component \sigma_z of each qubit’s spin, randomly obtaining a -1 or a 1. One trial yielded a 53-bit string. The experimentalists repeated this process many times, using the same gates in each trial. From all the trials’ bit strings, the group inferred the probability p(s) of obtaining a given string s in the next trial. The distribution \{ p(s) \} resembles the uniformly random distribution…but differs from it subtly, as revealed by a cross-entropy analysis. Classical computers can’t easily generate \{ p(s) \}; hence the Google group’s claiming to have achieved quantum supremacy/advantage. Quantum complexity differs from simple randomness, that difference is difficult to detect, and the difference can evidence quantum computers’ power.

A fridge that holds one of Google’s quantum computers.

Comparison and contrast

Assembly number and quantum complexity resemble each other as follows:

  1. Each function quantifies the fewest basic operations needed to prepare something.
  2. Only special systems (organisms) can generate high assembly numbers, according to Sara and Lee. Similarly, only special systems (such as quantum computers and perhaps black holes) can generate high complexity quickly, quantum physicists expect.
  3. Assembly number may distinguish products of life from products of abiotic systems. Similarly, quantum complexity helps distinguish quantum computers’ computational power from classical computers’.
  4. High-assembly-number objects are highly structured (think of my laptop). Similarly, high-complexity quantum states are highly structured in the sense of having much many-body entanglement.
  5. Organisms generate high assembly numbers, using information. Similarly, using information, organisms have created quantum computers, which can generate quantum complexity quickly.

Assembly number and quantum complexity differ as follows:

  1. Classical objects have assembly numbers, whereas quantum states have quantum complexities.
  2. In the absence of life, random natural processes have low probabilities of producing high-assembly-number objects. That is, randomness appears to keep assembly numbers low. In contrast, randomness can help quantum complexity grow quickly.
  3. Highly complex quantum states look very random, according to simple, local probes. High-assembly-number objects do not.
  4. Only organisms generate high assembly numbers, according to Sara and Lee. In contrast, abiotic black holes may generate quantum complexity quickly.

Another feature shared by assembly-number studies and quantum computation merits its own paragraph: the importance of robustness. Suppose that multiple copies of a high-assembly-number (or moderate-assembly-number) object exist. Not only does my laptop exist, for example, but so do many other laptops. To Sara, such multiplicity signals the existence of some stable mechanism for creating that object. The multiplicity may provide extra evidence for life (including life that’s discovered manufacturing), as opposed to an unlikely sequence of random forces. Similarly, quantum computing—the preparation of highly complex states—requires stability. Decoherence threatens quantum states, necessitating quantum error correction. Quantum error correction differs from Sara’s stable production mechanism, but both evidence the importance of robustness to their respective fields.

A modest proposal

One can generalize assembly number to quantum states, using quantum complexity. Imagine finding a clump of atoms while searching for extraterrestrial life. The atoms need not have formed molecules, so the clump can have a low classical assembly number. However, the clump can be in a highly complex quantum state. We could detect the state’s complexity only (as far as I know) using many copies of the state, so imagine finding many clumps of atoms. Preparing highly complex quantum states requires special conditions, such as a quantum computer. The clump might therefore evidence organisms who’ve discovered quantum physics. Using quantum complexity, one might extend the assembly number to identify quantum states that may evidence life. However, quantum complexity, or a high rate of complexity generation, alone may not evidence life—for example, if achievable by black holes. Fortunately, a black hole seems unlikely to generate many identical copies of a highly complex quantum state. So we seem to have a low probability of mistakenly attributing a highly complex quantum state, sourced by a black hole, to organisms (atop our low probability of detecting any complex quantum state prepared by anyone other than us).

Would I expect a quantum assembly number to greatly improve humanity’s search for extraterrestrial life? I’m no astrobiology expert (NASA videos notwithstanding), but I’d expect probably not. Still, astrobiology requires chemistry, which requires quantum physics. Quantum complexity seems likely to find applications in the assembly-number sphere. Besides, doesn’t juxtaposing the search for extraterrestrial life and the understanding of life’s origins with quantum computing sound like fun? And a sense of fun distinguishes certain living beings from inanimate matter about as straightforwardly as assembly number does.

With thanks to Jim Al-Khalili, Paul Davies, the From Physics to Life collaboration, and UCLA for hosting me at the workshop that spurred this article.

May I have this dance?

This July, I came upon a museum called the Haus der Musik in one of Vienna’s former palaces. The museum contains a room dedicated to Johann Strauss II, king of the waltz. The room, dimly lit, resembles a twilit gazebo. I could almost believe that a hidden orchestra was playing the rendition of “The Blue Danube” that filled the room. Glass cases displayed dance cards and accessories that dancers would bring to a nineteenth-century ball.

A ball. Who hasn’t read about one in a novel or seen one in a film? A throng of youngsters and their chaperones, rustling in silk. The glint of candles, the vigor of movement, the thrill of interaction, the anxiety of establishing one’s place in society.

Victoria and Albert at a ball in the film The Young Victoria

Another throng gathered a short walk from the Haus der Musik this summer. The Vienna University of Technology hosted the conference Quantum Thermodynamics (QTD) in the heart of the city. Don’t tell the other annual conferences, but QTD is my favorite. It spotlights the breed of quantum thermodynamics that’s surged throughout the past decade—the breed saturated with quantum information theory. I began attending QTD as a PhD student, and the conference shifts from city to city from year to year. I reveled in returning in person for the first time since the pandemic began.

Yet this QTD felt different. First, instead of being a PhD student, I brought a PhD student of my own. Second, granted, I enjoyed catching up with colleagues-cum-friends as much as ever. I especially relished seeing the “classmates” who belonged to my academic generation. Yet we were now congratulating each other on having founded research groups, and we were commiserating about the workload of principal investigators.

Third, I found myself a panelist in the annual discussion traditionally called “Quo vadis, quantum thermodynamics?” The panel presented bird’s-eye views on quantum thermodynamics, analyzing trends and opining on the direction our field was taking (or should take).1 Fourth, at the end of the conference, almost the last sentence spoken into any microphone was “See you in Maryland next year.” Colleagues and I will host QTD 2024.


One of my dearest quantum-thermodynamic “classmates,” Nelly Ng, participated in the panel discussion, too. We met as students (see these two blog posts), and she’s now an assistant professor at Nanyang Technological University. Photo credit: Jakub Czartowski.

The day after QTD ended, I boarded an Austrian Airlines flight. Waltzes composed by Strauss played over the loudspeakers. They flipped a switch in my mind: I’d come of age, I thought. I’d attended QTD 2017 as a debutante, presenting my first invited talk at the conference series. I’d danced through QTD 2018 in Santa Barbara, as well as the online iterations held during the pandemic. I’d reveled in the vigor of scientific argumentation, the thrill of learning, the glint of slides shining on projector screens (not really). Now, I was beginning to shoulder responsibilities like a ballgown-wearing chaperone.

As I came of age, so did QTD. The conference series budded around the time I started grad school and embarked upon quantum-thermodynamics research. In 2017, approximately 80 participants attended QTD. This year, 250 people registered to attend in person, and others attended online. Two hundred fifty! Quantum thermodynamics scarcely existed as a field of research fifteen years ago.

I’ve heard that organizers of another annual conference, Quantum Information Processing (QIP), reacted similarly to a 250-person registration list some years ago. Aram Harrow, a professor and quantum information theorist at MIT, has shared stories about co-organizing the first QIPs. As a PhD student, he’d sat in his advisor’s office, taking notes, while the local quantum-information theorists chose submissions to highlight. Nowadays, a small army of reviewers and subreviewers processes the hordes of submissions. And, from what I heard about this year’s attendance, navigating the QIP crowd is almost like navigating a Disney theme park on a holiday.

Will QTD continue to grow like QIP? Would such growth strengthen or fracture the community? Perhaps we’ll discuss those questions at a “Quo vadis?” session in Maryland next year. But I, at least, hope to continue always to grow—and to dance.2


Ludwig Boltzmann, a granddaddy of thermodynamics, worked in Vienna. I’ve waited for years to make a pilgrimage.

1My opinion: Now that quantum thermodynamics has showered us with fundamental insights, we should apply it in practical applications. How? Collaborators and I suggest one path here.

2I confess to having danced the waltz step (gleaned during my 14 years of ballet training) around that Strauss room in the Haus der Musik. I didn’t waltz around the conference auditorium, though.

Can Thermodynamics Resolve the Measurement Problem?

At the recent Quantum Thermodynamics conference in Vienna (coming next year to the University of Maryland!), during an expert panel Q&A session, one member of the audience asked “can quantum thermodynamics address foundational problems in quantum theory?”

That stuck with me, because that’s exactly what my research is about. So naturally, I’d say the answer is yes! In fact, here in the group of Marcus Huber at the Technical University of Vienna, we think thermodynamics may have something to say about the biggest quantum foundations problem of all: the measurement problem.

It’s sort of the iconic mystery of quantum mechanics: we know that an electron can be in two places at once – in a ‘superposition’ – but when we measure it, it’s only ever seen to be in one place, picked seemingly at random from the two possibilities. We say the state has ‘collapsed’.

What’s going on here? Thanks to Bell’s legendary theorem, we know that the answer can’t just be that it was always actually in one place and we just didn’t know which option it was – it really was in two places at once until it was measured1. But also, we don’t see this effect for sufficiently large objects. So how can this ‘two-places-at-once’ thing happen at all, and why does it stop happening once an object gets big enough?

Here, we already see hints that thermodynamics is involved, because even classical thermodynamics says that big systems behave differently from small ones. And interestingly, thermodynamics also hints that the narrative so far can’t be right. Because when taken at face value, the ‘collapse’ model of measurement breaks all three laws of thermodynamics.

Imagine an electron in a superposition of two energy levels: a combination of being in its ground state and first excited state. If we measure it and it ‘collapses’ to being only in the ground state, then its energy has decreased: it went from having some average of the ground and excited energies to just having the ground energy. The first law of thermodynamics says (crudely) that energy is conserved, but the loss of energy is unaccounted for here.
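To make the energy bookkeeping explicit, take the concrete (illustrative) case of an equal superposition: write the electron’s Hamiltonian as H = E_g |g\rangle\langle g| + E_e |e\rangle\langle e| and its pre-measurement state as | \psi \rangle = ( |g\rangle + |e\rangle )/\sqrt{2}. Then

\langle \psi | H | \psi \rangle = (E_g + E_e)/2 \ \text{before the measurement}, \qquad E_g \ \text{after a collapse to } |g\rangle,

so the ‘collapse’ appears to discard an energy (E_e - E_g)/2 with no account of where it went.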

Next, the second law says that entropy always increases. One form of entropy represents your lack of information about a system’s state. Before the measurement, the system was in one of two possible states, but afterwards it was in only one state. So speaking very broadly, our uncertainty about its state, and hence the entropy, is reduced. (The third law is problematic here, too.)

There’s a clear explanation here: while the system on its own decreases its entropy and doesn’t conserve energy, in order to measure something, we must couple the system to a measuring device. That device’s energy and entropy changes must account for the system’s changes.

This is the spirit of our measurement model2. We explicitly include the detector as a quantum object in the record-keeping of energy and information flow. In fact, we also include the entire environment surrounding both system and device – all the lab’s stray air molecules, photons, etc. Then the idea is to describe a measurement process as propagating a record of a quantum system’s state into the surroundings without collapsing it.

A schematic representation of a system spreading information into an environment (from Schwarzhans et al., with permission)

But talking about quantum systems interacting with their environments is nothing new. The “decoherence” model from the 70s, which our work builds on, says quantum objects become less quantum when buffeted by a larger environment.

The problem, though, is that decoherence describes how information is lost into an environment, and so usually the environment’s dynamics aren’t explicitly calculated: this is called an open-system approach. By contrast, in the closed-system approach we use, you model the dynamics of the environment too, keeping track of all information. This is useful because conventional collapse dynamics seems to destroy information, but every other fundamental law of physics seems to say that information can’t be destroyed.

This all allows us to track how information flows from system to surroundings, using the “Quantum Darwinism” (QD) model of W.H. Żurek. Whereas decoherence describes how environments affect systems, QD describes how quantum systems impact their environments by spreading information into them. The QD model says that the most ‘classical’ information – the kind most consistent with classical notions of ‘being in one place’, etc. – is the sort most likely to ‘survive’ the decoherence process.

QD then further asserts that this is the information that’s most likely to be copied into the environment. If you look at some of a system’s surroundings, this is what you’d most likely see. (The ‘Darwinism’ name is because certain states are ‘selected for’ and ‘replicate’3.)

So we have a description of what we want the post-measurement state to look like: a decohered system, with its information redundantly copied into its surrounding environment. The last piece of the puzzle, then, is to ask how a measurement can create this state. Here, we finally get to the dynamics part of the thermodynamics, and introduce equilibration.

Earlier we said that even if the system’s entropy decreases, the detector’s entropy (or more broadly the environment’s) should go up to compensate. Well, equilibration maximizes entropy. In particular, equilibration describes how a system tends towards a particular ‘equilibrium’ state, because the system can always increase its entropy by getting closer to it.

It’s usually said that systems equilibrate if put in contact with an external environment (e.g. a can of beer cooling in a fridge), but we’re actually interested in a different type of equilibration called equilibration on average. There, we’re asking for the state that a system stays roughly close to, on average, over long enough times, with no outside contact. That means it never actually decoheres; it just looks like it does for certain observables. (This implies that nothing ever truly decoheres, since open systems are only an approximation you make when you don’t want to track all of the environment.)
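More concretely (this is the standard formulation from the equilibration literature; see the review linked in the references below), an observable A equilibrates on average if its expectation value stays close, for most times, to its infinite-time average,

\overline{\langle A \rangle} = \lim_{T \to \infty} \frac{1}{T} \int_0^T \langle \psi(t) | A | \psi(t) \rangle \, dt = {\rm Tr} ( A \, \omega ), \qquad \omega = \lim_{T \to \infty} \frac{1}{T} \int_0^T | \psi(t) \rangle \langle \psi(t) | \, dt,

where \omega is the time-averaged (dephased) state and | \psi(t) \rangle is the ever-evolving state of the full closed system: nothing ever stops evolving unitarily, but for such observables it merely looks settled.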

Equilibration is the key to the model. In fact, we call our idea the Measurement-Equilibration Hypothesis (MEH): we’re asserting that measurement is an equilibration process. Which makes the final question: what does all this mean for the measurement problem?

In the MEH framework, when someone ‘measures’ a quantum system, they allow some measuring device, plus a chaotic surrounding environment, to interact with it. The quantum system then equilibrates ‘on average’ with the environment, and spreads information about its classical states into the surroundings. Since you are a macroscopically large human, any measurement you do will induce this sort of equilibration to happen, meaning you will only ever have access to the classical information in the environment, and never see superpositions. But no collapse is necessary, and no information is lost: rather some information is only much more difficult to access in all the environment noise, as happens all the time in the classical world.

It’s tempting to ask what ‘happens’ to the outcomes we don’t see, and how nature ‘decides’ which outcome to show to us. Those are great questions, but in our view, they’re best left to philosophers4. As for the question we care about – why measurements look like a ‘collapse’ – we’re just getting started with our Measurement-Equilibration Hypothesis: there’s still lots to do in our explorations of it. We think the answers we’ll uncover in doing so will form an exciting step forward in our understanding of the weird and wonderful quantum world.

Members of the MEH team at a kick-off meeting for the project in Vienna in February 2023. Left to right: Alessandro Candeloro, Marcus Huber, Emanuel Schwarzhans, Tom Rivlin, Sophie Engineer, Veronika Baumann, Nicolai Friis, Felix C. Binder, Mehul Malik, Maximilian P.E. Lock, Pharnam Bakhshinezhad

Acknowledgements: Big thanks to the rest of the MEH team for all the help and support, in particular Dr. Emanuel Schwarzhans and Dr. Lock for reading over this piece!

Here are a few choice references (by no means meant to be comprehensive!)

Quantum Thermodynamics (QTD) Conference 2023: https://qtd2023.conf.tuwien.ac.at/
QTD 2024: https://qtd-hub.umd.edu/event/qtd-conference-2024/
Bell’s Theorem: https://plato.stanford.edu/entries/bell-theorem/
The first MEH paper: https://arxiv.org/abs/2302.11253
A review of decoherence: https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.75.715
Quantum Darwinism: https://www.nature.com/articles/nphys1202
Measurements violate the 3rd law: https://quantum-journal.org/papers/q-2020-01-13-222/
More on the 3rd and QM: https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.4.010332
Equilibration on average: https://iopscience.iop.org/article/10.1088/0034-4885/79/5/056001/meta
Objectivity: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.91.032122

  1. There is a perfectly valid alternative with other weird implications: that it was always just in one place, but the world is intrinsically non-local. Most physicists prefer to save locality over realism, though. ↩︎
  2. First proposed in this paper by Schwarzhans, Binder, Huber, and Lock: https://arxiv.org/abs/2302.11253 ↩︎
  3. In my opinion… it’s a brilliant theory with a terrible name! Sure, there’s something akin to ‘selection pressure’ and ‘reproduction’, but there aren’t really any notions of mutation, adaptation, fitness, generations… Alas, the name has stuck. ↩︎
  4. I actually love thinking about this question, and the interpretations of quantum mechanics more broadly, but it’s fairly orthogonal to the day-to-day research on this model. ↩︎

The Book of Mark, Chapter 2

Late in the summer of 2021, I visited a physics paradise in a physical paradise: the Kavli Institute for Theoretical Physics (KITP). The KITP sits at the edge of the University of California, Santa Barbara like a bougainvillea bush at the edge of a yard. I was eating lunch outside the KITP one afternoon, across the street from the beach. PhD student Arman Babakhani, whom a colleague had just introduced me to, had joined me.

The KITP’s Kohn Hall

What physics was I working on nowadays? Arman wanted to know.

Thermodynamic exchanges. 

The world consists of physical systems exchanging quantities with other systems. When a rose blooms outside the Santa Barbara mission, it exchanges pollen with the surrounding air. The total amount of pollen across the rose-and-air whole remains constant, so we call the amount a conserved quantity. Quantum physicists usually analyze conservation of particles, energy, and magnetization. But quantum systems can conserve quantities that participate in uncertainty relations. Such quantities are called incompatible, because you can’t measure them simultaneously. The x-, y-, and z-components of a qubit’s spin are incompatible.
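To make ‘incompatible’ concrete (this is standard textbook material, not specific to the project described here): the spin components of a qubit obey the commutation relations

[S_x, S_y] = i \hbar S_z, \quad [S_y, S_z] = i \hbar S_x, \quad [S_z, S_x] = i \hbar S_y,

so no measurement can pin down two of them at once, and no state assigns sharp values to all three simultaneously.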

The Santa Barbara mission…
…and its roses

Exchanging and conserving incompatible quantities, systems can violate thermodynamic expectations. If one system is much larger than the other, we expect the smaller system to thermalize; yet incompatibility invalidates derivations of the thermal state’s form. Incompatibility reduces the thermodynamic entropy produced by exchanges. And incompatibility can raise the average amount of entanglement in the pair of systems—the total system.

If the total system conserves incompatible quantities, what happens to the eigenstate thermalization hypothesis (ETH)? Last month’s blog post overviewed the ETH, a framework for understanding how quantum many-particle systems thermalize internally. That post labeled Mark Srednicki, a professor at the KITP, a high priest of the ETH. I want, I told Arman, to ask Mark what happens when you combine the ETH with incompatible conserved quantities.

I’ll do it, Arman said.

Soon after, I found myself in the fishbowl. High up in the KITP, a room filled with cushy seats overlooks the ocean. The circular windows lend the room its nickname. Arrayed on the armchairs and couches were Mark, Arman, Mark’s PhD student Fernando Iniguez, and Mark’s recent PhD student Chaitanya Murthy. The conversation went like this:

Mark was frustrated about not being able to answer the question. I was delighted to have stumped him. Over the next several weeks, the group continued meeting, and we emailed out notes for everyone to criticize. I particularly enjoyed watching Mark and Chaitanya interact. They’d grown so intellectually close throughout Chaitanya’s PhD studies, they reminded me of an old married couple. One of them had to express only half an idea for the other to realize what he’d meant and to continue the thread. Neither had any qualms with challenging the other, yet they trusted each other’s judgment.1

In vintage KITP fashion, we’d nearly completed a project by the time Chaitanya and I left Santa Barbara. Physical Review Letters published our paper this year, and I’m as proud of it as a gardener of the first buds from her garden. Here’s what we found.

Southern California spoiled me for roses.

Incompatible conserved quantities conflict with the ETH and the ETH’s prediction of internal thermalization. Why? For three reasons. First, when inferring thermalization from the ETH, we assume that the Hamiltonian lacks degeneracies (that no energy equals any other). But incompatible conserved quantities force degeneracies on the Hamiltonian.2 

Second, when inferring from the ETH that the system thermalizes, we assume that the system begins in a microcanonical subspace. That’s an eigenspace shared by the conserved quantities (other than the Hamiltonian)—usually, an eigenspace of the total particle number or the total spin’s z-component. But, if incompatible, the conserved quantities share no eigenbasis, so they might not share eigenspaces, so microcanonical subspaces won’t exist in abundance.

Third, let’s focus on a system of N qubits. Say that the Hamiltonian conserves the total spin components S_x, S_y, and S_z. The Hamiltonian obeys the Wigner–Eckart theorem, which sounds more complicated than it is. Suppose that the qubits begin in a state | s_\alpha, \, m \rangle labeled by a spin quantum number s_\alpha and a magnetic spin quantum number m. Let a particle hit the qubits, acting on them with an operator \mathcal{O} . With what probability (amplitude) do the qubits end up with quantum numbers s_{\alpha'} and m'? The answer is \langle s_{\alpha'}, \, m' | \mathcal{O} | s_\alpha, \, m \rangle. The Wigner–Eckart theorem dictates this probability amplitude’s form. 
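For the curious, here is the form the theorem dictates in the standard case where \mathcal{O} is a component \mathcal{O}^{(k)}_q of a rank-k spherical tensor (a textbook statement, quoted up to a normalization convention; the paper’s conventions may differ):

\langle s_{\alpha'}, \, m' | \mathcal{O}^{(k)}_q | s_\alpha, \, m \rangle \, \propto \, \langle s_\alpha, \, m; \, k, \, q | s_{\alpha'}, \, m' \rangle \, \langle s_{\alpha'} || \mathcal{O}^{(k)} || s_\alpha \rangle .

The first factor is a Clebsch–Gordan coefficient, fixed entirely by the symmetry; only the second factor, the reduced matrix element, depends on the details of \mathcal{O}. These are the same Clebsch–Gordan coefficients that reappear in the non-Abelian ETH below.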

| s_\alpha, \, m \rangle and | s_{\alpha'}, \, m' \rangle are Hamiltonian eigenstates, thanks to the conservation law. The ETH is an ansatz for the form of \langle s_{\alpha'}, \, m' | \mathcal{O} | s_\alpha, \, m \rangle—of the elements of matrices that represent operators \mathcal{O} relative to the energy eigenbasis. The ETH butts heads with the Wigner–Eckart theorem, which also predicts the matrix element’s form.

The Wigner–Eckart theorem wins, being a theorem—a proved claim. The ETH is, as the H in the acronym relates, only a hypothesis.

If conserved quantities are incompatible, we have to kiss the ETH and its thermalization predictions goodbye. But must we set ourselves adrift entirely? Can we cling to no buoy from physics’s best toolkit for quantum many-body thermalization?

No, and yes, respectively. Our clan proposed a non-Abelian ETH for Hamiltonians that conserve incompatible quantities—or, equivalently, that have non-Abelian symmetries. The non-Abelian ETH depends on s_\alpha and on Clebsch–Gordan coefficients—conversion factors between total-spin eigenstates | s_\alpha, \, m \rangle and product states | s_1, \, m_1 \rangle \otimes | s_2, \, m_2 \rangle.

Using the non-Abelian ETH, we proved that many systems thermalize internally, despite conserving incompatible quantities. Yet the incompatibility complicates the proof enormously, extending it from half a page to several pages. Also, under certain conditions, incompatible quantities may alter thermalization. According to the conventional ETH, time-averaged expectation values \overline{ \langle \mathcal{O} \rangle }_t come to equal thermal expectation values \langle \mathcal{O} \rangle_{\rm th} to within O( N^{-1} ) corrections, as I explained last month. The correction can grow polynomially larger in the system size, to O( N^{-1/2} ), if conserved quantities are incompatible. Our conclusion holds under an assumption that we argue is physically reasonable.

So incompatible conserved quantities do alter the ETH, yet another thermodynamic expectation. Physicist Jae Dong Noh began checking the non-Abelian ETH numerically, and more testing is underway. And I’m looking forward to returning to the KITP this fall. Tales do say that paradise is a garden.

View through my office window at the KITP

1Not that married people always trust each other’s judgment.

2The reason is Schur’s lemma, a group-theoretic result. Appendix A of this paper explains the details.

Caltech’s Ginsburg Center

Editor’s note: On 10 August 2023, Caltech celebrated the groundbreaking for the Dr. Allen and Charlotte Ginsburg Center for Quantum Precision Measurement, which will open in 2025. At a lunch following the ceremony, John Preskill made these remarks.

Rendering of the facade of the Ginsburg Center

Hello everyone. I’m John Preskill, a professor of theoretical physics at Caltech, and I’m honored to have this opportunity to make some brief remarks on this exciting day.

In 2025, the Dr. Allen and Charlotte Ginsburg Center for Quantum Precision Measurement will open on the Caltech campus. That will certainly be a cause for celebration. Quite fittingly, in that same year, we’ll have something else to celebrate — the 100th anniversary of the formulation of quantum mechanics in 1925. In 1900, it had become clear that the physics of the 19th century had serious shortcomings that needed to be addressed, and for 25 years a great struggle unfolded to establish a firm foundation for the science of atoms, electrons, and light; the momentous achievements of 1925 brought that quest to a satisfying conclusion. No comparably revolutionary advance in fundamental science has occurred since then.

For 98 years now we’ve built on those achievements of 1925 to arrive at a comprehensive understanding of much of the physical world, from molecules to materials to atomic nuclei and exotic elementary particles, and much else besides. But a new revolution is in the offing. And the Ginsburg Center will arise at just the right time and at just the right place to drive that revolution forward.

Up until now, most of what we’ve learned about the quantum world has resulted from considering the behavior of individual particles. A single electron propagating as a wave through a crystal, unfazed by barriers that seem to stand in its way. Or a single photon, bouncing hundreds of times between mirrors positioned kilometers apart, dutifully tracking the response of those mirrors to gravitational waves from black holes that collided in a galaxy billions of light years away. Understanding that single-particle physics has enabled us to explore nature in unprecedented ways, and to build information technologies that have profoundly transformed our lives.

At the groundbreaking: Physics, Math and Astronomy Chair Fiona Harrison, California Assemblymember Chris Holden, President Tom Rosenbaum, Charlotte Ginsburg, Dr. Allen Ginsburg, Pasadena Mayor Victor Gordo, Provost Dave Tirrell.

What’s happening now is that we’re getting increasingly adept at instructing particles to move in coordinated ways that can’t be accurately described in terms of the behavior of one particle at a time. The particles, as we like to say, can become entangled. Many particles, like electrons or photons or atoms, when highly entangled, exhibit an extraordinary complexity that we can’t capture with the most powerful of today’s supercomputers, or with our current theories of how Nature works. That opens extraordinary opportunities for new discoveries and new applications.

We’re very proud of the role Caltech has played in setting the stage for the next quantum revolution. Richard Feynman envisioning quantum computers that far surpass the computers we have today. Kip Thorne proposing ways to use entangled photons to perform extraordinarily precise measurements. Jeff Kimble envisioning and executing ingenious methods for entangling atoms and photons. Jim Eisenstein creating and studying extraordinary phenomena in a soup of entangled electrons. And much more besides. But far greater things are yet to come.

How can we learn to understand and exploit the behavior of many entangled particles that work together? For that, we’ll need many scientists and engineers who work together. I joined the Caltech faculty in August 1983, almost exactly 40 years ago. These have been 40 good years, but I’m having more fun now than ever before. My training was in elementary particle physics. But as our ability to manipulate the quantum world advances, I find that I have more and more in common with my colleagues from different specialties. To fully realize my own potential as a researcher and a teacher, I need to stay in touch with atomic physics, condensed matter physics, materials science, chemistry, gravitational wave physics, computer science, electrical engineering, and much else. Even more important, that kind of interdisciplinary community is vital for broadening the vision of the students and postdocs in our research groups.

Nurturing that community — that’s what the Ginsburg Center is all about. That’s what will happen there every day. That sense of a shared mission, enhanced by colocation, will enable the Ginsburg Center to lead the way as quantum science and technology becomes increasingly central to Caltech’s research agenda in the years ahead, and increasingly important for science and engineering around the globe. And I just can’t wait for 2025.

Caltech is very fortunate to have generous and visionary donors like the Ginsburgs and the Sherman Fairchild Foundation to help us realize our quantum dreams.

Dr. Allen and Charlotte Ginsburg

It from Qubit: The Last Hurrah

Editor’s note: Since 2015, the Simons Foundation has supported the “It from Qubit” collaboration, a group of scientists drawing on ideas from quantum information theory to address deep issues in fundamental physics. The collaboration held its “Last Hurrah” event at Perimeter Institute last week. Here is a transcript of remarks by John Preskill at the conference dinner.

It from Qubit 2023 at Perimeter Institute

This meeting is forward-looking, as it should be, but it’s fun to look back as well, to assess and appreciate the progress we’ve made. So my remarks may meander back and forth through the years. Settle back — this may take a while.

We proposed the It from Qubit collaboration in March 2015, in the wake of several years of remarkable progress. Interestingly, that progress was largely provoked by an idea that most of us think is wrong: Black hole firewalls. Wrong perhaps, but challenging to grapple with.

This challenge accelerated a synthesis of quantum computing, quantum field theory, quantum matter, and quantum gravity as well. By 2015, we were already appreciating the relevance to quantum gravity of concepts like quantum error correction, quantum computational complexity, and quantum chaos. It was natural to assemble a collaboration in which computer scientists and information theorists would participate along with high-energy physicists.

We built our proposal around some deep questions where further progress seemed imminent, such as these:

Does spacetime emerge from entanglement?
Do black holes have interiors?
What is the information-theoretical structure of quantum field theory?
Can quantum computers simulate all physical phenomena?

On April 30, 2015 we presented our vision to the Simons Foundation, led by Patrick [Hayden] and Matt [Headrick], with Juan [Maldacena], Lenny [Susskind] and me tagging along. We all shared at that time a sense of great excitement; that feeling must have been infectious, because It from Qubit was successfully launched.

Some It from Qubit investigators at a 2015 meeting.

Since then ideas we talked about in 2015 have continued to mature, to ripen. Now our common language includes ideas like islands and quantum extremal surfaces, traversable wormholes, modular flow, the SYK model, quantum gravity in the lab, nonisometric codes, the breakdown of effective field theory when quantum complexity is high, and emergent geometry described by von Neumann algebras. In parallel, we’ve seen a surge of interest in quantum dynamics in condensed matter, focused on issues like how entanglement spreads, and how chaotic systems thermalize — progress driven in part by experimental advances in quantum simulators, both circuit-based and analog.

Why did we call ourselves “It from Qubit”? Patrick explained that in our presentation with a quote from John Wheeler in 1990. Wheeler said,

“It from bit” symbolizes the idea that every item of the physical world has at bottom—a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-or-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe.

As is often the case with Wheeler, you’re not quite sure what he’s getting at. But you can glean that Wheeler envisioned that progress in fundamental physics would be hastened by bringing in ideas from information theory. So we updated Wheeler’s vision by changing “it from bit” to “it from qubit.”

As you may know, Richard Feynman had been Wheeler’s student, and he once said this about Wheeler: “Some people think Wheeler’s gotten crazy in his later years, but he’s always been crazy.” So you can imagine how flattered I was when Graeme Smith said the exact same thing about me.

During the 1972-73 academic year, I took a full-year undergraduate course from Wheeler at Princeton that covered everything in physics, so I have a lot of Wheeler stories. I’ll just tell one, which will give you some feel for his teaching style. One day, Wheeler arrives in class dressed immaculately in a suit and tie, as always, and he says: “Everyone take out a sheet of paper, and write down all the equations of physics – don’t leave anything out.” We dutifully start writing equations. The Schrödinger equation, Newton’s laws, Maxwell’s equations, the definition of entropy and the laws of thermodynamics, Navier-Stokes … we had learned a lot. Wheeler collects all the papers, and puts them in a stack on a table at the front of the classroom. He gestures toward the stack and says imploringly “Fly!” [Long pause.] Nothing happens. He tries again, even louder this time: “Fly!” [Long pause.] Nothing happens. Then Wheeler concludes: “On good authority, this stack of papers contains all the equations of physics. But it doesn’t fly. Yet, the universe flies. Something must be missing.”

Channeling Wheeler at the banquet, I implore my equations to fly. Photo by Jonathan Oppenheim.

He was an odd man, but inspiring. And not just odd, but also old. We were 19 and could hardly believe he was still alive — after all, he had worked with Bohr on nuclear fission in the 1930s! He was 61. I’m wiser now, and know that’s not really so old.

Now let’s skip ahead to 1998. Just last week, Strings 2023 happened right here at PI. So it’s fitting to mention that a pivotal Strings meeting occurred 25 years ago, Strings 1998 in Santa Barbara. The participants were in a celebratory mood, so much so that Jeff Harvey led hundreds of physicists in a night of song and dance. It went like this [singing to the tune of “The Macarena”]:

You start with the brane
and the brane is BPS.
Then you go near the brane
and the space is AdS.
Who knows what it means?
I don’t, I confess.
Ehhhh! Maldacena!

You can’t blame them for wanting to celebrate. Admittedly I wasn’t there, so how did I know that hundreds of physicists were singing and dancing? I read about it in the New York Times!

It was significant that by 1998, the Strings meetings had already been held annually for 10 years. You might wonder how that came about. Let’s go back to 1984. Those of you who are too young to remember might not realize that in the late 70s and early 80s string theory was in eclipse. It had initially been proposed as a model of hadrons, but after the discovery of asymptotic freedom in 1973 quantum chromodynamics became accepted as the preferred theory of the strong interactions. (Maybe the QCD string will make a comeback someday – we’ll see.) The community pushing string theory forward shrank to a handful of people around the world. That changed very abruptly in August 1984. I tried to capture that sudden change in a poem I wrote for John Schwarz’s 60th birthday in 2001. I’ll read it — think of this as a history lesson.

Thirty years ago or more
John saw what physics had in store.
He had a vision of a string
And focused on that one big thing.

But then in nineteen-seven-three
Most physicists had to agree
That hadrons blasted to debris
Were well described by QCD.

The string, it seemed, by then was dead.
But John said: “It’s space-time instead!
The string can be revived again.
Give masses twenty powers of ten!

Then Dr. Green and Dr. Black,
Writing papers by the stack,
Made One, Two-A, and Two-B glisten.
Why is it none of us would listen?

We said, “Who cares if super tricks
Bring D to ten from twenty-six?
Your theory must have fatal flaws.
Anomalies will doom your cause.”

If you weren’t there you couldn’t know
The impact of that mighty blow:
“The Green-Schwarz theory could be true —
It works for S-O-thirty-two!”

Then strings of course became the rage
And young folks of a certain age
Could not resist their siren call:
One theory that explains it all.

Because he never would give in,
Pursued his dream with discipline,
John Schwarz has been a hero to me.
So … please don’t spell it with a  “t”!

And 39 years after the revolutionary events of 1984, the intellectual feast launched by string theory still thrives.

In the late 1980s and early 1990s, many high-energy physicists got interested in the black hole information problem. Of course, the problem was 15 years old by then; it arose when Hawking radiation was discovered, as Hawking himself pointed out shortly thereafter. But many of us were drawn to this problem while we waited for the Superconducting Super Collider to turn on. As I have sometimes done when I wanted to learn something, in 1990 I taught a course on quantum field theory in curved spacetime, the main purpose of which was to explain the origin of Hawking radiation, and then for a few years I tried to understand whether information can escape from black holes and if so how, as did many others in those days. That led to a 1992 Aspen program co-organized by Andy Strominger and me on “Quantum Aspects of Black Holes.” Various luminaries were there, among them Hawking, Susskind, Sidney Coleman, Kip Thorne, Don Page, and others. Andy and I were asked to nominate someone from our program to give the Aspen Center colloquium, so of course we chose Lenny, and he gave an engaging talk on “The Puzzle of Black Hole Evaporation.”

At the end of the talk, Lenny reported on discussions he’d had with various physicists he respected about the information problem, and he summarized their views. Of course, Hawking said information is lost. ‘t Hooft said that the S-matrix must be unitary for profound reasons we needed to understand. Polchinski said in 1992 that information is lost and there is no way to retrieve it. Yakir Aharonov said that the information resides in a stable Planck-sized black hole remnant. Sidney Coleman said a black hole is a lump of coal — that was the code in 1992 for what we now call the central dogma of black hole physics, that as seen from the outside a black hole is a conventional quantum system. And – remember this was Lenny’s account of what he claimed people had told him – Frank Wilczek said this is a technical problem, I’ll soon have it solved, while Ed Witten said he did not find the problem interesting.

We talked a lot that summer about the no-cloning principle, and our discomfort with the notion that the quantum information encoded in an infalling encyclopedia could be in two places at once on the same time slice, seen inside the black hole by infalling observers and seen outside the black hole by observers who peruse the Hawking radiation. That potential for cloning shook the faith of the self-appointed defenders of unitarity. Andy and I wrote a report at the end of the workshop with a pessimistic tone:

There is an emerging consensus among the participants that Hawking is essentially right – that the information loss paradox portends a true revolution in fundamental physics. If so, then one must go further, and develop a sensible “phenomenological” theory of information loss. One must reconcile the fact of information loss with established principles of physics, such as locality and energy conservation. We expect that many people, stimulated by their participation in the workshop, will now focus attention on this challenge.

I posted a paper on the arXiv a month later with a similar outlook.

There was another memorable event a year later, in June 1993, a conference at the ITP in Santa Barbara (there was no “K” back then), also called “Quantum Aspects of Black Holes.” Among those attending were Susskind, Gibbons, Polchinski, Thorne, Wald, Israel, Bekenstein, and many others. By then our mood was brightening. Rather pointedly, Lenny said to me that week: “Why is this meeting so much better than the one you organized last year?” And I replied, “Because now you think you know the answer!”

That week we talked about “black hole complementarity,” our hope that quantum information being available both inside and outside the horizon could be somehow consistent with the linearity of quantum theory. Complementarity then was a less radical, less wildly nonlocal idea than it became later on. We envisioned that information in an infalling body could stick to the stretched horizon, but not, as I recall, that the black hole interior would be somehow encoded in Hawking radiation emitted long ago — that came later. But anyway, we felt encouraged.

Joe Polchinski organized a poll of the participants, where one could choose among four options.

  1. Information is lost (unitarity violated)
  2. Information escapes (causality violated)
  3. Planck-scale black hole remnants
  4. None of the above

The poll results favored unitarity over information loss by a 60-40 margin. Perhaps not coincidentally, the participants self-identified as 60% high energy physicists and 40% relativists.

The following summer in June 1994, there was a program called Geometry and Gravity at the Newton Institute in Cambridge. Hawking, Gibbons, Susskind, Strominger, Harvey, Sorkin, and (Herman) Verlinde were among the participants. I had more discussions with Lenny that month than any time before or since. I recall sending an email to Paul Ginsparg after one such long discussion in which I said, “When I hear Lenny Susskind speak, I truly believe that information can come out of a black hole.” Secretly, though, having learned about Shor’s algorithm shortly before that program began, I was spending my evenings struggling to understand Shor’s paper. After Cambridge, Lenny visited ‘t Hooft in Utrecht, and returned to Stanford all charged up to write his paper on “The world as a hologram,” in which he credits ‘t Hooft with the idea that “the world is in a sense two-dimensional.”

Important things happened in the next few years: D-branes, counting of black hole microstates, M-theory, and AdS/CFT. But I’ll skip ahead to the most memorable of my visits to Perimeter Institute. (Of course, I always like coming here, because in Canada you use the same electrical outlets we do …)

In June 2007, there was a month-long program at PI called “Taming the Quantum World.” I recall that Lucien Hardy objected to that title — he preferred “Let the Beast Loose” — which I guess is a different perspective on the same idea. I talked there about fault-tolerant quantum computing, but more importantly, I shared an office with Patrick Hayden. I already knew Patrick well — he had been a Caltech postdoc — but I was surprised and pleased that he was thinking about black holes. Patrick had already reached crucial insights concerning the behavior of a black hole that is profoundly entangled with its surroundings. That sparked intensive discussions resulting in a paper later that summer called “Black holes as mirrors.” In the acknowledgments you’ll find this passage:

We are grateful for the hospitality of the Perimeter Institute, where we had the good fortune to share an office, and JP thanks PH for letting him use the comfortable chair.

We intended for that paper to pique the interest of both the quantum information and quantum gravity communities, as it seemed to us that the time was ripe to widen the communication channel between the two. Since then, not only has that communication continued, but a deeper synthesis has occurred; most serious quantum gravity researchers are now well acquainted with the core concepts of quantum information science.

That John Schwarz poem I read earlier reminds me that I often used to write poems. I do it less often lately. Still, I feel that you are entitled to hear something that rhymes tonight. But I quickly noticed our field has many words that are quite hard to rhyme, like “chaos” and “dogma.” And perhaps the hardest of all: “Takayanagi.” So I decided to settle for some limericks — that’s easier for me than a full-fledged poem.

This first one captures how I felt when I first heard about AdS/CFT: excited but perplexed.

Spacetime is emergent they say.
But emergent in what sort of way?
It’s really quite cool,
The bulk has a dual!
I might understand that someday.

For a quantum information theorist, it was pleasing to learn later on that we can interpret the dictionary as an encoding map, such that the bulk degrees of freedom are protected when a portion of the boundary is erased.

Almheiri and Harlow and Dong
Said “you’re thinking about the map wrong.”
It’s really a code!
That’s the thing that they showed.
Should we have known that all along?

(It is easier to rhyme “Dong” than “Takayanagi”.) To see that connection one needed a good grasp of both AdS/CFT and quantum error-correcting codes. In 2014 few researchers knew both, but those guys did.

For all our progress, we still don’t have a complete answer to a key question that inspired IFQ. What’s inside a black hole?

Information loss has been denied.
Locality’s been cast aside.
When the black hole is gone
What fell in’s been withdrawn.
I’d still like to know: what’s inside?

We’re also still lacking an alternative nonperturbative formulation of the bulk; we can only say it’s something that’s dual to the boundary. Until we can define both sides of the correspondence, the claim that two descriptions are equivalent, however inspiring, will remain unsatisfying.

Duality I can embrace.
Complexity, too, has its place.
That’s all a good show
But I still want to know:
What are the atoms of space?

The question, “What are the atoms of space?” is stolen from Joe Polchinski, who framed it to explain to a popular audience what we’re trying to answer. I miss Joe. He was a founding member of It from Qubit, an inspiring scientific leader, and still an inspiration for all of us today.

The IFQ Simons collaboration may fade away, but the quest that has engaged us these past 8 years goes on. IFQ is the continuation of a long struggle, which took on great urgency with Hawking’s formulation of the information loss puzzle nearly 50 years ago. Understanding quantum gravity and its implications is a huge challenge and a grand quest that humanity is obligated to pursue. And it’s fun and it’s exciting, and I sincerely believe that we’ve made remarkable progress in recent years, thanks in large part to you, the IFQ community. We are privileged to live at a time when truths about the nature of space and time are being unveiled. And we are privileged to be part of this community, with so many like-minded colleagues pulling in the same direction, sharing the joy of facing this challenge.

Where is it all going? Coming back to our pitch to the Simons Foundation in 2015, I was very struck by Juan’s presentation that day, and in particular his final slide. I liked it so much that I stole it and used in my presentations for a while. Juan tried to explain what we’re doing by means of an analogy to biological science. How are the quantumists like the biologists?

Well, bulk quantum gravity is life. We all want to understand life. The boundary theory is chemistry, which underlies life. The quantum information theorists are chemists; they want to understand chemistry in detail. The quantum gravity theorists are biologists, they think chemistry is fine, if it can really help them to understand life. What we want is: molecular biology, the explanation for how life works in terms of the underlying chemistry. The black hole information problem is our fruit fly, the toy problem we need to solve before we’ll be ready to take on a much bigger challenge: finding the cure for cancer; that is, understanding the big bang.

How’s it going? We’ve made a lot of progress since 2015. We haven’t cured cancer. Not yet. But we’re having a lot of fun along the way there.

I’ll end with this hope, addressed especially to those who were not yet born when AdS/CFT was first proposed, or were still scampering around in your playpens. I’ll grant you a reprieve, you have another 8 years. By then: May you cure cancer!

So I propose this toast: To It from Qubit, to our colleagues and friends, to our quest, to curing cancer, to understanding the universe. I wish you all well. Cheers!

The Book of Mark

Mark Srednicki doesn’t look like a high priest. He’s a professor of physics at the University of California, Santa Barbara (UCSB); and you’ll sooner find him in khakis than in sacred vestments. Humor suits his round face better than channeling divine wrath would; and I’ve never heard him speak in tongues—although, when an idea excites him, his hands rise to shoulder height of their own accord, as though halfway toward a priestly blessing. Mark belongs less on a ziggurat than in front of a chalkboard. Nevertheless, he called himself a high priest.

Specifically, Mark jokingly called himself a high priest of the eigenstate thermalization hypothesis, a framework for understanding how quantum many-body systems thermalize internally. The eigenstate thermalization hypothesis has an unfortunate number of syllables, so I’ll call it the ETH. The ETH illuminates closed quantum many-body systems, such as a clump of N ultracold atoms. The clump can begin in a pure product state | \psi(0) \rangle, then evolve under a chaotic1 Hamiltonian H. The time-t state | \psi(t) \rangle will remain pure; its von Neumann entropy will always vanish. Yet entropy grows according to the second law of thermodynamics. Breaking the second law amounts almost to enacting a miracle, according to physicists. Does the clump of atoms deserve consideration for sainthood?

No—although the clump’s state remains pure, a small subsystem’s state does not. A subsystem consists of, for example, a few atoms. They’ll entangle with the other atoms, which serve as an effective environment. The entanglement will mix the few atoms’ state, whose von Neumann entropy will grow.
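In symbols (these are just the standard definitions, nothing special to this post): if A denotes the few chosen atoms and B the rest of the clump, the few atoms are described by the reduced state \rho_A(t) = {\rm Tr}_B \, | \psi(t) \rangle \langle \psi(t) |, and the entropy that grows is the von Neumann entropy

S( \rho_A(t) ) = - {\rm Tr} [ \rho_A(t) \ln \rho_A(t) ],

which starts at zero for a product state and rises as A entangles with B.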

The ETH predicts this growth. The ETH is an ansatz about H and an operator O—say, an observable of the few-atom subsystem. We can represent O as a matrix relative to the energy eigenbasis. The matrix elements have a certain structure, if O and H satisfy the ETH. Suppose that the operators do and that H lacks degeneracies—that no two energy eigenvalues equal each other. We can prove that O thermalizes: Imagine measuring the expectation value \langle \psi(t) | O | \psi(t) \rangle at each of many instants t. Averaging over instants produces the time-averaged expectation value \overline{ \langle O \rangle_t }
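The ‘certain structure’ is the familiar ETH ansatz, in the form popularized by Srednicki (I’m quoting the standard version; conventions vary slightly across papers): writing O_{mn} := \langle E_m | O | E_n \rangle for energy eigenstates | E_m \rangle,

O_{mn} = O(\bar{E}) \, \delta_{mn} + e^{-S(\bar{E})/2} \, f_O(\bar{E}, \omega) \, R_{mn},

where \bar{E} := (E_m + E_n)/2, \omega := E_n - E_m, S(\bar{E}) is the thermodynamic entropy, O(\bar{E}) and f_O are smooth functions, and R_{mn} is an erratic number of order one. The smooth diagonal part fixes the thermal value; the exponentially suppressed off-diagonal part lets fluctuations about it die down.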

Another average is the thermal average—the expectation value of O in the appropriate thermal state. If H conserves just itself,2 the appropriate thermal state is the canonical state, \rho_{\rm can} := e^{-\beta H}/ Z. The average energy \langle \psi(0) | H | \psi(0) \rangle defines the inverse temperature \beta, and Z normalizes the state. Hence the thermal average is \langle O \rangle_{\rm th}  :=  {\rm Tr} ( O \rho_{\rm can} )

The time average approximately equals the thermal average, according to the ETH: \overline{ \langle O \rangle_t }  =  \langle O \rangle_{\rm th} + O \big( N^{-1} \big). The correction is small in the total number N of atoms. Through the lens of O, the atoms thermalize internally. Local observables tend to satisfy the ETH, and we can easily observe only local observables. We therefore usually observe thermalization, consistently with the second law of thermodynamics.
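If you’d like to watch this happen numerically, here is a minimal sketch of my own (none of this code comes from the post; the chain, the field values, the initial state, and the observable are all illustrative choices): exact diagonalization of a small nonintegrable Ising chain, comparing the infinite-time average of a single-site observable with the canonical average at the inverse temperature fixed by the initial energy. For a chain this small, expect only rough agreement, consistent with a correction that shrinks as N grows.

```python
# Minimal ETH-style check (illustration only): long-time average vs. canonical average
# of a local observable in a small nonintegrable Ising chain.
import numpy as np
from scipy.optimize import brentq

# Single-qubit operators
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def on_site(op, site, n):
    """Embed a single-qubit operator on a given site of an n-qubit chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

n = 8  # qubits; 2^8 = 256, so exact diagonalization is cheap

# Nonintegrable Ising chain: ZZ couplings plus transverse and longitudinal fields
H = sum(on_site(sz, k, n) @ on_site(sz, k + 1, n) for k in range(n - 1))
H = H + sum(0.9045 * on_site(sx, k, n) + 0.8090 * on_site(sz, k, n) for k in range(n))
E, V = np.linalg.eigh(H)

# Initial pure product state |0101...01> and a local observable: sz on the middle site
psi0 = np.zeros(2**n, dtype=complex)
psi0[int('01' * (n // 2), 2)] = 1.0
O = on_site(sz, n // 2, n)

# Infinite-time average = dephasing in the energy eigenbasis (nondegenerate spectrum assumed)
c = V.conj().T @ psi0
O_eig = V.conj().T @ O @ V
time_avg = float(np.real(np.sum(np.abs(c)**2 * np.diag(O_eig))))

# Canonical average at the inverse temperature beta fixed by the initial energy
E0 = float(np.real(psi0.conj() @ H @ psi0))
def mean_energy(beta):
    w = np.exp(-beta * (E - E.min()))
    w /= w.sum()
    return float(np.sum(w * E))
beta = brentq(lambda b: mean_energy(b) - E0, -5.0, 5.0)
w = np.exp(-beta * (E - E.min()))
w /= w.sum()
thermal_avg = float(np.real(np.sum(w * np.diag(O_eig))))

print(f"long-time average of <sz_mid>: {time_avg:.3f}")
print(f"canonical average of <sz_mid>: {thermal_avg:.3f}")
```

An instructive contrast, by the way, is to set the longitudinal field to zero: the chain then becomes integrable, the ETH need not hold, and the two averages typically drift further apart.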

I agree that Mark Srednicki deserves the title high priest of the ETH. He and Joshua Deutsch independently dreamed up the ETH in 1994 and 1991. Since numericists reexamined it in 2008, studies and applications of the ETH have exploded like a desert religion. Yet Mark had never encountered the question I posed about it in 2021. Next month’s blog post will share the good news about that question.

1Nonintegrable.

2Apart from trivial quantities, such as projectors onto eigenspaces of H.