Make use of time, let not advantage slip

During the spring of 2022, I felt as though I kept dashing backward and forward in time. 

At the beginning of the season, hay fever plagued me in Maryland. Then, I left to present talks in southern California. There—closer to the equator—rose season had peaked, and wisteria petals covered the ground near Caltech’s physics building. From California, I flew to Canada to present a colloquium. Time rewound as I traveled northward; allergies struck again. After I returned to Maryland, the spring ripened almost into summer. But the calendar backtracked when I flew to Sweden: tulips and lilacs surrounded me again.

Caltech wisteria in April 2022: Thou art lovely and temperate.

The zigzagging through horticultural time disoriented my nose, but I couldn’t complain: it echoed the quantum information processing that collaborators and I would propose that summer. We showed how to improve quantum metrology—our ability to measure things, using quantum detectors—by simulating closed timelike curves.

Swedish wildflowers in June 2022

A closed timelike curve is a trajectory that loops back on itself in spacetime. If on such a trajectory, you’ll advance forward in time, reverse chronological direction to advance backward, and then reverse again. Author Jasper Fforde illustrates closed timelike curves in his novel The Eyre Affair. A character named Colonel Next buys an edition of Shakespeare’s works, travels to the Elizabethan era, bestows it on a Brit called Will, and then returns to his family. Will copies out the plays and stages them. His colleagues publish the plays after his death, and other editions ensue. Centuries later, Colonel Next purchases one of those editions to take to the Elizabethan era.1 

Closed timelike curves can exist according to Einstein’s general theory of relativity. But do they exist? Nobody knows. Many physicists expect not. But a quantum system can simulate a closed timelike curve, undergoing a process modeled by the same mathematics.

How can one formulate closed timelike curves in quantum theory? Oxford physicist David Deutsch proposed one formulation; a team led by MIT’s Seth Lloyd proposed another. Correlations distinguish the proposals. 

Two entities share correlations if a change in one entity tracks a change in the other. Two classical systems can correlate; for example, your brain is correlated with mine, now that you’ve read writing I’ve produced. Quantum systems can correlate more strongly than classical systems can, as by entangling.

Suppose Colonel Next correlates two nuclei and gives one to his daughter before embarking on his closed timelike curve. Once he completes the loop, what relationship does Colonel Next’s nucleus share with his daughter’s? The nuclei retain the correlations they shared before Colonel Next entered the loop, according to Seth and collaborators. When referring to closed timelike curves from now on, I’ll mean ones of Seth’s sort.

Toronto hadn’t bloomed by May 2022.

We can simulate closed timelike curves by subjecting a quantum system to a circuit of the type illustrated below. We read the diagram from bottom to top. Along this direction, time—as measured by a clock at rest with respect to the laboratory—progresses. Each vertical wire represents a qubit—a basic unit of quantum information, encoded in an atom or a photon or the like. Each horizontal slice of the diagram represents one instant. 

At the bottom of the diagram, the two vertical wires sprout from one curved wire. This feature signifies that the experimentalist prepares the qubits in an entangled state, represented by the symbol | \Psi_- \rangle. Farther up, the left-hand wire runs through a box. The box signifies that the corresponding qubit undergoes a transformation (for experts: a unitary evolution). 

At the top of the diagram, the vertical wires fuse again: the experimentalist measures whether the qubits are in the state they began in. The measurement is probabilistic; we (typically) can’t predict the outcome in advance, due to the uncertainty inherent in quantum physics. If the measurement yields the yes outcome, the experimentalist has simulated a closed timelike curve. If the no outcome results, the experimentalist should scrap the trial and try again.
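For readers who like to tinker, the postselection’s success probability takes only a few lines of linear algebra. Below is a minimal NumPy sketch of the mathematics, not of any laboratory protocol, and the variable names are mine. It prepares the singlet state | \Psi_- \rangle, applies a random unitary to the left-hand qubit, and projects back onto the singlet; because the singlet is maximally entangled, the success probability works out to | {\rm Tr} \, U |^2 / 4.

```python
import numpy as np

# Singlet state |Psi_-> = (|01> - |10>)/sqrt(2), in the basis |00>, |01>, |10>, |11>
psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)

# A random single-qubit unitary, from the QR decomposition of a complex Gaussian matrix
rng = np.random.default_rng(seed=0)
mat = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
u, _ = np.linalg.qr(mat)

# Apply U to the left-hand qubit only: U tensor identity
evolved = np.kron(u, np.eye(2)) @ psi_minus

# Probability that the final measurement yields "yes": qubits found back in |Psi_->
p_yes = abs(np.vdot(psi_minus, evolved)) ** 2

# For the singlet, this probability equals |Tr(U)|^2 / 4
print(p_yes, abs(np.trace(u)) ** 2 / 4)
```

One curiosity falls out immediately: if the unitary is traceless, as the Pauli gates are, the yes outcome never occurs, and every trial must be scrapped.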

So much for interpreting the diagram above as a quantum circuit. We can reinterpret the illustration as a closed timelike curve. You’ve probably guessed as much, comparing the circuit diagram to the depiction, farther above, of Colonel Next’s journey. According to the second interpretation, the loop represents one particle’s trajectory through spacetime. The bottom and top show the particle reversing chronological direction—resembling me as I flew to or from southern California.

Me in southern California in spring 2022. Photo courtesy of Justin Dressel.

How can we apply closed timelike curves in quantum metrology? In Fforde’s books, Colonel Next has a brother, named Mycroft, who’s an inventor.2 Suppose that Mycroft is studying how two particles interact (e.g., by an electric force). He wants to measure the interaction’s strength. Mycroft should prepare one particle—a sensor—and expose it to the second particle. He should wait for some time, then measure how much the interaction has altered the sensor’s configuration. The degree of alteration implies the interaction’s strength. The particles can be quantum, if Mycroft lives not merely in Sherlock Holmes’s world, but in a quantum-steampunk one.

But how should Mycroft prepare the sensor—in which quantum state? Certain initial states will enable the sensor to acquire ample information about the interaction; and others, no information. Mycroft can’t know which preparation will work best: the optimal preparation depends on the interaction, which he hasn’t measured yet. 

Mycroft, as drawn by Sydney Paget in the 1890s

Mycroft can overcome this dilemma via a strategy published by my collaborator David Arvidsson-Shukur, his recent student Aidan McConnell, and me. According to our protocol, Mycroft entangles the sensor with a third particle. He subjects the sensor to the interaction (coupling the sensor to particle #2) and measures the sensor. 

Then, Mycroft learns about the interaction—learns which state he should have prepared the sensor in earlier. He effectively teleports this state backward in time to the beginning-of-protocol sensor, using particle #3 (which began entangled with the sensor).3 Quantum teleportation is a decades-old information-processing task that relies on entanglement manipulation. The protocol can transmit quantum states over arbitrary distances—or, effectively, across time.

We can view Mycroft’s experiment in two ways. Using several particles, he manipulates entanglement to measure the interaction strength optimally (with the best possible precision). This process is mathematically equivalent to another. In the latter process, Mycroft uses only one sensor. It comes forward in time, reverses chronological direction (after Mycroft learns the optimal initial state’s form), backtracks to an earlier time (to when the sensing protocol began), and returns to progressing forward in time (informing Mycroft about the interaction).

Where I stayed in Stockholm. I swear, I’m not making this up.

In Sweden, I regarded my work with David and Aidan as a lark. But it’s led to an experiment, another experiment, and two papers set to debut this winter. I even pass as a quantum metrologist nowadays. Perhaps I should have anticipated the metamorphosis, as I should have anticipated the extra springtimes that erupted as I traveled between north and south. As the bard says, there’s a time for all things.

More Swedish wildflowers from June 2022

1In the sequel, Fforde adds a twist to Next’s closed timelike curve. I can’t speak for the twist’s plausibility or logic, but it makes for delightful reading, so I commend the novel to you.

2You might recall that Sherlock Holmes has a brother, named Mycroft, who’s an inventor. Why? In Fforde’s novel, an evil corporation pursues Mycroft, who’s built a device that can transport him into the world of a book. Mycroft uses the device to hide from the corporation in Sherlock Holmes’s backstory.

3Experts, Mycroft implements the effective teleportation as follows: He prepares a fourth particle in the ideal initial sensor state. Then, he performs a two-outcome entangling measurement on particles 3 and 4: he asks “Are particles 3 and 4 in the state in which particles 1 and 3 began?” If the measurement yields the yes outcome, Mycroft has effectively teleported the ideal sensor state backward in time. He’s also simulated a closed timelike curve. If the measurement yields the no outcome, Mycroft fails to measure the interaction optimally. Figure 1 in our paper synopsizes the protocol.

A (quantum) complex legacy: Part trois

When I worked in Cambridge, Massachusetts, a friend reported that MIT’s postdoc association had asked its members how it could improve their lives. The friend confided his suggestion to me: throw more parties.1 This year grants his wish on a scale grander than any postdoc association could. The United Nations has designated 2025 as the International Year of Quantum Science and Technology (IYQ), as you’ve heard unless you live under a rock (or without media access—which, come to think of it, sounds not unappealing).

A metaphorical party cracker has been cracking since January. Governments, companies, and universities are trumpeting investments in quantum efforts. Institutions pulled out all the stops for World Quantum Day, which happens every April 14 but which scored a Google doodle this year. The American Physical Society (APS) suffused its Global Physics Summit in March with quantum science like a Bath & Body Works shop with the scent of Pink Pineapple Sunrise. At the summit, special symposia showcased quantum research, fellow blogger John Preskill dished about quantum-science history in a dinnertime speech, and a “quantum block party” took place one evening. I still couldn’t tell you what a quantum block party is, but this one involved glow sticks.

Google doodle from April 14, 2025

Attending the summit, I felt a satisfaction—an exultation, even—redolent of twelfth grade, when American teenagers summit the Mont Blanc of high school. It was the feeling that this year is our year. Pardon me while I hum “Time of your life.”2

Speakers and organizer of a Kavli Symposium, a special session dedicated to interdisciplinary quantum science, at the APS Global Physics Summit

Just before the summit, editors of the journal PRX Quantum released a special collection in honor of the IYQ.3 The collection showcases a range of advances, from chemistry to quantum error correction and from atoms to attosecond-length laser pulses. Collaborators and I contributed a paper about quantum complexity, a term that has as many meanings as companies have broadcast quantum news items within the past six months. But I’ve already published two Quantum Frontiers posts about complexity, and you surely study this blog as though it were the Bible, so we’re on the same page, right? 

Just joshing. 

Imagine you have a quantum computer that’s running a circuit. The computer consists of qubits, such as atoms or ions. They begin in a simple, “fresh” state, like a blank notebook. Post-circuit, they store quantum information, such as entanglement, as a notebook stores information post-semester. We say that the qubits are in some quantum state. The state’s quantum complexity is the least number of basic operations, such as quantum logic gates, needed to create that state—via the just-completed circuit or any other circuit.
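In symbols, one common way of pinning down the definition (conventions vary across papers, so treat this as representative rather than canonical):

\mathcal{C}( |\psi\rangle ) \;=\; \min \{ \, g \; : \; |\psi\rangle \approx U_g \cdots U_2 U_1 |0\rangle^{\otimes n} \, \},

where the minimum runs over all circuits of basic gates U_1, \ldots, U_g acting on the n qubits, and \approx denotes preparation up to a small error tolerance.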

Today’s quantum computers can’t create high-complexity states. The reason is, every quantum computer inhabits an environment that disturbs the qubits. Air molecules can bounce off them, for instance. Such disturbances corrupt the information stored in the qubits. Wait too long, and the environment will degrade too much of the information for the quantum computer to work. We call the threshold time the qubits’ lifetime, among more-obscure-sounding phrases. The lifetime limits the number of gates we can run per quantum circuit.

The ability to perform many quantum gates—to perform high-complexity operations—serves as a resource. Other quantities serve as resources, too, as you’ll know if you’re one of the three diehard Quantum Frontiers fans who’ve been reading this blog since 2014 (hi, Mom). Thermodynamic resources include work: coordinated energy that one can harness directly to perform a useful task, such as lifting a notebook or staying up late enough to find out what a quantum block party is. 

My collaborators: Jonas Haferkamp, Philippe Faist, Teja Kothakonda, Jens Eisert, and Anthony Munson (in an order of no significance here)

My collaborators and I showed that work trades off with complexity in information- and energy-processing tasks: the more quantum gates you can perform, the less work you have to spend on a task, and vice versa. Qubit reset exemplifies such tasks. Suppose you’ve filled a notebook with a calculation, you want to begin another calculation, and you have no more paper. You have to erase your notebook. Similarly, suppose you’ve completed a quantum computation and you want to run another quantum circuit. You have to reset your qubits to a fresh, simple state.

Three methods suggest themselves. First, you can “uncompute,” reversing every quantum gate you performed.4 This strategy requires a long lifetime: the information imprinted on the qubits by a gate mustn’t leak into the environment before you’ve undone the gate. 

Second, you can do the quantum equivalent of wielding a Pink Pearl Paper Mate: you can rub the information out of your qubits, regardless of the circuit you just performed. Thermodynamicists inventively call this strategy erasure. It requires thermodynamic work, just as applying a Paper Mate to a notebook does; a fundamental lower bound on that work appears below.

Third, you can combine the two strategies.

Suppose your qubits have finite lifetimes. You can undo as many gates as you have time to. Then, you can erase the rest of the qubits, spending work. How does complexity—your ability to perform many gates—trade off with work? My collaborators and I quantified the tradeoff in terms of an entropy we invented because the world didn’t have enough types of entropy.5
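How much work does erasure cost, at minimum? Landauer’s principle, a textbook bound rather than a result of our paper, sets the floor: erasing one bit while in contact with an environment at temperature T requires work

W \;\geq\; k_{\rm B} T \ln 2,

where k_{\rm B} is Boltzmann’s constant. The fewer qubits you manage to uncompute before their lifetimes run out, the more qubits you must erase, and the more multiples of this cost you pay.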

Complexity trades off with work not only in qubit reset, but also in data compression and likely other tasks. Quantum complexity, my collaborators and I showed, deserves a seat at the great soda fountain of quantum thermodynamics.

The great soda fountain of quantum thermodynamics

…as quantum information science deserves a seat at the great soda fountain of physics. When I embarked upon my PhD, faculty members advised me to undertake not only quantum-information research, but also some “real physics,” such as condensed matter. The latter would help convince physics departments that I was worth their money when I applied for faculty positions. By today, the tables have turned. A condensed-matter theorist I know has wound up an electrical-engineering professor because he calculates entanglement entropies.

So enjoy our year, fellow quantum scientists. Party like it’s 1925. Burnish those qubits—I hope they achieve the lifetimes of your life.

1Ten points if you can guess who the friend is.

2Whose official title, I didn’t realize until now, is “Good riddance.” My conception of graduation rituals has just turned a somersault. 

3PR stands for Physical Review, the brand of the journals published by the APS. The APS may have intended for the X to evoke exceptional, but I like to think it stands for something more exotic-sounding, like ex vita discedo, tanquam ex hospitio, non tanquam ex domo.

4Don’t ask me about the notebook analogue of uncomputing a quantum state. Explaining it would require another blog post.

5For more entropies inspired by quantum complexity, see this preprint. You might recognize two of the authors from earlier Quantum Frontiers posts if you’re one of the three…no, not even the three diehard Quantum Frontiers readers will recall; but trust me, two of the authors have received nods on this blog before.

I know I am but what are you? Mind and Matter in Quantum Mechanics

Nowadays it is best to exercise caution when bringing the words “quantum” and “consciousness” anywhere near each other, lest you be suspected of mysticism or quackery. Eugene Wigner did not concern himself with this when he wrote his “Remarks on the Mind-Body Question” in 1967. (Perhaps he was emboldened by his recent Nobel prize for contributions to the mathematical foundations of quantum mechanics, which gave him not a little no-nonsense technical credibility.) The mind-body question he addresses is the full-blown philosophical question of “the relation of mind to body”, and he argues unapologetically that quantum mechanics has a great deal to say on the matter. The workhorse of his argument is a thought experiment that now goes by the name “Wigner’s Friend”. About fifty years later, Daniela Frauchiger and Renato Renner formulated another, more complex thought experiment to address related issues in the foundations of quantum theory. In this post, I’ll introduce Wigner’s goals and argument, and evaluate Frauchiger’s and Renner’s claims of its inadequacy, concluding that these are not completely fair, but that their thought experiment does do something interesting and distinct. Finally, I will describe a recent paper of my own, in which I formalize the Frauchiger-Renner argument in a way that illuminates its status and isolates the mathematical origin of their paradox.

* * *

Wigner takes a dualist view about the mind, that is, he believes it to be non-material. To him this represents the common-sense view, yet it had only recently returned to the scientific mainstream. Indeed,

[until] not many years ago, the “existence” of a mind or soul would have been passionately denied by most physical scientists. The brilliant successes of mechanistic and, more generally, macroscopic physics and of chemistry overshadowed the obvious fact that thoughts, desires, and emotions are not made of matter, and it was nearly universally accepted among physical scientists that there is nothing besides matter.

He credits the advent of quantum mechanics with

the return, on the part of most physical scientists, to the spirit of Descartes’s “Cogito ergo sum”, which recognizes the thought, that is, the mind, as primary. [With] the creation of quantum mechanics, the concept of consciousness came to the fore again: it was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to the consciousness.

What Wigner has in mind here is that the standard presentation of quantum mechanics speaks of definite outcomes being obtained when an observer makes a measurement. Of course this is also true in classical physics. In quantum theory, however, the principles of linear evolution and superposition, together with the plausible assumption that mental phenomena correspond to physical phenomena in the brain, lead to situations in which there is no mechanism for such definite observations to arise. Thus there is a tension between the fact that we would like to ascribe particular observations to conscious agents and the fact that we would like to view these observations as corresponding to particular physical situations occurring in their brains.

Once we have convinced ourselves that, in light of quantum mechanics, mental phenomena must be considered on an equal footing with physical phenomena, we are faced with the question of how they interact. Wigner takes it for granted that “if certain physico-chemical conditions are satisfied, a consciousness, that is, the property of having sensations, arises.” Does the influence run the other way? Wigner claims that the “traditional answer” is that it does not, but argues that in fact such influence ought indeed to exist. (Indeed this, rather than technical investigation of the foundations of quantum mechanics, is the central theme of his essay.) The strongest support Wigner feels he can provide for this claim is simply “that we do not know of any phenomenon in which one subject is influenced by another without exerting an influence thereupon”. Here he recalls the interaction of light and matter, pointing out that while matter obviously affects light, the effects of light on matter (for example radiation pressure) are typically extremely small in magnitude, and might well have been missed entirely had they not been suggested by the theory.

Quantum mechanics provides us with a second argument, in the form of a demonstration of the inconsistency of several apparently reasonable assumptions about the physical, the mental, and the interaction between them. Wigner works, at least implicitly, within a model where there are two basic types of object: physical systems and consciousnesses. Some physical systems (those that are capable of instantiating the “certain physico-chemical conditions”) are what we might call mind-substrates. Each consciousness corresponds to a mind-substrate, and each mind-substrate corresponds to at most one consciousness. He considers three claims (this organization of his premises is not explicit in his essay):

1. Isolated physical systems evolve unitarily.

2. Each consciousness has a definite experience at all times.

3. Definite experiences correspond to pure states of mind-substrates, and arise for a consciousness exactly when the corresponding mind-substrate is in the corresponding pure state.

The first and second assumptions constrain the way the model treats physical and mental phenomena, respectively. Assumption 1 is often paraphrased as the “completeness of quantum mechanics”, while Assumption 2 is a strong rejection of solipsism – the idea that only one’s own mind is sure to exist. Assumption 3 is an apparently reasonable assumption about the relation between mental and physical phenomena.

With this framework established, Wigner’s thought experiment, now typically known as Wigner’s Friend, is quite straightforward. Suppose that an observer, Alice (to name the friend), is able to perform a measurement of some physical quantity q of a particle, which may take two values, 0 and 1. Assumption 1 tells us that if Alice performs this measurement when the particle is in a superposition state, the joint system of Alice’s brain and the particle will end up in an entangled state. Now Alice’s mind-substrate is not in a pure state, so by Assumption 3 does not have a definite experience. This contradicts Assumption 2. Wigner’s proposed resolution to this paradox is that in fact Assumption 1 is incorrect, and that there is an influence of the mental on the physical, namely objective collapse or, as he puts it, that the “statistical element which, according to the orthodox theory, enters only if I make an observation enters equally if my friend does”.
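In equations, the step that generates the paradox reads as follows (a standard rendering; the labels for the memory states are mine). If the particle begins in the superposition \alpha |0\rangle + \beta |1\rangle and, per Assumption 1, Alice’s measurement proceeds as a unitary interaction between the particle and her brain, then

( \alpha |0\rangle + \beta |1\rangle ) \otimes |{\rm ready}\rangle \;\longrightarrow\; \alpha \, |0\rangle \otimes |{\rm saw}\ 0\rangle \;+\; \beta \, |1\rangle \otimes |{\rm saw}\ 1\rangle .

The right-hand side is entangled, so Alice’s mind-substrate, on its own, is not in a pure state; Assumption 3 then denies her a definite experience, contradicting Assumption 2.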

* * *

Decades after the publication of Wigner’s essay, Daniela Frauchiger and Renato Renner formulated a new thought experiment, involving observers making measurements of other observers, intended to remedy what they saw as a weakness in Wigner’s argument. In their words, “Wigner proposed an argument […] which should show that quantum mechanics cannot have unlimited validity”. In fact, they argue, Wigner’s argument does not succeed in doing so. They assert that Wigner’s paradox may be resolved simply by noting a difference in what each party knows. Whereas Wigner, describing the situation from the outside, does not initially know the result of his friend’s measurement, and therefore assigns the “absurd” entangled state to the joint system composed of both her body and the system she has measured, his friend herself is quite aware of what she has observed, and so assigns to the system either, but not both, of the states corresponding to definite measurement outcomes. “For this reason”, Frauchiger and Renner argue, “the Wigner’s Friend Paradox cannot be regarded as an argument that rules out quantum mechanics as a universally valid theory.”

This criticism strikes me as somewhat unfair to Wigner. In fact, Wigner’s objection to admitting two different states as equally valid descriptions is that the two states correspond to different sets of physical properties of the joint system consisting of Alice and the system she measures. For Wigner, physical properties of physical systems are distinct from mental properties of consciousnesses. To engage in some light textual analysis, we can note that the word ‘conscious’, or ‘consciousness’, appears forty-one times in Wigner’s essay, and only once in Frauchiger and Renner’s, in the title of a cited paper. I have the impression that the authors pay inadequate attention to how explicitly Wigner takes a dualist position, including not just physical systems but also, and distinctly, consciousnesses in his ontology. Wigner’s argument does indeed achieve his goals, which are developed in the context of this strong dualism, and differ from the goals of Frauchiger and Renner, who appear not to share this philosophical stance, or at least do not commit fully to it.

Nonetheless, the thought experiment developed by Frauchiger and Renner does achieve something distinct and interesting. We can understand Wigner’s no-go theorem to be of the following form: “Within a model incorporating both mental and physical phenomena, a set of apparently reasonable conditions on how the model treats physical phenomena, mental phenomena, and their interaction cannot all be satisfied”. The Frauchiger-Renner thought experiment can be cast in the same form, with different choices about how to implement the model and which conditions to consider. The major difference in the model itself is that Frauchiger and Renner do not take consciousnesses to be entities in their own right, but simply take some states of certain physical systems to correspond to conscious experiences. Within such a model, Wigner’s assumption that each mind has a single, definite conscious experience at all times seems far less natural than it did within his model, where consciousnesses are distinct entities from the physical systems that determine them. Thus Frauchiger and Renner need to weaken this assumption, which was so natural to Wigner. The weakening they choose is a sort of transitivity of theories of mind. In their words (Assumption C in their paper):

Suppose that agent A has established that “I am certain that agent A’, upon reasoning within the same theory as the one I am using, is certain that x = \xi at time t.” Then agent A can conclude that “I am certain that x = \xi at time t.”

Just as Assumption 3 above was, for Wigner, a natural restriction on how a sensible theory ought to treat mental phenomena, this serves as Frauchiger’s and Renner’s proposed constraint. Just as Wigner designed a thought experiment that demonstrated the incompatibility of his assumption with an assumption of the universal applicability of unitary quantum mechanics to physical systems, so do Frauchiger and Renner.

* * *

In my recent paper “Reasoning across spacelike surfaces in the Frauchiger-Renner thought experiment”, I provide two closely related formalizations of the Frauchiger-Renner argument. These are motivated by a few observations:

1. Assumption C ought to make reference to the (possibly different) times at which agents A and A' are certain about their respective judgments, since these states of knowledge change.

2. Since Frauchiger and Renner do not subscribe to Wigner’s strong dualism, an agent’s certainty about a given proposition, like any other mental state, corresponds within their implicit model to a physical state. Thus statements like “Alice knows that P” should be understood as statements about the state of some part of Alice’s brain. Conditional statements like “if upon measuring a quantity q Alice observes outcome x, she knows that P” should be understood as claims about the state of the composite system composed of the part of Alice’s brain responsible for knowing P and the part responsible for recording outcomes of the measurement of q.

3. Because the causal structure of the protocol does not depend on the absolute times of each event, an external agent describing the protocol can choose various “spacelike surfaces”, corresponding to fixed times in different spacetime embeddings of the protocol (or to different inertial frames). There is no reason to privilege one of these surfaces over another, and so each of them should be assigned a quantum state. This may be viewed as an implementation of a relativistic principle.

A visual representation of the formalization of the Frauchiger-Renner protocol and the arguments of the no-go theorem. The graphical conventions are explained in detail in “Reasoning across spacelike surfaces in the Frauchiger-Renner thought experiment”.

After developing a mathematical framework based on these observations, I recast Frauchiger’s and Renner’s Assumption C in two ways: first, in terms of a claim about the validity of iterating the “relative state” construction that captures how conditional statements are interpreted in terms of quantum states; and second, in terms of a deductive rule that allows chaining of inferences within a system of quantum logic. By proving that these claims are false in the mathematical framework, I provide a more formal version of the no-go theorem. I also show that the first claim can be rescued if the relative state construction is allowed to be iterated only “along” a single spacelike surface, and the second if a deduction is only allowed to chain inferences “along” a single surface. In other words, the mental transitivity condition desired by Frauchiger and Renner can in fact be combined with universal physical applicability of unitary quantum mechanics, but only if we restrict our analysis to a single spacelike surface. Thus I hope that the analysis I offer provides some clarification of what precisely is going on in Frauchiger and Renner’s thought experiment, what it tells us about combining the physical and the mental in light of quantum mechanics, and how it relates to Wigner’s thought experiment.

* * *

In view of the fact that “Quantum theory cannot consistently describe the use of itself” has, at present, over five hundred citations, and “Remarks on the Mind-Body Question” over thirteen hundred, it seems fitting to close with a thought, cautionary or exultant, from Peter Schwenger’s book on asemic (that is, meaningless) writing. He notes that

commentary endlessly extends language; it is in the service of an impossible quest to extract the last, the final, drop of meaning.

I provide no analysis of this claim.

Quantum automata

Do you know when an engineer built the first artificial automaton—the first human-made machine that operated by itself, without external control mechanisms that altered the machine’s behavior over time as the machine undertook its mission?

The ancient Greek thinker Archytas of Tarentum reportedly created it about 2,300 years ago. Steam propelled his mechanical pigeon through the air.

For centuries, automata cropped up here and there as curiosities and entertainment. The wealthy exhibited automata to amuse and awe their peers and underlings. For instance, the French engineer Jacques de Vaucanson built a mechanical duck that appeared to eat and then expel grains. The device earned the nickname the Digesting Duck…and the nickname the Defecating Duck.

Vaucanson also invented a mechanical loom that helped foster the Industrial Revolution. During the 18th and 19th centuries, automata began to enable factories, which changed the face of civilization. We’ve inherited the upshots of that change. Nowadays, cars drive themselves, Roombas clean floors, and drones deliver packages.1 Automata have graduated from toys to practical tools.2

Rather, classical automata have. What of their quantum counterparts?

Scientists have designed autonomous quantum machines, and experimentalists have begun realizing them. The roster of such machines includes autonomous quantum engines, refrigerators, and clocks. Much of this research falls under the purview of quantum thermodynamics, due to the roles played by energy in these machines’ functioning: above, I defined an automaton as a machine free of time-dependent control (exerted by a user). Equivalently, according to a thermodynamicist mentality, we can define an automaton as a machine on which no user performs any work as the machine operates. Thermodynamic work is well-ordered energy that can be harnessed directly to perform a useful task. Often, instead of receiving work, an automaton receives access to a hot environment and a cold environment. Heat flows from the hot to the cold, and the automaton transforms some of the heat into work.
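For orientation, the amount of work extractable this way obeys the textbook Carnot bound, a general fact of thermodynamics rather than anything specific to automata: from heat Q_{\rm h} absorbed at the hot temperature T_{\rm h}, with waste heat dumped at the cold temperature T_{\rm c} < T_{\rm h}, a machine can extract at most

W \;\leq\; Q_{\rm h} \left( 1 - \frac{ T_{\rm c} }{ T_{\rm h} } \right).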

Quantum automata appeal to me because quantum thermodynamics has few practical applications, as I complained in my previous blog post. Quantum thermodynamics has helped illuminate the nature of the universe, and I laud such foundational insights. Yet we can progress beyond laudation by trying to harness those insights in applications. Some quantum thermal machines—quantum batteries, engines, etc.—can outperform their classical counterparts, according to certain metrics. But controlling those machines, and keeping them cold enough that they behave quantum mechanically, costs substantial resources. The machines cost more than they’re worth. Quantum automata, requiring little control, offer hope for practicality. 

To illustrate this hope, my group partnered with Simone Gasparinetti’s lab at Chalmers University in Sweden. The experimentalists created an autonomous quantum refrigerator from superconducting qubits. The quantum refrigerator can help reset, or “clear,” a quantum computer between calculations.

Artist’s conception of the autonomous-quantum-refrigerator chip. Credit: Chalmers University of Technology/Boid AB/NIST.

After we wrote the refrigerator paper, collaborators and I raised our heads and peered a little farther into the distance. What does building a useful autonomous quantum machine take, generally? Collaborators and I laid out guidelines in a “Key Issues Review” published in Reports on Progress in Physics last November.

We based our guidelines on DiVincenzo’s criteria for quantum computing. In 2000, David DiVincenzo published seven criteria that any platform, or setup, must meet to serve as a quantum computer. He cast five of the criteria as necessary and two criteria, related to information transmission, as optional. Similarly, our team provides ten criteria for building useful quantum automata. We regard eight of the criteria as necessary, at least typically. The final two guidelines, which are optional, govern information transmission and machine transportation. 

Time-dependent external control and autonomy

DiVincenzo illustrated his criteria with multiple possible quantum-computing platforms, such as ions. Similarly, we illustrate our criteria in two ways. First, we show how different quantum automata—engines, clocks, quantum circuits, etc.—can satisfy the criteria. Second, we illustrate how quantum automata can consist of different platforms: ultracold atoms, superconducting qubits, molecules, and so on.

Nature has suggested some of these platforms. For example, our eyes contain autonomous quantum energy transducers called photoisomers, or molecular switches. Suppose that such a molecule absorbs a photon. The molecule may use the photon’s energy to switch configuration. This switching sets off chemical and neurological reactions that result in the impression of sight. So the quantum switch transduces energy from light into mechanical, chemical, and electric energy.

Photoisomer. (Image by Todd Cahill, from Quantum Steampunk.)

My favorite of our criteria ranks among the necessary conditions: every useful quantum automaton must produce output worth the input. How one quantifies a machine’s worth and cost depends on the machine and on the user. For example, an agent using a quantum engine may care about the engine’s efficiency, power, or efficiency at maximum power. Costs can include the energy required to cool the engine to the quantum regime, as well as the control required to initialize the engine. The agent also chooses which value they regard as an acceptable threshold for the output produced per unit input. I like this criterion because it applies a broom to dust that we quantum thermodynamicists often hide under a rug: quantum thermal machines’ costs. Let’s begin building quantum engines that perform more work than they require to operate.

One might object that scientists and engineers are already sweating over nonautonomous quantum machines. Companies, governments, and universities are pouring billions of dollars into quantum computing. Building a full-scale quantum computer by hook or by crook, regardless of classical control, is costing enough. Eliminating time-dependent control sounds even tougher. Why bother?

Fellow Quantum Frontiers blogger John Preskill pointed out one answer, when I described my new research program to him in 2022: control systems are classical—large and hot. Consider superconducting qubits—tiny quantum circuits—printed on a squarish chip about the size of your hand. A control wire terminates on each qubit. The rest of the wire runs off the edge of the chip, extending to classical hardware standing nearby. One can fit only so many wires on the chip, so one can fit only so many qubits. Also, the wires, being classical, are hotter than the qubits should be. The wires can help decohere the circuits, introducing errors into the quantum information they store. The more we can free the qubits from external control—the more autonomy we can grant them—the better.

Besides, quantum automata exemplify quantum steampunk, as my coauthor Pauli Erker observed. I kicked myself after he did, because I’d missed the connection. The irony was so thick, you could have cut it with the retractable steel knife attached to a swashbuckling villain’s robotic arm. Only two years before, I’d read The Watchmaker of Filigree Street, by Natasha Pulley. The novel features an expatriate from Meiji Japan living in London, named Mori, who builds clockwork devices. The most endearing is a pet-like octopus, called Katsu, who scrambles around Mori’s workshop and hoards socks. 

Does the world need a quantum version of Katsu? Not outside of quantum-steampunk fiction…yet. But a girl can dream. And quantum automata now have the opportunity to put quantum thermodynamics to work.

From tumblr

1And deliver pizzas. While visiting the University of Pittsburgh a few years ago, I was surprised to learn that the robots scurrying down the streets were serving hungry students.

2And minions of starving young scholars.

How writing a popular-science book led to a Nature Physics paper

Several people have asked me whether writing a popular-science book has fed back into my research. Nature Physics published my favorite illustration of the answer this January. Here’s the story behind the paper.

In late 2020, I was sitting by a window in my home office (AKA living room) in Cambridge, Massachusetts. I’d drafted 15 chapters of my book Quantum Steampunk. The epilogue, I’d decided, would outline opportunities for the future of quantum thermodynamics. So I had to come up with opportunities for the future of quantum thermodynamics. The rest of the book had related foundational insights that quantum thermodynamics provides about the nature of the universe. For instance, quantum thermodynamics had sharpened the second law of thermodynamics, which helps explain time’s arrow, into more-precise statements. Conventional thermodynamics had not only provided foundational insights, but also accompanied the Industrial Revolution, a paragon of practicality. Could quantum thermodynamics, too, offer practical upshots?

Quantum thermodynamicists had designed quantum engines, refrigerators, batteries, and ratchets. Some of these devices could outperform their classical counterparts, according to certain metrics. Experimentalists had even realized some of these devices. But the devices weren’t useful. For instance, a simple quantum engine consisted of one atom. I expected such an atom to produce one electronvolt of energy per engine cycle. (A light bulb emits about 10^{21} electronvolts of light per second.) Cooling the atom down and manipulating it would cost loads more energy. The engine wouldn’t earn its keep.

Autonomous quantum machines offered greater hope for practicality. By autonomous, I mean not requiring time-dependent external control: nobody need twiddle knobs or push buttons to guide the machine through its operation. Such control requires work—organized, coordinated energy. Rather than receiving work, an autonomous machine accesses a cold environment and a hot environment. Heat—random, disorganized energy cheaper than work—flows from the hot to the cold. The machine transforms some of that heat into work to power itself. That is, the machine sources its own work from cheap heat in its surroundings. Some air conditioners operate according to this principle. So can some quantum machines—autonomous quantum machines.

Thermodynamicists had designed autonomous quantum engines and refrigerators. Trapped-ion experimentalists had realized one of the refrigerators, in a groundbreaking result. Still, the autonomous quantum refrigerator wasn’t practical. Keeping the ion cold and maintaining its quantum behavior required substantial work.

My community needed, I wrote in my epilogue, an analogue of solar panels in southern California. (I probably drafted the epilogue during a Boston winter, thinking wistfully of Pasadena.) If you built a solar panel in SoCal, you could sit back and reap the benefits all year. The panel would fulfill its mission without further effort from you. If you built a solar panel in Rochester, you’d have to scrape snow off of it. Also, the panel would provide energy only a few months per year. The cost might not outweigh the benefit. Quantum thermal machines resembled solar panels in Rochester, I wrote. We needed an analogue of SoCal: an appropriate environment. Most of it would be cold (unlike SoCal), so that maintaining a machine’s quantum nature would cost a user almost no extra energy. The setting should also contain a slightly warmer environment, so that net heat would flow. If you deposited an autonomous quantum machine in such a quantum SoCal, the machine would operate on its own.

Where could we find a quantum SoCal? I had no idea.

Sunny SoCal. (Specifically, the Huntington Gardens.)

A few months later, I received an email from quantum experimentalist Simone Gasparinetti. He was setting up a lab at Chalmers University in Sweden. What, he asked, did I see as opportunities for experimental quantum thermodynamics? We’d never met, but we agreed to Zoom. Quantum Steampunk on my mind, I described my desire for practicality. I described autonomous quantum machines. I described my yearning for a quantum SoCal.

I have it, Simone said.

Simone and his colleagues were building a quantum computer using superconducting qubits. The qubits fit on a chip about the size of my hand. To keep the chip cold, the experimentalists put it in a dilution refrigerator. You’ve probably seen photos of dilution refrigerators from Google, IBM, and the like. The fridges tend to be cylindrical, gold-colored monstrosities from which wires stick out. (That is, they look steampunk.) You can easily develop the impression that the cylinder is a quantum computer, but it’s only the fridge.

Not a quantum computer

The fridge, Simone said, resembles an onion: it has multiple layers. Outer layers are warmer, and inner layers are colder. The quantum computer sits in the innermost layer, so that it behaves as quantum mechanically as possible. But sometimes, even the fridge doesn’t keep the computer cold enough.

Imagine that you’ve finished one quantum computation and you’re preparing for the next. The computer has written quantum information to certain qubits, as you’ve probably written on scrap paper while calculating something in a math class. To prepare for your next math assignment, given limited scrap paper, you’d erase your scrap paper. The quantum computer’s qubits need erasing similarly. Erasing, in this context, means cooling down even more than the dilution refrigerator can manage.

Why not use an autonomous quantum refrigerator to cool the scrap-paper qubits?

I loved the idea, for three reasons. First, we could place the quantum refrigerator beside the quantum computer. The dilution refrigerator would already be cold, for the quantum computations’ sake. Therefore, we wouldn’t have to spend (almost any) extra work on keeping the quantum refrigerator cold. Second, Simone could connect the quantum refrigerator to an outer onion layer via a cable. Heat would flow from the warmer outer layer to the colder inner layer. From the heat, the quantum refrigerator could extract work. The quantum refrigerator would use that work to cool computational qubits—to erase quantum scrap paper. The quantum refrigerator would service the quantum computer. So, third, the quantum refrigerator would qualify as practical.

Over the next three years, we brought that vision to life. (By we, I mostly mean Simone’s group, as my group doesn’t have a lab.)

Artist’s conception of the autonomous-quantum-refrigerator chip. Credit: Chalmers University of Technology/Boid AB/NIST.

Postdoc Aamir Ali spearheaded the experiment. Then-master’s student Paul Jamet Suria and PhD student Claudia Castillo-Moreno assisted him. Maryland postdoc Jeffrey M. Epstein began simulating the superconducting qubits numerically, then passed the baton to PhD student José Antonio Marín Guzmán. 

The experiment provided a proof of principle: it demonstrated that the quantum refrigerator could operate. The experimentalists didn’t apply the quantum refrigerator in a quantum computation. Also, they didn’t connect the quantum refrigerator to an outer onion layer. Instead, they pumped warm photons to the quantum refrigerator via a cable. But even in such a stripped-down experiment, the quantum refrigerator outperformed my expectations. I thought it would barely lower the “scrap-paper” qubit’s temperature. But that qubit reached a temperature of 22 millikelvin (mK). For comparison: if the qubit had merely sat in the dilution refrigerator, it would have reached a temperature of 45–70 mK. State-of-the-art protocols had lowered scrap-paper qubits’ temperatures to 40–49 mK. So our quantum refrigerator outperformed our competitors, through the lens of temperature. (Our quantum refrigerator cooled more slowly than they did, though.)
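To see why those millikelvin differences matter, one can estimate how often a qubit is thermally excited at each temperature. Here’s a back-of-the-envelope Python sketch; the 5-GHz transition frequency is my assumption (a typical superconducting-qubit scale), not a number from the paper.

```python
import numpy as np

h = 6.62607015e-34   # Planck's constant (J s)
kB = 1.380649e-23    # Boltzmann's constant (J/K)
f = 5e9              # assumed qubit transition frequency (Hz); typical scale, not from the paper

def excited_population(T):
    """Thermal probability that a two-level qubit at temperature T (in kelvin) is excited."""
    return 1 / (1 + np.exp(h * f / (kB * T)))

for T_mK in (22, 45, 70):
    print(f"{T_mK} mK: excited-state population ~ {excited_population(T_mK * 1e-3):.1e}")
```

Under this assumption, cooling from 45–70 mK down to 22 mK suppresses stray thermal excitations from the percent level to roughly one part in 10^5, which is why shaving tens of millikelvin off a scrap-paper qubit’s temperature is worth celebrating.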

Simone, José Antonio, and I have followed up on our autonomous quantum refrigerator with a forward-looking review about useful autonomous quantum machines. Keep an eye out for a blog post about the review…and for what we hope grows into a subfield.

In summary, yes, publishing a popular-science book can benefit one’s research.

Ten lessons I learned from John Preskill

Last August, Toronto’s Centre for Quantum Information and Quantum Control (CQIQC) gave me 35 minutes to make fun of John Preskill in public. CQIQC was hosting its biannual conference, also called CQIQC, in Toronto. The conference features the awarding of the John Stewart Bell Prize for fundamental quantum physics. The prize derives its name from the thinker who transformed our understanding of entanglement. John received this year’s Bell Prize for identifying, with collaborators, how we can learn about quantum states from surprisingly few trials and measurements.

The organizers invited three Preskillites to present talks in John’s honor: Hoi-Kwong Lo, who’s helped steer quantum cryptography and communications; Daniel Gottesman, who’s helped lay the foundations of quantum error correction; and me. I believe that one of the most fitting ways to honor John is by sharing the most exciting physics you know of. I shared about quantum thermodynamics for (simple models of) nuclear physics, along with ten lessons I learned from John. You can watch the talk here and check out the paper, recently published in Physical Review Letters, for technicalities.

John has illustrated this lesson by wrestling with the black-hole-information paradox, including alongside Stephen Hawking. Quantum information theory has informed quantum thermodynamics, as Quantum Frontiers regulars know. Quantum thermodynamics is the study of work (coordinated energy that we can harness directly) and heat (the energy of random motion). Systems exchange heat with heat reservoirs—large, fixed-temperature systems. As I draft this blog post, for instance, I’m radiating heat into the frigid air in Montreal Trudeau Airport.

So much for quantum information. How about high-energy physics? I’ll include nuclear physics in the category, as many of my European colleagues do. Much of nuclear physics and condensed matter involves gauge theories. A gauge theory is a model that contains more degrees of freedom than the physics it describes. Similarly, a friend’s description of the CN Tower could last twice as long as necessary, due to redundancies. Electrodynamics—the theory behind light bulbs—is a gauge theory. So is quantum chromodynamics, the theory of the strong force that holds together a nucleus’s constituents.

Every gauge theory obeys Gauss’s law. Gauss’s law interrelates the matter at a site to the gauge field around the site. For example, imagine a positive electric charge in empty space. An electric field—a gauge field—points away from the charge at every spot in space. Imagine a sphere that encloses the charge. How much of the electric field is exiting the sphere? The answer depends on the amount of charge inside, according to Gauss’s law.
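In integral form, for the electromagnetic example just described, Gauss’s law reads

\oint_S \vec{E} \cdot d\vec{A} \;=\; \frac{ Q_{\rm enc} }{ \epsilon_0 },

where the left-hand side measures how much electric field exits the closed surface S, Q_{\rm enc} is the charge enclosed, and \epsilon_0 is the vacuum permittivity.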

Gauss’s law interrelates the matter at a site with the gauge field nearby…which is related to the matter at the next site…which is related to the gauge field farther away. So everything depends on everything else. So we can’t easily claim that over here are independent degrees of freedom that form a system of interest, while over there are independent degrees of freedom that form a heat reservoir. So how can we define the heat and work exchanged within a lattice gauge theory? If we can’t, we should start biting our nails: thermodynamics is the queen of the physical theories, a metatheory expected to govern all other theories. But how can we define the quantum thermodynamics of lattice gauge theories? My colleague Zohreh Davoudi and her group asked me this question.

I had the pleasure of addressing the question with five present and recent Marylanders…

…the mention of whom in my CQIQC talk invited…

I’m a millennial; social media took off with my generation. But I enjoy saying that my PhD advisor enjoys far more popularity on social media than I do.

How did we begin establishing a quantum thermodynamics for lattice gauge theories?

My colleague Chris Jarzynski had a better idea than I did, when I embarked upon this project. So did Dvira Segal, a University of Toronto chemist and CQIQC’s director. So did everyone else who’d helped develop the toolkit of strong-coupling thermodynamics. I’d only heard of the toolkit, but I thought it sounded useful for lattice gauge theories, so I invited Chris to my conversations with Zohreh’s group.

I didn’t create this image for my talk, believe it or not. The picture already existed on the Internet, courtesy of this blog.

Strong-coupling thermodynamics concerns systems that interact strongly with reservoirs. System–reservoir interactions are weak, or encode little energy, throughout much of thermodynamics. For example, I exchange little energy with Montreal Trudeau’s air, relative to the amount of energy inside me. The reason is, I exchange energy only through my skin. My skin forms a small fraction of me because it forms my surface. My surface is much smaller than my volume, which is proportional to the energy inside me. So I couple to Montreal Trudeau’s air weakly.

My surface would be comparable to my volume if I were extremely small—say, a quantum particle. My interaction with the air would encode loads of energy—an amount comparable to the amount inside me. Should we count that interaction energy as part of my energy or as part of the air’s energy? Could we even say that I existed, and had a well-defined form, independently of that interaction energy? Strong-coupling thermodynamics provides a framework for answering these questions.

Kevin Kuns, a former Quantum Frontiers blogger, described how John explains physics through simple concepts, like a ball attached to a spring. John’s gentle, soothing voice resembles a snake charmer’s, Kevin wrote. John charms his listeners into returning to their textbooks and brushing up on basic physics.

Little is more basic than the first law of thermodynamics, synopsized as energy conservation. The first law governs how much a system’s internal energy changes during any process. The energy change equals the heat absorbed, plus the work absorbed, by the system. Every formulation of thermodynamics should obey the first law—including strong-coupling thermodynamics. 
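In symbols, with the sign convention used above (heat and work absorbed count as positive),

\Delta U \;=\; Q \;+\; W,

where \Delta U denotes the system’s internal-energy change, Q the heat absorbed, and W the work absorbed.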

Which lattice-gauge-theory processes should we study, armed with the toolkit of strong-coupling thermodynamics? My collaborators and I implicitly followed

and

We don’t want to irritate experimentalists by asking them to run difficult protocols. Tom Rosenbaum, on the left of the previous photograph, is a quantum experimentalist. He’s also the president of Caltech, so John has multiple reasons to want not to irritate him.

Quantum experimentalists have run quench protocols on many quantum simulators, or special-purpose quantum computers. During a quench protocol, one changes a feature of the system quickly. For example, many quantum systems consist of particles hopping across a landscape of hills and valleys. One might flatten a hill during a quench.

We focused on a three-step quench protocol: (1) Set the system up in its initial landscape. (2) Quickly change the landscape within a small region. (3) Let the system evolve under its natural dynamics for a long time. Step 2 should cost work. How can we define the amount of work performed? By following

John wrote a blog post about how the typical physicist is a one-trick pony: they know one narrow subject deeply. John prefers to know two subjects. He can apply insights from one field to the other. A two-trick pony can show that Gauss’s law behaves like a strong interaction—that lattice gauge theories are strongly coupled thermodynamic systems. Using strong-coupling thermodynamics, the two-trick pony can define the work (and heat) exchanged within a lattice gauge theory. 

An experimentalist can easily measure the amount of work performed,1 we expect, for two reasons. First, the experimentalist need measure only the small region where the landscape changed. Measuring the whole system would be tricky, because it’s so large and it can contain many particles. But an experimentalist can control the small region. Second, we proved an equation that should facilitate experimental measurements. The equation interrelates the work performed1 with a quantity that seems experimentally accessible.

My team applied our work definition to a lattice gauge theory in one spatial dimension—a theory restricted to living on a line, like a caterpillar on a thin rope. You can think of the matter as qubits2 and the gauge field as more qubits. The system looks identical if you flip it upside-down; that is, the theory has a \mathbb{Z}_2 symmetry. The system has two phases, analogous to the liquid and ice phases of H_2O. Which phase the system occupies depends on the chemical potential—the average amount of energy needed to add a particle to the system (while the system’s entropy, its volume, and more remain constant).

My coauthor Connor simulated the system numerically, calculating its behavior on a classical computer. During the simulated quench process, the system began in one phase (like H_2O beginning as water). The quench steered the system around within the phase (as though changing the water’s temperature) or across the phase transition (as though freezing the water). Connor computed the work performed during the quench.1 The amount of work changed dramatically when the quench started steering the system across the phase transition. 

Not only could we define the work exchanged within a lattice gauge theory, using strong-coupling quantum thermodynamics. Also, that work signaled a phase transition—a large-scale, qualitative behavior.

What future do my collaborators and I dream of for our work? First, we want an experimentalist to measure the work1 spent on a lattice-gauge-theory system in a quantum simulation. Second, we should expand our definitions of quantum work and heat beyond sudden-quench processes. How much work and heat do particles exchange while scattering in particle accelerators, for instance? Third, we hope to identify other phase transitions and macroscopic phenomena using our work and heat definitions. Fourth—most broadly—we want to establish a quantum thermodynamics for lattice gauge theories.

Five years ago, I didn’t expect to be collaborating on lattice gauge theories inspired by nuclear physics. But this work is some of the most exciting I can think of to do. I hope you think it exciting, too. And, more importantly, I hope John thought it exciting in Toronto.

I was a student at Caltech during “One Entangled Evening,” the campus-wide celebration of Richard Feynman’s 100th birthday. So I watched John sing and dance onstage, exhibiting no fear of embarrassing himself. That observation seemed like an appropriate note on which to finish with my slides…and invite questions from the audience.

Congratulations on your Bell Prize, John.

1Really, the dissipated work.

2Really, hardcore bosons.

Always appropriate

I met boatloads of physicists as a master’s student at the Perimeter Institute for Theoretical Physics in Waterloo, Canada. Researchers pass through Perimeter like diplomats through my current neighborhood—the Washington, DC area—except that Perimeter’s visitors speak math instead of legalese and hardly any of them wear ties. But Nilanjana Datta, a mathematician at the University of Cambridge, stood out. She was one of the sharpest, most on-the-ball thinkers I’d ever encountered. Also, she presented two academic talks in a little black dress.

The academic year had nearly ended, and I was undertaking research at the intersection of thermodynamics and quantum information theory for the first time. My mentors and I were applying a mathematical toolkit then in vogue, thanks to Nilanjana and colleagues of hers: one-shot quantum information theory. To explain one-shot information theory, I should review ordinary information theory. Information theory is the study of how efficiently we can perform information-processing tasks, such as sending messages over a channel. 

Say I want to send you n copies of a message. Into how few bits (units of information) can I compress the n copies? First, suppose that the message is classical, such that a telephone could convey it. The average number of bits needed per copy equals the message’s Shannon entropy, a measure of your uncertainty about which message I’m sending. Now, suppose that the message is quantum. The average number of quantum bits needed per copy is the von Neumann entropy, now a measure of your uncertainty. At least, the answer is the Shannon or von Neumann entropy in the limit as n approaches infinity. This limit appears disconnected from reality, as the universe seems not to contain an infinite amount of anything, let alone telephone messages. Yet the limit simplifies the mathematics involved and approximates some real-world problems.

But the limit doesn’t approximate every real-world problem. What if I want to send only one copy of my message—one shot? One-shot information theory concerns how efficiently we can process finite amounts of information. Nilanjana and colleagues had defined entropies beyond Shannon’s and von Neumann’s, and they’d proved properties of those entropies. The field’s cofounders also showed that these entropies quantify the optimal rates at which we can process finite amounts of information.
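
To anchor the definitions, here’s a small sketch (my own toy example) computing the Shannon entropy, which sets the many-copy compression rate; the von Neumann entropy, which is the Shannon entropy of a density matrix’s spectrum; and the min-entropy H_{\min}(p) = -\log_2 \max_x p(x), one of the one-shot entropies:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits: the asymptotic compression rate per copy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                     # 0 log 0 = 0 by convention
    return -np.sum(p * np.log2(p))

def von_neumann_entropy(rho):
    """Von Neumann entropy: the Shannon entropy of the density matrix's spectrum."""
    return shannon_entropy(np.linalg.eigvalsh(rho))

def min_entropy(p):
    """Min-entropy, a one-shot entropy: H_min(p) = -log2 max_x p(x)."""
    return -np.log2(np.max(p))

# A biased four-message source.
p = [0.7, 0.1, 0.1, 0.1]
print(f"Shannon entropy: {shannon_entropy(p):.3f} bits per copy (as n -> infinity)")
print(f"Min-entropy:     {min_entropy(p):.3f} bits")

# A qubit density matrix with the same spectrum as a biased coin.
rho = np.diag([0.7, 0.3])
print(f"Von Neumann entropy: {von_neumann_entropy(rho):.3f} qubits per copy")
```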

My mentors and I were applying one-shot information theory to quantum thermodynamics. I’d read papers of Nilanjana’s and spoken with her virtually (we probably used Skype back then). When I learned that she’d visit Waterloo in June, I was a kitten looking forward to a saucer of cream.

Nilanjana didn’t disappoint. First, she presented a seminar at Perimeter. I recall her discussing a resource theory (a simple information-theoretic model) for entanglement manipulation. One often models entanglement manipulators as experimentalists who can perform local operations and classical communications: each experimentalist can poke and prod the quantum system in their lab, as well as link their labs via telephone. We abbreviate the set of local operations and classical communications as LOCC. Nilanjana broadened my view to the superset SEP, the operations that map every separable (unentangled) state to a separable state.

Kudos to John Preskill for hunting down this screenshot of the video of Nilanjana’s seminar. The author appears on the left.

Then, because she eats seminars for breakfast, Nilanjana presented an even more distinguished talk the same day: a colloquium. It took place at the University of Waterloo’s Institute for Quantum Computing (IQC), a nearly half-hour walk from Perimeter. Would I be willing to escort Nilanjana between the two institutes? I most certainly would.

Nilanjana and I arrived at the IQC auditorium before anyone else except the colloquium’s host, Debbie Leung. Debbie is a University of Waterloo professor and another of the most rigorous quantum information theorists I know. I sat a little behind the two of them and marveled. Here were two of the scions of the science I was joining. Pinch me.

My relationship with Nilanjana deepened over the years. The first year of my PhD, she hosted a seminar by me at the University of Cambridge (although I didn’t present a colloquium later that day). Afterward, I wrote a Quantum Frontiers post about her research with PhD student Felix Leditzky. The two of them introduced me to second-order asymptotics. Second-order asymptotics dictate the rate at which one-shot entropies approach standard entropies as n (the number of copies of a message I’m compressing, say) grows large. 

The following year, Nilanjana and colleagues hosted me at “Beyond i.i.d. in Information Theory,” an annual conference dedicated to one-shot information theory. We convened in the mountains of Banff, Canada, about which I wrote another blog post. Come to think of it, Nilanjana lies behind many of my blog posts, as she lies behind many of my papers.

But I haven’t explained the little black dress. Nilanjana wore one when presenting at Perimeter and the IQC. That year, I had concluded that pants and shorts caused me so much discomfort that I’d wear only skirts and dresses. So I stuck out in physics gatherings like a theorem in a newspaper. My mother had schooled me in the historical and socioeconomic significance of the little black dress. Coco Chanel invented the slim, simple, elegant dress style during the 1920s. It helped free women from stifling, time-consuming petticoats and corsets: a few decades beforehand, dressing could last much of the morning—and then one would change clothes for the afternoon and then for the evening. The little black dress offered women freedom of movement, improved health, and control over their schedules. Better, the little black dress could suit most activities, from office work to dinner with friends.

Yet I didn’t recall ever having seen anyone present physics in a little black dress.

I almost never use this verb, but Nilanjana rocked that little black dress. She imbued it with all the professionalism and competence ever associated with it. Also, Nilanjana had long, dark hair, like mine (although I’ve never achieved her hair’s length); and she wore it loose, as I liked to. I recall admiring the hair hanging down her back after she received a question during the IQC colloquium. She’d whirled around to write the answer on the board, in the rapid-fire manner characteristic of her intellect. If one of the most incisive scientists I knew could wear dresses and long hair, then so could I.

Felix is now an assistant professor at the University of Illinois Urbana-Champaign. I recently spoke with him and Mark Wilde, another one-shot information theorist and a guest blogger on Quantum Frontiers. The conversation led me to reminisce about the day I met Nilanjana. I haven’t visited Cambridge in years, and my research has expanded from one-shot thermodynamics into many-body physics. But one never forgets the classics.

Let gravity do its work

One day, early this spring, I found myself in a hotel elevator with three other people. The cohort consisted of two theoretical physicists, one computer scientist, and what appeared to be a normal person. I pressed the elevator’s 4 button, as my husband (the computer scientist) and I were staying on the hotel’s fourth floor. The button refused to light up.

“That happened last time,” the normal person remarked. He was staying on the fourth floor, too.

The other theoretical physicist pressed the 3 button.

“Should we press the 5 button,” the normal person continued, “and let gravity do its work?”

I took a moment to realize that he was suggesting we ascend to the fifth floor and then induce the elevator to fall under gravity’s influence to the fourth. We were reaching floor three, so I exchanged a “have a good evening” with the other physicist, who left. The door shut, and we began to ascend.

“As it happens,” I remarked, “he’s an expert on gravity.” The other physicist was Herman Verlinde, a professor at Princeton.

Such is a side effect of visiting the Simons Center for Geometry and Physics. The Simons Center graces the Stony Brook University campus, which was awash in daffodils and magnolia blossoms last month. The Simons Center derives its name from hedge-fund manager Jim Simons (who passed away during the writing of this article). He produced landmark physics and math research before earning his fortune on Wall Street as a quant. Simons supported his early loves by funding the Simons Center and other scientific initiatives. The center reminded me of the Perimeter Institute for Theoretical Physics, down to the café’s linen napkins, so I felt at home.

I was participating in the Simons Center workshop “Entanglement, thermalization, and holography.” It united researchers from quantum information and computation, black-hole physics and string theory, quantum thermodynamics and many-body physics, and nuclear physics. We were to share our fields’ approaches to problems centered on thermalization, entanglement, quantum simulation, and the like. I presented about the eigenstate thermalization hypothesis, which elucidates how many-particle quantum systems thermalize. The hypothesis fails, I argued, if a system’s dynamics conserve quantities (analogous to energy and particle number) that can’t be measured simultaneously. Herman Verlinde discussed the ER=EPR conjecture.

My PhD advisor, John Preskill, blogged about ER=EPR almost exactly eleven years ago. Read his blog post for a detailed introduction. Briefly, ER=EPR posits an equivalence between wormholes and entanglement. 

The ER stands for Einstein–Rosen, as in Einstein–Rosen bridge. Sean Carroll provided the punchiest explanation I’ve heard of Einstein–Rosen bridges. He served as the scientific advisor for the 2011 film Thor. Sean suggested that the film feature a wormhole, a connection between two black holes. The filmmakers replied that wormholes were passé. So Sean suggested that the film feature an Einstein–Rosen bridge. “What’s an Einstein–Rosen bridge?” the filmmakers asked. “A wormhole.” So Thor features an Einstein–Rosen bridge.

EPR stands for Einstein–Podolsky–Rosen. The three authors published a quantum paradox in 1935. Their EPR paper galvanized the community’s understanding of entanglement.

ER=EPR is a conjecture that entanglement is closely related to wormholes. As Herman said during his talk, “You probably need entanglement to realize a wormhole.” Stated more strongly: any two maximally entangled particles are connected by a wormhole. The idea crystallized in a paper by Juan Maldacena and Lenny Susskind. They drew on work by Mark Van Raamsdonk (who masterminded the workshop behind this Quantum Frontiers post) and Brian Swingle (who’s appeared in further posts).

Herman presented four pieces of evidence for the conjecture, as you can hear in the video of his talk. One piece emerges from the AdS/CFT duality, a parallel between certain space-times (called anti–de Sitter, or AdS, spaces) and quantum theories that have a certain symmetry (called conformal field theories, or CFTs). A CFT, being quantum, can contain entanglement. One entangled state is called the thermofield double. Suppose that a quantum system is in a thermofield double and you discard half the system. The remaining half looks thermal—we can attribute a temperature to it. Evidence indicates that, if a CFT has a temperature, then it parallels an AdS space that contains a black hole. So entanglement appears connected to black holes via thermality and temperature.
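
For concreteness, the thermofield double has a standard textbook form (not specific to Herman’s talk). Given a Hamiltonian with energies E_n and energy eigenstates | n \rangle, one entangles two copies of the system:

| {\rm TFD} \rangle = \frac{1}{\sqrt{Z}} \sum_n e^{-\beta E_n / 2} \, | n \rangle \otimes | n \rangle, \qquad Z = \sum_n e^{-\beta E_n}.

Tracing out either copy leaves the other in the thermal state \rho_{\rm th} = e^{-\beta H} / Z at inverse temperature \beta; that is the thermality invoked above.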

Despite the evidence—and despite the eleven years since John’s publication of his blog post—ER=EPR remains a conjecture. Herman remarked, “It’s more like a slogan than anything else.” His talk’s abstract contains more hedging than a suburban yard. I appreciated the conscientiousness, a college acquaintance having once observed that I spoke carefully even over sandwiches with a friend.

A “source of uneasiness” about ER=EPR, to Herman, is measurability. We can’t check whether a quantum state is entangled via any single measurement. We have to prepare many identical copies of the state, measure the copies, and process the outcome statistics. In contrast, we seem able to conclude that a space-time is connected without measuring multiple copies of the space-time. We can check that a hotel’s first floor is connected to its fourth, for instance, by riding in an elevator once.

Or by riding an elevator to the fifth floor and descending by one story. My husband, the normal person, and I took the stairs instead of falling. The hotel fixed the elevator within a day or two, but who knows when we’ll fix on the truth value of ER=EPR?

With thanks to the conference organizers for their invitation, to the Simons Center for its hospitality, to Jim Simons for his generosity, and to the normal person for inspiration.

You can win Tic Tac Toe, if you know quantum physics.

Note: Oliver Zheng is a senior at University High School in Irvine, CA. He has been working on AI players for quantum versions of Tic Tac Toe under the supervision of Dr. Spiros Michalakis.

Several years ago, while scrolling through YouTube, I came across a video of Paul Rudd playing something called “Quantum Chess.” I had no idea what it was, nor did I know that it would become one of the most gloriously nerdy rabbit holes I would ever fall into (see: 5D Chess with Multiverse Time Travel).

Over time, I tried to teach myself how to play these multi-layered, multi-dimensional games, but progress was slow. However, while taking a break during a piano lesson last year, I mentioned to my teacher my growing interest in unnecessarily stressful versions of chess. She told me that she happened to be friends with Dr. Xie Chen, professor of theoretical physics at Caltech who was sponsoring a Quantum Gaming project. I immediately jumped at the opportunity to connect with her, and within days was able to have my first online meeting with Dr. Chen. Soon after, I got invited to join the project. Following my introduction to the team, I started reading “Quantum Computation and Quantum Information”, which helped me understand how the theory behind the games worked. When I felt ready, Dr. Chen referred me to Dr. Spiros Michalakis at Caltech, who, funnily enough, was the creator of the quantum chess video. 

I would never have imagined being two degrees of separation from Paul Rudd. Nonetheless, I wanted to share some of the work I’ve been doing with Spiros on Quantum TiqTaqToe.

What is Quantum TiqTaqToe?

Evert van Nieuwenburg, the creator of Quantum TiqTaqToe, with whom I also collaborated, goes into depth about how the game works here, but I will give a short rundown. The general idea is that there is now a split move, where you can put an ‘X’ in two different squares at once — a Schrödinger’s X, if you will. When the board has no more empty squares, the X randomly ‘collapses’ into one of the two squares with equal probability. The game ends when there are three real X’s or three real O’s in a row, just as in regular tic-tac-toe. Depending on the mode you are playing, you might also be able to entangle your X’s with your opponent’s O’s. You can get a better sense of all this by actually playing the game here.
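
To make the split move concrete, here’s a minimal sketch of one way to represent a Schrödinger’s X and its collapse. The data structure is my own toy, not the game’s actual implementation, and it ignores the entanglement bookkeeping that the real game uses to keep collapses consistent:

```python
import random

# Squares are indexed 0-8. A classical mark is ('X', i) or ('O', i);
# a split move ('X?', i, j) is an X in superposition over squares i and j.
# This toy ignores entanglement between moves, which the real game uses
# to guarantee that no two marks collapse onto the same square.
def collapse(moves):
    """Collapse each split move into one of its two squares,
    each with probability 1/2."""
    classical = []
    for move in moves:
        if move[0] == 'X?':
            classical.append(('X', random.choice([move[1], move[2]])))
        else:
            classical.append(move)
    return classical

game = [('X?', 0, 4), ('O', 8), ('X?', 1, 2)]
print(collapse(game))  # e.g., [('X', 4), ('O', 8), ('X', 1)]
```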

My goal was to find out who wins when both players play optimally. For instance, in normal tic-tac-toe, it is well-known that the first X should go in the middle of the board, and if player O counters successfully, the game should end in a tie. Is the outcome of Quantum TiqTaqToe, too, predetermined to end in a tie if both players play optimally? And, if not, what is the best first move for player X? I sought to answer these questions through the power of computation.

The First Attempt

In the following section, I refer to a ‘game state’ as any unique arrangement of X’s and O’s on a board. The ‘empty game state’ simply means an empty board. ‘Traversing’ through a certain game state means that, at some point in the game, that game state occurs. So, for example, every game traverses through the empty game state, since every game starts with an empty board.

In order to solve the unsolved, one must first solve the solved. As such, my first attempt was to create an algorithm that would figure out the best move to play in regular tic-tac-toe. This first attempt was rather straightforward, and I will explain it here:

Essentially, I developed a model using what is known as “reinforcement learning” to determine the best next move given a certain game state. Here is how it works: to track which moves are best for player X and player O, respectively, every game state is assigned a value, initially 0. When a game ends, these values are updated to reflect who won. The more games are played, the better these values reflect the sequences of moves that X and O must make to win or tie. To train this model (machine-learning parlance for the algorithm that updates the values/parameters mentioned above), I programmed the computer to play randomly chosen moves for X and O until the game ended. If, say, player X won, then the value of every game state traversed was increased by 1, to indicate that X was favored. On the other hand, if player O won, then the value of every game state traversed was decreased by 1, to indicate that O was favored. (A code sketch of this training loop appears after the example below.) Here is an example:

X wins!

Let’s say that this is the first iteration that the model is trained on. Then, the next time the model sees this game state,

it will recognize that X has an advantage. In the same vein, the model now also thinks that the empty game state is favorable towards X, since, in the one game that was played, when the empty game state was traversed, X won.
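
Here’s a minimal sketch of that training loop, under my own simplifications (pure random self-play, ±1 updates, a flattened board as the game state):

```python
import random
from collections import defaultdict

values = defaultdict(int)   # game state -> value; positive favors X

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8),
             (0,4,8), (2,4,6)]
    for a, b, c in lines:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def train(num_games):
    """Random self-play: after each game, nudge the value of every state
    traversed, +1 if X won, -1 if O won; ties leave values untouched."""
    for _ in range(num_games):
        board = ('.',) * 9          # empty 3x3 board, flattened to a tuple
        traversed = [board]
        player = 'X'
        while winner(board) is None and '.' in board:
            square = random.choice([i for i in range(9) if board[i] == '.'])
            board = board[:square] + (player,) + board[square + 1:]
            traversed.append(board)
            player = 'O' if player == 'X' else 'X'
        w = winner(board)
        if w is not None:
            delta = 1 if w == 'X' else -1
            for state in traversed:
                values[state] += delta

train(10_000)
print(values[('.',) * 9])   # the empty game state's value after training
```

After training, choosing the legal move that leads to the highest-value next state (for X) or the lowest-value next state (for O) reproduces the model described above.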

If we run these randomized games enough times (I ran ten million iterations), every move in every game state has most likely been made, which means that the model is able to give a meaningful evaluation for any game state. However, there is one major problem with this approach, in that the model only indicates who is favored when they make a random move, not when they make the best move. To illustrate this, let’s examine the following game state:

(O’s turn)

Here, player O has two options: they can win the game by putting their O on the bottom center square, or lose the game by putting it on the right center square. Any seasoned tic-tac-toe player would make the right move in this scenario, and win the game. However, since the model trains on random moves, it thinks that player O will win half the time and lose half the time. Thus, to the model, this game state is not favorable to either player, when in reality it is absolutely favored towards O. 

During my first meeting with Spiros and Evert, they pointed out this flaw in my model. Evert suggested that I study up on something called a minimax algorithm, which circumvents this flaw, and apply it to tic-tac-toe. This set me on the next step of my journey.

Enter Minimax

The content of this section takes inspiration from this article.

In the minimax algorithm, the two players are known as the ‘maximizer’ and the ‘minimizer’. In the case of tic-tac-toe, X would be the maximizer and O the minimizer. The maximizer’s goal is to maximize their score, while the minimizer’s goal is to minimize their score. In tic-tac-toe, the minimax algorithm is implemented so that a win by X is a score of +1, a win by O is a score of -1, and a tie is simply 0. So X, seeking to maximize their score, would want to win, which makes sense.

Now, if X wanted to maximize their score through some move, they would have to consider the countermove of O, who would try to minimize the score. But before O makes their move, they would have to consider X’s next move. This creates a sort of back-and-forth, recursive dynamic in the minimax algorithm. In order for either player to make the best move, they would have to go through all possible moves they can make, and all possible moves their opponent can make after that, and so on and so forth. Here is a relatively simple example of the minimax algorithm at work:

Let’s start from the top. X has three possible moves they can make, and evaluates each of them. 

In the leftmost branch, the result is either -1 or 0, but which is the real score? Well, we expect O to make their best move, and since they are trying to minimize the score, we expect them to choose the ‘-1’ case. So we can say that this move results in a score of -1. 

In the middle branch, the result is either 1 or 0, and, following the same reasoning as before, O chooses the move corresponding to the minimal score, resulting in a score of 0.

Finally, the last branch results in X winning, so the score is +1.

Now, X can finally choose their best move, and in the interest of maximizing the score, places their X on the bottom right square. Intuitively, this makes sense because it was the only move that wins the game for X outright.
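
Here’s a compact implementation of that recursion for ordinary tic-tac-toe (a sketch of my own, with boards encoded as nine-character strings; the full game tree is small enough to search exhaustively):

```python
def winner(board):
    """Return 'X' or 'O' for three in a row on a flattened 3x3 board, else None."""
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8),
             (0,4,8), (2,4,6)]
    for a, b, c in lines:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return the game's score under optimal play:
    +1 if X (the maximizer) wins, -1 if O (the minimizer) wins, 0 for a tie."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0
    scores = []
    for i in range(9):
        if board[i] == '.':
            child = board[:i] + player + board[i + 1:]
            scores.append(minimax(child, 'O' if player == 'X' else 'X'))
    return max(scores) if player == 'X' else min(scores)

print(minimax('.' * 9, 'X'))  # 0: perfect play ties ordinary tic-tac-toe
```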

Great, but what would a minimax algorithm look like in Quantum TiqTaqToe?

Enter Expecti-Minimax

Expectiminimax contains the same core idea as minimax, but something interesting happens when the game board collapses. The algorithm can’t know for sure what the board will look like after collapse, so all it can do is calculate an expected value of the result (hence the name). Let’s look at an example:

Here, collapse occurs, and one branch (top) results in a tie, while the other (bottom) results in O winning. Since a tie is equal to 0 and an O win is equal to -1, the algorithm treats the score as \frac{0 + (-1)}{2} = -\frac{1}{2}.

Note: the sum is divided by two because both outcomes have a ½ probability of occurring.
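
In code, the only change from minimax is a chance node that averages its children’s values instead of taking a maximum or minimum. Here is a runnable miniature, with the game tree supplied explicitly (the node encoding is my own toy, not the actual game logic):

```python
# A game tree as nested tuples: ('max'/'min'/'chance', [children]) or ('leaf', score).
# A chance node averages its children, since each collapse outcome occurs
# with probability 1/2; 'max'/'min' nodes pick X's/O's best move.
def value(node):
    kind, payload = node
    if kind == 'leaf':
        return payload
    vals = [value(child) for child in payload]
    if kind == 'chance':
        return sum(vals) / len(vals)
    return max(vals) if kind == 'max' else min(vals)

# The collapse above: one outcome ties (0), the other hands O the win (-1).
collapse_node = ('chance', [('leaf', 0), ('leaf', -1)])
print(value(collapse_node))  # -0.5
```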

Solving the Game

Using the expectiminimax algorithm, I effectively ‘solved’ the minimal and moderate versions of Quantum TiqTaqToe. However, even though the algorithm will always show the best move, the outcome from game to game might not be the same, due to the inherent element of randomness. The most interesting of all my discoveries was probably the first move that the algorithm suggests for X, which I was able to make sense of both intuitively and logically. I challenge you all to find it! (Hint: it is the same for both the minimal and moderate versions.)

It turns out that when X plays optimally, they will always win the minimal version no matter what O plays. Meanwhile, in the moderate version, X will win most of the time, but not all the time. The probability distribution is as follows:

  (Another challenge: why are the denominators powers of two?)

Having satisfied my curiosity (for now), I’m looking forward to creating a new game of my own: 4 by 4 quantum tic-tac-toe. Currently, I am working on an algorithm that will give the best move, but since a 4×4 board is almost two times larger than a 3×3 board, the computational runtime of an expectiminimax algorithm would be far too large. As such, I am exploring the use of heuristics, which is sort of what the human mind uses to approach a game like tic-tac-toe. Because of this reliance on heuristics, there is no longer a guarantee that the algorithm will always make the best move, making this new adventure all the more mysterious and captivating. 

Astrobiology meets quantum computation?

The origin of life appears to share little with quantum computation, apart from the difficulty of achieving it and its potential for clickbait. Yet similar notions of complexity have recently garnered attention in both fields. Each topic’s researchers expect only special systems to generate high values of such complexity, or complexity at high rates: organisms, in one community, and quantum computers (and perhaps black holes), in the other. 

Each community appears fairly unaware of its counterpart. This article is intended to introduce the two. Below, I review assembly theory from origin-of-life studies, followed by quantum complexity. I’ll then compare and contrast the two concepts. Finally, I’ll suggest that origin-of-life scientists can quantize assembly theory using quantum complexity. The idea is a bit crazy, but, well, so what?

Assembly theory in origin-of-life studies

Imagine discovering evidence of extraterrestrial life. How could you tell that you’d found it? You’d have detected a bunch of matter—a bunch of particles, perhaps molecules. What about those particles could evidence life?

This question motivated Sara Imari Walker and Lee Cronin to develop assembly theory. (Most of my assembly-theory knowledge comes from Sara, about whom I wrote this blog post years ago and with whom I share a mentor.) Assembly theory governs physical objects, from proteins to self-driving cars. 

Imagine assembling a protein from its constituent atoms. First, you’d bind two atoms together. Then, you might bind another two atoms together. Eventually, you’d bind two pairs together. Your sequence of steps would form an algorithm for assembling the protein. Many algorithms can generate the same protein. One algorithm involves the fewest steps. That number of steps is called the protein’s assembly number.
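
As a toy illustration, here’s a brute-force assembly index for strings: single characters serve as the basic building blocks, concatenation as the joining operation, and anything already built can be reused. (Real assembly theory concerns molecules and bonds; this exhaustive search is practical only for very short strings.)

```python
def assembly_index(target):
    """Fewest joining steps needed to build `target` from single characters,
    reusing any object built along the way."""
    substrings = {target[i:j]
                  for i in range(len(target))
                  for j in range(i + 1, len(target) + 1)}
    best = len(target) - 1            # upper bound: join one character at a time

    def search(pool, steps):
        nonlocal best
        if steps >= best:             # can't beat the best pathway found so far
            return
        if target in pool:
            best = steps
            return
        for a in pool:
            for b in pool:
                joined = a + b
                # Only pieces of the target can appear in a minimal pathway.
                if joined in substrings and joined not in pool:
                    search(pool | {joined}, steps + 1)

    search(frozenset(target), 0)
    return best

print(assembly_index("abcabc"))  # 3 steps: a+b, ab+c, abc+abc
```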

Different natural processes tend to create objects that have different assembly numbers. Stars form low-assembly-number objects by fusing two hydrogen atoms together into helium. Similarly, random processes have high probabilities of forming low-assembly-number objects. For example, geological upheavals can bring a shard of iron near a lodestone. The iron will stick to the magnetized stone, forming a two-component object.

My laptop has an enormous assembly number. Why can such an object exist? Because of information, Sara and Lee emphasize. Human beings amassed information about materials science, Boolean logic, the principles of engineering, and more. That information—which exists only because organisms exist—helped engender my laptop.

If any object has a high enough assembly number, Sara and Lee posit, that object evidences life. Absent life, natural processes have too low a probability of randomly throwing together molecules into the shape of a computer. How high is “high enough”? Approximately fifteen, experiments by Lee’s group suggest. (Why do those experiments point to the number fifteen? Sara’s group is working on a theory for predicting the number.)

In summary, assembly number quantifies complexity in origin-of-life studies, according to Sara and Lee. The researchers propose that only living beings create high-assembly-number objects.

Quantum complexity in quantum computation

Quantum complexity defines a stage in the equilibration of many-particle quantum systems. Consider a clump of N quantum particles isolated from its environment. Suppose that the clump begins in a pure quantum state | \psi(0) \rangle at time t = 0. The particles will interact, evolving the clump’s state into | \psi(t) \rangle at a later time t.

Quantum many-body equilibration is more complicated than the equilibration undergone by your afternoon pick-me-up as it cools.

The interactions will equilibrate the clump internally. One stage of equilibration centers on local observables O. They’ll come to have expectation values \langle \psi(t) | O | \psi(t) \rangle approximately equal to thermal expectation values {\rm Tr} ( O \, \rho_{\rm th} ), for a thermal state \rho_{\rm th} of the clump. During another stage of equilibration, the particles correlate through many-body entanglement. 

The longest known stage centers on the quantum complexity of | \psi(t) \rangle. The quantum complexity is the minimal number of basic operations needed to prepare | \psi(t) \rangle from a simple initial state. We can define “basic operations” in many ways. Examples include quantum logic gates that act on two particles. Another example is an evolution for one time step under a Hamiltonian that couples together at most k particles, for some k independent of N. Similarly, we can define “a simple initial state” in many ways. We could count as simple only the N-fold tensor product | 0 \rangle^{\otimes N} of our favorite single-particle state | 0 \rangle. Or we could call any N-fold tensor product simple, or any state that contains at-most-two-body entanglement, and so on. These choices don’t affect the quantum complexity’s qualitative behavior, according to string theorists Adam Brown and Lenny Susskind.

How quickly can the quantum complexity of | \psi(t) \rangle grow? Fast growth stems from many-body interactions, long-range interactions, and random coherent evolutions. (Random unitary circuits exemplify random coherent evolutions: each gate is chosen according to the Haar measure, which we can view roughly as uniformly random.) At most, quantum complexity can grow linearly in time. Random unitary circuits achieve this rate. Black holes may; they scramble information quickly. The greatest possible complexity of any N-particle state scales exponentially in N, according to a counting argument.
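
That counting argument runs roughly as follows (a heuristic sketch, with constants suppressed). Circuits containing G gates, drawn from a finite set of g possible gates applied to pairs of the N qubits, number at most (g N^2)^G. Meanwhile, approximating every N-qubit pure state to within a small distance \epsilon requires about (1/\epsilon)^{2 \cdot 2^N} distinct circuits, one per element of an \epsilon-net of the state space. Reaching nearly every state therefore demands

(g N^2)^G \gtrsim (1/\epsilon)^{2 \cdot 2^N}, \quad {\rm i.e.,} \quad G \gtrsim \frac{2^{N+1} \, \log(1/\epsilon)}{\log(g N^2)}.

So the gate count needed, and hence the complexity of a typical state, grows exponentially with N.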

A highly complex state | \psi(t) \rangle looks simple from one perspective and complicated from another. Human scientists can easily measure only local observables O. Such observables’ expectation values \langle \psi(t) | O | \psi(t) \rangle  tend to look thermal in highly complex states, \langle \psi(t) | O | \psi(t) \rangle \approx {\rm Tr} ( O \, \rho_{\rm th} ), as implied above. The thermal state has the greatest von Neumann entropy, - {\rm Tr} ( \rho \log \rho), of any quantum state \rho that obeys the same linear constraints as | \psi(t) \rangle (such as having the same energy expectation value). Probed through simple, local observables O, highly complex states look highly entropic—highly random—similarly to a flipped coin.

Yet complex states differ from flipped coins significantly, as revealed by subtler analyses. An example underlies the quantum-supremacy experiment published by Google’s quantum-computing group in 2019. Experimentalists initialized 53 qubits (quantum two-level systems) in a tensor product. The state underwent many gates, which prepared a highly complex state. Then, the experimentalists measured the z-component \sigma_z of each qubit’s spin, randomly obtaining a -1 or a 1. One trial yielded a 53-bit string. The experimentalists repeated this process many times, using the same gates in each trial. From all the trials’ bit strings, the group inferred the probability p(s) of obtaining a given string s in the next trial. The distribution \{ p(s) \} resembles the uniformly random distribution…but differs from it subtly, as revealed by a cross-entropy analysis. Classical computers can’t easily generate \{ p(s) \}; hence the Google group’s claim to have achieved quantum supremacy/advantage. Quantum complexity differs from simple randomness; that difference is difficult to detect; and the difference can evidence quantum computers’ power.
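
A toy version of the cross-entropy analysis, in its linear form (the variant of the benchmark that Google’s group reported): average the ideal probabilities of the sampled bitstrings and rescale. My code below assumes the ideal distribution is known exactly, which real experiments must instead compute classically:

```python
import numpy as np

def linear_xeb(sampled_probs, num_qubits):
    """Linear cross-entropy benchmark fidelity:
    F = 2^n * <p_ideal(s)> - 1, averaged over the sampled bitstrings s.
    F ~ 0 for uniformly random strings; F ~ 1 for ideal sampling."""
    return 2**num_qubits * np.mean(sampled_probs) - 1

n = 12
dim = 2**n
rng = np.random.default_rng(0)

# Toy "ideal" output distribution: Porter-Thomas-like (exponentially distributed).
p_ideal = rng.exponential(size=dim)
p_ideal /= p_ideal.sum()

# Case 1: sample bitstrings from the ideal distribution (a perfect device).
samples = rng.choice(dim, size=50_000, p=p_ideal)
print(f"ideal sampler:   F = {linear_xeb(p_ideal[samples], n):.3f}")   # close to 1

# Case 2: sample uniformly (what a flipped coin would do).
samples = rng.choice(dim, size=50_000)
print(f"uniform sampler: F = {linear_xeb(p_ideal[samples], n):.3f}")   # close to 0
```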

A fridge that holds one of Google’s quantum computers.

Comparison and contrast

Assembly number and quantum complexity resemble each other as follows:

  1. Each function quantifies the fewest basic operations needed to prepare something.
  2. Only special systems (organisms) can generate high assembly numbers, according to Sara and Lee. Similarly, only special systems (such as quantum computers and perhaps black holes) can generate high complexity quickly, quantum physicists expect.
  3. Assembly number may distinguish products of life from products of abiotic systems. Similarly, quantum complexity helps distinguish quantum computers’ computational power from classical computers’.
  4. High-assembly-number objects are highly structured (think of my laptop). Similarly, high-complexity quantum states are highly structured in the sense of having much many-body entanglement.
  5. Organisms generate high assembly numbers, using information. Similarly, using information, organisms have created quantum computers, which can generate quantum complexity quickly.

Assembly number and quantum complexity differ as follows:

  1. Classical objects have assembly numbers, whereas quantum states have quantum complexities.
  2. In the absence of life, random natural processes have low probabilities of producing high-assembly-number objects. That is, randomness appears to keep assembly numbers low. In contrast, randomness can help quantum complexity grow quickly.
  3. Highly complex quantum states look very random, according to simple, local probes. High-assembly-number objects do not.
  4. Only organisms generate high assembly numbers, according to Sara and Lee. In contrast, abiotic black holes may generate quantum complexity quickly.

Another feature shared by assembly-number studies and quantum computation merits its own paragraph: the importance of robustness. Suppose that multiple copies of a high-assembly-number (or moderate-assembly-number) object exist. Not only does my laptop exist, for example, but so do many other laptops. To Sara, such multiplicity signals the existence of some stable mechanism for creating that object. The multiplicity may provide extra evidence for life (including life that’s discovered manufacturing), as opposed to an unlikely sequence of random forces. Similarly, quantum computing—the preparation of highly complex states—requires stability. Decoherence threatens quantum states, necessitating quantum error correction. Quantum error correction differs from Sara’s stable production mechanism, but both evidence the importance of robustness to their respective fields.

A modest proposal

One can generalize assembly number to quantum states, using quantum complexity. Imagine finding a clump of atoms while searching for extraterrestrial life. The atoms need not have formed molecules, so the clump can have a low classical assembly number. However, the clump can be in a highly complex quantum state. We could detect the state’s complexity only (as far as I know) using many copies of the state, so imagine finding many clumps of atoms. Preparing highly complex quantum states requires special conditions, such as a quantum computer. The clump might therefore evidence organisms who’ve discovered quantum physics. Using quantum complexity, one might extend the assembly number to identify quantum states that may evidence life. However, quantum complexity, or a high rate of complexity generation, alone may not evidence life—for example, if achievable by black holes. Fortunately, a black hole seems unlikely to generate many identical copies of a highly complex quantum state. So we seem to have a low probability of mistakenly attributing a highly complex quantum state, sourced by a black hole, to organisms (atop our low probability of detecting any complex quantum state prepared by anyone other than us).

Would I expect a quantum assembly number to greatly improve humanity’s search for extraterrestrial life? I’m no astrobiology expert (NASA videos notwithstanding), but I’d expect probably not. Still, astrobiology requires chemistry, which requires quantum physics. Quantum complexity seems likely to find applications in the assembly-number sphere. Besides, doesn’t juxtaposing the search for extraterrestrial life and the understanding of life’s origins with quantum computing sound like fun? And a sense of fun distinguishes certain living beings from inanimate matter about as straightforwardly as assembly number does.

With thanks to Jim Al-Khalili, Paul Davies, the From Physics to Life collaboration, and UCLA for hosting me at the workshop that spurred this article.