What distinguishes quantum thermodynamics from quantum statistical mechanics?

Yoram Alhassid asked the question at the end of my Yale Quantum Institute colloquium last February. I knew two facts about Yoram: (1) He belongs to Yale’s theoretical-physics faculty. (2) His PhD thesis’s title—“On the Information Theoretic Approach to Nuclear Reactions”—ranks among my three favorites.1 

Over the past few months, I’ve grown to know Yoram better. He had reason to ask about quantum statistical mechanics, because he’s up to his ears in the field. If forced to synopsize quantum statistical mechanics in five words, I’d say, “study of many-particle quantum systems.” Examples include gases of ultracold atoms. If given another five words, I’d add, “Calculate and use partition functions.” A partition function is a measure of the number of states, or configurations, accessible to the system. Calculate a system’s partition function, and you can calculate the system’s average energy, the average number of particles in the system, how the system responds to magnetic fields, etc.
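To make the last point concrete, here’s a minimal numerical sketch (my own toy example, not from the colloquium): given a list of energy levels, the partition function yields the thermal average energy. Boltzmann’s constant is set to 1.

```python
import numpy as np

# Toy two-level system with energies 0 and 1, in equilibrium at temperature T.
# Units chosen so that Boltzmann's constant k_B = 1.
def partition_function(energies, T):
    """Z = sum over states of exp(-E / T)."""
    return float(np.sum(np.exp(-np.asarray(energies) / T)))

def average_energy(energies, T):
    """Thermal average <E> = sum of E * exp(-E/T) / Z."""
    E = np.asarray(energies)
    weights = np.exp(-E / T) / partition_function(E, T)
    return float(np.sum(E * weights))

print(partition_function([0.0, 1.0], T=1.0))  # 1 + e^{-1} ≈ 1.3679
print(average_energy([0.0, 1.0], T=1.0))      # e^{-1}/(1 + e^{-1}) ≈ 0.2689
```

The same pattern extends to particle number and magnetic response: differentiate (the log of) Z with respect to the conjugate variable.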

Line in the sand

My colloquium concerned quantum thermodynamics, which I’ve blogged about many times. So I should have been able to distinguish quantum thermodynamics from its neighbors. But the answer I gave Yoram didn’t satisfy me. I mulled over the exchange for a few weeks, then emailed Yoram a 502-word essay. The exercise grew my appreciation for the question and my understanding of my field. 

An adaptation of the email appears below. The adaptation should suit readers who’ve majored in physics, but don’t worry if you haven’t. Bits of what distinguishes quantum thermodynamics from quantum statistical mechanics should come across to everyone—as should, I hope, the value of question-and-answer sessions:

One distinction is a return to the operational approach of 19th-century thermodynamics. Thermodynamicists such as Sadi Carnot wanted to know how effectively engines could operate. Their practical questions led to fundamental insights, such as the Carnot bound on an engine’s efficiency. Similarly, quantum thermodynamicists often ask, “How can this state serve as a resource in thermodynamic tasks?” This approach helps us identify what distinguishes quantum theory from classical mechanics.

For example, quantum thermodynamicists found an advantage in charging batteries via nonlocal operations. Another example is the “MBL-mobile” that I designed with collaborators. Many-body localization (MBL), we found, can enhance an engine’s reliability and scalability. 

Asking, “How can this state serve as a resource?” leads quantum thermodynamicists to design quantum engines, ratchets, batteries, etc. We analyze how these devices can outperform classical analogues, identifying which aspects of quantum theory power the outperformance. This question and these tasks contrast with the questions and tasks of many non-quantum-thermodynamicists who use statistical mechanics. They often calculate response functions and (e.g., ground-state) properties of Hamiltonians.

These goals of characterizing what nonclassicality is and what it can achieve in thermodynamic contexts resemble upshots of quantum computing and cryptography. As a 21st-century quantum information scientist, I understand what makes quantum theory quantum partially by understanding which problems quantum computers can solve efficiently and classical computers can’t. Similarly, I understand what makes quantum theory quantum partially by understanding how much more work you can extract from a singlet \frac{1}{ \sqrt{2} } ( | 0 1 \rangle - |1 0 \rangle ) (a maximally entangled state of two qubits) than from a product state in which the reduced states have the same forms as in the singlet, \frac{1}{2} ( | 0 \rangle \langle 0 | + | 1 \rangle \langle 1 | ).
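For concreteness, here’s a small numerical sketch (my own, with the singlet written in the computational basis): the singlet and the corresponding product state have identical reduced states, yet the global states differ sharply, and that global difference is what a work-extraction protocol can exploit.

```python
import numpy as np

# Singlet (|01> - |10>)/sqrt(2) in the basis ordering |00>, |01>, |10>, |11>.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho_singlet = np.outer(psi, psi.conj())

def reduced_state_A(rho):
    """Partial trace over qubit B of a two-qubit density matrix."""
    rho4 = rho.reshape(2, 2, 2, 2)          # indices: A, B, A', B'
    return np.trace(rho4, axis1=1, axis2=3)  # sum over B = B'

rho_A = reduced_state_A(rho_singlet)         # = I/2, maximally mixed
rho_product = np.kron(rho_A, rho_A)          # product state with the same marginals

def von_neumann_entropy(rho):
    """Entropy in bits; tiny eigenvalues dropped for numerical stability."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

print(von_neumann_entropy(rho_singlet))  # ~0: pure, globally ordered
print(von_neumann_entropy(rho_product))  # 2.0: no entanglement to exploit
```

Locally, the two states look identical (each qubit is maximally mixed); globally, the singlet is pure while the product state carries two bits of entropy.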

As quantum thermodynamics shares its operational approach with quantum information theory, quantum thermodynamicists use mathematical tools developed in quantum information theory. An example consists of generalized entropies. Entropies quantify the optimal efficiency with which we can perform information-processing and thermodynamic tasks, such as data compression and work extraction.

Most statistical-mechanics researchers use just the Shannon and von Neumann entropies, H_{\rm Sh} and H_{\rm vN}, and perhaps the occasional relative entropy. These entropies quantify optimal efficiencies in large-system limits, e.g., as the number of messages compressed approaches infinity and in the thermodynamic limit.

Other entropic quantities have been defined and explored over the past two decades, in quantum and classical information theory. These entropies quantify the optimal efficiencies with which tasks can be performed (i) if the number of systems processed or the number of trials is arbitrary, (ii) if the systems processed share correlations, (iii) in the presence of “quantum side information” (if the system being used as a resource is entangled with another system, to which an agent has access), or (iv) if you can tolerate some probability \varepsilon that you fail to accomplish your task. Instead of limiting ourselves to H_{\rm Sh} and H_{\rm vN}, we use also “\varepsilon-smoothed entropies,” Rényi divergences, hypothesis-testing entropies, conditional entropies, etc.
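As a toy illustration (mine, not drawn from any particular paper) of why one entropy doesn’t fit all tasks: the Shannon entropy and the min-entropy, the Rényi entropy of order infinity, can disagree about the same distribution, and the min-entropy is the one that typically governs one-shot tasks such as randomness extraction.

```python
import numpy as np

def shannon_entropy(p):
    """H_Sh = -sum p log2 p, in bits; zero-probability entries dropped."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def min_entropy(p):
    """H_min = -log2(max p): the Renyi entropy of order infinity."""
    return float(-np.log2(np.max(p)))

p = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(p))  # 1.75 bits — relevant in the many-copy limit
print(min_entropy(p))      # 1.0 bit — relevant for one-shot tasks
```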

Another hallmark of quantum thermodynamics is results’ generality and simplicity. Thermodynamics characterizes a system with a few macroscopic observables, such as temperature, volume, and particle number. The simplicity of some quantum thermodynamics served a chemist collaborator and me, as explained in the introduction of https://arxiv.org/abs/1811.06551.

Yoram’s question reminded me of one reason why, as an undergrad, I adored studying physics in a liberal-arts college. I ate dinner and took walks with students majoring in economics, German studies, and Middle Eastern languages. They described their challenges, which I analyzed with the physics mindset that I was acquiring. We then compared our approaches. Encountering other disciplines’ perspectives helped me recognize what tools I was developing as a budding physicist. How can we know our corner of the world without stepping outside it and viewing it as part of a landscape?


1The title epitomizes clarity and simplicity. And I have trouble resisting anything advertised as “the information-theoretic approach to such-and-such.”

Quantum information in quantum cognition

Some research topics, says conventional wisdom, a physics PhD student shouldn’t touch with an iron-tipped medieval lance: sinkholes in the foundations of quantum theory. Problems so hard, you’d have a snowball’s chance of achieving progress. Problems so obscure, you’d have a snowball’s chance of convincing anyone to care about progress. Whether quantum physics could influence cognition much.

Quantum physics influences cognition insofar as (i) quantum physics prevents atoms from imploding and (ii) implosion inhibits atoms from contributing to cognition. But most physicists believe that useful entanglement can’t survive in brains. Entanglement consists of correlations shareable by quantum systems and stronger than any achievable by classical systems. Useful entanglement dies quickly in hot, wet, random environments. 

Brains form such environments. Imagine injecting entangled molecules A and B into someone’s brain. Water, ions, and other particles would bombard the molecules. The higher the temperature, the heavier the bombardment. The bombardiers would entangle with the molecules via electric and magnetic fields. Each molecule can share only so much entanglement. The more A entangled with the environment, the less A could remain entangled with B. A would come to share a tiny amount of entanglement with each of many particles. Such tiny amounts couldn’t accomplish much. So quantum physics seems unlikely to affect cognition significantly.


Do not touch.

Yet my PhD advisor, John Preskill, encouraged me to consider whether the possibility interested me.

Try some completely different research, he said. Take a risk. If it doesn’t pan out, fine. People don’t expect much of grad students, anyway. Have you seen Matthew Fisher’s paper about quantum cognition? 

Matthew Fisher is a theoretical physicist at the University of California, Santa Barbara. He has plaudits out the wazoo, many for his work on superconductors. A few years ago, Matthew developed an interest in biochemistry. He knew that most physicists doubt whether quantum physics could affect cognition much. But suppose that it could, he thought. How could it? Matthew reverse-engineered a mechanism, in a paper published by Annals of Physics in 2015.

A PhD student shouldn’t touch such research with a ten-foot radio antenna, says conventional wisdom. But I trust John Preskill in a way in which I trust no one else on Earth.

I’ll look at the paper, I said.


Matthew proposed that quantum physics could influence cognition as follows. Experimentalists have performed quantum computation using one hot, wet, random system: that of nuclear magnetic resonance (NMR). NMR is the process that underlies magnetic resonance imaging (MRI), a technique used to image people’s brains. A common NMR system consists of high-temperature liquid molecules. The molecules consist of atoms whose nuclei have a quantum property called spin. The nuclear spins encode quantum information (QI).

Nuclear spins, Matthew reasoned, might store QI in our brains. He catalogued the threats that could damage the QI. Hydrogen ions, he concluded, would threaten the QI most. They could entangle with (decohere) the spins via dipole-dipole interactions.

How can a spin avoid the threats? First, by having a quantum number s = 1/2. Such a quantum number zeroes out the nuclei’s electric quadrupole moments. Electric-quadrupole interactions can’t decohere such spins. Which biologically prevalent atoms have s = 1/2 nuclear spins? Phosphorus and hydrogen. Hydrogen suffers from other vulnerabilities, so phosphorus nuclear spins store QI in Matthew’s story. The spins serve as qubits, or quantum bits.

How can a phosphorus spin avoid entangling with other spins via magnetic dipole-dipole interactions? Such interactions depend on the spins’ orientations relative to their positions. Suppose that the phosphorus occupies a small molecule that tumbles in biofluids. The nucleus’s position changes randomly. The interaction can average out over tumbles.

The molecule contains atoms other than phosphorus. Those atoms have nuclei whose spins can interact with the phosphorus spins, unless every threatening spin has a quantum number s = 0. Which biologically prevalent atoms have s = 0 nuclear spins? Oxygen and calcium. The phosphorus should therefore occupy a molecule with oxygen and calcium.

Matthew designed this molecule to block decoherence. Then, he found the molecule in the scientific literature. The structure, {\rm Ca}_9 ({\rm PO}_4)_6, is called a Posner cluster or a Posner molecule. I’ll call it a Posner, for short. Posners appear to exist in simulated biofluids, fluids created to mimic the fluids in us. Posners are believed to exist in us and might participate in bone formation. According to Matthew’s estimates, Posners might protect phosphorus nuclear spins for up to 1-10 days.


Posner molecule (image courtesy of Swift et al.)

How can Posners influence cognition? Matthew proposed the following story.

Adenosine triphosphate (ATP) is a molecule that fuels biochemical reactions. “Triphosphate” means “containing three phosphate ions.” Phosphate ({\rm PO}_4^{3-}) consists of one phosphorus atom and three oxygen atoms. Two of an ATP molecule’s phosphates can break off while remaining joined to each other.

The phosphate pair can drift until encountering an enzyme called pyrophosphatase. The enzyme can break the pair into independent phosphates. Matthew, with Leo Radzihovsky, conjectured that, as the pair breaks, the phosphorus nuclear spins are projected onto a singlet. This state, represented by \frac{1}{ \sqrt{2} } ( | \uparrow \downarrow \rangle - | \downarrow \uparrow \rangle ), is maximally entangled. 

Imagine many entangled phosphates in a biofluid. Six phosphates can join nine calcium ions to form a Posner molecule. The Posner can share up to six singlets with other Posners. Clouds of entangled Posners can form.

One clump of Posners can enter one neuron while another clump enters another neuron. The protein VGLUT, or BNPI, sits in cell membranes and has the potential to ferry Posners in. The neurons will share entanglement. Imagine two Posners, P and Q, approaching each other in a neuron N. Quantum-chemistry calculations suggest that the Posners can bind together. Suppose that P shares entanglement with a Posner P’ in a neuron N’, while Q shares entanglement with a Posner Q’ in N’. The entanglement, with the binding of P to Q, can raise the probability that P’ binds to Q’.

Bound-together Posners will move slowly, having to push much water out of the way. Hydrogen and magnesium ions can latch onto the slow molecules easily. The Posners’ negatively charged phosphates will attract the {\rm H}^+ and {\rm Mg}^{2+} as the phosphates attract the Posner’s {\rm Ca}^{2+}. The hydrogen and magnesium can dislodge the calcium, breaking apart the Posners. Calcium will flood neurons N and N’. Calcium floods a neuron’s axon terminal (the end of the neuron) when an electrical signal reaches the axon. The flood induces the neuron to release neurotransmitters. Neurotransmitters are chemicals that travel to the next neuron, inducing it to fire. So entanglement between phosphorus nuclear spins in Posner molecules might stimulate coordinated neuron firing.


Does Matthew’s story play out in the body? We can’t know till running experiments and analyzing the results. Experiments have begun: Last year, the Heising-Simons Foundation granted Matthew and collaborators $1.2 million to test the proposal.

Suppose that Matthew conjectures correctly, John challenged me, or correctly enough. Posner molecules store QI. Quantum systems can process information in ways in which classical systems, like laptops, can’t. How adroitly can Posners process QI?

I threw away my iron-tipped medieval lance in year five of my PhD. I left Caltech for a five-month fellowship, bent on returning with a paper with which to answer John. I did, and Annals of Physics published the paper this month.


I had the fortune to interest Elizabeth Crosson in the project. Elizabeth, now an assistant professor at the University of New Mexico, was working as a postdoc in John’s group. Both of us are theorists who specialize in QI theory. But our backgrounds, skills, and specialties differ. We complemented each other while sharing a doggedness that kept us emailing, GChatting, and Google-hangout-ing at all hours.

Elizabeth and I translated Matthew’s biochemistry into the mathematical language of QI theory. We dissected Matthew’s narrative into a sequence of biochemical steps. We ascertained how each step would transform the QI encoded in the phosphorus nuclei. Each transformation, we represented with a piece of math and with a circuit-diagram element. (Circuit-diagram elements are pictures strung together to form circuits that run algorithms.) The set of transformations, we called Posner operations.

Imagine that you can perform Posner operations, by preparing molecules, trying to bind them together, etc. What QI-processing tasks can you perform? Elizabeth and I found applications to quantum communication, quantum error detection, and quantum computation. Our results rest on the assumption—possibly inaccurate—that Matthew conjectures correctly. Furthermore, we characterized what Posners could achieve if controlled. Randomness, rather than control, would direct Posners in biofluids. But what can happen in principle offers a starting point.

First, QI can be teleported from one Posner to another, while suffering noise.1 This noisy teleportation doubles as superdense coding: A trit is a random variable that assumes one of three possible values. A bit is a random variable that assumes one of two possible values. You can teleport a trit from one Posner to another effectively, while transmitting a bit directly, with help from entanglement. 


Second, Matthew argued that Posners’ structures protect QI. Scientists have developed quantum error-correcting and -detecting codes to protect QI. Can Posners implement such codes, in our model? Yes: Elizabeth and I (with help from erstwhile Caltech postdoc Fernando Pastawski) developed a quantum error-detection code accessible to Posners. One Posner encodes a logical qutrit, the quantum version of a trit. The code detects any error that slams any of the Posner’s six qubits.

Third, how complicated an entangled state can Posner operations prepare? A powerful one, we found: Suppose that you can measure this state locally, such that earlier measurements’ outcomes affect which measurements you perform later. You can perform any quantum computation. That is, Posner operations can prepare a state that fuels universal measurement-based quantum computation.

Finally, Elizabeth and I quantified effects of entanglement on the rate at which Posners bind together. Imagine preparing two Posners, P and P’, that share entanglement only with other particles. If the Posners approach each other with the right orientation, they have a 33.6% chance of binding, in our model. Now, suppose that every qubit in P is maximally entangled with a qubit in P’. The binding probability can rise to 100%.


Elizabeth and I recast as a quantum circuit a biochemical process discussed in Matthew Fisher’s 2015 paper.

I feared that other scientists would pooh-pooh our work as crazy. To my surprise, enthusiasm flooded in. Colleagues cheered the risk on a challenge in an emerging field that perks up our ears. Besides, Elizabeth’s and my work is far from crazy. We don’t assert that quantum physics affects cognition. We imagine that Matthew conjectures correctly, acknowledging that he might not, and explore his proposal’s implications. Being neither biochemists nor experimentalists, we restrict our claims to QI theory.

Maybe Posners can’t protect coherence for long enough. Would an inaccuracy of Matthew’s beach our whale of research? No. Posners prompted us to propose ideas and questions within QI theory. For instance, our quantum circuits illustrate interactions (unitary gates, to experts) interspersed with measurements implemented by the binding of Posners. The circuits partially motivated a subfield that emerged last summer and is picking up speed: Consider interspersing random unitary gates with measurements. The unitaries tend to entangle qubits, whereas the measurements disentangle. Which influence wins? Does the system undergo a phase transition from “mostly entangled” to “mostly unentangled” at some measurement frequency? Researchers from Santa Barbara to Colorado; MIT; Oxford; Lancaster, UK; Berkeley; Stanford; and Princeton have taken up the challenge.

A physics PhD student, conventional wisdom says, shouldn’t touch quantum cognition with a Swiss guard’s halberd. I’m glad I reached out: I learned much, contributed to science, and had an adventure. Besides, if anyone disapproves of daring, I can blame John Preskill.


Annals of Physics published “Quantum information in the Posner model of quantum cognition” here. You can find the arXiv version here and can watch a talk about our paper here. 

1Experts: The noise arises because, if two Posners bind, they effectively undergo a measurement. This measurement transforms a subspace of the two-Posner Hilbert space as a coarse-grained Bell measurement. A Bell measurement yields one of four possible outcomes, or two bits. Discarding one of the bits amounts to coarse-graining the outcome. Quantum teleportation involves a Bell measurement. Coarse-graining the measurement introduces noise into the teleportation.

Long live Yale’s cemetery

Call me morbid, but, the moment I arrived at Yale, I couldn’t wait to visit the graveyard.

I visited campus last February, to present the Yale Quantum Institute (YQI) Colloquium. The YQI occupies a building whose stone exterior honors Yale’s Gothic architecture and whose sleekness defies it. The YQI has theory and experiments, seminars and colloquia, error-correcting codes and small-scale quantum computers, mugs and laptop bumper stickers. Those assets would have drawn me like honey. But my host, Steve Girvin, piled molasses, fudge, and cookie dough on top: “you should definitely reserve some time to go visit Josiah Willard Gibbs, Jr., Lars Onsager, and John Kirkwood in the Grove Street Cemetery.”


Gibbs, Onsager, and Kirkwood pioneered statistical mechanics. Statistical mechanics is the physics of many-particle systems, energy, efficiency, and entropy, a measure of disorder. Statistical mechanics helps us understand why time flows in only one direction. As a colleague reminded me at a conference about entropy, “You are young. But you will grow old and die.” That conference featured a field trip to a cemetery at the University of Cambridge. My next entropy-centric conference took place next to a cemetery in Banff, Canada. A quantum-thermodynamics conference included a tour of an Oxford graveyard.1 (That conference reincarnated in Santa Barbara last June, but I found no cemeteries nearby. No wonder I haven’t blogged about it.) Why shouldn’t a quantum-thermodynamics colloquium lead to the Grove Street Cemetery?


Home of the Yale Quantum Institute

The Grove Street Cemetery lies a few blocks from the YQI. I walked from the latter to the former on a morning whose sunshine spoke more of springtime than of February. At one entrance stood a gatehouse that looked older than many of the cemetery’s residents.

“Can you tell me where to find Josiah Willard Gibbs?” I asked the gatekeepers. They handed me a map, traced routes on it, and dispatched me from their lodge. Snow had fallen the previous evening but was losing its battle against the sunshine. I sloshed to a pathway labeled “Locust,” waded along Locust until passing Myrtle, and splashed back and forth until a name caught my eye: “Gibbs.” 


One entrance of the Grove Street Cemetery

Josiah Willard Gibbs stamped his name across statistical mechanics during the 1800s. Imagine a gas in a box, a system that illustrates much of statistical mechanics. Suppose that the gas exchanges heat with a temperature-T bath through the box’s walls. After exchanging heat for a long time, the gas reaches thermal equilibrium: Large-scale properties, such as the gas’s energy, quit changing much. Imagine measuring the gas’s energy. What probability does the measurement have of outputting E? The Gibbs distribution provides the answer, e^{ - E / (k_{\rm B} T) } / Z. The k_{\rm B} denotes Boltzmann’s constant, a fundamental constant of nature. The Z denotes a partition function, which ensures that the probabilities sum to one.
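The distribution is easy to play with numerically. In this sketch (my own, with k_{\rm B} set to 1), Z normalizes the probabilities, and raising the temperature flattens the distribution toward uniform:

```python
import numpy as np

# Gibbs probabilities p(E) = exp(-E / T) / Z for a few energy levels,
# in units where Boltzmann's constant k_B = 1.
def gibbs_probabilities(energies, T):
    weights = np.exp(-np.asarray(energies) / T)
    return weights / weights.sum()  # dividing by Z = weights.sum()

E = [0.0, 1.0, 2.0]
for T in (0.5, 5.0):
    p = gibbs_probabilities(E, T)
    print(T, np.round(p, 3), p.sum())  # probabilities sum to 1
```

At low temperature the ground state dominates; at high temperature the levels approach equal likelihood.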

Gibbs lent his name to more than probabilities. A function of probabilities, the Gibbs entropy, prefigured information theory. Entropy features in the Gibbs free energy, which dictates how much work certain thermodynamic systems can perform. A thermodynamic system has many properties, such as temperature and pressure. How many can you control? The answer follows from the Gibbs-Duhem relation. You’ll be able to follow the Gibbs walk, a Yale alumnus tells me, once construction on Yale’s physical-sciences complex ends.


Back I sloshed along Locust Lane. Turning left onto Myrtle, then right onto Cedar, led to a tree that sheltered two tombstones. They looked like buddies about to throw their arms around each other and smile for a photo. The lefthand tombstone reported four degrees, eight service positions, and three scientific honors of John Gamble Kirkwood. The righthand tombstone belonged to Lars Onsager:


[ . . . ]


Onsager extended thermodynamics beyond equilibrium. Imagine gently poking one property of a thermodynamic system. For example, recall the gas in a box. Imagine connecting one end of the box to a temperature-T bath and the other end to a bath at a slightly higher temperature, T' \gtrsim T. You’ll have poked the system’s temperature out of equilibrium. Heat will flow from the hotter bath to the colder bath. Particles carry the heat, energy of motion. Suppose that the particles have electric charges. An electric current will flow because of the temperature difference. Similarly, heat can flow because of an electric potential difference, or a pressure difference, and so on. You can cause a thermodynamic system’s elbow to itch, Onsager showed, by tickling the system’s ankle.
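In textbook form (my paraphrase of the standard statement, not wording from the original post), Onsager’s insight reads: near equilibrium, each flux responds linearly to every thermodynamic force, and the cross-coefficients match.

```latex
% Fluxes J_i (heat current, electric current, ...) respond linearly to
% thermodynamic forces X_j (temperature gradient, electric-potential
% gradient, ...):
\begin{align}
  J_i = \sum_j L_{ij} X_j ,
  \qquad
  L_{ij} = L_{ji} \quad \text{(Onsager reciprocity)} .
\end{align}
% The off-diagonal coefficient that lets a temperature difference drive an
% electric current equals the one that lets a potential difference drive a
% heat current: the ankle-tickle and the elbow-itch share a coefficient.
```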

To Onsager’s left lay John Kirkwood. Kirkwood had defined a quasiprobability distribution in 1933. Quasiprobabilities resemble probabilities but can assume negative and nonreal values. These behaviors can signal nonclassical physics, such as the ability to outperform classical computers. I generalized Kirkwood’s quasiprobability with collaborators. Our generalized quasiprobability describes quantum chaos, thermalization, and the spread of information through entanglement. Applying the quasiprobability across theory and experiments has occupied me for two-and-a-half years. Rarely has a tombstone pleased anyone as much as Kirkwood’s tickled me.

Kirkwood and Onsager

The Grove Street Cemetery opened my morning with a whiff of rosemary. The evening closed with a shot of adrenaline. I met with four undergrad women who were taking Steve Girvin’s course, an advanced introduction to physics. I should have left the conversation bled of energy: Since visiting the cemetery, I’d held six discussions with nine people. But energy can flow backward. The students asked how I’d come to postdoc at Harvard; I asked what they might major in. They described the research they hoped to explore; I explained how I’d constructed my research program. They asked if I’d had to work as hard as they to understand physics; I confessed that I might have had to work harder.

I left the YQI content, that night. Such a future deserves its past; and such a past, its future.


With thanks to Steve Girvin, Florian Carle, and the Yale Quantum Institute for their hospitality.

1Thermodynamics is a physical theory that emerges from statistical mechanics.

“A theorist I can actually talk with”

Haunted mansions have ghosts, football teams have mascots, and labs have in-house theorists. I found myself posing as a lab’s theorist at Caltech. The gig began when Oskar Painter, a Caltech experimentalist, emailed that he’d read my first paper about quantum chaos. Would I discuss the paper with the group?

Oskar’s lab was building superconducting qubits, tiny circuits in which charge can flow forever. The lab aimed to control scores of qubits, to develop a quantum many-body system. Entanglement—strong correlations that quantum systems can sustain and everyday systems can’t—would spread throughout the qubits. The system could realize phases of matter—like many-particle quantum chaos—off-limits to most materials.

How could Oskar’s lab characterize the entanglement, the entanglement’s spread, and the phases? Expert readers will suggest measuring an entropy, a gauge of how much information this part of the system holds about that part. But experimentalists have had trouble measuring entropies. Besides, one measurement can’t capture many-body entanglement; such entanglement involves too many intricacies. Oskar was searching for arrows to add to his lab’s measurement quiver.


In-house theorist?

I’d proposed a protocol for measuring a characterization of many-body entanglement, quantum chaos, and thermalization—a property called “the out-of-time-ordered correlator.” The protocol appealed to Oskar. But practicalities limit quantum many-body experiments: The more qubits your system contains, the more the system can contact its environment, like stray particles. The stronger the interactions, the more the environment entangles with the qubits, and the less the qubits entangle with each other. Quantum information leaks from the qubits into their surroundings; what happens in Vegas doesn’t stay in Vegas. Would imperfections mar my protocol?

I didn’t know. But I knew someone who could help us find out.

Justin Dressel works at Chapman University as a physics professor. He’s received the highest praise that I’ve heard any experimentalist give a theorist: “He’s a theorist I can actually talk to.” With other collaborators, Justin and I simplified my scheme for measuring out-of-time-ordered correlators. Justin knew what superconducting-qubit experimentalists could achieve, and he’d been helping them reach for more.

How about, I asked Justin, we simulate our protocol on a computer? We’d code up virtual superconducting qubits, program in interactions with the environment, run our measurement scheme, and assess the results’ noisiness. Justin had the tools to simulate the qubits, but he lacked the time. 

Know any postdocs or students who’d take an interest? I asked.


Chapman University’s former science center. Don’t you wish you spent winters in California?

José Raúl González Alonso has a smile like a welcome sign and a coffee cup glued to one hand. He was moving to Chapman University to work as a Grand Challenges Postdoctoral Fellow. José had built simulations, and he jumped at the chance to study quantum chaos.

José confirmed Oskar’s fear and other simulators’ findings: The environment threatens measurements of the out-of-time-ordered correlator. Suppose that you measure this correlator at each of many instants, you plot the correlator against time, and you see the correlator drop. If you’ve isolated your qubits from their environment, you can expect them to carry many-body entanglement. Golden. But the correlator can drop if, instead, the environment is harassing your qubits. You can misdiagnose leaking as many-body entanglement.


Our triumvirate identified a solution. Justin and I had discovered another characterization of quantum chaos and many-body entanglement: a quasiprobability, a quantum generalization of a probability.  

The quasiprobability contains more information about the entanglement than the out-of-time-ordered-correlator does. José simulated measurements of the quasiprobability. The quasiprobability, he found, behaves one way when the qubits entangle independently of their environment and behaves another way when the qubits leak. You can measure the quasiprobability to decide whether to trust your out-of-time-ordered-correlator measurement or to isolate your qubits better. The quasiprobability enables us to avoid false positives.

Physical Review Letters published our paper last month. Working with Justin and José deepened my appetite for translating between the abstract and the concrete, for proving abstractions as a theorist’s theorist and realizing them experimentally as a lab’s theorist. Maybe, someday, I’ll earn the tag “a theorist I can actually talk with” from an experimentalist. For now, at least I serve better than a football-team mascot.

Humans can intuit quantum physics.

One evening this January, audience members packed into a lecture hall in MIT’s physics building. Undergraduates, members of the public, faculty members, and other scholars came to watch a film premiere and a panel discussion. NOVA had produced the film, “Einstein’s Quantum Riddle,” which stars entanglement. Entanglement is a relationship between quantum systems such as electrons. Measuring two entangled electrons yields two outcomes, analogous to the numbers that face upward after you roll two dice. The quantum measurements’ outcomes can exhibit correlations stronger than any measurements of any classical, or nonquantum, systems can. Which die faces point upward can share only so much correlation, even if the dice hit each other.
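The “only so much correlation” for dice, versus more for entangled electrons, can be made quantitative with the CHSH quantity used in Bell tests. Here’s a short numerical sketch (my own; the measurement settings are the standard optimal ones): classical strategies satisfy |S| ≤ 2, while the singlet reaches 2√2.

```python
import numpy as np

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

X = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli X
Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli Z

def correlator(A, B):
    """Expectation value <psi| A tensor B |psi>."""
    return float(np.real(psi.conj() @ np.kron(A, B) @ psi))

# Standard CHSH-optimal settings: Alice measures Z or X; Bob measures
# along directions rotated 45 degrees between them.
A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

S = (correlator(A0, B0) + correlator(A0, B1)
     + correlator(A1, B0) - correlator(A1, B1))
print(abs(S))  # 2*sqrt(2) ≈ 2.828, exceeding the classical bound of 2
```

No assignment of pre-existing outcomes to the dice-like classical variables can push |S| past 2; the singlet’s violation is what a Bell test certifies.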

“Einstein’s Quantum Riddle”

Dice feature in the film’s explanations of entanglement. So does a variation on the shell game, in which one hides a ball under one of three cups, shuffles the cups, and challenges viewers to guess which cup is hiding the ball. The film derives its drama from the Cosmic Bell test. Bell tests are experiments crafted to show that classical physics can’t describe entanglement. Scientists recently enhanced Bell tests using light from quasars—ancient, bright, faraway galaxies. Mix astrophysics with quantum physics, and an edgy, pulsing soundtrack follows.

The Cosmic Bell test grew from a proposal by physicists at MIT and the University of Chicago. The coauthors include David Kaiser, a historian of science and a physicist on MIT’s faculty. Dave co-organized the premiere and the panel discussion that followed. The panel featured Dave; Paola Cappellaro, an MIT quantum experimentalist; Alan Guth, an MIT cosmologist who contributed to the Bell test; Calvin Leung, an MIT PhD student who contributed; Chris Schmidt, the film’s producer; and me. Brindha Muniappan, the Director of Education and Public Programs at the MIT Museum, moderated the discussion.


I like to think that the other panelists were laughing with me.

Brindha asked what challenges I face when explaining quantum physics, such as on this blog. Quantum theory wears the labels “weird,” “counterintuitive,” and “bizarre” in journalism, interviews, blogs, and films. But the thorn in my communicational side reflects quantum “weirdness” less than it reflects humanity’s self-limitation: Many people believe that we can’t grasp quantum physics. They shut down before asking me to explain.

Examples include a friend and Quantum Frontiers follower who asks, year after year, for books about quantum physics. I suggest literature (much of it by Dave Kaiser); he reads some, and we discuss his impressions. He’s learning, he harbors enough curiosity to have maintained this routine for years, and he has technical experience as a programmer. But he’s demurred, several times, along the lines of “But…I don’t know. I don’t think I’ll ever understand it. Humans can’t understand quantum physics, can we? It’s too weird.”

Quantum physics defies many expectations sourced from classical physics. Classical physics governs how basketballs arch, how paint dries, how sunlight slants through your window, and other everyday experiences. Yet we can gain intuition about quantum physics. If we couldn’t, how could we solve problems and accomplish research? Physicists often begin solving problems by trying to guess the answer from intuition. We reason our way toward a guess by stripping away complications, constructing toy models, and telling stories. We tell stories about particles hopping from site to site on lattices, particles trapped in wells, and arrows flipping upward and downward. These stories don’t capture all of quantum physics, but they capture the essentials. After grasping the essentials, we translate them into math, check how far our guesses lie from truth, and correct our understanding. Intuition about quantum physics forms the compass that guides problem solving.

Growing able to construct, use, and mathematize such stories requires work. You won’t come to understand quantum theory by watching NOVA films, though films can prime you for study. You can gain a facility with quantum theory through classes, problem sets, testing, research, seminars, and further processing. You might not have the time or inclination to. Even if you have, you might not come to understand why quantum theory describes our universe: Science can’t necessarily answer all “why” questions. But you can grasp what quantum theory implies about our universe.

People grasp physics arguably more exotic than quantum theory, without exciting the disbelief excited by a grasp of quantum theory. Consider the Voyager spacecraft launched in 1977. Voyager has survived solar winds and -452º F weather, imaged planets, and entered interstellar space. Classical physics—the physics of how basketballs arch—describes much of Voyager’s experience. But even if you’ve shot baskets, how much intuition do you have about interstellar space? I know physicists who claim to have more intuition about quantum physics than about much of classical physics. When astrophysicists discuss Voyager and interstellar space, moreover, listeners don’t fret that comprehension lies beyond them. No one need fret when quantum physicists discuss the electrons in us.

Fretting might not occur to future generations: Outreach teams are introducing kids to quantum physics through games and videos. Caltech’s Institute for Quantum Information and Matter has partnered with Google to produce qCraft, a quantum variation on Minecraft, and with the University of Southern California on quantum chess. In 2017, the American Physical Society’s largest annual conference featured a session called “Gamification and other Novel Approaches in Quantum Physics Outreach.” Such outreach exposes kids to quantum terminology and concepts early. Quantum theory becomes a playground to explore, rather than a source of intimidation. Players will grow up primed to think about quantum-mechanics courses not “Will my grade-point average survive this semester?” but “Ah, so this is the math under the hood of entanglement.”

qCraft

Sociology, not physics, restricts people to thinking quantum theory weird. But quantum theory defies classical expectations less than it could. Measurement outcomes could share correlations stronger than the correlations sourced by entanglement. How strong could the correlations grow? How else could physics depart farther from classical physics than quantum physics does? Imagine the worlds governed by all possible types of physics, called “generalized probabilistic theories” (GPTs). GPTs form a landscape in which quantum theory constitutes an island, on which classical physics constitutes a hill. Compared with the landscape’s outskirts, our quantum world looks tame.

GPTs fall under the research category of quantum foundations. Quantum foundations concerns why the math that describes quantum systems describes quantum systems, reformulations of quantum theory, how quantum theory differs from classical mechanics, how quantum theory could deviate but doesn’t, and what happens during measurements of quantum systems. Though questions about quantum foundations remain, they don’t block us from intuiting about quantum theory. A stable owner can sense when a horse has colic despite lacking a veterinary degree.

Moreover, quantum-foundations research has advanced over the past few decades. Collaborations and tools have helped: Theorists have been partnering with experimentalists, such as on the Cosmic Bell test and on studies of measurement. Information theory has engendered mathematical tools for quantifying entanglement and other quantum phenomena. Information theory has also firmed up an approach called “operationalism.” Operationalists emphasize preparation procedures, evolutions, and measurements. Focusing on actions and data concretizes arguments and facilitates comparisons with experiments. As quantum-foundations research has advanced, so have quantum information theory, quantum experiments, quantum technologies, and interdisciplinary cross-pollination. Twentieth-century quantum physicists didn’t imagine the community, perspectives, and knowledge that we’ve accrued. So don’t adopt 20th-century pessimism about understanding quantum theory. Einstein grasped much, but today’s scientific community grasps more. Richard Feynman said, “I think I can safely say that nobody understands quantum mechanics.” Feynman helped spur the quantum-information revolution; he died before its adolescence. Besides, Feynman understood plenty about quantum theory. Intuition jumps off the pages of his lecture notes and speeches.


Landscape beyond quantum theory

I’ve swum in oceans and lakes, studied how the moon generates tides, and canoed. But piloting a steamboat along the Mississippi would baffle me. I could learn, given time, instruction, and practice; so can you learn quantum theory. Don’t let “weirdness,” “bizarreness,” or “counterintuitiveness” intimidate you. Humans can intuit quantum physics.

Chasing Ed Jaynes’s ghost

You can’t escape him, working where information theory meets statistical mechanics.

Information theory concerns how efficiently we can encode information, compute, evade eavesdroppers, and communicate. Statistical mechanics is the physics of many particles. We can’t track every particle in a material, such as a sheet of glass. Instead, we reason about how the conglomerate likely behaves. Since we can’t know how all the particles behave, uncertainty blunts our predictions. Uncertainty also underlies information theory: You can think that your brother wished you a happy birthday on the phone. But noise corroded the signal; he might have wished you a madcap Earth Day.

Edwin Thompson Jaynes united the fields, in two 1957 papers entitled “Information theory and statistical mechanics.” I’ve cited the papers in at least two of mine. Those 1957 papers, and Jaynes’s philosophy, permeate pockets of quantum information theory, statistical mechanics, and biophysics. Say you know a little about some system, Jaynes wrote, like a gas’s average energy. Say you want to describe the gas’s state mathematically. Which state can you most reasonably ascribe to the gas? The state that, upon satisfying the average-energy constraint, reflects our ignorance of the rest of the gas’s properties. Information theorists quantify ignorance with a function called entropy, so we ascribe to the gas the largest-entropy state consistent with the constraint. Jaynes’s Principle of Maximum Entropy has spread from statistical mechanics to image processing and computer science and beyond. You can’t evade Ed Jaynes.
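
As a toy check of the Maximum Entropy Principle (the three-level system and all its numbers are my inventions, not Jaynes’s), the Gibbs distribution e^{-\beta E}/Z out-entropies any other distribution sharing its average energy:

```python
import numpy as np

# Invented three-level system: energies and inverse temperature are
# illustrative assumptions.
E = np.array([0.0, 1.0, 2.0])        # energy levels
beta = 0.7                           # inverse temperature

# Gibbs state: the maximum-entropy distribution with this average energy
p = np.exp(-beta * E)
p /= p.sum()
E_avg = p @ E

def entropy(q):
    return -np.sum(q * np.log(q))

# Perturb p without changing normalization or average energy:
# v is orthogonal to both the all-ones vector and E.
v = np.array([1.0, -2.0, 1.0])
for eps in (0.01, 0.05):
    q = p + eps * v
    assert np.isclose(q.sum(), 1.0) and np.isclose(q @ E, E_avg)
    assert entropy(q) < entropy(p)   # the Gibbs state wins
```

Any perturbation that respects the constraints lowers the entropy, which is what makes the Gibbs state the most reasonable ascription.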

I decided to turn the tables on him this December. I was visiting Washington University in St. Louis, where Jaynes worked until six years before his 1998 death. Haunted by Jaynes, I’d hunt down his ghost.


I began with my host, Kater Murch. Kater’s lab performs experiments with superconducting qubits. These quantum circuits sustain currents that can flow forever, without dissipating. I questioned Kater over hummus, the evening after I presented a seminar about quantum uncertainty and equilibration. Kater had arrived at WashU a decade-and-a-half after Jaynes’s passing but had kept his ears open.

Ed Jaynes, Kater said, consulted for a startup, decades ago. The company lacked the funds to pay him, so it offered him stock. That company was Varian, and Jaynes wound up with a pretty penny. He bought a mansion, across the street from campus, where he hosted the physics faculty and grad students every Friday. He’d play a grand piano, and guests would accompany him on instruments they’d bring. The department doubled as his family. 

The library kept a binder of Jaynes’s papers, which Kater had skimmed the previous year. What clarity shined through those papers! With a touch of pride, Kater added that he inhabited Jaynes’s former office. Or the office next door. He wasn’t certain.

I passed the hummus to a grad student of Kater’s. Do you hear stories about Jaynes around the department? I asked. I’d heard plenty about Feynman, as a PhD student at Caltech.

Not many, he answered. Just in conversations like this.

Later that evening, I exchanged emails with Kater. A contemporary of Jaynes’s had attended my seminar, he mentioned. Pity that I’d missed meeting the contemporary.

The following afternoon, I climbed to the physics library on the third floor of Crow Hall. Portraits of suited men greeted me. At the circulation desk, I asked for the binders of Jaynes’s papers.

Who? asked the student behind the granola bars advertised as “Free study snacks—help yourself!” 

E.T. Jaynes, I repeated. He worked here as a faculty member.

She turned to her computer. Can you spell that?

I obeyed while typing the name into the computer for patrons. The catalogue proffered several entries, one of which resembled my target. I wrote down the call number, then glanced at the notes over which the student was bending: “The harmonic oscillator.” An undergrad studying physics, I surmised. Maybe she’ll encounter Jaynes in a couple of years. 

I hiked upstairs, located the statistical-mechanics section, and ran a finger along the shelf. Hurt and Hermann, Itzykson and Drouffe, …Kadanoff and Baym. No Jaynes? I double-checked. No Jaynes. 

Library books

Upon descending the stairs, I queried the student at the circulation desk. She checked the catalogue entry, then ahhhed. You’d have to go to the main campus library for this, she said. Do you want directions? I declined, thanked her, and prepared to return to Kater’s lab. Calculations awaited me there; I’d have no time for the main library. 

As I reached the physics library’s door, a placard caught my eye. It appeared to list the men whose portraits lined the walls. Arthur Compton…I only glanced at the placard, but I didn’t notice any “Jaynes.”

Arthur Compton greeted me also from an engraving en route to Kater’s lab. Down the hall lay a narrow staircase on whose installation, according to Kater, Jaynes had insisted. Physicists would have, in the stairs’ absence, had to trek down the hall to access the third floor. Of course I wouldn’t photograph the staircase for a blog post. I might belong to the millennial generation, but I aim and click only with purpose. What, though, could I report in a blog post? 

That night, I googled “e.t. jaynes.” His Wikipedia page contained only introductory and “Notes” sections. A WashU website offered a biography and unpublished works. But another tidbit I’d heard in the department yielded no Google hits, at first glance. I forbore a second glance, navigated to my inbox, and emailed Kater about plans for the next day.

I’d almost given up on Jaynes when Kater responded. After agreeing to my suggestion, he reported feedback about my seminar: A fellow faculty member “thought that Ed Jaynes (his contemporary) would have been very pleased.” 

The email landed in my “Nice messages” folder within two shakes. 

Leaning back, I reevaluated my data about Jaynes. I’d unearthed little, and little surprise: According to the WashU website, Jaynes “would undoubtedly be uncomfortable with all of the attention being lavished on him now that he is dead.” I appreciate privacy and modesty. Nor does Jaynes need portraits or engravings. His legacy lives in ideas, in people. Faculty from across his department attended a seminar about equilibration and about how much we can know about quantum systems. Kater might or might not inhabit Jaynes’s office. But Kater wears a strip cut from Jaynes’s mantle: Kater’s lab probes the intersection of information theory and statistical mechanics. They’ve built a Maxwell demon, a device that uses information as a sort of fuel to perform thermodynamic work. 

I’ve blogged about legacies that last. Assyrian reliefs carved in alabaster survive for millennia, as do ideas. Jaynes’s ideas thrive; they live even in me.

Did I find Ed Jaynes’s ghost at WashU? I think I honored it, by pursuing calculations instead of pursuing his ghost further. I can’t say whether I found his ghost. But I gained enough information.


With thanks to Kater and to the Washington University Department of Physics for their hospitality.

Doctrine of the (measurement) mean

Don’t invite me to dinner the night before an academic year begins.

You’ll find me in an armchair or sitting on my bed, laptop on my lap, journaling. I initiated the tradition the night before beginning college. I take stock of the past year, my present state, and hopes for the coming year.

Much of the exercise fosters what my high-school physics teacher called “an attitude of gratitude”: I reflect on cities I’ve visited, projects firing me up, family events attended, and subfields sampled. Other paragraphs, I want off my chest: Have I pushed this collaborator too hard or that project too little? Miscommunicated or misunderstood? Strayed too far into heuristics or into mathematical formalisms?

If only the “too much” errors, I end up thinking, could cancel the “too little.”

In one quantum-information context, they can.


Imagine that you’ve fabricated the material that will topple steel and graphene; let’s call it a supermetatopoconsulator. How, you wonder, do charge, energy, and particles move through this material? You’ll learn by measuring correlators.

A correlator signals how much, if you poke this piece here, that piece there responds. At least, a two-point correlator does: \langle A(0) B(\tau) \rangle. A(0) represents the poke, which occurs at time t = 0. B(\tau) represents the observable measured there at t = \tau. The \langle . \rangle encapsulates which state \rho the system started in.
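
To make the notation concrete, here’s a minimal numerical sketch of a two-point correlator for a single qubit; the Hamiltonian, observables, and initial state below are arbitrary choices of mine, not tied to any particular material:

```python
import numpy as np
from scipy.linalg import expm

# <A(0) B(tau)> = Tr(rho A B(tau)), with B evolved in the Heisenberg
# picture: B(tau) = e^{iH tau} B e^{-iH tau}.
X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z
H = Z                                  # toy Hamiltonian
A = B = X                              # the "poke" and the later observable
rho = np.diag([1.0, 0.0]).astype(complex)       # initial state |0><0|

def correlator(tau):
    B_tau = expm(1j * H * tau) @ B @ expm(-1j * H * tau)
    return np.trace(rho @ A @ B_tau)

print(correlator(0.3))   # equals exp(-2j * 0.3) for these choices
```

For these choices the correlator is complex, as correlators generally can be; experiments infer the real and imaginary parts separately.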

Condensed-matter, quantum-optics, and particle experimentalists have measured two-point correlators for years. But consider the three-point correlator \langle A(0) B(\tau) C (\tau' ) \rangle, or a k-point \langle \underbrace{ A(0) \ldots M (\tau^{(k)}) }_k \rangle, for any k \geq 2. Higher-point correlators capture more-complicated relationships amongst events. Four-point1 correlators associated with multiple times signal quantum chaos and information scrambling. Quantum information scrambles upon spreading across a system through many-body entanglement. Could you measure arbitrary-point, arbitrary-time correlators?

New material

Supermetatopoconsulator (artist’s conception)

Yes, collaborators and I have written, using weak measurements. Weak measurements barely disturb the system being measured. But they extract little information about the measured system. So, to measure a correlator, you’d have to perform many trials. Moreover, your postdocs and students might have little experience with weak measurements. They might not want to learn the techniques required, to recalibrate their detectors, etc. Could you measure these correlators easily?

Yes, if the material consists of qubits,2 according to a paper I published with Justin Dressel, José Raúl González Alonso, and Mordecai Waegell this summer. You could build such a system from, e.g., superconducting circuits, trapped ions, or quantum dots.

You can measure \langle \underbrace{ A(0) B (\tau') C (\tau'') \ldots M (\tau^{(k)}) }_k \rangle, we show, by measuring A at t = 0, waiting until t = \tau', measuring B, and so on until measuring M at t = \tau^{(k)}. The t-values needn’t increase sequentially: \tau'' could be less than \tau', for instance. You’d have to effectively reverse the flow of time experienced by the qubits. Experimentalists can do so by, for example, flipping magnetic fields upside-down.

Each measurement requires an ancilla, or helper qubit. The ancilla acts as a detector that records the measurement’s outcome. Suppose that A is an observable of qubit 1 of the system of interest. You bring an ancilla to qubit 1, entangle the qubits (force them to interact), and look at the ancilla. (Experts: You perform a controlled rotation on the ancilla, conditioning on the system qubit.)

Each trial yields k measurement outcomes. They form a sequence S, such as (1, 1, 1, -1, -1, \ldots). You should compute a number \alpha, according to a formula we provide, from each measurement outcome and from the measurement’s settings. These numbers form a new sequence S' = \mathbf{(} \alpha_S(1), \alpha_S(2), \ldots, \alpha_S(k) \mathbf{)}. Why bother? So that you can force errors to cancel.

Multiply the \alpha’s together, \alpha_S(1) \times \alpha_S(2) \times \ldots \times \alpha_S(k), and average the product over the possible sequences S. This average equals the correlator \langle \underbrace{ A(0) \ldots M (\tau^{(k)}) }_k \rangle. Congratulations; you’ve characterized transport in your supermetatopoconsulator.


When measuring, you can couple the ancillas to the system weakly or strongly, disturbing the system a little or a lot. Wouldn’t strong measurements perturb the state \rho whose properties you hope to measure? Wouldn’t the perturbations by measurements one through \ell throw off measurement \ell + 1?

Yes. But the errors introduced by those perturbations cancel in the average. The reason stems from how we construct the \alpha’s: Our formula renders some error terms positive and some negative. The positive and negative error terms sum to zero.


The cancellation offers hope for my journal assessment: Errors can come out in the wash. Not of their own accord, not without forethought. But errors can cancel out in the wash—if you soap your \alpha’s with care.


1and six-point, eight-point, etc.

2Rather, each measured observable must square to the identity, e.g., A^2 = 1. Qubit Pauli operators satisfy this requirement.


With apologies to Aristotle.

I get knocked down…

“You’ll have to have a thick skin.”

Marcelo Gleiser, a college mentor of mine, emailed the warning. I’d sent a list of physics PhD programs and requested advice about which to attend. Marcelo’s and my department had fostered encouragement and consideration.

Suit up, Marcelo was saying.

Criticism fuels science, as Oxford physicist David Deutsch has written. We have choices about how we criticize. Some criticism styles reflect consideration for the criticized work’s creator. Tufts University philosopher Daniel Dennett has devised guidelines for “criticizing with kindness”:1

1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).

3. You should mention anything you have learned from your target.

4. Only then are you permitted to say so much as a word of rebuttal or criticism.

Scientists skip to step four often—when refereeing papers submitted to journals, when posing questions during seminars, when emailing collaborators, when colleagues sketch ideas at a blackboard. Why? Listening and criticizing require time, thought, and effort—three of a scientist’s most valuable resources. Should any scientist spend those resources on an idea of mine, s/he deserves my gratitude. Spending empathy atop time, thought, and effort can feel supererogatory. Nor do all scientists prioritize empathy and kindness. Others of us prioritize empathy but—as I have over the past five years—grow so used to its latency that we forget to demonstrate it.

Doing science requires facing not only criticism, but also “That doesn’t make sense,” “Who cares?” “Of course not,” and other morale boosters.

Doing science requires resilience.


So do measurements of quantum information (QI) scrambling. Scrambling is a subtle, late, quantum stage of equilibration2 in many-body systems. Example systems include chains of spins,3 such as in ultracold atoms, that interact with each other strongly. Exotic examples include black holes in anti-de Sitter space.4

Imagine whacking one side of a chain of interacting spins. Information about the whack will disseminate throughout the chain via entanglement.5 After a long interval (the scrambling time, t_*), spins across the system will share many-body entanglement. No measurement of any few, close-together spins can disclose much about the whack. Information will have scrambled across the system.

QI scrambling has the subtlety of an assassin treading a Persian carpet at midnight. Can we observe scrambling?


A Stanford team proposed a scheme for detecting scrambling using interferometry.6 Justin Dressel, Brian Swingle, and I proposed a scheme based on weak measurements, which refrain from disturbing the measured system much. Other teams have proposed alternatives.

Many schemes rely on effective time reversal: The experimentalist must perform the quantum analog of inverting particles’ momenta. One must negate the Hamiltonian \hat{H}, the observable that governs how the system evolves: \hat{H} \mapsto - \hat{H}.
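
As a sanity check on what perfect negation buys you: evolving under - \hat{H} for a time t exactly undoes evolution under \hat{H}. A toy single-qubit illustration (the Hamiltonian is an arbitrary Hermitian matrix I made up):

```python
import numpy as np
from scipy.linalg import expm

# Effective time reversal: evolving under -H undoes evolution under H.
H = np.array([[1.0, 0.5], [0.5, -1.0]])   # arbitrary Hermitian example
t = 1.7
U_fwd = expm(-1j * H * t)        # forward evolution
U_rev = expm(-1j * (-H) * t)     # evolution under the negated Hamiltonian
assert np.allclose(U_rev @ U_fwd, np.eye(2))   # back where we started
```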

At least, the experimentalist must try. The experimentalist will likely map \hat{H} to - \hat{H} + \varepsilon. The small error \varepsilon could wreak havoc: QI scrambling relates to chaos, exemplified by the butterfly effect. Tiny perturbations, such as the flap of a butterfly’s wings, can snowball in chaotic systems, as by generating tornadoes. Will the \varepsilon snowball, obscuring observations of scrambling?


It needn’t, Brian and I wrote in a recent paper. You can divide out much of the error until t_*.

You can detect scrambling by measuring an out-of-time-ordered correlator (OTOC), an object I’ve effused about elsewhere. Let’s denote the time-t correlator by F(t). You can infer an approximation \tilde{F}(t) to F(t) upon implementing an \varepsilon-ridden interferometry or weak-measurement protocol. Remove some steps from that protocol, Brian and I say. Infer a simpler, easier-to-measure object \tilde{F}_{\rm simple}(t). Divide the two measurement outcomes to approximate the OTOC:

F(t)  \approx \frac{ \tilde{F}(t) }{ \tilde{F}_{\rm simple}(t) }.
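
Here’s a cartoon of the divide-out-the-error idea, not the paper’s actual protocol: suppose the faulty time reversal multiplies the measured signal by an unknown factor, and suppose the simpler reference measurement picks up the same factor. The ratio then recovers F(t):

```python
import numpy as np

# Cartoon only: a shared multiplicative error factor is a simplifying
# assumption of mine; all functions below are invented for illustration.
t = np.linspace(0, 5, 50)
F = np.cos(2 * t) * np.exp(-0.1 * t)   # invented "true" correlator
err = np.exp(-0.4 * t)                 # invented error from H -> -H + eps
F_tilde = F * err                      # error-ridden measurement of F
F_simple = err                         # simpler object, same error factor
F_recovered = F_tilde / F_simple       # the division restores F
assert np.allclose(F_recovered, F)
```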

OTOC measurements exhibit resilience to error.


Physicists need resilience. Brian criticizes with such grace, he could serve as the poster child for Daniel Dennett’s guidelines. But not every scientist could. How can we withstand kindness-lite criticism?

By drawing confidence from what we’ve achieved, with help from mentors like Marcelo. As an undergrad, I couldn’t tell what about me—if anything—could serve as a rock on which to plant a foot. Mentors identified what I had too little experience to appreciate. You question what you don’t understand, they said. You assimilate perspectives from textbooks, lectures, practice problems, and past experiences. You scrutinize details while keeping an eye on the big picture. So don’t let so-and-so intimidate you.

I still lack my mentors’ experience, but I’ve imbibed a drop of their insight. I savor calculations that I nail, congratulate myself upon nullifying referees’ concerns, and celebrate the theorems I prove.

I’ve also created an email folder entitled “Nice messages.” In go “I loved your new paper; combining those topics was creative,” “Well done on the seminar; I’m now thinking of exploring that field,” and other rarities. The folder affords an umbrella when physics clouds gather.

Finally, I try to express appreciation of others’ work.7 Science thrives on criticism, but scientists do science. And scientists are human—undergrads, postdocs, senior researchers, and everyone else.

Doing science—and attempting to negate Hamiltonians—we get knocked down. But we can get up again.


Around the time Brian and I released “Resilience,” two other groups proposed related renormalizations. Check out their schemes here and here.

1Thanks to Sean Carroll for alerting me to this gem of Dennett’s.

2A system equilibrates as its large-scale properties, like energy, flatline.

3Angular-momentum-like quantum properties

4Certain space-times different from ours

5Correlations, shareable by quantum systems, stronger than any achievable by classical systems

6The cancellation (as by a crest of one wave and a trough of another) of components of a quantum state, or the addition of components (as two waves’ crests)

7Appreciation of specific qualities. “Nice job” can reflect a speaker’s belief but often reflects a desire to buoy a receiver whose work has few merits to elaborate on. I applaud that desire and recommend reinvesting it. “Nice job” carries little content, which evaporates under repetition. Specificity provides content: “Your idea is alluringly simple but could reverberate across multiple fields” has gristle.

What’s the worst that could happen?

The archaeologist Howard Carter discovered Tutankhamun’s burial site in 1922. No other Egyptian pharaoh’s tomb had survived mostly intact until the modern era. Gold and glass and faience, statues and pendants and chariots, had evaded looting. The discovery would revolutionize the world’s understanding of, and enthusiasm for, ancient Egypt.

First, the artifacts had to leave the tomb.

Tutankhamun lay in three coffins nested like matryoshka dolls. Carter describes the nesting in his book The Tomb of Tutankhamen. Lifting the middle coffin from the outer coffin raised his blood pressure:

Everything may seem to be going well until suddenly, in the crisis of the process, you hear a crack—little pieces of surface ornament fall. Your nerves are at an almost painful tension. What is happening? All available room in the narrow space is crowded by your men. What action is needed to avert a catastrophe?

In other words, “What’s the worst that could happen?”

Matryoshka dolls

Collaborators and I asked that question in a paper published last month. We had in mind less Egyptology than thermodynamics and information theory. But never mind the distinction; you’re reading Quantum Frontiers! Let’s mix the fields like flour and oil in a Biblical grain offering.

Carter’s team had trouble separating the coffins: Ancient Egyptian priests (presumably) had poured fluid atop the innermost, solid-gold coffin. The fluid had congealed into a brown gunk, gluing the gold coffin to the bottom of the middle coffin. Removing the gold coffin required work—thermodynamic work.

Work consists of “well-ordered” energy usable in tasks like levering coffins out of sarcophagi and motoring artifacts from Egypt’s Valley of the Kings toward Cairo. We can model the gunk as a spring, one end of which was fixed to the gold coffin and one end of which was fixed to the middle coffin. The work W required to stretch a spring depends on the spring’s stiffness (the gunk’s viscosity) and on the distance stretched through.
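
In the spring model, the work is the usual Hooke’s-law expression; the stiffness and stretch below are invented numbers for illustration:

```python
# Hooke's-law toy model of the gunk (numbers are illustrative assumptions)
k = 2.0    # spring stiffness, standing in for the gunk's grip (N/m)
x = 0.05   # distance stretched (m)
W = 0.5 * k * x**2   # work to stretch a spring from rest: W = (1/2) k x^2
print(W)   # ~0.0025 J
```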

W depends also on details: How many air molecules struck the gold coffin from above, opposing the team’s effort? How quickly did Carter’s team pull? Had the gunk above Tutankhamun’s nose settled into a hump or spread out? How about the gunk above Tutankhamun’s left eye socket? Such details barely impact the work required to open a 6.15-foot-long coffin. But air molecules would strongly impact W if Tutankhamun measured a few nanometers in length. So imagine Egyptian matryoshka dolls as long as stubs of DNA.


Imagine that Carter found one million sets of these matryoshka dolls. Lifting a given set’s innermost coffin would require an amount W of work that would vary from set of coffins to set of coffins. W would satisfy fluctuation relations, equalities I’ve blogged about many times.

Fluctuation relations resemble the Second Law of Thermodynamics, which illuminates why time flows in just one direction. But fluctuation relations imply more-precise predictions about W than the Second Law does.

Some predictions concern dissipated work: Carter’s team could avoid spending much work by opening the coffin infinitesimally slowly. Speeding up would heat the gunk, roil air molecules, and more. The heating and roiling would cost extra work, called dissipated work, denoted by W_{\rm diss}.

Suppose that Carter’s team has chosen a lid-opening speed v. Consider the greatest W_{\rm diss} that the team might have to waste on any nanoscale coffin; call it W_{\rm diss}^{\rm worst}. This worst-case dissipated work is proportional to each of three information-theoretic quantities, my coauthors and I proved.

For experts: Each information-theoretic quantity is an order-infinity Rényi divergence D_\infty ( X || Y). The Rényi divergences generalize the relative entropy D ( X || Y ). D quantifies how efficiently one can distinguish between probability distributions, or quantum states, X and Y on average. The average is over many runs of a guessing game.

Imagine the worst possible run, which offers the lowest odds of guessing correctly. D_\infty quantifies your likelihood of winning. We related W_{\rm diss}^{\rm worst} to a D_\infty between two statistical-mechanical phase-space distributions (when we described classical systems), to a D_\infty between two quantum states (when we described quantum systems), and to a D_\infty between two probability distributions over work quantities W (when we described systems quantum and classical).
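
For classical probability distributions, the order-infinity Rényi divergence has a one-line formula: D_\infty(X || Y) = \log \max_i x_i / y_i, the log of the worst-case likelihood ratio. A toy comparison with the ordinary relative entropy (the distributions are invented):

```python
import numpy as np

# Invented toy distributions for illustration.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

def D_infinity(p, q):
    """Order-infinity Renyi divergence: log of the worst-case ratio."""
    return np.log(np.max(p / q))

def D_relative(p, q):
    """Ordinary relative entropy D(p || q)."""
    return np.sum(p * np.log(p / q))

print(D_infinity(p, q))   # log(0.5/0.4) ~ 0.223
print(D_relative(p, q))   # ~ 0.025, smaller: D <= D_infinity always
```

The inequality D \leq D_\infty reflects that D_\infty grades the worst run of the guessing game rather than the average run.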


The worst case marks an extreme. How do the extremes consistent with physical law look? As though they’ve escaped from a mythologist’s daydream.

In an archaeologist’s worst case, arriving at home in the evening could lead to the following conversation:

“How was your day, honey?”

“The worst possible.”

“What happened?”

“I accidentally eviscerated a 3.5-thousand-year-old artifact—the most valuable, best-preserved, most information-rich, most lavishly wrought ancient Egyptian coffin that existed yesterday.”

Suppose that the archaeologist lived with a physicist. My group (guided by a high-energy physicist) realized that the conversation could continue as follows:

“And how was your day?”

“Also the worst possible.”

“What happened?”

“I created a black hole.”

General relativity and high-energy physics have begun breeding with quantum information and thermodynamics. The offspring bear extremes like few other systems imaginable. I wonder what our results would have to say about those offspring.


National Geographic reprinted Carter’s The Tomb of Tutankhamen in its “Adventure Classics” series. The series title fits Tomb as a mummy’s bandages fit the mummy. Carter’s narrative stretches from Egypt’s New Kingdom (of 3.5 thousand years ago) through the five-year hunt for the tomb (almost fruitless until the final season), to a water boy’s discovery of steps into the tomb, to the unsealing of the burial chamber, to the confrontation of Tutankhamun’s mummy.

Carter’s book guided me better than any audio guide could have at the California Science Center. The center is hosting the exhibition “King Tut: Treasures of the Golden Pharaoh.” After debuting in Los Angeles, the exhibition will tour the world. The tour showcases 150 artifacts from Tutankhamun’s tomb.

Those artifacts drove me to my desk—to my physics—as soon as I returned home from the museum. Tutankhamun’s tomb, Carter argues in his book, ranks amongst the 20th century’s most important scientific discoveries. I’d seen a smidgeon of the magnificence that Carter’s team—with perseverance, ingenuity, meticulousness, and buckets of sweat shed in Egypt’s heat—had discovered. I don’t expect to discover anything a tenth as magnificent. But how can a young scientist resist trying?

People say, “Prepare for the worst. Hope for the best.” I prefer “Calculate the worst. Hope and strive for a Tutankhamun.”

Outside exhibition

Postscript: Carter’s team failed to unglue the gold coffin by just “stretching” the gunky “spring.” The team resorted to heat, a thermodynamic quantity alternative to work: The team flipped the middle coffin upside-down above a heat lamp. The lamp raised the temperature to 932°F, melting the goo. The melting, with more work, caused the gold coffin to plop out of the middle coffin.

Catching up with the quantum-thermo crowd

You have four hours to tour Oxford University.

What will you visit? The Ashmolean Museum, home to da Vinci drawings, samurai armor, and Egyptian mummies? The Bodleian, one of Europe’s oldest libraries? Turf Tavern, where former president Bill Clinton reportedly “didn’t inhale” marijuana?

Felix Binder showed us a cemetery.

Of course he showed us a cemetery. We were at a thermodynamics conference.

The Fifth Quantum Thermodynamics Conference took place in the City of Dreaming Spires. Participants enthused about energy, information, engines, and the flow of time. About 160 scientists attended—roughly 60 more than attended the first conference, co-organizer Janet Anders estimated.


Weak measurements and quasiprobability distributions were trending. The news delighted me, Quantum Frontiers regulars won’t be surprised to hear.

Measurements disturb quantum systems, as early-20th-century physicist Werner Heisenberg intuited. Measure a system’s position strongly, and you forfeit your ability to predict the outcomes of future momentum measurements. Weak measurements don’t disturb the system much. In exchange, weak measurements provide little information about the system. But you can recoup information by performing a weak measurement in each of many trials, then processing the outcomes.

Strong measurements lead to probability distributions: Imagine preparing a particle in some quantum state, then measuring its position strongly, in each of many trials. From the outcomes, you can infer a probability distribution \{ p(x) \}, wherein p(x) denotes the probability that the next trial will yield position x.

Weak measurements lead analogously to quasiprobability distributions. Quasiprobabilities resemble probabilities but can misbehave: Probabilities are real numbers no less than zero. Quasiprobabilities can dip below zero and can assume nonreal values.

Do not disturb

What relevance have weak measurements and quasiprobabilities to quantum thermodynamics? Thermodynamics involves work and heat. Work is energy harnessed to perform useful tasks, like propelling a train from London to Oxford. Heat is energy that jiggles systems randomly.

Quantum properties obscure the line between work and heat. (Here’s an illustration for experts: Consider an isolated quantum system, such as a spin chain. Let H(t) denote the Hamiltonian, which evolves with the time t \in [0, t_f]. Consider preparing the system in an energy eigenstate | E_i(0) \rangle. This state has zero diagonal entropy: Measuring the energy yields E_i(0) deterministically. Consider tuning H(t), as by changing a magnetic field. This change constitutes work, we learn in electrodynamics class. But if H(t) changes quickly, the state can acquire weight on multiple energy eigenstates. The diagonal entropy rises. The system’s energetics have gained an unreliability characteristic of heat absorption. But the system has remained isolated from any heat bath. Work mimics heat.)

Quantum thermodynamicists have defined work in terms of a two-point measurement scheme: Initialize the quantum system, such as by letting heat flow between the system and a giant, fixed-temperature heat reservoir until the system equilibrates. Measure the system’s energy strongly, and call the outcome E_i. Isolate the system from the reservoir. Tune the Hamiltonian, performing the quantum equivalent of propelling the London train up a hill. Measure the energy, and call the outcome E_f.

Any change \Delta E in a system’s energy comes from heat Q and/or from work W, by the First Law of Thermodynamics, \Delta E = Q + W.  Our system hasn’t exchanged energy with any heat reservoir between the measurements. So the energy change consists of work: E_f - E_i =: W.


Imagine performing this protocol in each of many trials. Different trials will require different amounts W of work. Upon recording the amounts, you can infer a distribution \{ p(W) \}. p(W) denotes the probability that the next trial will require an amount W of work.
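The scheme above can be sketched exactly for a single qubit. The sketch below uses hypothetical Hamiltonians, with a sudden quench standing in for the Hamiltonian tuning; it enumerates the outcome pairs (E_i, E_f) rather than sampling trials.

```python
import numpy as np

# Two-point measurement scheme for a qubit: thermal initial state of H_i,
# sudden quench to H_f, strong energy measurements before and after.
beta = 1.0
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
Hi, Hf = sz, sz + sx   # hypothetical Hamiltonians

Ei, Vi = np.linalg.eigh(Hi)
Ef, Vf = np.linalg.eigh(Hf)

# Probability of the pair (i, f): thermal weight times transition probability.
p_i = np.exp(-beta * Ei) / np.exp(-beta * Ei).sum()
p_f_given_i = np.abs(Vf.conj().T @ Vi) ** 2   # |<E_f|E_i>|^2 under a sudden quench

works, probs = [], []
for i in range(2):
    for f in range(2):
        works.append(Ef[f] - Ei[i])           # W = E_f - E_i
        probs.append(p_i[i] * p_f_given_i[f, i])

for W, p in zip(works, probs):
    print(f"W = {W:+.3f}   p(W) = {p:.3f}")
```

The resulting \{ p(W) \} is a genuine probability distribution (nonnegative, summing to one), and it obeys the Jarzynski fluctuation relation by construction.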

Measuring the system’s energy disturbs the system, squashing some of its quantum properties. (The measurement eliminates coherences, relative to the energy eigenbasis, from the state.) Quantum properties star in quantum thermodynamics. So the two-point measurement scheme doesn’t satisfy everyone.

Enter weak measurements. They can provide information about the system’s energy without disturbing the system much. Work probability distributions \{ p(W) \} give way to quasiprobability distributions \{ \tilde{p}(W) \}.

So propose Solinas and Gasparinetti, in these papers. Other quantum thermodynamicists apply weak measurements and quasiprobabilities differently.2 I proposed applying them to characterize chaos, and the scrambling of quantum information in many-body systems, at the conference.3 Feel free to add your favorite applications to the “comments” section.


All the quantum ladies: The conference’s female participants gathered for dinner one conference night.

Wednesday afforded an afternoon for touring. Participants congregated at the college of conference co-organizer Felix Binder.4 His tour evoked, for me, the ghosts of thermo conferences past: One conference, at the University of Cambridge, had brought me to the grave of thermodynamicist Arthur Eddington. Another conference, about entropies in information theory, had convened near Canada’s Banff Cemetery. Felix’s tour began with St. Edmund Hall’s cemetery. Thermodynamics highlights equilibrium, a state in which large-scale properties—like temperature and pressure—remain constant. Some things never change.



With thanks to Felix, Janet, and the other coordinators for organizing the conference.

1Oxford derives its nickname from an elegy by Matthew Arnold. Happy National Poetry Month!


3Michele Campisi joined me in introducing out-of-time-ordered correlators (OTOCs) into the quantum-thermo conference: He, with coauthor John Goold, combined OTOCs with the two-point measurement scheme.

4Oxford University contains 38 colleges, the epicenters of undergraduates’ social, dining, and housing experiences. Graduate students and postdoctoral scholars affiliate with colleges, and senior fellows—faculty members—govern the colleges.