The Book of Mark

Mark Srednicki doesn’t look like a high priest. He’s a professor of physics at the University of California, Santa Barbara (UCSB); and you’ll sooner find him in khakis than in sacred vestments. Humor suits his round face better than channeling divine wrath would; and I’ve never heard him speak in tongues—although, when an idea excites him, his hands rise to shoulder height of their own accord, as though halfway toward a priestly blessing. Mark belongs less on a ziggurat than in front of a chalkboard. Nevertheless, he called himself a high priest.

Specifically, Mark jokingly called himself a high priest of the eigenstate thermalization hypothesis, a framework for understanding how quantum many-body systems thermalize internally. The eigenstate thermalization hypothesis has an unfortunate number of syllables, so I’ll call it the ETH. The ETH illuminates closed quantum many-body systems, such as a clump of N ultracold atoms. The clump can begin in a pure product state | \psi(0) \rangle, then evolve under a chaotic1 Hamiltonian H. The time-t state | \psi(t) \rangle will remain pure; its von Neumann entropy will always vanish. Yet entropy grows according to the second law of thermodynamics. Breaking the second law amounts almost to enacting a miracle, according to physicists. Does the clump of atoms deserve consideration for sainthood?

No—although the clump’s state remains pure, a small subsystem’s state does not. A subsystem consists of, for example, a few atoms. They’ll entangle with the other atoms, which serve as an effective environment. The entanglement will mix the few atoms’ state, whose von Neumann entropy will grow.

The ETH predicts this growth. The ETH is an ansatz about H and an operator O—say, an observable of the few-atom subsystem. We can represent O as a matrix relative to the energy eigenbasis. The matrix elements have a certain structure, if O and H satisfy the ETH. Suppose that the operators do and that H lacks degeneracies—that no two energy eigenvalues equal each other. We can prove that O thermalizes: Imagine measuring the expectation value \langle \psi(t) | O | \psi(t) \rangle at each of many instants t. Averaging over instants produces the time-averaged expectation value \overline{ \langle O \rangle_t }.

Another average is the thermal average—the expectation value of O in the appropriate thermal state. If H conserves just itself,2 the appropriate thermal state is the canonical state, \rho_{\rm can} := e^{-\beta H}/ Z. The average energy \langle \psi(0) | H | \psi(0) \rangle defines the inverse temperature \beta, and Z normalizes the state. Hence the thermal average is \langle O \rangle_{\rm th}  :=  {\rm Tr} ( O \rho_{\rm can} ).

The time average approximately equals the thermal average, according to the ETH: \overline{ \langle O \rangle_t }  =  \langle O \rangle_{\rm th} + O \big( N^{-1} \big). The correction is small in the total number N of atoms. Through the lens of O, the atoms thermalize internally. Local observables tend to satisfy the ETH, and we can easily observe only local observables. We therefore usually observe thermalization, consistently with the second law of thermodynamics.
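To see the prediction in action, here is a minimal numerical sketch of my own (not drawn from any paper): a small mixed-field Ising chain, a standard chaotic choice of Hamiltonian, with the infinite-time average computed via the diagonal ensemble and the canonical average computed at the inverse temperature fixed by the initial energy. With only 8 qubits, expect agreement only up to modest finite-size corrections.

```python
import numpy as np

# Pauli matrices and the one-qubit identity
sx = np.array([[0, 1], [1, 0]], float)
sz = np.array([[1, 0], [0, -1]], float)
id2 = np.eye(2)

def op_on_site(op, site, n):
    """Tensor `op` acting on `site`, identities on the other qubits."""
    mats = [op if i == site else id2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 8  # number of qubits (the "atoms")
# Mixed-field Ising chain: a standard chaotic (nonintegrable) Hamiltonian
J, hx, hz = 1.0, 0.9045, 0.8090
H = np.zeros((2**n, 2**n))
for i in range(n - 1):
    H += J * op_on_site(sz, i, n) @ op_on_site(sz, i + 1, n)
for i in range(n):
    H += hx * op_on_site(sx, i, n) + hz * op_on_site(sz, i, n)

energies, V = np.linalg.eigh(H)

# Random pure product state |psi(0)> (fixed seed for reproducibility)
rng = np.random.default_rng(0)
psi = np.array([1.0 + 0j])
for _ in range(n):
    amp = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = np.kron(psi, amp / np.linalg.norm(amp))

O = op_on_site(sz, 0, n)            # local observable: sigma^z on site 0
c = V.conj().T @ psi                # overlaps with energy eigenstates
O_diag = np.real(np.diag(V.conj().T @ O @ V))

# Infinite-time average = diagonal-ensemble average (H nondegenerate)
time_avg = float(np.sum(np.abs(c)**2 * O_diag))

# Inverse temperature beta fixed by <psi(0)|H|psi(0)>, found by bisection
E0 = float(np.real(psi.conj() @ H @ psi))
def thermal_energy(beta):
    w = np.exp(-beta * (energies - energies.min()))
    w /= w.sum()
    return float(w @ energies)
lo_b, hi_b = -5.0, 5.0
for _ in range(100):
    mid = 0.5 * (lo_b + hi_b)
    if thermal_energy(mid) > E0:   # thermal energy decreases with beta
        lo_b = mid
    else:
        hi_b = mid
beta = 0.5 * (lo_b + hi_b)
w = np.exp(-beta * (energies - energies.min()))
w /= w.sum()
thermal_avg = float(w @ O_diag)

print(time_avg, thermal_avg)
```

The two numbers should agree to within the finite-size corrections that the ETH allows; increasing n tightens the match.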

I agree that Mark Srednicki deserves the title high priest of the ETH. He and Joshua Deutsch independently dreamed up the ETH, in 1994 and 1991, respectively. Since numericists reexamined it in 2008, studies and applications of the ETH have exploded like a desert religion. Yet Mark had never encountered the question I posed about it in 2021. Next month’s blog post will share the good news about that question.

1Nonintegrable.

2Apart from trivial quantities, such as projectors onto eigenspaces of H.

Let the great world spin

I first heard the song “Fireflies,” by Owl City, shortly after my junior year of college. During the refrain, singer Adam Young almost whispers, “I’d like to make myself believe / that planet Earth turns slowly.” Goosebumps prickled along my neck. Yes, I thought, I’ve studied Foucault’s pendulum.

Léon Foucault practiced physics in France during the mid-1800s. During one of his best-known experiments, he hung a pendulum from high up in a building. Imagine drawing a wide circle on the floor, around the pendulum’s bob.1

Pendulum bob and encompassing circle, as viewed from above.

Imagine pulling the bob out to a point above the circle, then releasing the pendulum. The bob will swing back and forth, tracing out a straight line across the circle.

You might expect the bob to keep swinging back and forth along that line, and to do nothing more, forever (or until the pendulum has spent all its energy on pushing air molecules out of its way). After all, the only forces acting on the bob seem to be gravity and the tension in the pendulum’s wire. But the line rotates; its two tips trace out the circle.

How long the tips take to trace the circle depends on your latitude. At the North and South Poles, the tips take one day.
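The latitude dependence follows a simple rule, which I quote here without derivation: the swing plane traces a full circle in one sidereal day divided by the sine of the latitude. A quick sketch of my own (the Paris latitude is assumed, not tied to any particular pendulum):

```python
import math

def foucault_period_hours(latitude_deg, sidereal_day_hours=23.934):
    """Hours for the swing plane to trace a full circle at a given latitude."""
    return sidereal_day_hours / math.sin(math.radians(latitude_deg))

print(foucault_period_hours(90.0))   # North Pole: one sidereal day
print(foucault_period_hours(48.85))  # Paris: about 32 hours
```

At the equator, the sine vanishes and the period diverges: a pendulum there never precesses.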

Why does the line rotate? Because the pendulum dangles from a building on the Earth’s surface. As the Earth rotates, so does the building, which pushes the pendulum. You’ve experienced such a pushing if you’ve ridden in a car. Suppose that the car is zipping along at a constant speed, in an unchanging direction, on a smooth road. With your eyes closed, you won’t feel like you’re moving. The only forces you can sense are gravity and the car seat’s preventing you from sinking into the ground (analogous to the wire tension that prevents the pendulum bob from crashing into the floor). If the car turns a bend, it pushes you sideways in your seat. This push is called a centrifugal force. The pendulum feels a centrifugal force because the Earth’s rotation, like the car’s turning, involves an acceleration. The pendulum also feels another force—a Coriolis force—because it’s not merely sitting, but moving on the rotating Earth.

We can predict the rotation of Foucault’s pendulum by assuming that the Earth rotates, then calculating the centrifugal and Coriolis forces induced, and then calculating how those forces will influence the pendulum’s motion. Upon debuting in 1851, the pendulum evidenced the Earth’s rotation as nothing else had before. You can imagine the stir created when Foucault demonstrated the pendulum at the Observatoire de Paris and at the Panthéon monument. Copycat pendulums popped up across the world. One ended up next to my college’s physics building, as shown in this video. I reveled in understanding that pendulum’s motion, junior year.
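If you want to watch the precession fall out of the equations, here is a toy simulation of my own (with an exaggerated rotation rate, so the effect shows up in a short run). In the rotating frame, a small-oscillation pendulum obeys x'' = -w^2 x + 2 Oz y' and y'' = -w^2 y - 2 Oz x', where Oz is the Earth's rotation rate times the sine of the latitude; the swing plane then precesses at the rate -Oz.

```python
import numpy as np

# Small-oscillation pendulum in the rotating frame; Oz is exaggerated
# relative to Earth's so the precession is visible over a 10-second run.
w = 2 * np.pi          # pendulum angular frequency (one swing per second)
Oz = 0.05              # effective rotation rate, Omega * sin(latitude)

def deriv(s):
    x, y, vx, vy = s
    return np.array([vx, vy,
                     -w**2 * x + 2 * Oz * vy,    # Coriolis couples x and y
                     -w**2 * y - 2 * Oz * vx])

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv(s + dt / 2 * k1)
    k3 = deriv(s + dt / 2 * k2)
    k4 = deriv(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([1.0, 0.0, 0.0, 0.0])   # released from rest on the x-axis
dt, T = 1e-3, 10.0
for _ in range(int(T / dt)):
    s = rk4_step(s, dt)

# The swing plane should have precessed by about -Oz * T radians.
print(s[:2])
print(np.degrees(-Oz * T))  # about -28.6 degrees over this run
```

Writing z = x + iy collapses the two equations into z'' + 2i Oz z' + w^2 z = 0, whose exact solution is a swing at frequency sqrt(w^2 + Oz^2) multiplied by the slow rotation exp(-i Oz t); the simulation reproduces that solution.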

My professor alluded to a grander Foucault pendulum in Paris. It hangs in what sounded like a temple to the Enlightenment—beautiful in form, steeped in history, and rich in scientific significance. I’m a romantic about the Enlightenment; I adore the idea of creating the first large-scale organizational system for knowledge. So I hungered to make a pilgrimage to Paris.

I made the pilgrimage this spring. I was attending a quantum-chaos workshop at the Institut Pascal, an interdisciplinary institute in a suburb of Paris. One quiet Saturday morning, I rode a train into the city center. The city houses a former priory—a gorgeous, 11th-century, white-stone affair of the sort for which I envy European cities. For over 200 years, the former priory has housed the Musée des Arts et Métiers, a museum of industry and technology. In the priory’s chapel hangs Foucault’s pendulum.2

A pendulum of Foucault’s own—the one he exhibited at the Panthéon—used to hang in the chapel. That pendulum broke in 2010; but still, the pendulum swinging today is all but a holy relic of scientific history. Foucault’s pendulum! Demonstrating that the Earth rotates! And in a jewel of a setting—flooded with light from stained-glass windows and surrounded by Gothic arches below a painted ceiling. I flitted around the little chapel like a pollen-happy bee for maybe 15 minutes, watching the pendulum swing, looking at other artifacts of Foucault’s, wending my way around the carved columns.

Almost alone. A handful of visitors trickled in and out. The scene contrasted with my visit, the previous weekend, to the Louvre. There, I’d witnessed a Disney World–esque line of tourists waiting for a glimpse of the Mona Lisa, camera phones held high. Nobody was queueing up in the musée’s chapel. But this was Foucault’s pendulum! Demonstrating that the Earth rotates!

I confess to capitalizing on the lack of visitors to take a photo with Foucault’s pendulum and Foucault’s Pendulum, though.

Shortly before I’d left for Paris, a librarian friend had recommended Umberto Eco’s novel Foucault’s Pendulum. It occupied me during many a train ride to or from the center of Paris.

The rest of the museum could model in an advertisement for steampunk. I found automata, models of the steam engines that triggered the Industrial Revolution, and a phonograph of Thomas Edison’s. The gadgets, many formed from brass and dark wood, contrast with the priory’s light-toned majesty. Yet the priory shares its elegance with the inventions, many of which gleam and curve in decorative flutes. 

The grand finale at the Musée des Arts et Métiers.

I tore myself away from the Musée des Arts et Métiers after several hours. I returned home a week later and heard the song “Fireflies” again not long afterward. The goosebumps returned, stronger than before. Thanks to Foucault, I can make myself believe that planet Earth turns.

With thanks to Kristina Lynch for tolerating my many, many, many questions throughout her classical-mechanics course.

This story’s title refers to a translation of Goethe’s Faust. In the translation, the demon Mephistopheles tells the title character, “You let the great world spin and riot; / we’ll nest contented in our quiet” (to within punctuational and other minor errors, as I no longer have the text with me). A prize-winning 2009 novel is called Let the Great World Spin; I’ve long wondered whether Faust inspired its title.

1Why isn’t the bottom of the pendulum called the alice?

2After visiting the musée, I learned that my classical-mechanics professor had been referring to the Foucault pendulum that hangs in the Panthéon, rather than to the pendulum in the musée. The musée still contains the pendulum used by Foucault in 1851, whereas the Panthéon has only a copy, so I’m content. Still, I wouldn’t mind making a pilgrimage to the Panthéon. Let me know if more thermodynamic workshops take place in Paris!

Memories of things past

My best friend—who’s held the title of best friend since kindergarten—calls me the keeper of her childhood memories. I recall which toys we played with, the first time I visited her house,1 and which beverages our classmates drank during snack time in kindergarten.2 She wouldn’t be surprised to learn that the first workshop I’ve co-organized centered on memory.

Memory—and the loss of memory—stars in thermodynamics. As an example, take what my husband will probably do this evening: bake tomorrow’s breakfast. I don’t know whether he’ll bake fruit-and-oat cookies, banana muffins, pear muffins, or pumpkin muffins. Whichever he chooses, his baking will create a scent. That scent will waft across the apartment, seep into air vents, and escape into the corridor—will disperse into the environment. By tomorrow evening, nobody will be able to tell by sniffing what my husband will have baked. 

That is, the kitchen’s environment lacks a memory. This lack contributes to our experience of time’s arrow: We sense that time passes partially by smelling less and less of breakfast. Physicists call memoryless systems and processes Markovian.

Our kitchen’s environment is Markovian because it’s large and particles churn through it randomly. But not all environments share these characteristics. Metaphorically speaking, a dispersed memory of breakfast may re-collect, return to a kitchen, and influence the following week’s baking. For instance, imagine an atom in a quantum computer, rather than a kitchen in an apartment. A few other atoms may form our atom’s environment. Quantum information may leak from our atom into that environment, swish around in the environment for a time, and then return to haunt our atom. We’d call the atom’s evolution and environment non-Markovian.
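To make the haunting concrete, here is a two-qubit toy model of my own (a sketch, not the setup of any experiment discussed at the workshop): one system qubit exchanges excitations with a one-qubit environment. Under Markovian dynamics, the trace distance between two system states can only shrink over time; here it shrinks and then revives, which is a standard witness of memory.

```python
import numpy as np

# Exchange ("swap-like") coupling between a system qubit and a
# one-qubit environment: information leaks out, then flows back.
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
g = 1.0
H = g * (np.kron(sx, sx) + np.kron(sy, sy)) / 2

evals, V = np.linalg.eigh(H)

def reduced_system_state(psi_sys, t):
    """Evolve |psi_sys>|0_env> for time t, then trace out the environment."""
    psi = np.kron(psi_sys, np.array([1, 0], complex))
    psi_t = V @ (np.exp(-1j * evals * t) * (V.conj().T @ psi))
    rho = np.outer(psi_t, psi_t.conj())
    rho4 = rho.reshape(2, 2, 2, 2)          # axes: (s, e, s', e')
    return np.trace(rho4, axis1=1, axis2=3)  # partial trace over e

def trace_distance(r1, r2):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(r1 - r2)))

s0 = np.array([1, 0], complex)                # |0>
s1 = np.array([1, 1], complex) / np.sqrt(2)   # |+>
times = np.linspace(0, np.pi, 60)
D = [trace_distance(reduced_system_state(s0, t),
                    reduced_system_state(s1, t)) for t in times]
# Markovian dynamics would make D(t) non-increasing; here it revives.
print(min(D), max(D[30:]))
```

The distinguishability of the two system states drains into the environment near the half-swap time and then returns almost completely, the qubit analogue of the breakfast scent wafting back into the kitchen.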

I had the good fortune to co-organize a workshop about non-Markovianity—about memory—this February. The workshop took place at the Banff International Research Station, abbreviated BIRS, which you pronounce like the plural of what you say when shivering outdoors in Canada. BIRS operates in the Banff Centre for Arts and Creativity, high in the Rocky Mountains. The Banff Centre could accompany a dictionary entry for pristine, to my mind. The air feels crisp, the trees on nearby peaks stand out against the snow like evergreen fringes on white velvet, and the buildings balance a rustic-mountain-lodge style with the avant-garde. 

The workshop balanced styles, too, but skewed toward the theoretical and abstract. We learned about why the world behaves classically in our everyday experiences; about information-theoretic measures of the distances between quantum states; and about how to simulate, on quantum computers, chemical systems that interact with environments. One talk, though, brought our theory back down to (the snow-dusted) Earth.

Gabriela Schlau-Cohen runs a chemistry lab at MIT. She wants to understand how plants transport energy. Energy arrives at a plant from the sun in the form of light. The light hits a pigment-and-protein complex. If the plant is lucky, the light transforms into a particle-like packet of energy called an exciton. The exciton traverses the receptor complex, then other complexes. Eventually, the exciton finds a spot where it can enable processes such as leaf growth. 

A high fraction of the impinging photons—85%—transform into excitons. How do plants convert and transport energy as efficiently as they do?

Gabriela’s group aims to find out—not by testing natural light-harvesting complexes, but by building complexes themselves. The experimentalists mimic the complex’s protein using DNA. You can fold DNA into almost any shape you want, by choosing the DNA’s base pairs (basic units) adroitly and by using “staples” formed from more DNA scraps. The sculpted molecules are called DNA origami.

Gabriela’s group engineers different DNA structures, analogous to complexes’ proteins, to have different properties. For instance, the experimentalists engineer rigid structures and flexible structures. Then, the group assesses how energy moves through each structure. Each structure forms an environment that influences excitons’ behaviors, similarly to how a memory-containing environment influences an atom.

Courtesy of Gabriela Schlau-Cohen

The Banff environment influenced me, stirring up memories like powder displaced by a skier on the slopes above us. I first participated in a BIRS workshop as a PhD student, and then I returned as a postdoc. Now, I was co-organizing a workshop to which I brought a PhD student of my own. Time flows, as we’re reminded while walking down the mountain from the Banff Centre into town: A cemetery borders part of the path. Time flows, but we belong to that thermodynamically remarkable class of systems that retain memories…memories and a few other treasures that resist change, such as friendships held since kindergarten.

1Plushy versions of Simba and Nala from The Lion King. I remain grateful to her for letting me play at being Nala.

2I’d request milk, another kid would request apple juice, and everyone else would request orange juice.

A (quantum) complex legacy: Part deux

I didn’t fancy the research suggestion emailed by my PhD advisor.

A 2016 email from John Preskill led to my publishing a paper about quantum complexity in 2022, as I explained in last month’s blog post. But I didn’t explain what I thought of his email upon receiving it.

It didn’t float my boat. (Hence my not publishing on it until 2022.)

The suggestion contained ingredients that ordinarily would have caulked any cruise ship of mine: thermodynamics, black-hole-inspired quantum information, and the concept of resources. John had forwarded a paper drafted by Stanford physicists Adam Brown and Lenny Susskind. They act as grand dukes of the community sussing out what happens to information swallowed by black holes. 

From Rare-Gallery

We’re not sure how black holes work. However, physicists often model a black hole with a clump of particles squeezed close together and so forced to interact with each other strongly. The interactions entangle the particles. The clump’s quantum state—let’s call it | \psi(t) \rangle—grows not only complicated with time (t), but also complex in a technical sense: Imagine taking a fresh clump of particles and preparing it in the state | \psi(t) \rangle via a sequence of basic operations, such as quantum gates performable with a quantum computer. The number of basic operations needed is called the complexity of | \psi(t) \rangle. A black hole’s state has a complexity believed to grow in time—and grow and grow and grow—until plateauing. 

This growth echoes the second law of thermodynamics, which helps us understand why time flows in only one direction. According to the second law, every closed, isolated system’s entropy grows until plateauing.1 Adam and Lenny drew parallels between the second law and complexity’s growth.

The less complex a quantum state is, the better it can serve as a resource in quantum computations. Recall, as we did last month, performing calculations in math class. You needed clean scratch paper on which to write the calculations. So does a quantum computer. “Scratch paper,” to a quantum computer, consists of qubits—basic units of quantum information, realized in, for example, atoms or ions. The scratch paper is “clean” if the qubits are in a simple, unentangled quantum state—a low-complexity state. A state’s greatest possible complexity, minus the actual complexity, we can call the state’s uncomplexity. Uncomplexity—a quantum state’s blankness—serves as a resource in quantum computation.

Manny Knill and Ray Laflamme realized this point in 1998, while quantifying the “power of one clean qubit.” Lenny arrived at a similar conclusion while reasoning about black holes and firewalls. For an introduction to firewalls, see this blog post by John. Suppose that someone—let’s call her Audrey—falls into a black hole. If it contains a firewall, she’ll burn up. But suppose that someone tosses a qubit into the black hole before Audrey falls. The qubit kicks the firewall farther away from the event horizon, so Audrey will remain safe for longer. Also, the qubit increases the uncomplexity of the black hole’s quantum state. Uncomplexity serves as a resource also to Audrey.

A resource is something that’s scarce, valuable, and useful for accomplishing tasks. Different things qualify as resources in different settings. For instance, imagine wanting to communicate quantum information to a friend securely. Entanglement will serve as a resource. How can we quantify and manipulate entanglement? How much entanglement do we need to perform a given communicational or computational task? Quantum scientists answer such questions with a resource theory, a simple information-theoretic model. Theorists have defined resource theories for entanglement, randomness, and more. In many a blog post, I’ve eulogized resource theories for thermodynamic settings. Can anyone define, Adam and Lenny asked, a resource theory for quantum uncomplexity?

Resource thinking pervades our world.

By late 2016, I was a quantum thermodynamicist, I was a resource theorist, and I’d just debuted my first black-hole–inspired quantum information theory. Moreover, I’d coauthored a review about the already-extant resource theory that looked closest to what Adam and Lenny sought. Hence John’s email, I expect. Yet that debut had uncovered reams of questions—questions that, as a budding physicist heady with the discovery of discovery, I could own. Why would I answer a question of someone else’s instead?

So I thanked John, read the paper draft, and pondered it for a few days. Then, I built a research program around my questions and waited for someone else to answer Adam and Lenny.

Three and a half years later, I was still waiting. The notion of uncomplexity as a resource had enchanted the black-hole-information community, so I was preparing a resource-theory talk for a quantum-complexity workshop. The preparations set wheels churning in my mind, and inspiration struck during a long walk.2

After watching my workshop talk, Philippe Faist reached out about collaborating. Philippe is a coauthor, a friend, and a fellow quantum thermodynamicist and resource theorist. Caltech’s influence had sucked him, too, into the black-hole community. We Zoomed throughout the pandemic’s first spring, widening our circle to include Teja Kothakonda, Jonas Haferkamp, and Jens Eisert of Freie University Berlin. Then, Anthony Munson joined from my nascent group in Maryland. Physical Review A published our paper, “Resource theory of quantum uncomplexity,” in January.

The next four paragraphs, I’ve geared toward experts. An agent in the resource theory manipulates a set of n qubits. The agent can attempt to perform any gate U on any two qubits. Noise corrupts every real-world gate implementation, though. Hence the agent effects a gate chosen randomly from near U. Such fuzzy gates are free. The agent can’t append or discard any system for free: Appending even a maximally mixed qubit increases the state’s uncomplexity, as Knill and Laflamme showed. 

Fuzzy gates’ randomness prevents the agent from mapping complex states to uncomplex states for free (with any considerable probability). Complexity only grows or remains constant under fuzzy operations, under appropriate conditions. This growth echoes the second law of thermodynamics. 

We also defined operational tasks—uncomplexity extraction and expenditure, analogous to work extraction and expenditure. Then, we bounded the efficiencies with which the agent can perform these tasks. The efficiencies depend on a complexity entropy that we defined—and that’ll star in part trois of this blog-post series.

Now, I want to know what purposes the resource theory of uncomplexity can serve. Can we recast black-hole problems in terms of the resource theory, then leverage resource-theory results to solve the black-hole problem? What about problems in condensed matter? Can our resource theory, which quantifies the difficulty of preparing quantum states, merge with the resource theory of magic, which quantifies that difficulty differently?

Unofficial mascot for fuzzy operations

I don’t regret having declined my PhD advisor’s recommendation six years ago. Doing so led me to explore probability theory and measurement theory, collaborate with two experimental labs, and write ten papers with 21 coauthors whom I esteem. But I take my hat off to Adam and Lenny for their question. And I remain grateful to the advisor who kept my goals and interests in mind while checking his email. I hope to serve Anthony and his fellow advisees as well.

1At least, in the thermodynamic limit—if the system is infinitely large. If the system is finite-size, its entropy grows on average.

2…en route to obtaining a marriage license. My husband and I married four months after the pandemic throttled government activities. Hours before the relevant office’s calendar filled up, I scored an appointment to obtain our license. Regarding the metro as off-limits, my then-fiancé and I walked from Cambridge, Massachusetts to downtown Boston for our appointment. I thank him for enduring my requests to stop so that I could write notes.

A (quantum) complex legacy

Early in the fourth year of my PhD, I received a most John-ish email from John Preskill, my PhD advisor. The title read, “thermodynamics of complexity,” and the message was concise the way that the Amazon River is damp: “Might be an interesting subject for you.” 

Below the signature, I found a paper draft by Stanford physicists Adam Brown and Lenny Susskind. Adam is a Brit with an accent and a wit to match his Oxford degree. Lenny, known to the public for his books and lectures, is a New Yorker with an accent that reminds me of my grandfather. Before the physicists posted their paper online, Lenny sought feedback from John, who forwarded me the email.

The paper concerned a confluence of ideas that you’ve probably encountered in the media: string theory, black holes, and quantum information. String theory offers hope for unifying two physical theories: relativity, which describes large systems such as our universe, and quantum theory, which describes small systems such as atoms. A certain type of gravitational system and a certain type of quantum system participate in a duality, or equivalence, known since the 1990s. Our universe isn’t such a gravitational system, but never mind; the duality may still offer a toehold on a theory of quantum gravity. Properties of the gravitational system parallel properties of the quantum system and vice versa. Or so it seemed.

The gravitational system can have two black holes linked by a wormhole. The wormhole’s volume can grow linearly in time for a time exponentially long in the black holes’ entropy. Afterward, the volume hits a ceiling and approximately ceases changing. Which property of the quantum system does the wormhole’s volume parallel?

Envision the quantum system as many particles wedged close together, so that they interact with each other strongly. Initially uncorrelated particles will entangle with each other quickly. A quantum system has properties, such as average particle density, that experimentalists can measure relatively easily. Does such a measurable property—an observable of a small patch of the system—parallel the wormhole volume? No; such observables cease changing much sooner than the wormhole volume does. The same conclusion applies to the entanglement amongst the particles.

What about a more sophisticated property of the particles’ quantum state? Researchers proposed that the state’s complexity parallels the wormhole’s volume. To grasp complexity, imagine a quantum computer performing a computation. When performing computations in math class, you needed blank scratch paper on which to write your calculations. A quantum computer needs the quantum equivalent of blank scratch paper: qubits (basic units of quantum information, realized, for example, as atoms) in a simple, unentangled, “clean” state. The computer performs a sequence of basic operations—quantum logic gates—on the qubits. These operations resemble addition and subtraction but can entangle the qubits. What’s the minimal number of basic operations needed to prepare a desired quantum state (or to “uncompute” a given state to the blank state)? The state’s quantum complexity.1 
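As a cartoon of the definition, consider a single qubit and the two-element gate set {H, T}. A brute-force search (my own illustration; real complexity calculations are nothing like this tractable) finds the fewest gates whose product equals a target unitary up to a global phase:

```python
import numpy as np
from itertools import product

# A tiny universal-ish gate set on one qubit
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])

def phase_dist(U, W):
    """Zero exactly when U equals W up to a global phase."""
    return 1 - abs(np.trace(U.conj().T @ W)) / 2

def complexity(target, gates, max_len=6, tol=1e-9):
    """Minimal number of gates from `gates` whose product equals
    `target` up to global phase, found by exhaustive search."""
    for length in range(max_len + 1):
        for word in product(gates, repeat=length):
            U = np.eye(2, dtype=complex)
            for g in word:
                U = g @ U
            if phase_dist(U, target) < tol:
                return length
    return None

target = T @ H @ T
print(complexity(target, [H, T]))  # minimal word length: 3
```

Exhaustive search scales exponentially in the gate count, which is exactly why proving anything about the complexity of many-qubit states is hard.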

Quantum complexity has loomed large over multiple fields of physics recently: quantum computing, condensed matter, and quantum gravity. The latter, we established, entails a duality between a gravitational system and a quantum system. The quantum system begins in a simple quantum state that grows complicated as the particles interact. The state’s complexity parallels the volume of a wormhole in the gravitational system, according to a conjecture.2 

The conjecture would hold more water if the quantum state’s complexity grew similarly to the wormhole’s volume: linearly in time, for a time exponentially large in the quantum system’s size. Does the complexity grow so? The expectation that it does became the linear-growth conjecture.

Evidence supported the conjecture. For instance, quantum information theorists modeled the quantum particles as interacting randomly, as though undergoing a quantum circuit filled with random quantum gates. Leveraging probability theory,3 the researchers proved that the state’s complexity grows linearly at short times. Also, the complexity grows linearly for long times if each particle can store a great deal of quantum information. But what if the particles are qubits, the smallest and most ubiquitous unit of quantum information? The question lingered for years.

Jonas Haferkamp, a PhD student in Berlin, dreamed up an answer to an important version of the question.4 I had the good fortune to help formalize that answer with him and members of his research group: master’s student Teja Kothakonda, postdoc Philippe Faist, and supervisor Jens Eisert. Our paper, published in Nature Physics last year, marked step one in a research adventure catalyzed by John Preskill’s email 4.5 years earlier.

Imagine, again, qubits undergoing a circuit filled with random quantum gates. That circuit has some architecture, or arrangement of gates. Slotting different gates into the architecture effects different transformations5 on the qubits. Consider the set of all transformations implementable with one architecture. This set has some size, which we defined and analyzed.

What happens to the set’s size if you add more gates to the circuit—let the particles interact for longer? We can bound the size’s growth using the mathematical toolkits of algebraic geometry and differential topology. Upon bounding the size’s growth, we can bound the state’s complexity. The complexity, we concluded, grows linearly in time for a time exponentially long in the number of qubits.

Our result lends weight to the complexity-equals-volume hypothesis. The result also introduces algebraic geometry and differential topology into complexity as helpful mathematical toolkits. Finally, the set size that we bounded emerged as a useful concept that may elucidate circuit analyses and machine learning.

John didn’t have machine learning in mind when forwarding me an email in 2017. He didn’t even have in mind proving the linear-growth conjecture. The proof enables step two of the research adventure catalyzed by that email: thermodynamics of quantum complexity, as the email’s title stated. I’ll cover that thermodynamics in its own blog post. The simplest of messages can spin a complex legacy.

The links provided above scarcely scratch the surface of the quantum-complexity literature; for a more complete list, see our paper’s bibliography. For a seminar about the linear-growth paper, see this video hosted by Nima Lashkari’s research group.

1The term complexity has multiple meanings; forget the rest for the purposes of this article.

2According to another conjecture, the quantum state’s complexity parallels a certain space-time region’s action. (An action, in physics, isn’t a motion or a deed or something that Hamlet keeps avoiding. An action is a mathematical object that determines how a system can and can’t change in time.) The first two conjectures snowballed into a paper entitled “Does complexity equal anything?” Whatever it parallels, complexity plays an important role in the gravitational–quantum duality. 

3Experts: Such as unitary t-designs.

4Experts: Our work concerns quantum circuits, rather than evolutions under fixed Hamiltonians. Also, our work concerns exact circuit complexity, the minimal number of gates needed to prepare a state exactly. A natural but tricky extension eluded us: approximate circuit complexity, the minimal number of gates needed to approximate the state.

5Experts: Unitary operators.

Rocks that roll

In Terry Pratchett’s fantasy novel Soul Music, rock ’n roll arrives in Ankh-Morpork. Ankh-Morpork resembles the London of yesteryear—teeming with heroes and cutthroats, palaces and squalor—but also houses vampires, golems, wizards, and a sentient suitcase. Against this backdrop, a young harpist stumbles upon a mysterious guitar. He forms a band with a dwarf and with a troll who plays tuned rocks, after which the trio calls its style “Music with Rocks In.” The rest of the story consists of satire, drums, and rocks that roll. 

The topic of rolling rocks sounds like it should elicit more yawns than an Elvis concert elicited screams. But rocks’ rolling helped recent University of Maryland physics PhD student Zackery Benson win a National Research Council Fellowship. He and his advisor, Wolfgang Losert, converted me into a fan of granular flow.

What I’ve been studying recently. Kind of.

Grains make up materials throughout the galaxy, such as the substance of avalanches. Many granular materials undergo repeated forcing by their environments. For instance, the grains that form an asteroid suffer bombardment from particles flying through outer space. The gravel beneath train tracks is compressed whenever a train passes. 

Often, a pattern characterizes the forces in a granular system’s environment. For instance, trains in a particular weight class may traverse some patch of gravel, and the trains may arrive with a particular frequency. Some granular systems come to encode information about those patterns in their microscopic configurations and large-scale properties. So granular flow—little rocks that roll—can impact materials science, engineering, geophysics, and thermodynamics.

Granular flow sounds so elementary, you might expect us to have known everything about it since long before the Beatles’ time. But we didn’t even know until recently how to measure rolling in granular flows. 

Envision a grain as a tiny sphere, like a globe of the Earth. Scientists focused mostly on how far grains travel through space in a flow, analogously to how far a globe travels across a desktop if flicked. Recently, scientists measured how far a grain rotates about one axis, like a globe fixed in a frame. Sans frame, though, a globe can spin about more than one axis—about three independent axes. Zack performed the first measurement of all the rotations and translations of all the particles in a granular flow.

Each grain was an acrylic bead about as wide as my pinky nail. Two holes were drilled into each bead, forming an X, for reasons I’ll explain. 

Image credit: Benson et al., Phys. Rev. Lett. 129, 048001 (2022).

Zack dumped over 10,000 beads into a rectangular container. Then, he poured in a fluid that filled the spaces between the grains. Placing a weight atop the grains, he exerted a constant pressure on them. Zack would push one of the container’s walls inward, compressing the grains similarly to how a train compresses gravel. Then, he’d decompress the beads. He repeated this compression cycle many times.

Image credit: Benson et al., Phys. Rev. E 103, 062906 (2021).

Each cycle consisted of many steps: Zack would compress the beads a tiny amount, pause, snap pictures, and then compress a tiny amount more. During each pause, the camera activated a fluorescent dye in the fluid, which looked clear in the photographs. Lacking the fluorescent dye, the beads showed up as dark patches. Clear X’s cut through the dark patches, as dye filled the cavities drilled into the beads. From the X’s, Zack inferred every grain’s orientation. He inferred how every grain rotated by comparing the orientation in one snapshot with the orientation in the next snapshot. 

Image credit: Benson et al., Phys. Rev. Lett. 129, 048001 (2022).
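The snapshot-to-snapshot comparison can be sketched numerically. As a minimal illustration (not the paper’s actual analysis pipeline), suppose each bead’s orientation in a snapshot is encoded as a 3×3 rotation matrix; the rotation between snapshots is then the relative rotation matrix, whose angle follows from its trace:

```python
import numpy as np

def rotation_angle_between(R1, R2):
    """Angle (radians) of the rotation taking orientation R1 to R2.

    R1, R2: 3x3 orthogonal matrices giving a bead's orientation
    in two consecutive snapshots.
    """
    R_rel = R2 @ R1.T  # relative rotation between the two snapshots
    # For a 3x3 rotation matrix, trace = 1 + 2*cos(angle).
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_angle)

# Example: a bead rotated 10 degrees about the z-axis between snapshots.
theta = np.deg2rad(10.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
angle = rotation_angle_between(np.eye(3), Rz)
print(np.rad2deg(angle))  # ~10.0
```

Repeating this for every bead, in every pause of every compression cycle, yields the full record of rotations.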

Wolfgang’s lab had been trying for fifteen years to measure all the motions in a granular flow. The feat required experimental and computational skill. I appreciated the chance to play a minor role in analyzing the data. Physical Review Letters published our paper last month.

From Zack’s measurements, we learned about the unique roles played by rotations in granular flow. For instance, rotations dominate the motion in a granular system’s bulk, far from the container’s walls. Importantly, the bulk dissipates the most energy. Also, whereas translations are reversible—however far grains shift while compressed, they tend to shift oppositely while decompressed—rotations are not. Such irreversibility can contribute to materials’ aging.

In Soul Music, the spirit of rock ’n roll—conceived of as a force in its own right—offers the guitarist the opportunity to never age. He can live fast, die young, and enjoy immortality as a legend, for his guitar comes from a dusty little shop not entirely of Ankh-Morpork’s world. Such shops deal in fate and fortune, the author maintains. Doing so, he takes a dig at the River Ankh, which flows through the city of Ankh-Morpork. The Ankh’s waters hold so much garbage, excrement, midnight victims, and other muck that they scarcely count as waters:

And there was even an Ankh-Morpork legend, wasn’t there, about some old drum [ . . . ] that was supposed to bang itself if an enemy fleet was seen sailing up the Ankh? The legend had died out in recent centuries, partly because this was the Age of Reason and also because no enemy fleet could sail up the Ankh without a gang of men with shovels going in front.

Such a drum would easily qualify as magic, but don’t underestimate the sludge. As a granular-flow system, it’s more incredible than you might expect.

Quantum connections

We were seated in the open-air back of a boat, motoring around the Stockholm archipelago. The Swedish colors fluttered above our heads; the occasional speedboat zipped past, rocking us in its wake; and wildflowers dotted the bank on either side. Suddenly, a wood-trimmed boat glided by, and the captain waved from his perch.

The gesture surprised me. If I were in a vehicle of the sort most familiar to me—a car—I wouldn’t wave to other drivers. In a tram, I wouldn’t wave to passengers on a parallel track. Granted, trams and cars are closed, whereas boats can be open-air. But even as a pedestrian in a downtown crossing, I wouldn’t wave to everyone I passed. Yet, as boat after boat pulled alongside us, we received salutation after salutation.

The outing marked the midpoint of the Quantum Connections summer school. Physicists Frank Wilczek, Antti Niemi, and colleagues coordinate the school, which draws students and lecturers from across the globe. Although sponsored by Stockholm University, the school takes place at a century-old villa whose name I wish I could pronounce: Högberga Gård. The villa nestles atop a cliff on an island in the archipelago. We ventured off the island after a week of lectures.

Charlie Marcus lectured about materials formed from superconductors and semiconductors; John Martinis, about superconducting qubits; Jianwei Pan, about quantum advantages; and others, about symmetries, particle statistics, and more. Feeling like an ant among giants, I lectured about quantum thermodynamics. Two other lectures linked quantum physics with gravity—and in a way you might not expect. I appreciated the opportunity to reconnect with the lecturer: Igor Pikovski.

Cruising around Stockholm

Igor doesn’t know it, but he’s one of the reasons why I joined the Harvard-Smithsonian Institute for Theoretical Atomic, Molecular, and Optical Physics (ITAMP) as an ITAMP Postdoctoral Fellow in 2018. He’d held the fellowship beginning a few years before, and he’d earned a reputation for kindness and consideration. Also, his research struck me as some of the most fulfilling that one could undertake.

If you’ve heard about the intersection of quantum physics and gravity, you’ve probably heard of approaches other than Igor’s. For instance, physicists are trying to construct a theory of quantum gravity, which would describe black holes and the universe’s origin. Such a “theory of everything” would reduce to Einstein’s general theory of relativity when applied to planets and would reduce to quantum theory when applied to atoms. In another example, physicists leverage quantum technologies to observe properties of gravity. Such technologies enabled the observatory LIGO to register gravitational waves—ripples in space-time. 

Igor and his colleagues pursue a different goal: to observe phenomena whose explanations depend on quantum theory and on gravity.

In his lectures, Igor illustrated these ideas with an experiment first performed in 1975. The experiment relies on what happens if you jump: You gain energy associated with resisting the Earth’s gravitational pull—gravitational potential energy. A quantum object’s energy determines how the object’s quantum state changes in time. The experimentalists applied this fact to a beam of neutrons. 

They put the beam in a superposition of two locations: closer to the Earth’s surface and farther away. The closer component changed in time in one way, and the farther component changed another way. After a while, the scientists recombined the components. The two interfered with each other similarly to the waves created by two raindrops falling near each other on a puddle. The interference evidenced gravity’s effect on the neutrons’ quantum state.
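A back-of-the-envelope version of this effect fits in a few lines. In a simplified model (the numbers below are illustrative, not those of the 1975 experiment), the two beam components acquire a relative phase Δφ = m g Δh T / ħ, where m is the neutron mass, Δh the height difference, and T the time spent separated:

```python
import numpy as np

hbar = 1.054571817e-34      # reduced Planck constant (J s)
m_neutron = 1.67492750e-27  # neutron mass (kg)
g = 9.81                    # gravitational acceleration (m/s^2)

def gravitational_phase(delta_h, t):
    """Relative phase between two beam components separated in height
    by delta_h (m) for a time t (s), in the simple potential-energy model."""
    return m_neutron * g * delta_h * t / hbar

# Illustrative numbers: a 2 cm height difference held for 1 ms.
phase = gravitational_phase(0.02, 1e-3)
print(phase)  # thousands of radians -- easily visible as interference fringes
```

Even for such a tiny mass, the phase is enormous compared with what interferometers can resolve, which is why the experiment worked.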

Summer-school venue. I’d easily say it’s gorgeous but not easily pronounce its name.

The experimentalists approximated gravity as dominated by the Earth alone. But other masses can influence the gravitational field noticeably. What if you put a mass in a superposition of different locations? What would happen to space-time?

Or imagine two quantum particles too far apart to interact with each other significantly. Could a gravitational field entangle the particles by carrying quantum correlations from one to the other?

Physicists including Igor ponder these questions…and then ponder how experimentalists could test their predictions. The more an object influences gravity, the more massive the object tends to be, and the more easily the object tends to decohere—to spill the quantum information that it holds into its surroundings.

The “gravity-quantum interface,” as Igor entitled his lectures, epitomizes what I hoped to study in college, as a high-school student entranced by physics, math, and philosophy. What’s more curious and puzzling than superpositions, entanglement, and space-time? What’s more fundamental than quantum theory and gravity? Little wonder that connecting them inspires wonder.

But we humans are suckers for connections. I appreciated the opportunity to reconnect with a colleague during the summer school. Boaters on the Stockholm archipelago waved to our cohort as they passed. And who knows—gravitational influences may even have rippled between the boats, entangling us a little.

Requisite physicist-visiting-Stockholm photo

With thanks to the summer-school organizers, including Pouya Peighami and Elizabeth Yang, for their invitation and hospitality.

One equation to rule them all?

In lieu of composing a blog post this month, I’m publishing an article in Quanta Magazine. The article provides an introduction to fluctuation relations, souped-up variations on the second law of thermodynamics, which helps us understand why time flows in only one direction. The earliest fluctuation relations described classical systems, such as single strands of DNA. Many quantum versions have been proved since. Their proliferation contrasts with the stereotype of physicists as obsessed with unification—with slimming down a cadre of equations into one über-equation. Will one quantum fluctuation relation emerge to rule them all? Maybe, and maybe not. Maybe the multiplicity of quantum fluctuation relations reflects the richness of quantum thermodynamics.
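One fluctuation relation, Jarzynski’s equality, can even be checked numerically in a few lines. It states that ⟨e^{−βW}⟩ = e^{−βΔF}, relating the work W done in many repetitions of a process to the free-energy difference ΔF. For a toy model with Gaussian work statistics, the equality implies ΔF = ⟨W⟩ − βσ²/2, which Monte Carlo sampling reproduces (the parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

beta = 1.0       # inverse temperature
mean_work = 2.0  # average work done on the system
var_work = 1.0   # variance of the work distribution

# For Gaussian work statistics, Jarzynski's equality
# <exp(-beta W)> = exp(-beta DeltaF) implies
# DeltaF = <W> - beta * var(W) / 2.
delta_F_exact = mean_work - beta * var_work / 2.0

# Monte Carlo estimate of the exponential average over many "experiments."
W = rng.normal(mean_work, np.sqrt(var_work), size=200_000)
delta_F_estimate = -np.log(np.mean(np.exp(-beta * W))) / beta

print(delta_F_exact, delta_F_estimate)  # both close to 1.5
```

Note the asymmetry: rare trajectories with small or negative work dominate the exponential average, which is part of what makes fluctuation relations subtle to apply.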

You can read more in Quanta Magazine here and yet more in chapter 9 of my book. For recent advances in fluctuation relations, as opposed to the broad introduction there, check out earlier Quantum Frontiers posts here, here, here, here, and here.

The power of being able to say “I can explain that”

Caltech condensed-matter theorist Gil Refael explained his scientific raison d’être early in my grad-school career: “What really gets me going is seeing a plot [of experimental data] and being able to say, ‘I can explain that.’” The quote has stuck with me almost word for word. When I heard it, I was working deep in abstract quantum information theory and thermodynamics, proving theorems about thought experiments. Embedding myself in pure ideas has always held an aura of romance for me, so I nodded along without seconding Gil’s view.

Roughly nine years later, I concede his point.

The revelation walloped me last month, as I was polishing a paper with experimental collaborators. Members of the Institute for Quantum Optics and Quantum Information (IQOQI) in Innsbruck, Austria—Florian Kranzl, Manoj Joshi, and Christian Roos—had performed an experiment in trapped-ion guru Rainer Blatt’s lab. Their work realized an experimental proposal that I’d designed with fellow theorists near the beginning of my postdoc stint. We aimed to observe signatures of particularly quantum thermalization.

Throughout the universe, small systems exchange stuff with their environments. For instance, the Earth exchanges heat and light with the rest of the solar system. After exchanging stuff for long enough, the small system equilibrates with the environment: Large-scale properties of the small system (such as its volume and energy) remain fairly constant; and as much stuff enters the small system as leaves, on average. The Earth remains far from equilibrium, which is why we aren’t dead yet.

Far from equilibrium and proud of it

In many cases, in equilibrium, the small system shares properties of the environment, such as the environment’s temperature. In these cases, we say that the small system has thermalized and, if it’s quantum, has reached a thermal state.

The stuff exchanged can consist of energy, particles, electric charge, and more. Unlike classical planets, quantum systems can exchange things that participate in quantum uncertainty relations (experts: that fail to commute). Quantum uncertainty mucks up derivations of the thermal state’s mathematical form. Some of us quantum thermodynamicists discovered the mucking up—and identified exchanges of quantum-uncertain things as particularly nonclassical thermodynamics—only a few years ago. We reworked conventional thermodynamic arguments to accommodate this quantum uncertainty. The small system, we concluded, likely equilibrates to near a thermal state whose mathematical form depends on the quantum-uncertain stuff—what we termed a non-Abelian thermal state. I wanted to see this equilibration in the lab. So I proposed an experiment with theory collaborators; and Manoj, Florian, and Christian took a risk on us.
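The non-Abelian thermal state has the form ρ ∝ exp[−β(H − Σₐ μₐ Qₐ)], where the charges Qₐ need not commute with one another. A minimal sketch for a single qubit, with illustrative values of β and the chemical potentials μₐ (not the values relevant to the experiment), constructs such a state directly:

```python
import numpy as np

# Pauli matrices: noncommuting charges for a single qubit.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def non_abelian_thermal_state(H, charges, mus, beta):
    """rho = exp(-beta (H - sum_a mu_a Q_a)) / Z for Hermitian H and Q_a."""
    G = H - sum(mu * Q for mu, Q in zip(mus, charges))
    vals, vecs = np.linalg.eigh(G)      # G is Hermitian
    w = np.exp(-beta * vals)
    rho = (vecs * w) @ vecs.conj().T    # matrix exponential via eigendecomposition
    return rho / np.trace(rho).real

# Illustrative parameters (not those of the trapped-ion experiment).
beta, mus = 1.0, [0.3, 0.2, 0.5]
H = 0.5 * sz
rho = non_abelian_thermal_state(H, [sx, sy, sz], mus, beta)

print(np.trace(rho).real)       # 1.0: a valid quantum state
print(np.linalg.eigvalsh(rho))  # nonnegative eigenvalues
```

Because the charges fail to commute, no basis diagonalizes all of them at once, which is exactly what distinguishes this state from a conventional grand-canonical state.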

The experimentalists arrayed between six and fifteen ions in a line. Two ions formed the small system, and the rest formed the quantum environment. The ions exchanged the x-, y-, and z-components of their spin angular momentum—stuff that participates in quantum uncertainty relations. The ions began with a fairly well-defined amount of each spin component, as described in another blog post. The ions exchanged stuff for a while, and then the experimentalists measured the small system’s quantum state.

The small system equilibrated to near the non-Abelian thermal state, we found. No conventional thermal state modeled the results as accurately. Score!

My postdoc and numerical-simulation wizard Aleks Lasek modeled the experiment on his computer. The small system, he found, remained farther from the non-Abelian thermal state in his simulation than in the experiment. Aleks plotted the small system’s distance to the non-Abelian thermal state against the ion chain’s length. The points produced experimentally sat lower down than the points produced numerically. Why?

I think I can explain that, I said. The two ions exchange stuff with the rest of the ions, which serve as a quantum environment. But the two ions exchange stuff also with the wider world, such as stray electromagnetic fields. The latter exchanges may push the small system farther toward equilibrium than the extra ions alone do.

Fortunately for the development of my explanatory skills, collaborators prodded me to hone my argument. The wider world, they pointed out, effectively has a very high temperature—an infinite temperature.1 Equilibrating with that environment, the two ions would acquire an infinite temperature themselves. The two ions would approach an infinite-temperature thermal state, which differs from the non-Abelian thermal state we aimed to observe.

Fair, I said. But the extra ions probably have a fairly high temperature themselves. So the non-Abelian thermal state is probably close to the infinite-temperature thermal state. Analogously, if someone cooks goulash similarly to his father, and the father cooks goulash similarly to his grandfather, then the youngest chef cooks goulash similarly to his grandfather. If the wider world pushes the two ions to equilibrate to infinite temperature, then, because the infinite-temperature state lies near the non-Abelian thermal state, the wider world pushes the two ions to equilibrate to near the non-Abelian thermal state.

Tasty, tasty thermodynamics

I plugged numbers into a few equations to check that the extra ions do have a high temperature. (Perhaps I should have done so before proposing the argument above, but my collaborators were kind enough not to call me out.) 

Aleks hammered the nail into the problem’s coffin by incorporating into his simulations the two ions’ interaction with an infinite-temperature wider world. His numerical data points dropped to near the experimental data points. The new plot supported my story.

I can explain that! Aleks’s results buoyed me the whole next day; I found myself smiling at random times throughout the afternoon. Not that I’d explained a grand mystery, like the unexpected hiss heard by Arno Penzias and Robert Wilson when they turned on a powerful antenna in 1964. The hiss turned out to come from the cosmic microwave background (CMB), a collection of photons that fill the visible universe. The CMB provided evidence for the then-controversial Big Bang theory of the universe’s origin. Discovering the CMB earned Penzias and Wilson a Nobel Prize. If the noise caused by the CMB was music to cosmologists’ ears, the noise in our experiment is the quiet wailing of a shy banshee. But it’s our experiment’s noise, and we understand it now.

The experience hasn’t weaned me off the romance of proving theorems about thought experiments. Theorems about thermodynamic quantum uncertainty inspired the experiment that yielded the plot that confused us. But I now second Gil’s sentiment. In the throes of an experiment, “I can explain that” can feel like a battle cry.

1Experts: The wider world effectively has an infinite temperature because (i) the dominant decoherence is dephasing relative to the \sigma_z product eigenbasis and (ii) the experimentalists rotate their qubits often, to simulate a rotationally invariant Hamiltonian evolution. So the qubits effectively undergo dephasing relative to the \sigma_x, \sigma_y, and \sigma_z eigenbases.

Building a Koi pond with Lie algebras

When I was growing up, one of my favourite places was the shabby all-you-can-eat buffet near our house. We’d walk in, my mom would approach the hostess to explain that, despite my being abnormally large for my age, I qualified for kids-eat-free, and I would peel away to stare at the Koi pond. The display of different fish rolling over one another was bewitching. Ten-year-old me would have been giddy to build my own Koi pond, and now I finally have. However, I built one using Lie algebras.

The different fish swimming in the Koi pond are, in many ways, like charges being exchanged between subsystems. A “charge” is any globally conserved quantity. Examples of charges include energy, particles, electric charge, and angular momentum. Consider a system consisting of a cup of coffee in your office. The coffee will dynamically exchange charges with your office in the form of heat energy. Still, the total energy of the coffee and office is conserved (assuming your office walls are really well insulated). In this example, we had one type of charge (heat energy) and two subsystems (coffee and office). Consider now a closed system consisting of many subsystems and many different types of charges. The closed system is like the finite Koi pond with different charges like the different fish species. The charges can move around locally, but the total number of charges is globally fixed, like how the fish swim around but can’t escape the pond. Also, the presence of one type of charge can alter another’s movement, just as a big fish might block a little one’s path. 

Unfortunately, the Koi pond analogy reaches its limit when we move to quantum charges. Classically, charges commute. This means that we can simultaneously determine the amount of each charge in our system at each given moment. In quantum mechanics, this isn’t necessarily true. In other words, classically, I can count the number of glossy fish and matte fish. But, in quantum mechanics, I can’t.
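That claim is easy to check directly: spin components are the standard example of noncommuting charges. A quick numpy calculation (purely to illustrate the algebra) shows that [σx, σy] = 2iσz ≠ 0, so quantum theory forbids assigning sharp values to both components at once:

```python
import numpy as np

# Pauli matrices: the x-, y-, and z-components of a qubit's spin.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

commutator = sx @ sy - sy @ sx
print(np.allclose(commutator, 2j * sz))  # True: [sx, sy] = 2i sz, nonzero
```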

So why does this matter? Subsystems exchanging charges are prevalent in thermodynamics. Quantum thermodynamics extends thermodynamics to include small systems and quantum effects. Noncommutation underlies many important quantum phenomena. Hence, studying the exchange of noncommuting charges is pivotal in understanding quantum thermodynamics. Consequently, noncommuting charges have emerged as a rapidly growing subfield of quantum thermodynamics. Many interesting results have been discovered from no longer assuming that charges commute (such as these). Until recently, most of these discoveries have been theoretical. Bridging these discoveries to experimental reality requires Hamiltonians (functions that tell you how your system evolves in time) that move charges locally but conserve them globally. Last year it was unknown whether these Hamiltonians exist, what they look like generally, how to build them, and for what charges you could find them.
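The required property, moving charges locally while conserving them globally, can be checked numerically for one familiar family of Hamiltonians (this example merely illustrates the property; our paper’s general construction uses Lie algebras). The Heisenberg interaction on a qubit chain transports all three spin components between sites while conserving their global totals:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def on_site(op, site, n):
    """Embed a single-qubit operator on a given site of an n-qubit chain."""
    ops = [I2] * n
    ops[site] = op
    return reduce(np.kron, ops)

n = 3  # chain length
# Heisenberg Hamiltonian: nearest-neighbour exchange of all three spin components.
H = sum(on_site(s, j, n) @ on_site(s, j + 1, n)
        for j in range(n - 1) for s in (sx, sy, sz))

for s in (sx, sy, sz):
    Q_total = sum(on_site(s, j, n) for j in range(n))
    Q_local = on_site(s, 0, n)
    # Global charge conserved; local charge is not -- it moves between sites.
    print(np.allclose(H @ Q_total - Q_total @ H, 0),   # True
          np.allclose(H @ Q_local - Q_local @ H, 0))   # False
```

The fish can swim from one end of the pond to the other, but none can leave.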

Nicole Yunger Halpern (NIST physicist, my co-advisor, and Quantum Frontiers blogger) and I developed a prescription for building Koi ponds for noncommuting charges. Our prescription allows you to systematically build Hamiltonians that overtly move noncommuting charges between subsystems while conserving the charges globally. These Hamiltonians are built using Lie algebras, abstract mathematical tools that can describe many physical quantities (including everything in the standard model of particle physics and the space-time metric). Our results were recently published in npj QI. We hope that our prescription will bolster the efforts to bridge the results of noncommuting charges to experimental reality.

In the end, a little group theory was all I needed for my Koi pond. Maybe I’ll build a treehouse next with calculus or a remote control car with combinatorics.