Quantum estuary

Tourism websites proclaim, “There’s beautiful…and then there’s Santa Barbara.” I can’t accuse them of hyperbole, after living in Santa Barbara for several months. Santa Barbara’s beauty manifests in its whitewashed buildings, capped with red tiles; in the glint of sunlight on ocean waves; and in the pockets of tranquility enfolded in meadows and copses. An example lies about an hour’s walk from the Kavli Institute for Theoretical Physics (KITP), where I spent the late summer and early fall: an estuary. According to National Geographic, “[a]n estuary is an area where a freshwater river or stream meets the ocean.” The meeting of freshwater and saltwater echoed the meeting of disciplines at the KITP.

The KITP fosters science as a nature reserve fosters an ecosystem. Every year, the institute hosts several programs, each centered on one scientific topic. A program lasts a few weeks or months, during which scientists visit from across the world. We present our perspectives on the program topic, identify intersections of interests, collaborate, and exclaim over the ocean views afforded by our offices.

From August to October, the KITP hosted two programs about energy and information. The first program was called “Energy and Information Transport in Non-Equilibrium Quantum Systems,” or “Information,” for short. The second program was called “Non-Equilibrium Universality: From Classical to Quantum and Back,” or “Universality.” The programs’ topics and participant lists overlapped, so the KITP merged “Information” and “Universality” to form “Infoversality.” Don’t ask me which program served as the saltwater and which as the fresh.

But the mingling of minds ran deeper. Much of “Information” centered on quantum many-body physics, the study of behaviors emergent in collections of quantum particles. But the program introduced many-body quantum physicists to quantum thermodynamics and vice versa. (Quantum thermodynamicists re-envision thermodynamics, the Victorian science of energy, for quantum, small, information-processing, and far-from-equilibrium systems.) Furthermore, quantum thermodynamicists co-led the program and presented research at it. Months ago, someone advertised the program in the quantum-thermodynamics Facebook group as an activity geared toward group members.

The ocean of many-body physics was to meet the river of quantum thermodynamics, and I was thrilled as a trout swimming near a hiker who’s discovered cracker crumbs in her pocket.

A few of us live in this estuary, marrying quantum thermodynamics and many-body physics. I waded into the waters in 2016, by codesigning an engine (the star of Victorian thermodynamics) formed from a quantum material (studied in many-body physics). We can use tools from one field to solve problems in the other, draw inspiration from one to design questions in the other, and otherwise do what the United States Food and Drug Administration recently announced that we can do with COVID-19 vaccines: mix and match.

It isn’t easy being interdisciplinary, so I wondered how this estuary would fare when semi-institutionalized in a program. I collected observations like seashells—some elegantly molded, some liable to cut a pedestrian’s foot, and some both.

A sand dollar washed up early in the program, as I ate lunch with a handful of many-body physicists. An experimentalist had just presented a virtual talk about nanoscale clocks, which grew from studies of autonomous quantum clocks. The latter run on their own, without needing any external system to wind or otherwise control them. You’d want such clocks if building quantum engines, computers, or drones that operate remotely. Clocks measure time, time complements energy mathematically in physics, and thermodynamics is the study of energy; so autonomous quantum clocks have taken root in quantum thermodynamics. So I found myself explaining autonomous quantum clocks over sandwiches. My fellow diners expressed interest alongside confusion.

A scallop shell, sporting multiple edges, washed up later in the program: Many-body physicists requested an introduction to quantum thermodynamics. I complied one afternoon, at a chalkboard in the KITP’s outdoor courtyard. The discussion lasted for an hour, whereas most such conversations lasted for two. But three participants peppered me with questions over the coming weeks.

A conch shell surfaced, whispering when held to an ear. One program participant, a member of one community, had believed the advertising that had portrayed the program as intended for his cohort. The portrayal didn’t match reality, to him, and he’d have preferred to dive more deeply into his own field.

I dove into a collaboration with other KITPists—a many-body project inspired by quantum thermodynamics. Keep an eye out for a paper and a dedicated blog post.

A conference talk served as a polished shell, reflecting light almost as a mirror. The talk centered on erasure, a process that unites thermodynamics with information processing: Imagine performing computations in math class. You need blank paper (or the neurological equivalent) on which to scribble. Upon computing a great deal, you have to erase the paper—to reset it to a clean state. Erasing calls for rubbing an eraser across the paper and so for expending energy. This conclusion extends beyond math class and paper: To compute—or otherwise process information—for a long time, we have to erase information-storage systems and so to expend energy. This conclusion renders erasure sacred to us thermodynamicists who study information processing. Erasure litters our papers, conferences, and conversations.

Erasure’s energy cost trades off with time: The more time you can spend on erasure, the less energy you need.1 The conference talk explored this tradeoff, absorbing the quantum thermodynamicist in me. A many-body physicist asked, at the end of the talk, why we were discussing erasure. What quantum thermodynamicists took for granted, he hadn’t heard of. He reflected back at our community an image of ourselves from an outsider’s perspective. The truest mirror might not be the flattest and least clouded.

Plants and crustaceans, mammals and birds, grow in estuaries. Call me a bent-nosed clam, but I prefer a quantum estuary to all other environments. Congratulations to the scientists who helped create a quantum estuary this summer and fall, and I look forward to the harvest.

1The least amount of energy that erasure can cost, on average over trials, is called Landauer’s bound. You’d pay this bound’s worth of energy if you erased infinitely slowly.
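For a sense of scale, Landauer’s bound is easy to evaluate. A quick back-of-the-envelope calculation (my own illustration, assuming one bit erased at room temperature, T = 300 K):

```python
import math

# Landauer's bound: erasing one bit costs at least k_B * T * ln(2) of energy,
# on average over trials, in the limit of infinitely slow erasure.
k_B = 1.380649e-23      # Boltzmann constant, joules per kelvin (exact SI value)
T = 300.0               # room temperature, in kelvin

bound = k_B * T * math.log(2)
print(bound)            # ~2.87e-21 joules per erased bit
```

That’s a minuscule energy per bit, but today’s computers erase bits by the trillions each second, far above the bound, which is why the time-energy tradeoff matters for real machines.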

How a liberal-arts education has enhanced my physics research

I attended a liberal-arts college, and I reveled in the curriculum’s breadth. My coursework included art history, psychology, biology, economics, computer science, German literature, archaeology, and chemistry. My major sat halfway between the physics major and the create-your-own major; the requirements consisted mostly of physics but included math, philosophy, and history. By the end of college, I’d determined to dive into physics. So I undertook a physics research assistantship, enlisted in a Master’s program and then a PhD program, and became a theoretical physicist. I’m now building a physics research group that spans a government institute and the University of Maryland. One might think that I became a physicist despite my art history and archaeology.

My liberal-arts education did mortify me a little as I pursued my Master’s degree. Most of my peers had focused on physics, mathematics, and computer science while I’d been reading Aristotle. They seemed to breeze through coursework that I clawed my way through. I still sigh wistfully over math courses, such as complex analysis, that I’ve never taken. Meanwhile, a debate about the liberal arts has been raging across the nation. Debt is weighing down recent graduates, and high-school students are loading up on STEM courses. Colleges are cutting liberal-arts departments, and educational organizations are broadcasting the value of liberal-arts educations.

I’m not an expert in public policy or school systems; I’m a physicist. As a physicist, I’m grateful for my liberal-arts education. It’s enhanced my physics research in at least five ways.

(1) I learned to seek out, and take advantage of, context. Early in my first German-literature course, I’d just completed my first reading assignment. My professor told my class to take out our books and open them to the beginning. A few rustles later, we held our books open to page one of the main text.

No, no, said my professor. Open your books to the beginning. Did anyone even look at the title page?

We hadn’t, we admitted. We’d missed a wealth of information, as the book contained a reproduction of an old title page. Publishers, fonts, and advertisement styles have varied across the centuries and the globe. They, together with printing and reprinting dates, tell stories about the book’s origin, popularity, role in society, and purposes. Furthermore, a frontispiece is worth a thousand words, all related before the main text begins. When my class turned to the main text, much later in the lecture, we saw it in a new light. Context deepens and broadens our understanding.

When I read a physics paper, I start at the beginning—the true beginning. I note the publication date, the authors, their institutions and countries, and the journal. X’s lab performed the experiment reported on? X was the world’s expert in Y back then but nursed a bias against Z, a bias later proved to be unjustified. So I should aim to learn from the paper about Y but should take statements about Z with a grain of salt. Seeking and processing context improves my use of physics papers, thanks to a German-literature course.

(2) I learned argumentation. Doing physics involves building, analyzing, criticizing, and repairing arguments. I argue that mathematical X models physical system Y accurately, that an experiment I’ve proposed is feasible with today’s technology, and that observation Z supports a conjecture of mine. Physicists also prove mathematical statements deductively. I received proof-writing lessons in a math course, halfway through college. One of the most competent teachers I’ve ever encountered taught the course. But I learned about classes of arguments and about properties of arguments in a philosophy course, Informal Logic.

There, I learned to distinguish deduction from inference and an argument’s validity and soundness from an argument’s strength and cogency. I learned strategies for proving arguments and learned fallacies to criticize. I came to respect the difference between “any” and “every,” which I see interchanged in many physics papers. This philosophical framework helps me formulate, process, dissect, criticize, and correct physics arguments.

For instance, I often parse long, dense, technical proofs of mathematical statements. First, I identify whether the proof strategy is reductio ad absurdum, proof by counterexample, or another strategy. Upon identifying the overarching structure, I can fill my understanding with details. Additionally, I check proofs by students, and I respond to criticisms of my papers by journal referees. I could say, upon reading an argument, “Something feels a bit off, and it’s sort of like the thing that felt a bit off in that paper I read last Tuesday.” But I’d rather group the argument I’m given together with arguments I know how to tackle. I’d rather be able to say, “They’re straw-manning my argument” or “That argument begs the question.” Doing so, I put my finger on the problem and take a step toward solving it.

(3) I learned to analyze materials to bits, then extract meaning from the analysis. English and German courses trained me to wring from literature every drop of meaning that I could discover. I used to write one to three pages about a few-line quotation. The analysis would proceed from diction and punctuation to literary devices; allusions; characters’ relationships with each other, themselves, and nature; and the quotation’s role in the monograph. Everything from minutia to grand themes required scrutiny, according to the dissection technique I trained in. Every pincer probe lifted another skein of skin or drew aside another tendon, offering deeper insights into the literary work. I learned to find the skeins to lift, lift them in the right direction, pinpoint the insights revealed, and integrate the insights into a coherent takeaway.

This training has helped me assess and interpret mathematics. Physicists pick a physical system to study, model the system with equations, and solve the equations. The next two steps are intertwined: evaluating whether one solved the equations correctly and translating the solution into the physical system’s behavior. These two steps necessitate a dissection of everything from minutia to grand themes: Why should this exponent be 4/5, rather than any other number? Should I have expected this energy to depend on that length in this way? Is the physical material aging quickly or resisting change? These questions’ answers inform more-important questions: Who cares? Do my observations shed light worth anyone’s time, or did I waste a week solving equations no one should care about?

To answer all these questions, I draw on my literary training: I dissect content, pinpoint insights, and extract meaning. My literature courses arguably cultivated this analysis more deeply than my physics training did: In literature courses, I had to organize my thoughts and articulate them in essays. This process revealed holes in my argumentation, as well as connections that I’d overlooked. In contrast, a couple of lines in my physics homework earned full marks. The critical analysis of literature has deepened my assessment of solutions’ correctness, physical interpretation of mathematics, and extraction of meaning from solutions.

(4) I learned what makes a physicist a physicist. In college, I had a friend who was studying applied mathematics and economics. Over dinner, he described a problem he’d encountered in his studies. I replied, almost without thinking, “From a physics perspective, I’d approach the problem like this.” I described my view, which my friend said he wouldn’t have thought of. I hadn’t thought of myself, and of the tools I was obtaining in the physics department, the way I did after our conversation.

Physics involves a unique toolkit,1 set of goals, and philosophy. Physicists identify problems, model them, solve them, and analyze the results in certain ways. Students see examples of these techniques in lectures and practice these techniques for homework. But, as a student, I rarely heard articulations of the general principles that underlay the examples scattered across my courses like a handful of marbles across a kitchen floor. Example principles include: if you don’t understand an abstract idea, construct a simple example. Once you’ve finished a calculation, check whether your answer makes sense in the most extreme scenarios possible. After solving an equation, interpret the solution in terms of physical systems—of how particles and waves move and interact.

I was learning these techniques, in college, without realizing that I was learning them. I became conscious of the techniques by comparing the approach natural to me with the approach taken in another discipline. Becoming conscious of my toolkit enabled me to wield it more effectively; one can best fry eggs when aware that one owns a spatula. The other disciplines at my liberal-arts college served as a foil for physics. Seeing other disciplines, I saw what makes physics physics—and improved my ability to apply my physics toolkit.

(5) I learned to draw connections between diverse ideas. Senior year of high school, my courses extended from physics to English literature. One might expect such a curriculum to feel higgledy-piggledy, but I found threads that ran through all my courses. For instance, I practiced public speaking in Reasoning, Research, and Rhetoric. Because I studied rhetoric, my philosophy teacher turned to me for input when introducing the triumvirate “thesis, antithesis, synthesis.”2 The philosophy curriculum included the feminist essay “If Men Could Menstruate,” which complemented the feminist book Wide Sargasso Sea in my English-literature course. In English literature, I learned that Baldassare Castiglione codified how Renaissance noblemen should behave, in The Book of the Courtier. The author’s name was the answer to the first question on my AP Modern European History exam. My history course covered Isaac Newton and Gottfried Wilhelm Leibniz, who invented calculus during the 17th century. I leveraged their discoveries in my calculus course, which I applied in my physics course. My physics teacher hoped that his students would solve the world’s energy problems—perhaps averting the global thermonuclear war that graced every debate in my rhetoric course (“If you don’t accept my team’s policy, then X will happen, leading to Y, leading to Z, which will cause a global thermonuclear war”).

Threads linked everything across my liberal-arts education; every discipline featured an idea that paralleled an idea in another discipline. Finding those parallels grew into a game for me, a game that challenged my creativity. Cultivating that creativity paid off when I began doing physics research. Much of my research has resulted from finding, in one field, a concept that resembles a concept in another field. I smash the ideas together to gain insight into each discipline from the other discipline’s perspective. For example, during my PhD studies, I found a thread connecting the physics of DNA strands to the physics of black holes. That thread initiated a research program of mine that’s yielded nine papers, garnered 19 collaborators, and spawned two experiments. Studying diverse subjects trained me to draw creative connections, which underlie much physics research.

I haven’t detailed all the benefits that a liberal-arts education can accrue to a physics career. For instance, the liberal arts enhance one’s communication skills, key to collaborating on research and to conveying one’s research. Without conveying one’s research adroitly, one likely won’t impact a field much. Also, a liberal-arts education can help one connect with researchers from across the globe on a personal level.3 Personal connections enhance science, which scientists—humans—undertake.

As I began building my research group, I sought advice from an MIT professor who’d attended MIT as an undergraduate. He advised me to seek students who have unusual backgrounds, including liberal-arts educations. Don’t get me wrong; I respect and cherish the colleagues and friends of mine who attended MIT, Caltech, and other tech schools as undergraduates. Still, I wouldn’t trade my German literature and economics. The liberal arts have enriched my physics research no less than they’ve enriched the rest of my life.

1A toolkit that overlaps partially with other disciplines’ toolkits, as explained in (3).

2I didn’t help much. When asked to guess the last concept in the triumvirate, I tried “debate.”

3I once met a Ukrainian physicist who referred to Ilya Muromets in a conversation. Ilya Muromets is a bogatyr, a knight featured in Slavic epics set in the Middle Ages. I happened to have taken a Slavic-folklore course the previous year. So I responded with a reference to Muromets’s pals, Dobrynya Nikitich and Alyosha Popovich. The physicist and I hit it off, and he taught me much about condensed matter over the following months.

Cutting the quantum mustard

I had a relative to whom my parents referred, when I was little, as “that great-aunt of yours who walked into a glass door at your cousin’s birthday party.” I was a small child in a large family that mostly lived far away; little else distinguished this great-aunt from other relatives, in my experience. She’d intended to walk from my grandmother’s family room to the back patio. A glass door stood in the way, but she didn’t see it. So my great-aunt whammed into the glass; spent part of the party on the couch, nursing a nosebleed; and earned the epithet via which I identified her for years.

After growing up, I came to know this great-aunt as a kind, gentle woman who adored her family and was adored in return. After growing into a physicist, I came to appreciate her as one of my earliest instructors in necessary and sufficient conditions.

My great-aunt’s intended path satisfied one condition necessary for her to reach the patio: Nothing visible obstructed the path. But the path failed to satisfy a sufficient condition: The invisible obstruction—the glass door—had been neither slid nor swung open. Sufficient conditions, my great-aunt taught me, mustn’t be overlooked.

Her lesson underlies a paper I published this month, with coauthors from the Cambridge other than mine—Cambridge, England: David Arvidsson-Shukur and Jacob Chevalier Drori. The paper concerns, rather than pools and patios, quasiprobabilities, which I’ve blogged about many times [1,2,3,4,5,6,7]. Quasiprobabilities are quantum generalizations of probabilities. Probabilities describe everyday, classical phenomena, from Monopoly to March Madness to the weather in Massachusetts (and especially the weather in Massachusetts). Probabilities are real numbers (not dependent on the square root of −1); they’re at least zero; and they compose in certain ways (the probability of sun or hail equals the probability of sun plus the probability of hail). Also, the probabilities that form a distribution, or a complete set, sum to one (if there’s a 70% chance of rain, there’s a 30% chance of no rain).

In contrast, quasiprobabilities can be negative and nonreal. We call such values nonclassical, as they’re unavailable to the probabilities that describe classical phenomena. Quasiprobabilities represent quantum states: Imagine some clump of particles in a quantum state described by some quasiprobability distribution. We can imagine measuring the clump however we please. We can calculate the possible outcomes’ probabilities from the quasiprobability distribution.

My favorite quasiprobability is an obscure fellow unbeknownst even to most quantum physicists: the Kirkwood-Dirac distribution. John Kirkwood defined it in 1933, and Paul Dirac defined it independently in 1945. Then, quantum physicists forgot about it for decades. But the quasiprobability has undergone a renaissance over the past few years: Experimentalists have measured it to infer particles’ quantum states in a new way. Also, colleagues and I have generalized the quasiprobability and discovered applications of the generalization across quantum physics, from quantum chaos to metrology (the study of how we can best measure things) to quantum thermodynamics to the foundations of quantum theory.
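A minimal sketch of my own (not from our paper) shows the Kirkwood-Dirac distribution in action, using the standard definition with respect to two orthonormal bases, Q(a, b) = ⟨b|a⟩⟨a|ρ|b⟩; the qubit state and the angle π/8 are arbitrary illustrative choices:

```python
import numpy as np

# Kirkwood-Dirac quasiprobability of a state rho, with respect to two
# orthonormal bases {|a>} and {|b>}: Q(a, b) = <b|a><a|rho|b>.
def kirkwood_dirac(rho, basis_a, basis_b):
    dim = rho.shape[0]
    Q = np.empty((dim, dim), dtype=complex)
    for i, a in enumerate(basis_a):
        for j, b in enumerate(basis_b):
            Q[i, j] = (b.conj() @ a) * (a.conj() @ rho @ b)
    return Q

# Qubit example: basis A = Z eigenbasis, basis B = X eigenbasis (these bases
# fail to commute), and rho a pure state tilted between the two bases.
zero, one = np.eye(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

theta = np.pi / 8
psi = np.cos(theta) * zero + np.sin(theta) * one
rho = np.outer(psi, psi.conj())

Q = kirkwood_dirac(rho, [zero, one], [plus, minus])
print(Q.sum().real)   # entries sum to 1, like a probability distribution
print(Q.real.min())   # but one entry is negative: nonclassicality
```

Summing Q over either index recovers the ordinary outcome probabilities for measuring in the other basis, which is why the distribution earns the “quasi” prefix: it behaves like a probability distribution except where negativity (or nonreality) sneaks in.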

In some applications, nonclassical quasiprobabilities enable a system to achieve a quantum advantage—to usefully behave in a manner impossible for classical systems. Examples include metrology: Imagine wanting to measure a parameter that characterizes some piece of equipment. You’ll perform many trials of an experiment. In each trial, you’ll prepare a system (for instance, a photon) in some quantum state, send it through the equipment, and measure one or more observables of the system. Say that you follow the protocol described in this blog post. A Kirkwood-Dirac quasiprobability distribution describes the experiment.1 From each trial, you’ll obtain information about the unknown parameter. How much information can you obtain, on average over trials? Potentially more information if some quasiprobabilities are negative than if none are. The quasiprobabilities can be negative only if the state and observables fail to commute with each other. So noncommutation—a hallmark of quantum physics—underlies exceptional metrological results, as shown by Kirkwood-Dirac quasiprobabilities.

Exceptional results are useful, and we might aim to design experiments that achieve them. We can do so by designing experiments described by nonclassical Kirkwood-Dirac quasiprobabilities. When can the quasiprobabilities become nonclassical? Whenever the relevant quantum state and observables fail to commute, the quantum community used to believe. This belief turns out to mirror the expectation that one could access my grandmother’s back patio from the family room whenever no visible barriers obstructed the path. As a lack of visible barriers was necessary for patio access, noncommutation is necessary for Kirkwood-Dirac nonclassicality. But noncommutation doesn’t suffice, according to my paper with David and Jacob. We identified a sufficient condition, sliding back the metaphorical glass door on Kirkwood-Dirac nonclassicality. The condition depends on simple properties of the system, state, and observables. (Experts: Examples include the Hilbert space’s dimensionality.) We also quantified and upper-bounded the amount of nonclassicality that a Kirkwood-Dirac quasiprobability can contain.

From an engineering perspective, our results can inform the design of experiments intended to achieve certain quantum advantages. From a foundational perspective, the results help illuminate the sources of certain quantum advantages. To achieve certain advantages, noncommutation doesn’t cut the mustard—but we now know a condition that does.

For another take on our paper, check out this news article in Physics Today.

1Really, a generalized Kirkwood-Dirac quasiprobability. But that phrase contains a horrendous number of syllables, so I’ll elide the “generalized.”

The autumn of my sophomore year of college was mildly hellish. I took the equivalent of three semester-long computer-science and physics courses, atop other classwork; co-led a public-speaking self-help group; and coordinated a celebrity visit to campus. I lived at my desk and in office hours, always declining my flatmates’ invitations to watch The West Wing.

Hard as I studied, my classmates enjoyed greater facility with the computer-science curriculum. They saw immediately how long an algorithm would run, while I hesitated and then computed the run time step by step. I felt behind. So I protested when my professor said, “You’re good at this.”

I now see that we were focusing on different facets of learning. I rued my lack of intuition. My classmates had gained intuition by exploring computer science in high school, then slow-cooking their experiences on a mental back burner. Their long-term exposure to the material provided familiarity—the ability to recognize a new problem as belonging to a class they’d seen examples of. I was cooking course material in a mental microwave set on “high,” as a semester’s worth of material was crammed into ten weeks at my college.

My professor wasn’t measuring my intuition. He only saw that I knew how to compute an algorithm’s run time. I’d learned the material required of me—more than I realized, being distracted by what I hadn’t learned that difficult autumn.

We can learn a staggering amount when pushed far from our comfort zones—and not only we humans can. So can simple collections of particles.

Examples include a classical spin glass. A spin glass is a collection of particles that shares some properties with a magnet. Both a magnet and a spin glass consist of tiny mini-magnets called spins. Although I’ve blogged about quantum spins before, I’ll focus on classical spins here. We can imagine a classical spin as a little arrow that points upward or downward. A bunch of spins can form a material. If the spins tend to point in the same direction, the material may be a magnet of the sort that’s sticking the faded photo of Fluffy to your fridge.

The spins may interact with each other, similarly to how electrons interact with each other. Not entirely similarly, though—electrons push each other away. In contrast, a spin may coax its neighbors into aligning or anti-aligning with it. Suppose that the interactions are random: Any given spin may force one neighbor into alignment, gently ask another neighbor to align, entreat a third neighbor to anti-align, and have nothing to say to neighbors four and five.

The spin glass can interact with the external world in two ways. First, we can stick the spins in a magnetic field, as by placing magnets above and below the glass. If aligned with the field, a spin has negative energy; and, if antialigned, positive energy. We can sculpt the field so that it varies across the spin glass. For instance, spin 1 can experience a strong upward-pointing field, while spin 2 experiences a weak downward-pointing field.

Second, say that the spins occupy a fixed-temperature environment, as I occupy a 74-degree-Fahrenheit living room. The spins can exchange heat with the environment. If releasing heat to the environment, a spin flips from having positive energy to having negative—from antialigning with the field to aligning.

Let’s perform an experiment on the spins. First, we design a magnetic field using random numbers. Whether the field points upward or downward at any given spin is random, as is the strength of the field experienced by each spin. We sculpt three of these random fields and call the trio a drive.

Let’s randomly select a field from the drive and apply it to the spin glass for a while; again, randomly select a field from the drive and apply it; and continue many times. The energy absorbed by the spins from the fields spikes, then declines.

Now, let’s create another drive of three random fields. We’ll randomly pick a field from this drive and apply it; again, randomly pick a field from this drive and apply it; and so on. Again, the energy absorbed by the spins spikes, then tails off.

Here comes the punchline. Let’s return to applying the initial fields. The energy absorbed by the glass will spike—but not as high as before. The glass responds differently to a familiar drive than to a new drive. The spin glass recognizes the original drive—has learned the first fields’ “fingerprint.” This learning happens when the fields push the glass far from equilibrium,1 as I learned when pushed during my mildly hellish autumn.
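The experiment above can be sketched numerically. This is a toy illustration of my own, not code from the papers cited: random symmetric couplings stand in for the spin glass, a Metropolis-style sweep models heat exchange with the fixed-temperature environment, and “work” is recorded as the energy injected each time the field switches. The parameters (64 spins, three fields, the temperature) are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # number of spins
J = rng.normal(0, 1 / np.sqrt(N), (N, N))
J = np.triu(J, 1)
J = J + J.T                              # random symmetric couplings, zero diagonal
beta = 1.0                               # inverse temperature of the environment

def energy(s, h):
    # spin-glass energy: random interactions plus a site-dependent field
    return -0.5 * s @ J @ s - h @ s

def metropolis_sweep(s, h):
    # heat exchange with the environment: flip spins with Boltzmann probabilities
    for i in rng.integers(0, N, size=N):
        dE = 2 * s[i] * (J[i] @ s + h[i])    # energy cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]

drive = [rng.normal(0, 1, N) for _ in range(3)]   # a drive: three random fields
s = rng.choice([-1.0, 1.0], N)                    # random initial spin configuration
h_old = np.zeros(N)
work = []
for _ in range(50):
    h_new = drive[rng.integers(3)]
    # work = energy injected by switching the field while the spins sit still
    work.append(energy(s, h_new) - energy(s, h_old))
    metropolis_sweep(s, h_new)
    h_old = h_new
```

Tracking `work` over many field switches reproduces the qualitative story: the energy absorbed tends to spike early, then decline as the glass relaxes into configurations suited to the drive; switching to a fresh drive makes the absorption spike again.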

So spin glasses learn drives that push them far from equilibrium. So do many other simple, classical, many-particle systems: polymers, viscous liquids, crumpled sheets of Mylar, and more. Researchers have predicted such learning and observed it experimentally.

Scientists have detected many-particle learning by measuring thermodynamic observables. Examples include the energy absorbed by the spin glass—what thermodynamicists call work. But thermodynamics developed during the 1800s, to describe equilibrium systems, not to study learning.

One study of learning—the study of machine learning—has boomed over the past two decades. As described by the MIT Technology Review, “[m]achine-learning algorithms use statistics to find patterns in massive amounts of data.” Users don’t tell the algorithms how to find those patterns.

It seems natural and fitting to use machine learning to learn about the learning by many-particle systems. That’s what I did with collaborators from the group of Jeremy England, a GlaxoSmithKline physicist who studies complex behaviors of many-particle systems. Weishun Zhong, Jacob Gold, Sarah Marzen, Jeremy, and I published our paper last month.

Using machine learning, we detected and measured many-particle learning more reliably and precisely than thermodynamic measures seem able to. Our technique works on multiple facets of learning, analogous to the intuition and the computational ability I encountered in my computer-science course. We illustrated our technique on a spin glass, but one can apply our approach to other systems, too. I’m exploring such applications with collaborators at the University of Maryland.

The project pushed me far from my equilibrium: I’d never worked with machine learning or many-body learning. But it’s amazing, what we can learn when pushed far from equilibrium. I first encountered this insight sophomore fall of college—and now, we can quantify it better than ever.

1Equilibrium is a quiet, restful state in which the glass’s large-scale properties change little. No net flow of anything—such as heat or particles—enters or leaves the system.

Project Ant-Man

The craziest challenge I’ve undertaken hasn’t been skydiving; sailing the Amazon on a homemade raft; scaling Mt. Everest; or digging for artifacts atop a hill in a Middle Eastern desert, near midday, during high summer.1 The craziest challenge has been to study the possibility that quantum phenomena affect cognition significantly.

Most physicists agree that quantum phenomena probably don’t affect cognition significantly. Cognition occurs in biological systems, which have high temperatures, many particles, and watery components. Such conditions quash entanglement (a relationship that quantum particles can share and that can produce correlations stronger than any producible by classical particles).

Yet Matthew Fisher, a condensed-matter physicist, proposed a mechanism by which entanglement might enhance coordinated neuron firing. Phosphorus nuclei have spins (quantum properties similar to angular momentum) that might store quantum information for long times when in Posner molecules. These molecules may protect the information from decoherence (leaking quantum information to the environment), via mechanisms that Fisher described.

I can’t check how correct Fisher’s proposal is; I’m not a biochemist. But I’m a quantum information theorist. So I can identify how Posners could process quantum information if Fisher were correct. I undertook this task with my colleague Elizabeth Crosson, during my PhD.

Experimentalists have begun testing elements of Fisher’s proposal. What if, years down the road, they find that Posners exist in biofluids and protect quantum information for long times? We’ll need to test whether Posners can share entanglement. But detecting entanglement tends to require control finer than you can exert with a stirring rod. How could you check whether a beakerful of particles contains entanglement?

I asked that question of Adam Bene Watts, a PhD student at MIT, and John Wright, then an MIT postdoc and now an assistant professor in Texas. John gave our project its codename. At a meeting one day, he reported that he’d watched the film Avengers: Endgame. Had I seen it? he asked.

No, I replied. The only superhero movie I’d seen recently had been Ant-Man and the Wasp—and that because, according to the film’s scientific advisor, the movie riffed on research of mine.

Go on, said John.

Spiros Michalakis, the Caltech mathematician in charge of this blog, served as the advisor. The film came out during my PhD; during a meeting of our research group, Spiros advised me to watch the movie. There was something in it “for you,” he said. “And you,” he added, turning to Elizabeth. I obeyed, to hear Laurence Fishburne’s character tell Ant-Man that another character had entangled with the Posner molecules in Ant-Man’s brain.2

John insisted on calling our research Project Ant-Man.

John and Adam study Bell tests. Bell test sounds like a means of checking whether the collar worn by your cat still jingles. But the test owes its name to John Stewart Bell, a Northern Irish physicist who wrote a groundbreaking paper in 1964.

Say you’d like to check whether two particles share entanglement. You can run an experiment, described by Bell, on them. The experiment ends with a measurement of the particles. You repeat this experiment in many trials, using identical copies of the particles in subsequent trials. You accumulate many measurement outcomes, whose statistics you calculate. You plug those statistics into a formula concocted by Bell. If the result exceeds some number that Bell calculated, the particles shared entanglement.
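Bell’s recipe is often run in its CHSH form, which follows exactly these steps: accumulate correlations between pairs of measurement settings, combine them, and compare with the classical bound of 2. The sketch below is a numerical check (not an experiment) of that combination for a maximally entangled pair of qubits:

```python
import numpy as np

# Pauli matrices and the two-qubit singlet (maximally entangled) state
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable measured along angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def correlation(a, b):
    """Expectation value of A tensor B in the singlet state."""
    op = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ op @ singlet)

# CHSH combination of correlations, at the standard optimal angles
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4
chsh = (correlation(a0, b0) - correlation(a0, b1)
        + correlation(a1, b0) + correlation(a1, b1))

print(abs(chsh))  # ≈ 2.83 > 2: the singlet shares entanglement
```

The value $2\sqrt{2} \approx 2.83$ is the largest that quantum theory allows; any value above 2 certifies entanglement.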

We needed a variation on Bell’s test. In our experiment, every trial would involve hordes of particles. The experimentalists—large, clumsy, classical beings that they are—couldn’t measure the particles individually. The experimentalists could record only aggregate properties, such as the intensity of the phosphorescence emitted by a test tube.

Adam, MIT physicist Aram Harrow, and I concocted such a Bell test, with help from John. Physical Review A published our paper this month—as a Letter and an Editor’s Suggestion, I’m delighted to report.

For experts: The trick was to make the Bell correlation function nonlinear in the state. We assumed that the particles shared mostly pairwise correlations, though our Bell inequality can accommodate small aberrations. Alas, no one can guarantee that particles share only mostly pairwise correlations. Violating our Bell inequality therefore doesn’t rule out hidden-variables theories. Under reasonable assumptions, though, a not-completely-paranoid experimentalist can check for entanglement using our test.

One can run our macroscopic Bell test on photons, using present-day technology. But we’re more eager to use the test to characterize lesser-known entities. For instance, we sketched an application to Posner molecules. Detecting entanglement in chemical systems will require more thought, as well as many headaches for experimentalists. But our paper broaches the cask—which I hope to see flow in the next Ant-Man film. Due to debut in 2022, the movie has the subtitle Quantumania. Sounds almost as crazy as studying the possibility that quantum phenomena affect cognition.

1Of those options, I’ve undertaken only the last.

2In case of any confusion: We don’t know that anyone’s brain contains Posner molecules. The movie features speculative fiction.

Random walks

A college professor of mine proposed a restaurant venture to our class. He taught statistical mechanics, the physics of many-particle systems. Examples range from airplane fuel to ice cubes to primordial soup. Such systems contain about $10^{24}$ particles each—so many particles that we couldn’t track them all if we tried. We can gather only a little information about the particles, so their actions look random.

So does a drunkard’s walk. Imagine a college student who (outside of the pandemic) has stayed out an hour too late and accepted one too many red plastic cups. He’s arrived halfway down a sidewalk, where he’s clutching a lamppost, en route home. Each step has a 50% chance of carrying him leftward and a 50% chance of carrying him rightward. This scenario repeats itself every Friday. On average, five minutes after arriving at the lamppost, he’s back at the lamppost. But, if we wait for a time $T$, we have a decent chance of finding him a distance $\sqrt{T}$ away. These characteristics typify a simple random walk.
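Both characteristics are easy to check numerically. A minimal simulation (with illustrative parameters of my choosing) averages over many independent drunkards, each taking $T$ unit steps left or right with equal probability:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Many independent walkers, each taking T steps of +1 or -1
n_walkers, T = 20_000, 400
steps = rng.choice([-1, 1], size=(n_walkers, T))
positions = steps.sum(axis=1)

print(positions.mean())                # ≈ 0: on average, back at the lamppost
print(np.sqrt((positions**2).mean()))  # ≈ sqrt(T) = 20: typical distance from it
```

The mean displacement vanishes, while the root-mean-square displacement grows as $\sqrt{T}$, just as the story describes.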

Random walks crop up across statistical physics. For instance, consider a grain of pollen dropped onto a thin film of water. The water molecules buffet the grain, which random-walks across the film. Robert Brown observed this walk in 1827, so we call it Brownian motion. Or consider a magnet at room temperature. The magnet’s constituents don’t walk across the surface, but they orient themselves according to random-walk mathematics. And, in quantum many-particle systems, information can spread via a random walk.

So, my statistical-mechanics professor said, someone should open a restaurant near MIT. Serve lo mein and Peking duck, and call the restaurant the Random Wok.

This is the professor who, years later, confronted another alumna and me at a snack buffet.

“You know what this is?” he asked, waving a pastry in front of us. We stared for a moment, concluded that the obvious answer wouldn’t suffice, and shook our heads.

“A brownie in motion!”

Not only pollen grains undergo Brownian motion, and not only drunkards undergo random walks. Many people random-walk to their careers, trying out and discarding alternatives en route. We may think that we know our destination, but we collide with a water molecule and change course.

Such is the thrust of Random Walks, a podcast to which I contributed an interview last month. Abhigyan Ray, an undergraduate in Mumbai, created the podcast. Courses, he thought, acquaint us only with the successes in science. Stereotypes cast scientists as lone geniuses working in closed offices and silent labs. He resolved to spotlight the collaborations, the wrong turns, the lessons learned the hard way—the random walks—of science. Interviewees range from a Microsoft researcher to a Harvard computer scientist to a neurobiology professor to a genomicist.

You can find my episode on Instagram, Apple Podcasts, Google Podcasts, and Spotify. We discuss the bridging of disciplines; the usefulness of a liberal-arts education in physics; Quantum Frontiers; and the delights of poking fun at my PhD advisor, fellow blogger and Institute for Quantum Information and Matter director John Preskill.

Love in the time of thermo

An 81-year-old medical doctor has fallen off a ladder in his house. His pet bird hopped out of his reach, from branch to branch of a tree on the patio. The doctor followed via ladder and slipped. His servants cluster around him, the clamor grows, and he longs for his wife to join him before he dies. She arrives at last. He gazes at her face; utters, “Only God knows how much I loved you”; and expires.

I set the book down on my lap and looked up. I was nestled in a wicker chair outside the Huntington Art Gallery in San Marino, California. Busts of long-dead Romans kept me company. The lawn in front of me unfurled below a sky that—unusually for San Marino—was partially obscured by clouds. My final summer at Caltech was unfurling. I’d walked to the Huntington, one weekend afternoon, with a novel from Caltech’s English library.1

What a novel.

You may have encountered the phrase “love in the time of corona.” Several times. Per week. Throughout the past six months. Love in the Time of Cholera predates the meme by 35 years. Nobel laureate Gabriel García Márquez captured the inhabitants, beliefs, architecture, mores, and spirit of a Colombian city around the turn of the 20th century. His work transcends its setting, spanning love, death, life, obsession, integrity, redemption, and eternity. A thermodynamicist couldn’t ask for more-fitting reading.

Love in the Time of Cholera centers on a love triangle. Fermina Daza, the only child of a wealthy man, excels in her studies. She holds herself with poise and self-assurance, and she spits fire whenever others try to control her. The girl dazzles Florentino Ariza, a poet, who restructures his life around his desire for her. Fermina Daza’s pride impresses Dr. Juvenal Urbino, a doctor renowned for exterminating a cholera epidemic. After rejecting both men, Fermina Daza marries Dr. Juvenal Urbino. The two personalities clash, and one betrays the other, but they cling together across the decades. Florentino Ariza retains his obsession with Fermina Daza, despite having countless affairs. Dr. Juvenal Urbino dies by ladder, whereupon Florentino Ariza swoops in to win Fermina Daza over. Throughout the book, characters mistake symptoms of love for symptoms of cholera; and lovers block out the world by claiming to have cholera and self-quarantining.

As a thermodynamicist, I see the second law of thermodynamics in every chapter. The second law implies that time marches only forward, order decays, and randomness scatters information to the wind. García Márquez depicts his characters aging, aging more, and aging more. Many characters die. Florentino Ariza’s mother loses her memory to dementia or Alzheimer’s disease. A pawnbroker, she buys jewels from the elite whose fortunes have eroded. Forgetting the jewels’ value one day, she mistakes them for candies and distributes them to children.

The second law bites most, to me, in the doctor’s final words, “Only God knows how much I loved you.” Later, the widow Fermina Daza sighs, “It is incredible how one can be happy for so many years in the midst of so many squabbles, so many problems, damn it, and not really know if it was love or not.” She doesn’t know how much her husband loved her, especially in light of the betrayal that rocked the couple and a rumor of another betrayal. Her husband could have affirmed his love with his dying breath, but he refused: He might have loved her with all his heart, and he might not have loved her; he kept the truth a secret to all but God. No one can retrieve the information after he dies.2

Love in the Time of Cholera—and thermodynamics—must sound like a mouthful of horseradish. But each offers nourishment, an appetizer and an entrée. According to the first law of thermodynamics, the amount of energy in every closed, isolated system remains constant: Physics preserves something. Florentino Ariza preserves his love for decades, despite Fermina Daza’s marrying another man, despite her aging.

The latter preservation can last only so long in the story: Florentino Ariza, being mortal, will die. He claims that his love will last “forever,” but he won’t last forever. At the end of the novel, he sails between two harbors—back and forth, back and forth—refusing to finish crossing a River Styx. I see this sailing as prethermalization: A few quantum systems resist thermalizing, or flowing to the physics analogue of death, for a while. But they succumb later. Florentino Ariza can’t evade the far bank forever, just as the second law of thermodynamics forbids his boat from functioning as a perpetuum mobile.

Though mortal within his story, Florentino Ariza survives as a book character. The book survives. García Márquez wrote about a country I’d never visited, and an era decades before my birth, 33 years before I checked his book out of the library. But the book dazzled me. It pulsed with the vibrancy, color, emotion, and intellect—with the fullness—of life. The book gained another life when the coronavirus hit. Thermodynamics dictates that people age and die, but the laws of thermodynamics remain.3 I hope and trust—with the caveat about humanity’s not destroying itself—that Love in the Time of Cholera will pulse in 350 years.

What’s not to love?

1Yes, Caltech has an English library. I found gems in it, and the librarians ordered more when I inquired about books they didn’t have. I commend it to everyone who has access.

2I googled “Only God knows how much I loved you” and was startled to see the line depicted as a hallmark of romance. Please tell your romantic partners how much you love them; don’t make them guess till the ends of their lives.

3Lee Smolin has proposed that the laws of physics change. If they do, the change seems to have to obey metalaws that remain constant.

If the (quantum-metrology) key fits…

My maternal grandfather gave me an antique key when I was in middle school. I loved the workmanship: The handle consisted of intertwined loops. I loved the key’s gold color and how the key weighed on my palm. Even more, I loved the thought that the key opened something. I accompanied my mother to antique shops, where I tried unlocking chests, boxes, and drawers.

My grandfather’s antique key

I found myself holding another such key, metaphorically, during the autumn of 2018. MIT’s string theorists had requested a seminar, so I presented about quasiprobabilities. Quasiprobabilities represent quantum states similarly to how probabilities represent a swarm of classical particles. Consider the steam rising from asphalt on a summer day. Calculating every steam particle’s position and momentum would require too much computation for you or me to perform. But we can predict the probability that, if we measure every particle’s position and momentum, we’ll obtain such-and-such outcomes. Probabilities are real numbers between zero and one. Quasiprobabilities can assume negative and nonreal values. We call these values “nonclassical,” because they’re verboten to the probabilities that describe classical systems, such as steam. I’d defined a quasiprobability, with collaborators, to describe quantum chaos.
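Quasiprobabilities’ nonclassical values are easy to exhibit numerically. Below is a minimal sketch (one qubit, with bases and a state of my own illustrative choosing) of the Kirkwood–Dirac distribution, a relative of the quasiprobability described above: its entries sum to one, like probabilities, yet can be negative or nonreal.

```python
import numpy as np

# Two incompatible measurement bases: eigenstates of Pauli Z and of Pauli X
z_basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
x_basis = [np.array([1, 1], dtype=complex) / np.sqrt(2),
           np.array([1, -1], dtype=complex) / np.sqrt(2)]

# An illustrative pure state |psi> with a relative phase
theta = 1.0
psi = np.array([np.cos(theta / 2), np.exp(0.7j) * np.sin(theta / 2)])

# Kirkwood-Dirac quasiprobability: Q(a, f) = <f|a><a|psi><psi|f>
Q = np.array([[(f.conj() @ a) * (a.conj() @ psi) * (psi.conj() @ f)
               for f in x_basis] for a in z_basis])

print(Q.sum())  # ≈ 1, like a probability distribution
print(Q)        # but individual entries have nonzero imaginary parts
```

Whenever the state and the two bases are mutually incompatible, some entries stray outside $[0, 1]$, signaling nonclassicality.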

David Arvidsson-Shukur was sitting in the audience. David is a postdoctoral fellow at the University of Cambridge and a visiting scholar in the other Cambridge (at MIT). He has a Swedish-and-southern-English accent that I’ve heard only once before and, I learned over the next two years, an academic intensity matched by his kindliness.1 Also, David has a name even longer than mine: David Roland Miran Arvidsson-Shukur. We didn’t know then, but we were destined to journey together, as postdoctoral knights-errant, on a quest for quantum truth.

David studies the foundations of quantum theory: What distinguishes quantum theory from classical? David suspected that a variation on my quasiprobability could unlock a problem in metrology, the study of measurements.

Suppose that you’ve built a quantum computer. It consists of gates—uses of, e.g., magnets or lasers to implement logical operations. A classical gate implements operations such as “add 11.” A quantum gate can implement an operation that involves some number $\theta$ more general than 11. You can try to build your gate correctly, but it might effect the wrong $\theta$ value. You need to measure $\theta$.

How? You prepare some quantum state $| \psi \rangle$ and operate on it with the gate. $\theta$ imprints itself on the state, which becomes $| \psi (\theta) \rangle$. Measure some observable $\hat{O}$. You repeat this protocol in each of many trials. The measurement yields different outcomes in different trials, according to quantum theory. The average amount of information that you learn about $\theta$ per trial is called the Fisher information.
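To make the Fisher information concrete, here is a toy numerical sketch of my own choosing (not the paper’s setup): the gate $e^{-i\theta Z/2}$ acts on $|+\rangle$, and we measure the observable $X$. The Fisher information follows from the outcome probabilities and their derivatives.

```python
import numpy as np

def outcome_probs(theta):
    """Probabilities of the X-measurement outcomes +1 and -1
    for the state e^{-i theta Z / 2} |+>."""
    p_plus = np.cos(theta / 2) ** 2
    return np.array([p_plus, 1 - p_plus])

def fisher_information(theta, eps=1e-6):
    """Classical Fisher information, F = sum_k (dp_k/dtheta)^2 / p_k,
    with the derivative taken by central finite differences."""
    p = outcome_probs(theta)
    dp = (outcome_probs(theta + eps) - outcome_probs(theta - eps)) / (2 * eps)
    return np.sum(dp ** 2 / p)

print(fisher_information(0.8))  # ≈ 1, the maximum for this gate and input
```

For this choice of state and measurement, each trial extracts the most information that quantum theory permits about $\theta$; poorer choices yield smaller Fisher information.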

Let’s change this protocol. After operating with the gate, measure another observable, $\hat{F}$, and postselect: If the $\hat{F}$ measurement yields a desirable outcome $f$, measure $\hat{O}$. If the $\hat{F}$-measurement doesn’t yield the desirable outcome, abort the trial, and begin again. If you choose $\hat{F}$ and $f$ adroitly, you’ll measure $\hat{O}$ only when the trial will provide oodles of information about $\theta$. You’ll save yourself many $\hat{O}$ measurements that would have benefited you little.2

Why does postselection help us? We could understand easily if the system were classical: The postselection would effectively improve the input state. To illustrate, let’s suppose that (i) a magnetic field implemented the gate and (ii) the input were metal or rubber. The magnetic field wouldn’t affect the rubber; measuring $\hat{O}$ in rubber trials would provide no information about the field. So you could spare yourself $\hat{O}$ measurements by postselecting on the system’s consisting of metal.

Postselection on a quantum system can defy this explanation. Consider optimizing your input state $| \psi \rangle$, beginning each trial with the quantum equivalent of metal. Postselection could still increase the average amount of information provided about $\theta$ per trial. Postselection can enhance quantum metrology even when postselection can’t enhance the classical analogue.

David suspected that he could prove this result, using, as a mathematical tool, the quasiprobability that collaborators and I had defined. We fulfilled his prediction, with Hugo Lepage, Aleks Lasek, Seth Lloyd, and Crispin Barnes. Nature Communications published our paper last month. The work bridges the foundations of quantum theory with quantum metrology and quantum information theory—and, through that quasiprobability, string theory. David’s and my quantum quest continues, so keep an eye out for more theory from us, as well as a photonic experiment based on our first paper.

I still have my grandfather’s antique key. I never found a drawer, chest, or box that it opened. But I don’t mind. I have other mysteries to help unlock.

1The morning after my wedding this June, my husband and I found a bouquet ordered by David on our doorstep.

2Postselection has a catch: The $\hat{F}$ measurement has a tiny probability of yielding the desirable outcome. But, sometimes, measuring $\hat{O}$ costs more than preparing $| \psi \rangle$, performing the gate, and postselecting. For example, suppose that the system is a photon. A photodetector will measure $\hat{O}$. Some photodetectors have a dead time: After firing, they take a while to reset, to be able to fire again. The dead time can outweigh the cost of the beginning of the experiment.

A quantum walk down memory lane

In elementary and middle school, I felt an affinity for the class three years above mine. Five of my peers had siblings in that year. I carpooled with a student in that class, which partnered with mine in holiday activities and Grandparents’ Day revues. Two students in that class stood out. They won academic-achievement awards, represented our school in science fairs and speech competitions, and enrolled in rigorous high-school programs.

Those students came to mind as I grew to know David Limmer. David is an assistant professor of chemistry at the University of California, Berkeley. He studies statistical mechanics far from equilibrium, using information theory. Though a theorist ardent about mathematics, he partners with experimentalists. He can pass as a physicist and keeps an eye on topics as far afield as black holes. According to his faculty page, I discovered while writing this article, he’s even three years older than I.

I met David in the final year of my PhD. I was looking ahead to postdocking, as his postdoc fellowship was fading into memory. The more we talked, the more I thought, I’d like to be like him.

I had the good fortune to collaborate with David on a paper published by Physical Review A this spring (as an Editors’ Suggestion!). The project has featured in Quantum Frontiers as the inspiration for a rewriting of “I’m a little teapot.”

We studied a molecule prevalent across nature and technologies. Such molecules feature in your eyes, solar-fuel-storage devices, and more. The molecule has two clumps of atoms. One clump may rotate relative to the other if the molecule absorbs light. The rotation switches the molecule from a “closed” configuration to an “open” configuration.

These molecular switches are small, quantum, and far from equilibrium; so modeling them is difficult. Making assumptions offers traction, but many of the assumptions disagreed with David. He wanted general, thermodynamic-style bounds on the probability that one of these molecular switches would switch. Then, he ran into me.

I traffic in mathematical models, developed in quantum information theory, called resource theories. We use resource theories to calculate which states can transform into which in thermodynamics, as a dime can transform into ten pennies at a bank. David and I modeled his molecule in a resource theory, then bounded the molecule’s probability of switching from “closed” to “open.” I accidentally composed a theme song for the molecule; you can sing along with this post.

That post didn’t mention what David and I discovered about quantum clocks. But what better backdrop for a mental trip to elementary school or to three years into the future?

I’ve blogged about autonomous quantum clocks (and ancient Assyria) before. Autonomous quantum clocks differ from quantum clocks of another type—the most precise clocks in the world. Scientists operate the latter clocks with lasers; autonomous quantum clocks need no operators. Autonomy benefits you if you want a machine, such as a computer or a drone, to operate independently. An autonomous clock in the machine ensures that, say, the computer applies the right logical gate at the right time.

What’s an autonomous quantum clock? First, what’s a clock? A clock has a degree of freedom (e.g., a pair of hands) that represents the time and that moves steadily. When the clock’s hands point to 12 PM, you’re preparing lunch; when the clock’s hands point to 6 PM, you’re reading Quantum Frontiers. An autonomous quantum clock has a degree of freedom that represents the time fairly accurately and moves fairly steadily. (The quantum uncertainty principle prevents a perfect quantum clock from existing.)

Suppose that the autonomous quantum clock constitutes one part of a machine, such as a quantum computer, that the clock guides. When the clock is in one quantum state, the rest of the machine undergoes one operation, such as one quantum logical gate. (Experts: The rest of the machine evolves under one Hamiltonian.) When the clock is in another state, the rest of the machine undergoes another operation (evolves under another Hamiltonian).

Physicists have been modeling quantum clocks using the resource theory with which David and I modeled our molecule. The math with which we represented our molecule, I realized, coincided with the math that represents an autonomous quantum clock.

Think of the molecular switch as a machine that operates (mostly) independently and that contains an autonomous quantum clock. The rotating clump of atoms constitutes the clock hand. As a hand rotates down a clock face, so do the nuclei rotate downward. The hand effectively points to 12 PM when the switch occupies its “closed” position. The hand effectively points to 6 PM when the switch occupies its “open” position.

The nuclei account for most of the molecule’s weight; the electrons account for little. The electrons flit about a landscape shaped by the atomic clumps’ positions, and that landscape governs the electrons’ behavior. So the electrons form the rest of the quantum machine controlled by the nuclear clock.

Experimentalists can create and manipulate these molecular switches easily. For instance, experimentalists can set the atomic clump moving—can “wind up” the clock—with ultrafast lasers. In contrast, the only other autonomous quantum clocks that I’d read about live in theory land. Can these molecules bridge theory to experiment? Reach out if you have ideas!

And check out David’s theory lab on Berkeley’s website and on Twitter. We all need older siblings to look up to.

Up we go! or From abstract theory to experimental proposal

Mr. Mole is trapped indoors, alone. Spring is awakening outside, but he’s confined to his burrow. Birds are twittering, and rabbits are chattering, but he has only himself for company.

Sound familiar?

Spring—crocuses, daffodils, and hyacinths budding; leaves unfurling; and birds warbling—burst upon Cambridge, Massachusetts last month. The city’s shutdown vied with the season’s vivaciousness. I relieved the tension by rereading The Wind in the Willows, which I’ve read every spring since 2017.

Project Gutenberg offers free access to Kenneth Grahame’s 1908 novel. He wrote the book for children, but never mind that. Many masterpieces of literature happen to have been written for children.

One line in the novel demanded, last year, that I memorize it. On page one, Mole is cleaning his house beneath the Earth’s surface. He’s been dusting and whitewashing for hours when the spring calls to him. Life is pulsating on the ground and in the air above him, and he can’t resist joining the party. Mole throws down his cleaning supplies and tunnels upward through the soil: “he scraped and scratched and scrabbled and scrooged, and then he scrooged again and scrabbled and scratched and scraped.”

The quotation appealed to me not only because of its alliteration and chiasmus. Mole’s journey reminded me of research.

Take a paper that I published last month with Michael Beverland of Microsoft Research and Amir Kalev of the Joint Center for Quantum Information and Computer Science (now of the Information Sciences Institute at the University of Southern California). We translated a discovery from the abstract, mathematical language of quantum-information-theoretic thermodynamics into an experimental proposal. We had to scrabble, but we kept on scrooging.

Over four years ago, other collaborators and I uncovered a thermodynamics problem, as did two other groups at the same time. Thermodynamicists often consider small systems that interact with large environments, like a magnolia flower releasing its perfume into the air. The two systems—magnolia flower and air—exchange things, such as energy and scent particles. The total amount of energy in the flower and the air remains constant, as does the total number of perfume particles. So we call the energy and the perfume-particle number conserved quantities.

We represent quantum conserved quantities with matrices $Q_1$ and $Q_2$. We nearly always assume that, in this thermodynamic problem, those matrices commute with each other: $Q_1 Q_2 = Q_2 Q_1$. Almost no one mentions this assumption; we make it without realizing. Eliminating this assumption invalidates a derivation of the state reached by the small system after a long time. But why assume that the matrices commute? Noncommutation typifies quantum physics and underlies quantum error correction and quantum cryptography.

What if the little system exchanges with the large system thermodynamic quantities represented by matrices that don’t commute with each other?

Colleagues and I began answering this question, four years ago. The small system, we argued, thermalizes to near a quantum state that contains noncommuting matrices. We termed that state, $e^{ - \sum_\alpha \beta_\alpha Q_\alpha } / Z$, the non-Abelian thermal state. The $Q_\alpha$’s represent conserved quantities, and the $\beta_\alpha$’s resemble temperatures. The real number $Z$ ensures that, if you measure any property of the state, you’ll obtain some outcome. Our arguments relied on abstract mathematics, resource theories, and more quantum information theory.
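For a concrete picture, here is a minimal sketch (one qubit, with the $Q_\alpha$’s taken to be the spin components and $\beta_\alpha$ values chosen purely for illustration) that constructs the non-Abelian thermal state $e^{ - \sum_\alpha \beta_\alpha Q_\alpha } / Z$ and confirms that it is a valid quantum state:

```python
import numpy as np

# Noncommuting conserved quantities: spin components S_x, S_y, S_z (hbar = 1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z_pauli = np.array([[1, 0], [0, -1]], dtype=complex)
Qs = [X / 2, Y / 2, Z_pauli / 2]
betas = [0.3, 1.1, -0.4]  # illustrative temperature-like parameters

exponent = -sum(b * Q for b, Q in zip(betas, Qs))

# Matrix exponential of a Hermitian matrix via its eigendecomposition
evals, evecs = np.linalg.eigh(exponent)
unnormalized = (evecs * np.exp(evals)) @ evecs.conj().T

Z_partition = np.trace(unnormalized).real  # the normalization Z
rho = unnormalized / Z_partition

print(np.trace(rho).real)       # 1.0: properly normalized
print(np.linalg.eigvalsh(rho))  # nonnegative eigenvalues: a valid state
```

Because the $Q_\alpha$’s fail to commute, no basis diagonalizes all three at once; the exponent must be exponentiated as a whole, which is one source of the problem’s difficulty.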

Over the past four years, noncommuting conserved quantities have propagated across quantum-information-theoretic thermodynamics.1 Watching the idea take root has been exhilarating, but the quantum information theory didn’t satisfy me. I wanted to see a real physical system thermalize to near the non-Abelian thermal state.

Michael and Amir joined the mission to propose an experiment. We kept nosing toward a solution, then dislodging a rock that would shower dirt on us and block our path. But we scrabbled onward.

Imagine a line of ions trapped by lasers. Each ion contains the physical manifestation of a qubit—a quantum two-level system, the basic unit of quantum information. You can think of a qubit as having a quantum analogue of angular momentum, called spin. The spin has three components, one per direction of space. These spin components are represented by matrices $Q_x = S_x$, $Q_y = S_y$, and $Q_z = S_z$ that don’t commute with each other.

A couple of qubits can form the small system, analogous to the magnolia flower. The rest of the qubits form the large system, analogous to the air. I constructed a Hamiltonian—a matrix that dictates how the qubits evolve—that transfers quanta of all the spin’s components between the small system and the large. (Experts: The Heisenberg Hamiltonian transfers quanta of all the spin components between two qubits while conserving $S_{x, y, z}^{\rm tot}$.)
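The expert parenthetical can be verified directly. The sketch below (two qubits only, with $\hbar = 1$) checks that the Heisenberg coupling commutes with each component of the total spin, so all three noncommuting quantities are conserved:

```python
import numpy as np

# Single-qubit spin components S_x, S_y, S_z (hbar = 1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)
spins = [X / 2, Y / 2, Z / 2]

# Two-qubit Heisenberg Hamiltonian and the total-spin components
H = sum(np.kron(S, S) for S in spins)
totals = [np.kron(S, I) + np.kron(I, S) for S in spins]

for S_tot in totals:
    comm = H @ S_tot - S_tot @ H
    print(np.allclose(comm, 0))  # True for each of the three components
```

The same symmetry holds for a Heisenberg coupling summed over pairs of qubits in a chain or array, which is why the total spin components $S_{x,y,z}^{\rm tot}$ are conserved in the proposed experiment.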

The Hamiltonian led to our first scrape: I constructed an integrable Hamiltonian, by accident. Integrable Hamiltonians can’t thermalize systems. A system thermalizes by losing information about its initial conditions, evolving to a state with an exponential form, such as $e^{ - \sum_\alpha \beta_\alpha Q_\alpha } / Z$. We clawed at the dirt and uncovered a solution: My Hamiltonian coupled together nearest-neighbor qubits. If the Hamiltonian coupled also next-nearest-neighbor qubits, or if the ions formed a 2D or 3D array, the Hamiltonian would be nonintegrable.

We had to scratch at every stage—while formulating the setup, preparation procedure, evolution, measurement, and prediction. But we managed; Physical Review E published our paper last month. We showed how a quantum system can evolve to the non-Abelian thermal state. Trapped ions, ultracold atoms, and quantum dots can realize our experimental proposal. We imported noncommuting conserved quantities in thermodynamics from quantum information theory to condensed matter and atomic, molecular, and optical physics.

As Grahame wrote, the Mole kept “working busily with his little paws and muttering to himself, ‘Up we go! Up we go!’ till at last, pop! his snout came out into the sunlight and he found himself rolling in the warm grass of a great meadow.”

1See our latest paper’s introduction for references. https://journals.aps.org/pre/abstract/10.1103/PhysRevE.101.042117