One equation to rule them all?

In lieu of composing a blog post this month, I’m publishing an article in Quanta Magazine. The article provides an introduction to fluctuation relations, souped-up variations on the second law of thermodynamics, which help us understand why time flows in only one direction. The earliest fluctuation relations described classical systems, such as single strands of DNA. Many quantum versions have been proved since. Their proliferation contrasts with the stereotype of physicists as obsessed with unification—with slimming down a cadre of equations into one über-equation. Will one quantum fluctuation relation emerge to rule them all? Maybe, and maybe not. Maybe the multiplicity of quantum fluctuation relations reflects the richness of quantum thermodynamics.

You can read more in Quanta Magazine here and yet more in chapter 9 of my book. For recent advances in fluctuation relations, as opposed to the broad introduction there, check out earlier Quantum Frontiers posts here, here, here, here, and here.

The power of being able to say “I can explain that”

Caltech condensed-matter theorist Gil Refael explained his scientific raison d’être early in my grad-school career: “What really gets me going is seeing a plot [of experimental data] and being able to say, ‘I can explain that.’” The quote has stuck with me almost word for word. When I heard it, I was working deep in abstract quantum information theory and thermodynamics, proving theorems about thought experiments. Embedding myself in pure ideas has always held an aura of romance for me, so I nodded along without seconding Gil’s view.

Roughly nine years later, I concede his point.

The revelation walloped me last month, as I was polishing a paper with experimental collaborators. Members of the Institute for Quantum Optics and Quantum Information (IQOQI) in Innsbruck, Austria—Florian Kranzl, Manoj Joshi, and Christian Roos—had performed an experiment in trapped-ion guru Rainer Blatt’s lab. Their work realized an experimental proposal that I’d designed with fellow theorists near the beginning of my postdoc stint. We aimed to observe signatures of particularly quantum thermalization.

Throughout the universe, small systems exchange stuff with their environments. For instance, the Earth exchanges heat and light with the rest of the solar system. After exchanging stuff for long enough, the small system equilibrates with the environment: Large-scale properties of the small system (such as its volume and energy) remain fairly constant; and as much stuff enters the small system as leaves, on average. The Earth remains far from equilibrium, which is why we aren’t dead yet.

Far from equilibrium and proud of it

In many cases, in equilibrium, the small system shares properties of the environment, such as the environment’s temperature. In these cases, we say that the small system has thermalized and, if it’s quantum, has reached a thermal state.

The stuff exchanged can consist of energy, particles, electric charge, and more. Unlike classical planets, quantum systems can exchange things that participate in quantum uncertainty relations (experts: that fail to commute). Quantum uncertainty mucks up derivations of the thermal state’s mathematical form. Some of us quantum thermodynamicists discovered the mucking up—and identified exchanges of quantum-uncertain things as particularly nonclassical thermodynamics—only a few years ago. We reworked conventional thermodynamic arguments to accommodate this quantum uncertainty. The small system, we concluded, likely equilibrates to near a thermal state whose mathematical form depends on the quantum-uncertain stuff—what we termed a non-Abelian thermal state. I wanted to see this equilibration in the lab. So I proposed an experiment with theory collaborators; and Manoj, Florian, and Christian took a risk on us.
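For readers who like to see formulas run: below is a minimal numerical sketch of a non-Abelian thermal state for two qubits whose exchanged charges are the three spin components. The exponential form, with one effective “chemical potential” per conserved charge, is the state’s defining form; the coupling, inverse temperature, and potentials below are illustrative numbers, not the experiment’s.

```python
import numpy as np
from scipy.linalg import expm

# Pauli operators: the three noncommuting spin components of one qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Global charges of a two-qubit system: total spin components Q_x, Q_y, Q_z.
Q = [np.kron(P, I2) + np.kron(I2, P) for P in (X, Y, Z)]

# Heisenberg coupling: exchanges spin components between the qubits
# while conserving each total component, [H, Q_a] = 0.
H = sum(np.kron(P, P) for P in (X, Y, Z))
print(all(np.allclose(H @ q, q @ H) for q in Q))  # True: conserved
print(np.allclose(Q[0] @ Q[1], Q[1] @ Q[0]))      # False: charges clash

beta = 1.0              # inverse temperature (illustrative)
mu = [0.3, 0.1, -0.2]   # one "chemical potential" per charge (illustrative)

# Non-Abelian thermal state: exp[-beta (H - sum_a mu_a Q_a)] / Z.
G = expm(-beta * (H - sum(m * q for m, q in zip(mu, Q))))
nats = G / np.trace(G)
evals = np.linalg.eigvalsh(nats)
print(np.isclose(evals.sum(), 1), (evals >= -1e-12).all())  # a valid state
```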

The experimentalists arrayed between six and fifteen ions in a line. Two ions formed the small system, and the rest formed the quantum environment. The ions exchanged the x-, y-, and z-components of their spin angular momentum—stuff that participates in quantum uncertainty relations. The ions began with a fairly well-defined amount of each spin component, as described in another blog post. The ions exchanged stuff for a while, and then the experimentalists measured the small system’s quantum state.

The small system equilibrated to near the non-Abelian thermal state, we found. No conventional thermal state modeled the results as accurately. Score!

My postdoc and numerical-simulation wizard Aleks Lasek modeled the experiment on his computer. The small system, he found, remained farther from the non-Abelian thermal state in his simulation than in the experiment. Aleks plotted the small system’s distance to the non-Abelian thermal state against the ion chain’s length. The points produced experimentally sat lower down than the points produced numerically. Why?

I think I can explain that, I said. The two ions exchange stuff with the rest of the ions, which serve as a quantum environment. But the two ions exchange stuff also with the wider world, such as stray electromagnetic fields. The latter exchanges may push the small system farther toward equilibrium than the extra ions alone do.

Fortunately for the development of my explanatory skills, collaborators prodded me to hone my argument. The wider world, they pointed out, effectively has a very high temperature—an infinite temperature.1 Equilibrating with that environment, the two ions would acquire an infinite temperature themselves. The two ions would approach an infinite-temperature thermal state, which differs from the non-Abelian thermal state we aimed to observe.

Fair, I said. But the extra ions probably have a fairly high temperature themselves. So the non-Abelian thermal state is probably close to the infinite-temperature thermal state. Analogously, if someone cooks goulash similarly to his father, and the father cooks goulash similarly to the grandfather, then the youngest chef cooks goulash similarly to his grandfather. If the wider world pushes the two ions to equilibrate to infinite temperature, then, because the infinite-temperature state lies near the non-Abelian thermal state, the wider world pushes the two ions to equilibrate to near the non-Abelian thermal state.

Tasty, tasty thermodynamics

I plugged numbers into a few equations to check that the extra ions do have a high temperature. (Perhaps I should have done so before proposing the argument above, but my collaborators were kind enough not to call me out.) 

Aleks hammered the nail into the problem’s coffin by incorporating into his simulations the two ions’ interaction with an infinite-temperature wider world. His numerical data points dropped to near the experimental data points. The new plot supported my story.

I can explain that! Aleks’s results buoyed me the whole next day; I found myself smiling at random times throughout the afternoon. Not that I’d explained a grand mystery, like the unexpected hiss heard by Arno Penzias and Robert Wilson when they turned on a powerful antenna in 1964. The hiss turned out to come from the cosmic microwave background (CMB), a collection of photons that fill the visible universe. The CMB provided evidence for the then-controversial Big Bang theory of the universe’s origin. Discovering the CMB earned Penzias and Wilson a Nobel Prize. If the noise caused by the CMB was music to cosmologists’ ears, the noise in our experiment is the quiet wailing of a shy banshee. But it’s our experiment’s noise, and we understand it now.

The experience hasn’t weaned me off the romance of proving theorems about thought experiments. Theorems about thermodynamic quantum uncertainty inspired the experiment that yielded the plot that confused us. But I now second Gil’s sentiment. In the throes of an experiment, “I can explain that” can feel like a battle cry.

1Experts: The wider world effectively has an infinite temperature because (i) the dominant decoherence is dephasing relative to the σ_z product eigenbasis and (ii) the experimentalists rotate their qubits often, to simulate a rotationally invariant Hamiltonian evolution. So the qubits effectively undergo dephasing relative to the σ_x, σ_y, and σ_z eigenbases.

Building a Koi pond with Lie algebras

When I was growing up, one of my favourite places was the shabby all-you-can-eat buffet near our house. We’d walk in, my mom would approach the hostess to explain that, despite my being abnormally large for my age, I qualified for kids-eat-free, and I would peel away to stare at the Koi pond. The display of different fish rolling over one another was bewitching. Ten-year-old me would have been giddy to build my own Koi pond, and now I finally have. However, I built one using Lie algebras.

The different fish swimming in the Koi pond are, in many ways, like charges being exchanged between subsystems. A “charge” is any globally conserved quantity. Examples of charges include energy, particles, electric charge, and angular momentum. Consider a system consisting of a cup of coffee in your office. The coffee will dynamically exchange charges with your office in the form of heat energy. Still, the total energy of the coffee and office is conserved (assuming your office walls are really well insulated). In this example, we had one type of charge (heat energy) and two subsystems (coffee and office). Consider now a closed system consisting of many subsystems and many different types of charges. The closed system is like the finite Koi pond with different charges like the different fish species. The charges can move around locally, but the total number of charges is globally fixed, like how the fish swim around but can’t escape the pond. Also, the presence of one type of charge can alter another’s movement, just as a big fish might block a little one’s path.

Unfortunately, the Koi pond analogy reaches its limit when we move to quantum charges. Classically, charges commute. This means that we can simultaneously determine the amount of each charge in our system at each given moment. In quantum mechanics, this isn’t necessarily true. In other words, classically, I can count the number of glossy fish and matte fish. But, in quantum mechanics, I can’t.

So why does this matter? Subsystems exchanging charges are prevalent in thermodynamics. Quantum thermodynamics extends thermodynamics to include small systems and quantum effects. Noncommutation underlies many important quantum phenomena. Hence, studying the exchange of noncommuting charges is pivotal in understanding quantum thermodynamics. Consequently, noncommuting charges have emerged as the focus of a rapidly growing subfield of quantum thermodynamics. Many interesting results have been discovered from no longer assuming that charges commute (such as these). Until recently, most of these discoveries have been theoretical. Bridging these discoveries to experimental reality requires Hamiltonians (functions that tell you how your system evolves in time) that move charges locally but conserve them globally. As of last year, it was unknown whether these Hamiltonians exist, what they look like generally, how to build them, and for what charges you could find them.

Nicole Yunger Halpern (NIST physicist, my co-advisor, and Quantum Frontiers blogger) and I developed a prescription for building Koi ponds for noncommuting charges. Our prescription allows you to systematically build Hamiltonians that overtly move noncommuting charges between subsystems while conserving the charges globally. These Hamiltonians are built using Lie algebras, abstract mathematical tools that can describe many physical quantities (including everything in the Standard Model of particle physics and the space-time metric). Our results were recently published in npj Quantum Information. We hope that our prescription will bolster the efforts to bridge the results of noncommuting charges to experimental reality.
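For the programmers: for su(2) charges (the three spin components), one Hamiltonian of this kind is the Heisenberg interaction, which couples neighboring qubits through all three components at once. Below is a toy sketch of the defining behavior: a local charge sloshes between sites while the global charges stay fixed. The chain length, time step, and initial state are illustrative choices, not the paper’s general prescription.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [X, Y, Z]
n = 3  # qubits in the chain (illustrative)

def site_op(op, site):
    """Embed a single-qubit operator at position `site` in the chain."""
    ops = [np.eye(2, dtype=complex)] * n
    ops[site] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# Nearest-neighbor Heisenberg chain: couples neighbors through every
# generator of su(2), moving spin components from site to site.
H = sum(site_op(P, i) @ site_op(P, i + 1)
        for P in paulis for i in range(n - 1))

# The global charges, the total spin components, are conserved: [H, Q_a] = 0.
Q = [sum(site_op(P, i) for i in range(n)) for P in paulis]
print(all(np.allclose(H @ q, q @ H) for q in Q))  # True

# Flip qubit 0 down, leave the rest up, and watch the charge move.
psi = np.zeros(2 ** n, dtype=complex)
psi[int("100", 2)] = 1.0
U = expm(-1j * 0.4 * H)  # evolve in steps of an illustrative duration
for step in range(4):
    z0 = (psi.conj() @ site_op(Z, 0) @ psi).real  # local charge changes...
    zt = (psi.conj() @ Q[2] @ psi).real           # ...the total does not
    print(f"step {step}: <Z_0> = {z0:+.3f}, <Z_total> = {zt:+.3f}")
    psi = U @ psi
```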

In the end, a little group theory was all I needed for my Koi pond. Maybe I’ll build a treehouse next with calculus or a remote control car with combinatorics.

Space-time and the city

I felt like a gum ball trying to squeeze my way out of a gum-ball machine. 

I was one of 50-ish physicists crammed into the lobby—and in the doorway, down the stairs, and onto the sidewalk—of a Manhattan hotel last December. Everyone had received a COVID vaccine, and the omicron variant hadn’t yet begun chewing up North America. Everyone had arrived on the same bus that evening, feeding on the neon-bright views of Fifth Avenue through dinnertime. Everyone wanted to check in and offload suitcases before experiencing firsthand the reason for the nickname “the city that never sleeps.” So everyone was jumbled together in what passed for a line.

We’d just passed the halfway point of the week during which I was pretending to be a string theorist. I do that whenever my research butts up against black holes, chaos, quantum gravity (the attempt to unify quantum physics with Einstein’s general theory of relativity), and alternative space-times. These topics fall under the heading “It from Qubit,” which calls for understanding puzzling physics (“It”) by analyzing how quantum systems process information (“Qubit”). The “It from Qubit” crowd convenes for one week each December, to share progress and collaborate.1 The group spends Monday through Wednesday at Princeton’s Institute for Advanced Study (IAS), dogged by photographs of Einstein, busts of Einstein, and roads named after Einstein. A bus ride later, the group spends Thursday and Friday at the Simons Foundation in New York City.

I don’t usually attend “It from Qubit” gatherings, as I’m actually a quantum information theorist and quantum thermodynamicist. Having admitted as much during the talk I presented at the IAS, I failed at pretending to be a string theorist. Happily, I adore being the most ignorant person in a roomful of experts, as the experience teaches me oodles. At lunch and dinner, I’d plunk down next to people I hadn’t spoken to and ask what they see as trending in the “It from Qubit” community. 

One buzzword (replicas) I’d first picked up on shortly before the pandemic began. That trend, having lived a frenetic life, seemed to be declining. Rising buzzwords (factorization and islands), I hadn’t heard in black-hole contexts before. People were still tossing around terms from when I’d first forayed into “It from Qubit” (scrambling and out-of-time-ordered correlator), but differently from then. Five years ago, the terms identified the latest craze. Now, they sounded entrenched, as though everyone expected everyone else to know and accept their significance.

One buzzword labeled my excuse for joining the workshops: complexity. Complexity wears as many meanings as the stereotypical New Yorker wears items of black clothing. Last month, guest blogger Logan Hillberry wrote about complexity that emerges in networks such as brains and social media. To “It from Qubit,” complexity quantifies the difficulty of preparing a quantum system in a desired state. Physicists have conjectured that a certain quantum state’s complexity parallels properties of gravitational systems, such as the length of a wormhole that connects two black holes. The wormhole’s length grows steadily for a time exponentially large in the gravitational system’s size. So, to support the conjecture, researchers have been trying to prove that complexity typically grows similarly. Collaborators and I proved that it does, as I explained in my talk and as I’ll explain in a future blog post. Other speakers discussed experimental complexities, as well as the relationship between complexity and a simplified version of Einstein’s equations for general relativity.

Inside the Simons Foundation on Fifth Avenue in Manhattan

I learned a bushel of physics, moonlighting as a string theorist that week. The gum-ball-machine lobby, though, retaught me something I’d learned long before the pandemic. Around the time I squeezed inside the hotel, a postdoc struck up a conversation with the others of us who were clogging the doorway. We had a decent fraction of an hour to fill; so we chatted about quantum thermodynamics, grant applications, and black holes. I asked what the postdoc was working on, he explained a property of black holes, and it reminded me of a property of thermodynamics. I’d nearly reached the front desk when I realized that, out of the sheer pleasure of jawing about physics with physicists in person, I no longer wanted to reach the front desk. The moment dangles in my memory like a crystal ornament from the lobby’s tree—pendant from the pandemic, a few inches from the vaccines suspended on one side and from omicron on the other. For that moment, in a lobby buoyed by holiday lights, wrapped in enough warmth that I’d forgotten the December chill outside, I belonged to the “It from Qubit” community as I hadn’t belonged to any community in 22 months.

Happy new year.

Presenting at the IAS was a blast. Photo credit: Jonathan Oppenheim.

1In person or virtually, pandemic-dependently.

Thanks to the organizers of the IAS workshop—Ahmed Almheiri, Adam Bouland, Brian Swingle—for the invitation to present and to the organizers of the Simons Foundation workshop—Patrick Hayden and Matt Headrick—for the invitation to attend.

Balancing the tradeoff

So much to do, so little time. Tending to one task inevitably comes at the cost of another, so how does one decide how to spend one’s time? In the first few years of my PhD, I balanced problem sets, literature reviews, and group meetings, but to the detriment of my hobbies. I have played drums my entire life, but I largely fell out of practice in graduate school. Recently, I made time to play with a group of musicians, even landing a couple of gigs in downtown Austin, Texas, “live music capital of the world.” I have found that attending to my non-physics interests makes my research hours more productive and less taxing. Finding the right balance of on- versus off-time has been key to my success as my PhD enters its final year.

Of course, life within physics is also full of tradeoffs. My day job is as an experimentalist. I use tightly focused laser beams, known as optical tweezers, to levitate micrometer-sized glass spheres. I monitor a single microsphere’s motion as it undergoes collisions with air molecules, and I study the system as an environmental sensor of temperature, fluid flow, and acoustic waves; however, by night I am a computational physicist. I code simulations of interacting qubits subject to kinetic constraints, so-called quantum cellular automata (QCA). My QCA work started a few years ago for my Master’s degree, but my interest in the subject persists. I recently co-authored one paper summarizing the work so far and another detailing an experimental implementation.

The author doing his part to “keep Austin weird” by playing the drums dressed as a grackle (note the beak), the central-Texas bird notorious for overrunning grocery store parking lots.
Balancing research interests: Trapping a glass microsphere with optical tweezers.
Balancing research interests: Visualizing the time evolution of four different QCA rules.

QCA, the subject of this post, are themselves tradeoff-aware systems. To see what I mean, first consider their classical counterparts, cellular automata. In their simplest construction, the system is a one-dimensional string of bits. Each bit takes a value of 0 or 1 (white or black). The bitstring changes in discrete time steps based on a simultaneously applied local update rule: a bit, together with its two nearest neighbors, determines that bit’s next state. Put another way, a bit either flips, i.e., changes 0 to 1 or 1 to 0, or remains unchanged over a timestep, depending on the state of that bit’s local neighborhood. Thus, by choosing a particular rule, one encodes a tradeoff between activity (bit flips) and inactivity (bit remains unchanged). Despite their simple construction, cellular automata dynamics are diverse; they can produce fractals and encryption-quality random numbers. One rule even has the ability to run arbitrary computer algorithms, a property known as universal computation.

Classical cellular automata. Left: rule 90 producing the fractal Sierpiński’s triangle. Middle: rule 30 can be used to generate random numbers. Right: rule 110 is capable of universal computation.
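For the curious, elementary cellular automata are easy to play with. Here is a short Python sketch that implements the update rule described above, using Wolfram’s standard rule numbering; the lattice width, step count, and printing style are arbitrary choices.

```python
import numpy as np

def evolve(rule, width=101, steps=50):
    """Run an elementary cellular automaton with periodic boundaries.

    `rule` is the Wolfram rule number: bit k of `rule` gives the next
    state of a cell whose (left, center, right) neighborhood encodes
    the integer k.
    """
    table = [(rule >> k) & 1 for k in range(8)]
    row = np.zeros(width, dtype=int)
    row[width // 2] = 1  # a single seed cell in the middle
    history = [row]
    for _ in range(steps):
        left, right = np.roll(row, 1), np.roll(row, -1)
        row = np.array([table[4 * le + 2 * ce + ri]
                        for le, ce, ri in zip(left, row, right)])
        history.append(row)
    return np.array(history)

# Rule 90 (fractal), rule 30 (pseudorandomness), rule 110 (universality).
for rule in (90, 30, 110):
    grid = evolve(rule, width=31, steps=15)
    print(f"rule {rule}:")
    print("\n".join("".join("#" if v else " " for v in r) for r in grid))
```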

In QCA, bits are promoted to qubits. Instead of being just 0 or 1 like a bit, a qubit can be a continuous combination of both 0 and 1, a property called superposition. In QCA, whether a qubit’s two neighbors are 0 or 1 determines whether or not it changes. For example, when in an active neighborhood configuration, a qubit can be coded to change from 0 to “0 plus 1” or from 1 to “0 minus 1”. This is already a head-scratcher, but things get even weirder. If a qubit’s neighbors are in a superposition, then the center qubit can become entangled with those neighbors. Entanglement correlates qubits in a way that is not possible with classical bits.

Do QCA support the emergent complexity observed in their classical cousins? What are the effects of a continuous state space, superposition, and entanglement? My colleagues and I attacked these questions by re-examining many-body physics tools through the lens of complexity science. Singing the lead, we have a workhorse of quantum and solid-state physics: two-point correlations. Singing harmony we have the bread-and-butter of network analysis: complex-network measures. The duet between the two tells the story of structured correlations in QCA dynamics.

In a bit more detail: at each QCA timestep, we calculate the mutual information between each pair of qubits i and j. Doing so reveals how much there is to learn about one qubit by measuring another, including effects of quantum entanglement. Visualizing each qubit as a node, the mutual information can be depicted as weighted links between nodes: the more correlated two qubits are, the more strongly they are linked. The collection of nodes and links makes a network. Some QCA form unstructured, randomly linked networks while others are highly structured.
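Here is a sketch of that pipeline in Python, assuming a pure state of n qubits (the simulations handle more general cases; the helper names and the GHZ example are illustrative): compute each one- and two-qubit reduced density matrix, take von Neumann entropies, and form the mutual information I(i:j) = S_i + S_j - S_ij as link weights.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy (base 2) of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log2(evals))

def reduced(psi, keep, n):
    """Reduced density matrix of the qubits in `keep` (sorted), from psi."""
    tensor = psi.reshape([2] * n)
    drop = [q for q in range(n) if q not in keep]
    rho = np.tensordot(tensor, tensor.conj(), axes=(drop, drop))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def mutual_info_network(psi, n):
    """Weighted adjacency matrix M[i, j] = I(i : j)."""
    S1 = [entropy(reduced(psi, [i], n)) for i in range(n)]
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            Sij = entropy(reduced(psi, [i, j], n))
            M[i, j] = M[j, i] = S1[i] + S1[j] - Sij
    return M

# Example: a 3-qubit GHZ state, (|000> + |111>)/sqrt(2).
n = 3
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = psi[-1] = 1 / np.sqrt(2)
print(np.round(mutual_info_network(psi, n), 3))  # every pair: I = 1
```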

Complex-network measures are designed to highlight certain structural patterns within a network. Historically, these measures have been used to study diverse networked systems like friend groups on Facebook, biomolecule pathways in metabolism, and functional connectivity in the brain. Remarkably, the most structured QCA networks we observed quantitatively resemble those of the complex systems just mentioned, despite their simple construction and quantum unitary dynamics.

Visualizing mutual information networks. Left: A Goldilocks-QCA generated network. Right: a random network.

What’s more, the particular QCA that generate the most complex networks are those that balance the activity-inactivity tradeoff. From this observation, we formulate what we call the Goldilocks principle: QCA that generate the most complexity are those that change a qubit if and only if the qubit’s neighbors contain an equal number of 1’s and 0’s. The Goldilocks rules are neither too inactive nor too active, balancing the tradeoff to be “just right.” We demonstrated the Goldilocks principle for QCA with nearest-neighbor constraints, as well as for QCA with nearest- and next-nearest-neighbor constraints.
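As a concrete, simplified illustration of a Goldilocks rule: the sketch below applies a Hadamard to a qubit if and only if its two nearest neighbors hold exactly one 1 between them. The even-then-odd update ordering and the five-qubit ring are one simple way to make the updates well defined, not necessarily the exact protocol in our papers.

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard
P0 = np.diag([1.0, 0.0])                       # projector onto |0>
P1 = np.diag([0.0, 1.0])                       # projector onto |1>
I2 = np.eye(2)

def chain(ops):
    """Kron together a list of single-qubit operators."""
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

def goldilocks_update(i, n):
    """Apply H to qubit i iff neighbors (i-1, i+1) hold exactly one 1."""
    U = np.zeros((2 ** n, 2 ** n))
    for pl, pr, active in [(P0, P0, False), (P0, P1, True),
                           (P1, P0, True), (P1, P1, False)]:
        ops = [I2] * n
        ops[(i - 1) % n], ops[(i + 1) % n] = pl, pr
        ops[i] = H2 if active else I2
        U += chain(ops)  # unitary: a block for each neighbor configuration
    return U

n = 5
psi = np.zeros(2 ** n)
psi[int("00100", 2)] = 1.0  # a single 1 in the middle of the ring
# One time step: update even sites, then odd sites.
for i in list(range(0, n, 2)) + list(range(1, n, 2)):
    psi = goldilocks_update(i, n) @ psi
probs = np.round(np.abs(psi) ** 2, 3)
for idx in np.nonzero(probs)[0]:
    print(f"|{idx:0{n}b}>: {probs[idx]}")  # activity spreads outward
```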

To my delight, the scientific conclusions of my QCA research resonate with broader lessons learned from my time as a PhD student: Life is full of tradeoffs, and finding the right balance is key to achieving that “just right” feeling.

Quantum estuary

Tourism websites proclaim, “There’s beautiful…and then there’s Santa Barbara.” I can’t accuse them of hyperbole, after living in Santa Barbara for several months. Santa Barbara’s beauty manifests in its whitewashed buildings, capped with red tiles; in the glint of sunlight on ocean wave; and in the pockets of tranquility enfolded in meadows and copses. An example lies about an hour’s walk from the Kavli Institute for Theoretical Physics (KITP), where I spent the late summer and early fall: an estuary. According to National Geographic, “[a]n estuary is an area where a freshwater river or stream meets the ocean.” The meeting of freshwater and saltwater echoed the meeting of disciplines at the KITP.

The KITP fosters science as a nature reserve fosters an ecosystem. Every year, the institute hosts several programs, each centered on one scientific topic. A program lasts a few weeks or months, during which scientists visit from across the world. We present our perspectives on the program topic, identify intersections of interests, collaborate, and exclaim over the ocean views afforded by our offices.

Not a bad office view, eh?

From August to October, the KITP hosted two programs about energy and information. The first program was called “Energy and Information Transport in Non-Equilibrium Quantum Systems,” or “Information,” for short. The second program was called “Non-Equilibrium Universality: From Classical to Quantum and Back,” or “Universality.” The programs’ topics and participant lists overlapped, so the KITP merged “Information” and “Universality” to form “Infoversality.” Don’t ask me which program served as the saltwater and which as the fresh.

But the mingling of minds ran deeper. Much of “Information” centered on quantum many-body physics, the study of behaviors emergent in collections of quantum particles. But the program introduced many-body quantum physicists to quantum thermodynamics and vice versa. (Quantum thermodynamicists re-envision thermodynamics, the Victorian science of energy, for quantum, small, information-processing, and far-from-equilibrium systems.) Furthermore, quantum thermodynamicists co-led the program and presented research at it. Months ago, someone advertised the program in the quantum-thermodynamics Facebook group as an activity geared toward group members. 

The ocean of many-body physics was to meet the river of quantum thermodynamics, and I was thrilled as a trout swimming near a hiker who’s discovered cracker crumbs in her pocket. 

A few of us live in this estuary, marrying quantum thermodynamics and many-body physics. I waded into the waters in 2016, by codesigning an engine (the star of Victorian thermodynamics) formed from a quantum material (studied in many-body physics). We can use tools from one field to solve problems in the other, draw inspiration from one to design questions in the other, and otherwise do what the United States Food and Drug Administration recently announced that we can do with COVID-19 vaccines: mix and match.

It isn’t easy being interdisciplinary, so I wondered how this estuary would fare when semi-institutionalized in a program. I collected observations like seashells—some elegantly molded, some liable to cut a pedestrian’s foot, and some both.

Across the street from the KITP.

A sand dollar washed up early in the program, as I ate lunch with a handful of many-body physicists. An experimentalist had just presented a virtual talk about nanoscale clocks, which grew from studies of autonomous quantum clocks. The latter run on their own, without needing any external system to wind or otherwise control them. You’d want such clocks if building quantum engines, computers, or drones that operate remotely. Clocks measure time, time complements energy mathematically in physics, and thermodynamics is the study of energy; so autonomous quantum clocks have taken root in quantum thermodynamics. So I found myself explaining autonomous quantum clocks over sandwiches. My fellow diners expressed interest alongside confusion.

A scallop shell, sporting multiple edges, washed up later in the program: Many-body physicists requested an introduction to quantum thermodynamics. I complied one afternoon, at a chalkboard in the KITP’s outdoor courtyard. The discussion lasted for an hour, whereas most such conversations lasted for two. But three participants peppered me with questions over the coming weeks.

A conch shell surfaced, whispering when held to an ear. One program participant, a member of one community, had believed the advertising that had portrayed the program as intended for his cohort. The portrayal didn’t match reality, to him, and he’d have preferred to dive more deeply into his own field.

I dove into a collaboration with other KITPists—a many-body project inspired by quantum thermodynamics. Keep an eye out for a paper and a dedicated blog post.

A conference talk served as a polished shell, reflecting light almost as a mirror. The talk centered on erasure, a process that unites thermodynamics with information processing: Imagine performing computations in math class. You need blank paper (or the neurological equivalent) on which to scribble. Upon computing a great deal, you have to erase the paper—to reset it to a clean state. Erasing calls for rubbing an eraser across the paper and so for expending energy. This conclusion extends beyond math class and paper: To compute—or otherwise process information—for a long time, we have to erase information-storage systems and so to expend energy. This conclusion renders erasure sacred to us thermodynamicists who study information processing. Erasure litters our papers, conferences, and conversations.

Erasure’s energy cost trades off with time: The more time you can spend on erasure, the less energy you need.1 The conference talk explored this tradeoff, absorbing the quantum thermodynamicist in me. A many-body physicist asked, at the end of the talk, why we were discussing erasure. What quantum thermodynamicists took for granted, he hadn’t heard of. He reflected back at our community an image of ourselves from an outsider’s perspective. The truest mirror might not be the flattest and least clouded.
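To put numbers on the tradeoff: at room temperature, Landauer’s bound (defined in the footnote) costs about 3 × 10⁻²¹ joules per bit. The few lines below evaluate the bound and a toy 1/τ model of the finite-time overhead; the constant c is invented purely for illustration.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann's constant (J/K)
T = 300            # room temperature (K)

landauer = kB * T * np.log(2)
print(f"Landauer bound per bit: {landauer:.2e} J")  # ~2.87e-21 J

# Toy model of the tradeoff: cost(tau) = bound + c / tau. The 1/tau
# overhead mimics slow-driving dissipation; c is illustrative only.
c = 1e-21  # J*s, made up for illustration
for tau in (1e-3, 1.0, 1e3):
    print(f"tau = {tau:7.0e} s: cost ~ {landauer + c / tau:.2e} J")
```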

The author, wearing a KITP hat, not far from either estuary—natural or quantum.

Plants and crustaceans, mammals and birds, grow in estuaries. Call me a bent-nosed clam, but I prefer a quantum estuary to all other environments. Congratulations to the scientists who helped create a quantum estuary this summer and fall, and I look forward to the harvest.

1The least amount of energy that erasure can cost, on average over trials, is called Landauer’s bound. You’d pay this bound’s worth of energy if you erased infinitely slowly.

How a liberal-arts education has enhanced my physics research

I attended a liberal-arts college, and I reveled in the curriculum’s breadth. My coursework included art history, psychology, biology, economics, computer science, German literature, archaeology, and chemistry. My major sat halfway between the physics major and the create-your-own major; the requirements consisted mostly of physics but included math, philosophy, and history. By the end of college, I’d determined to dive into physics. So I undertook a physics research assistantship, enlisted in a Master’s program and then a PhD program, and became a theoretical physicist. I’m now building a physics research group that spans a government institute and the University of Maryland. One might think that I became a physicist despite my art history and archaeology. 

My liberal-arts education did mortify me a little as I pursued my Master’s degree. Most of my peers had focused on physics, mathematics, and computer science while I’d been reading Aristotle. They seemed to breeze through coursework that I clawed my way through. I still sigh wistfully over math courses, such as complex analysis, that I’ve never taken. Meanwhile, a debate about the liberal arts has been raging across the nation. Debt is weighing down recent graduates, and high-school students are loading up on STEMM courses. Colleges are cutting liberal-arts departments, and educational organizations are broadcasting the value of liberal-arts educations.

I’m not an expert in public policy or school systems; I’m a physicist. As a physicist, I’m grateful for my liberal-arts education. It’s enhanced my physics research in at least five ways.

(1) I learned to seek out, and take advantage of, context. Early in my first German-literature course, I’d just completed my first reading assignment. My professor told my class to fetch out our books and open them to the beginning. A few rustles later, we held our books open to page one of the main text. 

No, no, said my professor. Open your books to the beginning. Did anyone even look at the title page?

We hadn’t, we admitted. We’d missed a wealth of information, as the book contained a reproduction of an old title page. Publishers, fonts, and advertisement styles have varied across the centuries and the globe. They, together with printing and reprinting dates, tell stories about the book’s origin, popularity, role in society, and purposes. Furthermore, a frontispiece is worth a thousand words, all related before the main text begins. When my class turned to the main text, much later in the lecture, we saw it in a new light. Context deepens and broadens our understanding.

When I read a physics paper, I start at the beginning—the true beginning. I note the publication date, the authors, their institutions and countries, and the journal. X’s lab performed the experiment reported on? X was the world’s expert in Y back then but nursed a bias against Z, a bias later proved to be unjustified. So I should aim to learn from the paper about Y but should take statements about Z with a grain of salt. Seeking and processing context improves my use of physics papers, thanks to a German-literature course.

(2) I learned argumentation. Doing physics involves building, analyzing, criticizing, and repairing arguments. I argue that mathematical X models physical system Y accurately, that an experiment I’ve proposed is feasible with today’s technology, and that observation Z supports a conjecture of mine. Physicists also prove mathematical statements deductively. I received proof-writing lessons in a math course, halfway through college. One of the most competent teachers I’ve ever encountered taught the course. But I learned about classes of arguments and about properties of arguments in a philosophy course, Informal Logic. 

There, I learned to distinguish deduction from inference and an argument’s validity and soundness from an argument’s strength and cogency. I learned strategies for proving arguments and learned fallacies to criticize. I came to respect the difference between “any” and “every,” which I see interchanged in many physics papers. This philosophical framework helps me formulate, process, dissect, criticize, and correct physics arguments. 

For instance, I often parse long, dense, technical proofs of mathematical statements. First, I identify whether the proof strategy is reductio ad absurdum, proof by counterexample, or another strategy. Upon identifying the overarching structure, I can fill my understanding with details. Additionally, I check proofs by students, and I respond to criticisms of my papers by journal referees. I could say, upon reading an argument, “Something feels a bit off, and it’s sort of like the thing that felt a bit off in that paper I read last Tuesday.” But I’d rather group the argument I’m given together with arguments I know how to tackle. I’d rather be able to say, “They’re straw-manning my argument” or “That argument begs the question.” Doing so, I put my finger on the problem and take a step toward solving it.

(3) I learned to analyze materials to bits, then extract meaning from the analysis. English and German courses trained me to wring from literature every drop of meaning that I could discover. I used to write one to three pages about a few-line quotation. The analysis would proceed from diction and punctuation to literary devices; allusions; characters’ relationships with each other, themselves, and nature; and the quotation’s role in the monograph. Everything from minutia to grand themes required scrutiny, according to the dissection technique I trained in. Every pincer probe lifted another skein of skin or drew aside another tendon, offering deeper insights into the literary work. I learned to find the skeins to lift, lift them in the right direction, pinpoint the insights revealed, and integrate the insights into a coherent takeaway.

This training has helped me assess and interpret mathematics. Physicists pick a physical system to study, model the system with equations, and solve the equations. The next two steps are intertwined: evaluating whether one solved the equations correctly and translating the solution into the physical system’s behavior. These two steps necessitate a dissection of everything from minutia to grand themes: Why should this exponent be 4/5, rather than any other number? Should I have expected this energy to depend on that length in this way? Is the physical material aging quickly or resisting change? These questions’ answers inform more-important questions: Who cares? Do my observations shed light worth anyone’s time, or did I waste a week solving equations no one should care about?

To answer all these questions, I draw on my literary training: I dissect content, pinpoint insights, and extract meaning. Having performed this analysis in literature courses facilitates an arguably deeper analysis than my physics training did: In literature courses, I had to organize my thoughts and articulate them in essays. This process revealed holes in my argumentation, as well as connections that I’d overlooked. In contrast, a couple of lines in my physics homework earned full marks. The critical analysis of literature has deepened my assessment of solutions’ correctness, physical interpretation of mathematics, and extraction of meaning from solutions. 

(4) I learned what makes a physicist a physicist. In college, I had a friend who was studying applied mathematics and economics. Over dinner, he described a problem he’d encountered in his studies. I replied, almost without thinking, “From a physics perspective, I’d approach the problem like this.” I described my view, which my friend said he wouldn’t have thought of. I hadn’t thought of myself, and of the tools I was obtaining in the physics department, the way I did after our conversation. 

Physics involves a unique toolkit,1 set of goals, and philosophy. Physicists identify problems, model them, solve them, and analyze the results in certain ways. Students see examples of these techniques in lectures and practice these techniques for homework. But, as a student, I rarely heard articulations of the general principles that underlay the examples scattered across my courses like a handful of marbles across a kitchen floor. Example principles include: if you don’t understand an abstract idea, construct a simple example. Once you’ve finished a calculation, check whether your answer makes sense in the most extreme scenarios possible. After solving an equation, interpret the solution in terms of physical systems—of how particles and waves move and interact.

I was learning these techniques, in college, without realizing that I was learning them. I became conscious of the techniques by comparing the approach natural to me with the approach taken in another discipline. Becoming conscious of my toolkit enabled me to wield it more effectively; one can best fry eggs when aware that one owns a spatula. The other disciplines at my liberal-arts college served as a foil for physics. Seeing other disciplines, I saw what makes physics physics—and improved my ability to apply my physics toolkit.

(5) I learned to draw connections between diverse ideas. Senior year of high school, my courses extended from physics to English literature. One might expect such a curriculum to feel higgledy-piggledy, but I found threads that ran through all my courses. For instance, I practiced public speaking in Reasoning, Research, and Rhetoric. Because I studied rhetoric, my philosophy teacher turned to me for input when introducing the triumvirate “thesis, antithesis, synthesis.”2 The philosophy curriculum included the feminist essay “If Men Could Menstruate,” which complemented the feminist book Wide Sargasso Sea in my English-literature course. In English literature, I learned that Baldassare Castiglione codified how Renaissance noblemen should behave, in The Book of the Courtier. The author’s name was the answer to the first question on my AP Modern European History exam. My history course covered Isaac Newton and Gottfried Wilhelm Leibniz, who invented calculus during the 17th century. I leveraged their discoveries in my calculus course, which I applied in my physics course. My physics teacher hoped that his students would solve the world’s energy problems—perhaps averting the global thermonuclear war that graced every debate in my rhetoric course (“If you don’t accept my team’s policy, then X will happen, leading to Y, leading to Z, which will cause a global thermonuclear war”). 

Threads linked everything across my liberal-arts education; every discipline featured an idea that paralleled an idea in another discipline. Finding those parallels grew into a game for me, a game that challenged my creativity. Cultivating that creativity paid off when I began doing physics research. Much of my research has resulted from finding, in one field, a concept that resembles a concept in another field. I smash the ideas together to gain insight into each discipline from the other discipline’s perspective. For example, during my PhD studies, I found a thread connecting the physics of DNA strands to the physics of black holes. That thread initiated a research program of mine that’s yielded nine papers, garnered 19 collaborators, and spawned two experiments. Studying diverse subjects trained me to draw creative connections, which underlie much physics research.

I haven’t detailed all the benefits that a liberal-arts education can accrue to a physics career. For instance, the liberal arts enhance one’s communication skills, key to collaborating on research and to conveying one’s research. Without conveying one’s research adroitly, one likely won’t impact a field much. Also, a liberal-arts education can help one connect with researchers from across the globe on a personal level.3 Personal connections enhance science, which scientists—humans—undertake.

As I began building my research group, I sought advice from an MIT professor who’d attended MIT as an undergraduate. He advised me to seek students who have unusual backgrounds, including liberal-arts educations. Don’t get me wrong; I respect and cherish the colleagues and friends of mine who attended MIT, Caltech, and other tech schools as undergraduates. Still, I wouldn’t trade my German literature and economics. The liberal arts have enriched my physics research no less than they’ve enriched the rest of my life.

1A toolkit that overlaps partially with other disciplines’ toolkits, as explained in (3).

2I didn’t help much. When asked to guess the last concept in the triumvirate, I tried “debate.”

3I once met a Ukrainian physicist who referred to Ilya Muromets in a conversation. Ilya Muromets is a bogatyr, a knight featured in Slavic epics set in the Middle Ages. I happened to have taken a Slavic-folklore course the previous year. So I responded with a reference to Muromets’s pals, Dobrynya Nikitich and Alyosha Popovich. The physicist and I hit it off, and he taught me much about condensed matter over the following months.

Cutting the quantum mustard

I had a relative to whom my parents referred, when I was little, as “that great-aunt of yours who walked into a glass door at your cousin’s birthday party.” I was a small child in a large family that mostly lived far away; little else distinguished this great-aunt from other relatives, in my experience. She’d intended to walk from my grandmother’s family room to the back patio. A glass door stood in the way, but she didn’t see it. So my great-aunt whammed into the glass; spent part of the party on the couch, nursing a nosebleed; and earned the epithet via which I identified her for years.

After growing up, I came to know this great-aunt as a kind, gentle woman who adored her family and was adored in return. After growing into a physicist, I came to appreciate her as one of my earliest instructors in necessary and sufficient conditions.

My great-aunt’s intended path satisfied one condition necessary for her to reach the patio: Nothing visible obstructed the path. But the path failed to satisfy a sufficient condition: The invisible obstruction—the glass door—had been neither slid nor swung open. Sufficient conditions, my great-aunt taught me, mustn’t be overlooked.

Her lesson underlies a paper I published this month, with coauthors from the Cambridge other than mine—Cambridge, England: David Arvidsson-Shukur and Jacob Chevalier Drori. The paper concerns, rather than pools and patios, quasiprobabilities, which I’ve blogged about many times [1,2,3,4,5,6,7]. Quasiprobabilities are quantum generalizations of probabilities. Probabilities describe everyday, classical phenomena, from Monopoly to March Madness to the weather in Massachusetts (and especially the weather in Massachusetts). Probabilities are real numbers (not dependent on the square root of −1); they’re at least zero; and they compose in certain ways (the probability of sun or hail equals the probability of sun plus the probability of hail). Also, the probabilities that form a distribution, or a complete set, sum to one (if there’s a 70% chance of rain, there’s a 30% chance of no rain).

In contrast, quasiprobabilities can be negative and nonreal. We call such values nonclassical, as they’re unavailable to the probabilities that describe classical phenomena. Quasiprobabilities represent quantum states: Imagine some clump of particles in a quantum state described by some quasiprobability distribution. We can imagine measuring the clump however we please. We can calculate the possible outcomes’ probabilities from the quasiprobability distribution.

Not from my grandmother’s house, although I wouldn’t mind if it were.

My favorite quasiprobability is an obscure fellow unbeknownst even to most quantum physicists: the Kirkwood-Dirac distribution. John Kirkwood defined it in 1933, and Paul Dirac defined it independently in 1945. Then, quantum physicists forgot about it for decades. But the quasiprobability has undergone a renaissance over the past few years: Experimentalists have measured it to infer particles’ quantum states in a new way. Also, colleagues and I have generalized the quasiprobability and discovered applications of the generalization across quantum physics, from quantum chaos to metrology (the study of how we can best measure things) to quantum thermodynamics to the foundations of quantum theory.

In some applications, nonclassical quasiprobabilities enable a system to achieve a quantum advantage—to usefully behave in a manner impossible for classical systems. Examples include metrology: Imagine wanting to measure a parameter that characterizes some piece of equipment. You’ll perform many trials of an experiment. In each trial, you’ll prepare a system (for instance, a photon) in some quantum state, send it through the equipment, and measure one or more observables of the system. Say that you follow the protocol described in this blog post. A Kirkwood-Dirac quasiprobability distribution describes the experiment.1 From each trial, you’ll obtain information about the unknown parameter. How much information can you obtain, on average over trials? Potentially more information if some quasiprobabilities are negative than if none are. The quasiprobabilities can be negative only if the state and observables fail to commute with each other. So noncommutation—a hallmark of quantum physics—underlies exceptional metrological results, as shown by Kirkwood-Dirac quasiprobabilities.
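For concreteness, here is the simplest, two-observable Kirkwood-Dirac distribution for a single qubit, computed in a few lines of Python. The state and the pair of bases (eigenbases of σ_z and σ_x, which fail to commute) are illustrative choices; the entries come out nonreal, some with negative real parts, yet they still sum to one like a proper distribution.

```python
import numpy as np

# Eigenbases of two noncommuting observables, sigma_z and sigma_x.
z_basis = [np.array([1, 0], dtype=complex),
           np.array([0, 1], dtype=complex)]
x_basis = [np.array([1, 1], dtype=complex) / np.sqrt(2),
           np.array([1, -1], dtype=complex) / np.sqrt(2)]

# An illustrative pure state and its density matrix.
psi = np.array([np.cos(0.3), np.exp(0.4j) * np.sin(0.3)])
rho = np.outer(psi, psi.conj())

# Kirkwood-Dirac quasiprobability: Q(a, f) = <f|a><a|rho|f>.
Q = np.array([[(f.conj() @ a) * (a.conj() @ rho @ f)
               for f in x_basis] for a in z_basis])

print(np.round(Q, 4))         # nonreal entries, some negative real parts...
print(np.round(Q.sum(), 10))  # ...yet the entries sum to 1
```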

Exceptional results are useful, and we might aim to design experiments that achieve them. We can by designing experiments described by nonclassical Kirkwood-Dirac quasiprobabilities. When can the quasiprobabilities become nonclassical? Whenever the relevant quantum state and observables fail to commute, the quantum community used to believe. This belief turns out to mirror the expectation that one could access my grandmother’s back patio from the living room whenever no visible barriers obstructed the path. As a lack of visible barriers was necessary for patio access, noncommutation is necessary for Kirkwood-Dirac nonclassicality. But noncommutation doesn’t suffice, according to my paper with David and Jacob. We identified a sufficient condition, sliding back the metaphorical glass door on Kirkwood-Dirac nonclassicality. The condition depends on simple properties of the system, state, and observables. (Experts: Examples include the Hilbert space’s dimensionality.) We also quantified and upper-bounded the amount of nonclassicality that a Kirkwood-Dirac quasiprobability can contain.

From an engineering perspective, our results can inform the design of experiments intended to achieve certain quantum advantages. From a foundational perspective, the results help illuminate the sources of certain quantum advantages. To achieve certain advantages, noncommutation doesn’t cut the mustard—but we now know a condition that does.

For another take on our paper, check out this news article in Physics Today.  

1Really, a generalized Kirkwood-Dirac quasiprobability. But that phrase contains a horrendous number of syllables, so I’ll elide the “generalized.”

Learning about learning

The autumn of my sophomore year of college was mildly hellish. I took the equivalent of three semester-long computer-science and physics courses, atop other classwork; co-led a public-speaking self-help group; and coordinated a celebrity visit to campus. I lived at my desk and in office hours, always declining my flatmates’ invitations to watch The West Wing.

Hard as I studied, my classmates enjoyed greater facility with the computer-science curriculum. They saw immediately how long an algorithm would run, while I hesitated and then computed the run time step by step. I felt behind. So I protested when my professor said, “You’re good at this.” 

I now see that we were focusing on different facets of learning. I rued my lack of intuition. My classmates had gained intuition by exploring computer science in high school, then slow-cooking their experiences on a mental back burner. Their long-term exposure to the material provided familiarity—the ability to recognize a new problem as belonging to a class they’d seen examples of. I was cooking course material in a mental microwave set on “high,” as a semester’s worth of material was crammed into ten weeks at my college.

My professor wasn’t measuring my intuition. He only saw that I knew how to compute an algorithm’s run time. I’d learned the material required of me—more than I realized, being distracted by what I hadn’t learned that difficult autumn.

We can learn a staggering amount when pushed far from our comfort zones—and not only we humans can. So can simple collections of particles.

Examples include a classical spin glass. A spin glass is a collection of particles that shares some properties with a magnet. Both a magnet and a spin glass consist of tiny mini-magnets called spins. Although I’ve blogged about quantum spins before, I’ll focus on classical spins here. We can imagine a classical spin as a little arrow that points upward or downward.  A bunch of spins can form a material. If the spins tend to point in the same direction, the material may be a magnet of the sort that’s sticking the faded photo of Fluffy to your fridge.

The spins may interact with each other, similarly to how electrons interact with each other. Not entirely similarly, though—electrons push each other away. In contrast, a spin may coax its neighbors into aligning or anti-aligning with it. Suppose that the interactions are random: Any given spin may force one neighbor into alignment, gently ask another neighbor to align, entreat a third neighbor to anti-align, and have nothing to say to neighbors four and five.

The spin glass can interact with the external world in two ways. First, we can stick the spins in a magnetic field, as by placing magnets above and below the glass. If aligned with the field, a spin has negative energy; and, if antialigned, positive energy. We can sculpt the field so that it varies across the spin glass. For instance, spin 1 can experience a strong upward-pointing field, while spin 2 experiences a weak downward-pointing field.

Second, say that the spins occupy a fixed-temperature environment, as I occupy a 74-degree-Fahrenheit living room. The spins can exchange heat with the environment. If releasing heat to the environment, a spin flips from having positive energy to having negative—from antialigning with the field to aligning.

Let’s perform an experiment on the spins. First, we design a magnetic field using random numbers. Whether the field points upward or downward at any given spin is random, as is the strength of the field experienced by each spin. We sculpt three of these random fields and call the trio a drive.

Let’s randomly select a field from the drive and apply it to the spin glass for a while; again, randomly select a field from the drive and apply it; and continue many times. The energy absorbed by the spins from the fields spikes, then declines.

Now, let’s create another drive of three random fields. We’ll randomly pick a field from this drive and apply it; again, randomly pick a field from this drive and apply it; and so on. Again, the energy absorbed by the spins spikes, then tails off.

Here comes the punchline. Let’s return to applying the initial fields. The energy absorbed by the glass will spike—but not as high as before. The glass responds differently to a familiar drive than to a new drive. The spin glass recognizes the original drive—has learned the first fields’ “fingerprint.” This learning happens when the fields push the glass far from equilibrium,1 as I learned when pushed during my mildly hellish autumn.
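For readers who want to poke the glass themselves: below is a toy Monte Carlo sketch of the protocol, with random couplings, two drives of three random fields each, Metropolis dynamics at fixed temperature, and the work absorbed at each field switch. All parameters are invented for illustration, and such a bare-bones toy may need tuning to show the learning signal as cleanly as the published studies do.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 0.5                        # spins, temperature (illustrative)
J = rng.normal(0, 1 / np.sqrt(N), (N, N))
J = np.triu(J, 1); J = J + J.T         # random symmetric couplings

def sweep(s, h):
    """One Metropolis sweep at temperature T in field h."""
    for i in rng.integers(0, N, N):
        dE = 2 * s[i] * (J[i] @ s + h[i])  # cost of flipping spin i
        if dE < 0 or rng.random() < np.exp(-dE / T):
            s[i] *= -1

def apply_drive(s, drive, pulses=300, sweeps=5):
    """Randomly apply fields from `drive`; record work absorbed per pulse."""
    work = []
    h = drive[rng.integers(len(drive))]
    for _ in range(pulses):
        h_new = drive[rng.integers(len(drive))]
        work.append(-(h_new - h) @ s)  # work done on the spins by the switch
        h = h_new
        for _ in range(sweeps):
            sweep(s, h)                # exchange heat with the environment
    return np.array(work)

s = rng.choice([-1.0, 1.0], N)
drive_A = [rng.normal(0, 2, N) for _ in range(3)]  # three random fields
drive_B = [rng.normal(0, 2, N) for _ in range(3)]

w1 = apply_drive(s, drive_A)  # first exposure to drive A
w2 = apply_drive(s, drive_B)  # a novel drive B
w3 = apply_drive(s, drive_A)  # drive A again: now familiar
print("mean work absorbed, early pulses of each epoch:")
print(f"A (new): {w1[:50].mean():+.3f}   B (new): {w2[:50].mean():+.3f}"
      f"   A (again): {w3[:50].mean():+.3f}")
```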

So spin glasses learn drives that push them far from equilibrium. So do many other simple, classical, many-particle systems: polymers, viscous liquids, crumpled sheets of Mylar, and more. Researchers have predicted such learning and observed it experimentally. 

Scientists have detected many-particle learning by measuring thermodynamic observables. Examples include the energy absorbed by the spin glass—what thermodynamicists call work. But thermodynamics developed during the 1800s, to describe equilibrium systems, not to study learning. 

One study of learning—the study of machine learning—has boomed over the past two decades. As described by the MIT Technology Review, “[m]achine-learning algorithms use statistics to find patterns in massive amounts of data.” Users don’t tell the algorithms how to find those patterns.

xkcd.com/1838

It seems natural and fitting to use machine learning to learn about the learning by many-particle systems. That’s what I did with collaborators from the group of Jeremy England, a GlaxoSmithKline physicist who studies complex behaviors of many-particle systems. Weishun Zhong, Jacob Gold, Sarah Marzen, Jeremy, and I published our paper last month.

Using machine learning, we detected and measured many-particle learning more reliably and precisely than thermodynamic measures seem able to. Our technique works on multiple facets of learning, analogous to the intuition and the computational ability I encountered in my computer-science course. We illustrated our technique on a spin glass, but one can apply our approach to other systems, too. I’m exploring such applications with collaborators at the University of Maryland.

The project pushed me far from my equilibrium: I’d never worked with machine learning or many-body learning. But it’s amazing, what we can learn when pushed far from equilibrium. I first encountered this insight sophomore fall of college—and now, we can quantify it better than ever.

1Equilibrium is a quiet, restful state in which the glass’s large-scale properties change little. No net flow of anything—such as heat or particles—enters or leaves the system.

Project Ant-Man

The craziest challenge I’ve undertaken hasn’t been skydiving; sailing the Amazon on a homemade raft; scaling Mt. Everest; or digging for artifacts atop a hill in a Middle Eastern desert, near midday, during high summer.1 The craziest challenge has been to study the possibility that quantum phenomena affect cognition significantly. 

Most physicists agree that quantum phenomena probably don’t affect cognition significantly. Cognition occurs in biological systems, which have high temperatures, many particles, and watery components. Such conditions quash entanglement (a relationship that quantum particles can share and that can produce correlations stronger than any producible by classical particles).

Yet Matthew Fisher, a condensed-matter physicist, proposed a mechanism by which entanglement might enhance coordinated neuron firing. Phosphorus nuclei have spins (quantum properties similar to angular momentum) that might store quantum information for long times when in Posner molecules. These molecules may protect the information from decoherence (leaking quantum information to the environment), via mechanisms that Fisher described.

I can’t check how correct Fisher’s proposal is; I’m not a biochemist. But I’m a quantum information theorist. So I can identify how Posners could process quantum information if Fisher were correct. I undertook this task with my colleague Elizabeth Crosson, during my PhD.

Experimentalists have begun testing elements of Fisher’s proposal. What if, years down the road, they find that Posners exist in biofluids and protect quantum information for long times? We’ll need to test whether Posners can share entanglement. But detecting entanglement tends to require control finer than you can exert with a stirring rod. How could you check whether a beakerful of particles contains entanglement?

I asked that question of Adam Bene Watts, a PhD student at MIT, and John Wright, then an MIT postdoc and now an assistant professor in Texas. John gave our project its codename. At a meeting one day, he reported that he’d watched the film Avengers: Endgame. Had I seen it? he asked.

No, I replied. The only superhero movie I’d seen recently had been Ant-Man and the Wasp—and that because, according to the film’s scientific advisor, the movie riffed on research of mine. 

Go on, said John.

Spiros Michalakis, the Caltech mathematician in charge of this blog, served as the advisor. The film came out during my PhD; during a meeting of our research group, Spiros advised me to watch the movie. There was something in it “for you,” he said. “And you,” he added, turning to Elizabeth. I obeyed, to hear Laurence Fishburne’s character tell Ant-Man that another character had entangled with the Posner molecules in Ant-Man’s brain.2 

John insisted on calling our research Project Ant-Man.

John and Adam study Bell tests. Bell test sounds like a means of checking whether the collar worn by your cat still jingles. But the test owes its name to John Stewart Bell, a Northern Irish physicist who wrote a groundbreaking paper in 1964.

Say you’d like to check whether two particles share entanglement. You can run an experiment, described by Bell, on them. The experiment ends with a measurement of the particles. You repeat this experiment in many trials, using identical copies of the particles in subsequent trials. You accumulate many measurement outcomes, whose statistics you calculate. You plug those statistics into a formula concocted by Bell. If the result exceeds some number that Bell calculated, the particles shared entanglement.
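Here is the textbook incarnation of Bell’s test (the CHSH variant, a refinement of Bell’s 1964 inequality), sketched for the two-qubit singlet state. The measurement angles below are the standard optimal choice; the statistic’s magnitude reaches 2√2, beating the classical bound of 2.

```python
import numpy as np

# Singlet state of two qubits: (|01> - |10>)/sqrt(2).
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def spin(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def corr(a, b):
    """Correlator <A(a) B(b)> predicted for the shared state."""
    return np.trace(rho @ np.kron(spin(a), spin(b))).real

# Standard CHSH angles: a = 0, a' = pi/2, b = pi/4, b' = -pi/4.
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = corr(a, b) + corr(a, bp) + corr(ap, b) - corr(ap, bp)
print(f"|S| = {abs(S):.4f} > 2, the classical bound")  # 2*sqrt(2) ~ 2.8284
```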

We needed a variation on Bell’s test. In our experiment, every trial would involve hordes of particles. The experimentalists—large, clumsy, classical beings that they are—couldn’t measure the particles individually. The experimentalists could record only aggregate properties, such as the intensity of the phosphorescence emitted by a test tube.

Adam, MIT physicist Aram Harrow, and I concocted such a Bell test, with help from John. Physical Review A published our paper this month—as a Letter and an Editor’s Suggestion, I’m delighted to report.

For experts: The trick was to make the Bell correlation function nonlinear in the state. We assumed that the particles shared mostly pairwise correlations, though our Bell inequality can accommodate small aberrations. Alas, no one can guarantee that particles share only mostly pairwise correlations. Violating our Bell inequality therefore doesn’t rule out hidden-variables theories. Under reasonable assumptions, though, a not-completely-paranoid experimentalist can check for entanglement using our test. 

One can run our macroscopic Bell test on photons, using present-day technology. But we’re more eager to use the test to characterize lesser-known entities. For instance, we sketched an application to Posner molecules. Detecting entanglement in chemical systems will require more thought, as well as many headaches for experimentalists. But our paper broaches the cask—which I hope to see flow in the next Ant-Man film. Due to debut in 2022, the movie has the subtitle Quantumania. Sounds almost as crazy as studying the possibility that quantum phenomena affect cognition.

1Of those options, I’ve undertaken only the last.

2In case of any confusion: We don’t know that anyone’s brain contains Posner molecules. The movie features speculative fiction.