If I could do science like Spider-Man

A few Saturdays ago, I traveled home from a summer school at which I’d been lecturing in Sweden. Around 8:30 AM, before the taxi arrived, I settled into an armchair in my hotel room and refereed a manuscript from a colleague. After reaching the airport, I read an experimental proposal for measuring a quantity that colleagues and I had defined. I drafted an article for New Scientist on my trans-Atlantic flight, composed several emails, and provided feedback about a student’s results (we’d need more data). Around 8 PM Swedish time, I felt satisfyingly exhausted—and about ten hours of travel remained. So I switched on Finnair’s entertainment system and navigated to Spider-Man: No Way Home.

I found much to delight. Actor Alfred Molina plays the supervillain Doc Ock with charisma and verve that I hadn’t expected from a tentacled murderer. Playing on our heartstrings, Willem Dafoe imbues the supervillain Norman Osborn with frailty and humanity. Three characters (I won’t say which, for the spoiler-sensitive) exhibit a playful chemistry. To the writers who thought to bring the trio together, I tip my hat. I tip my hat also to the special-effects coders who sweated over reconciling Spider-Man’s swoops and leaps with the laws of mechanics.

I’m not the sort of physicist who picks bones with films for breaking physical laws. You want to imagine a Mirror Dimension controlled by a flying erstwhile surgeon? Go for it. Falling into a vat of electric eels endows you with the power to control electricity? Why not. Films like Spider-Man’s aren’t intended to portray physical laws accurately; they’re intended to portray people and relationships meaningfully. So I raised nary an eyebrow at characters’ zipping between universes (although I had trouble buying teenage New Yorkers who called adults “sir” and “ma’am”).

Anyway, no hard feelings about the portrayal of scientific laws. The portrayal of the scientific process, though, entertained me even more than Dr. Strange’s trademark facetiousness. In one scene, twelfth grader Peter Parker (Spider-Man’s alter ego) commandeers a high-school lab with two buddies. In a fraction of a night, the trio concocts cures for four supervillains whose evil stems from physical, chemical, and biological accidents (e.g., falling into the aforementioned vat of electric eels).1 And they succeed. In a few hours. Without test subjects or even, as far as we could see, samples of their would-be test subjects. Without undergoing several thousand iterations of trying out their cures, failing, and tweaking their formulae—or even undergoing one iteration.

I once collaborated with an experimentalist renowned for his facility with superconducting qubits. He’d worked with a panjandrum of physics years before—a panjandrum who later reminisced to me, “A theorist would propose an experiment, [this experimentalist would tackle the proposal,] and boom—the proposal would work.” Yet even this experimentalist’s team invested a year in an experiment that he’d predicted would take a month.

Worse, the observatory LIGO detected gravitational waves in 2015 after starting to take data in 2002…after beginning its life during the 1960s.2

Recalling the toil I’d undertaken all day—and only as a theorist, not even as an experimentalist charged with taking data through the night—I thought, I want to be like Spider-Man. Specifically, I want to do science like Spider-Man. Never mind shooting webs out of my wrists or swooping through the air. Never mind buddies in the Avengers, a Greek-statue physique, or high-tech Spandex. I want to try out a radical new idea and have it work. On the first try. Four times in a row on the same day. 

Daydreaming in the next airport (and awake past my bedtime), I imagined what a theorist could accomplish with Spider-Man’s scientific superpowers. I could calculate any integral…write code free of bugs on the first try3…prove general theorems in a single appendix!

Too few hours later, I woke up at home, jet-lagged but free of bites from radioactive calculators. I got up, breakfasted, showered, and settled down to work. Because that’s what scientists do—work. Long and hard, including when those around us are dozing or bartering frequent-flyer miles, such that the satisfaction of discoveries is well-earned. I have to go edit a paper now, but, if you have the time, I recommend watching the latest Spider-Man movie. It’s a feast of fantasy.

1And from psychological disorders, but the therapy needed to cure those would doom any blockbuster.

2You might complain that comparing Peter Parker’s labwork with LIGO’s is unfair. LIGO required the construction of large, high-tech facilities; Parker had only to cure a lizard-man of his reptilian traits and so on. But Tony Stark built a particle accelerator in his basement within a few hours, in Iron Man 2; and superheroes are all of a piece, as far as their scientific exploits are concerned.

3Except for spiders?

Quantum connections

We were seated in the open-air back of a boat, motoring around the Stockholm archipelago. The Swedish colors fluttered above our heads; the occasional speedboat zipped past, rocking us in its wake; and wildflowers dotted the bank on either side. Suddenly, a wood-trimmed boat glided by, and the captain waved from his perch.

The gesture surprised me. If I were in a vehicle of the sort most familiar to me—a car—I wouldn’t wave to other drivers. In a tram, I wouldn’t wave to passengers on a parallel track. Granted, trams and cars are closed, whereas boats can be open-air. But even as a pedestrian in a downtown crossing, I wouldn’t wave to everyone I passed. Yet, as boat after boat pulled alongside us, we received salutation after salutation.

The outing marked the midpoint of the Quantum Connections summer school. Physicists Frank Wilczek, Antti Niemi, and colleagues coordinate the school, which draws students and lecturers from across the globe. Although sponsored by Stockholm University, the school takes place at a century-old villa whose name I wish I could pronounce: Högberga Gård. The villa nestles atop a cliff on an island in the archipelago. We ventured off the island after a week of lectures.

Charlie Marcus lectured about materials formed from superconductors and semiconductors; John Martinis, about superconducting qubits; Jianwei Pan, about quantum advantages; and others, about symmetries, particle statistics, and more. Feeling like an ant among giants, I lectured about quantum thermodynamics. Two other lectures linked quantum physics with gravity—and in a way you might not expect. I appreciated the opportunity to reconnect with the lecturer: Igor Pikovski.

Cruising around Stockholm

Igor doesn’t know it, but he’s one of the reasons why I joined the Harvard-Smithsonian Institute for Theoretical Atomic, Molecular, and Optical Physics (ITAMP) as an ITAMP Postdoctoral Fellow in 2018. He’d held the fellowship beginning a few years before, and he’d earned a reputation for kindness and consideration. Also, his research struck me as some of the most fulfilling that one could undertake.

If you’ve heard about the intersection of quantum physics and gravity, you’ve probably heard of approaches other than Igor’s. For instance, physicists are trying to construct a theory of quantum gravity, which would describe black holes and the universe’s origin. Such a “theory of everything” would reduce to Einstein’s general theory of relativity when applied to planets and would reduce to quantum theory when applied to atoms. In another example, physicists leverage quantum technologies to observe properties of gravity. Such technologies enabled the observatory LIGO to register gravitational waves—ripples in space-time. 

Igor and his colleagues pursue a different goal: to observe phenomena whose explanations depend on quantum theory and on gravity.

In his lectures, Igor illustrated with an experiment first performed in 1975. The experiment relies on what happens if you jump: You gain energy associated with resisting the Earth’s gravitational pull—gravitational potential energy. A quantum object’s energy determines how the object’s quantum state changes in time. The experimentalists applied this fact to a beam of neutrons. 

They put the beam in a superposition of two locations: closer to the Earth’s surface and farther away. The closer component changed in time in one way, and the farther component changed another way. After a while, the scientists recombined the components. The two interfered with each other similarly to the waves created by two raindrops falling near each other on a puddle. The interference evidenced gravity’s effect on the neutrons’ quantum state.
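For readers who enjoy a formula, the simplest textbook treatment of this neutron experiment (the Colella–Overhauser–Werner, or COW, setup) boils down to one phase. Each path component evolves by a factor $e^{-iEt/\hbar}$; if one component sits higher than the other by a height $\Delta h$ for a time $T$, its extra gravitational potential energy $m g \, \Delta h$ shifts the relative phase:

```latex
% Relative phase between the higher and lower path components,
% in the simplest (semiclassical) treatment of the COW experiment:
\Delta\varphi \;=\; \frac{m\, g\, \Delta h\, T}{\hbar} .
```

This phase sets the interference pattern that the experimentalists observed. (The symbols $m$, $\Delta h$, and $T$ are generic placeholders here, not the experiment’s reported values.)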

Summer-school venue. I’d easily say it’s gorgeous but not easily pronounce its name.

The experimentalists approximated gravity as dominated by the Earth alone. But other masses can influence the gravitational field noticeably. What if you put a mass in a superposition of different locations? What would happen to space-time?

Or imagine two quantum particles too far apart to interact with each other significantly. Could a gravitational field entangle the particles by carrying quantum correlations from one to the other?

Physicists including Igor ponder these questions…and then ponder how experimentalists could test their predictions. The more an object influences gravity, the more massive the object tends to be, and the more easily the object tends to decohere—to spill the quantum information that it holds into its surroundings.

The “gravity-quantum interface,” as Igor entitled his lectures, epitomizes what I hoped to study in college, as a high-school student entranced by physics, math, and philosophy. What’s more curious and puzzling than superpositions, entanglement, and space-time? What’s more fundamental than quantum theory and gravity? Little wonder that connecting them inspires wonder.

But we humans are suckers for connections. I appreciated the opportunity to reconnect with a colleague during the summer school. Boaters on the Stockholm archipelago waved to our cohort as they passed. And who knows—gravitational influences may even have rippled between the boats, entangling us a little.

Requisite physicist-visiting-Stockholm photo

With thanks to the summer-school organizers, including Pouya Peighami and Elizabeth Yang, for their invitation and hospitality.

One equation to rule them all?

In lieu of composing a blog post this month, I’m publishing an article in Quanta Magazine. The article provides an introduction to fluctuation relations, souped-up variations on the second law of thermodynamics, which helps us understand why time flows in only one direction. The earliest fluctuation relations described classical systems, such as single strands of DNA. Many quantum versions have been proved since. Their proliferation contrasts with the stereotype of physicists as obsessed with unification—with slimming down a cadre of equations into one über-equation. Will one quantum fluctuation relation emerge to rule them all? Maybe, and maybe not. Maybe the multiplicity of quantum fluctuation relations reflects the richness of quantum thermodynamics.
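For readers who’d like one concrete example before clicking through: the most famous classical fluctuation relation, Jarzynski’s equality, relates the work $W$ performed on a system, averaged over many repetitions of a protocol, to the equilibrium free-energy difference $\Delta F$:

```latex
% Jarzynski's equality, at temperature T:
\left\langle e^{-W / k_{\mathrm{B}} T} \right\rangle
\;=\; e^{-\Delta F / k_{\mathrm{B}} T} .
```

Applying Jensen’s inequality to the left-hand side recovers a statement of the second law, $\langle W \rangle \geq \Delta F$, which is why fluctuation relations count as souped-up versions of it.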

You can read more in Quanta Magazine here and yet more in chapter 9 of my book. For recent advances in fluctuation relations, as opposed to the broad introduction there, check out earlier Quantum Frontiers posts here, here, here, here, and here.

The power of being able to say “I can explain that”

Caltech condensed-matter theorist Gil Refael explained his scientific raison d’être early in my grad-school career: “What really gets me going is seeing a plot [of experimental data] and being able to say, ‘I can explain that.’” The quote has stuck with me almost word for word. When I heard it, I was working deep in abstract quantum information theory and thermodynamics, proving theorems about thought experiments. Embedding myself in pure ideas has always held an aura of romance for me, so I nodded along without seconding Gil’s view.

Roughly nine years later, I concede his point.

The revelation walloped me last month, as I was polishing a paper with experimental collaborators. Members of the Institute for Quantum Optics and Quantum Information (IQOQI) in Innsbruck, Austria—Florian Kranzl, Manoj Joshi, and Christian Roos—had performed an experiment in trapped-ion guru Rainer Blatt’s lab. Their work realized an experimental proposal that I’d designed with fellow theorists near the beginning of my postdoc stint. We aimed to observe signatures of particularly quantum thermalization.

Throughout the universe, small systems exchange stuff with their environments. For instance, the Earth exchanges heat and light with the rest of the solar system. After exchanging stuff for long enough, the small system equilibrates with the environment: Large-scale properties of the small system (such as its volume and energy) remain fairly constant; and as much stuff enters the small system as leaves, on average. The Earth remains far from equilibrium, which is why we aren’t dead yet.

Far from equilibrium and proud of it

In many cases, in equilibrium, the small system shares properties of the environment, such as the environment’s temperature. In these cases, we say that the small system has thermalized and, if it’s quantum, has reached a thermal state.

The stuff exchanged can consist of energy, particles, electric charge, and more. Unlike classical planets, quantum systems can exchange things that participate in quantum uncertainty relations (experts: that fail to commute). Quantum uncertainty mucks up derivations of the thermal state’s mathematical form. Some of us quantum thermodynamicists discovered the mucking up—and identified exchanges of quantum-uncertain things as particularly nonclassical thermodynamics—only a few years ago. We reworked conventional thermodynamic arguments to accommodate this quantum uncertainty. The small system, we concluded, likely equilibrates to near a thermal state whose mathematical form depends on the quantum-uncertain stuff—what we termed a non-Abelian thermal state. I wanted to see this equilibration in the lab. So I proposed an experiment with theory collaborators; and Manoj, Florian, and Christian took a risk on us.
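Schematically, in the notation usual in this line of work, the non-Abelian thermal state looks like an ordinary grand-canonical state, except that the exchanged quantities $Q_a$ (for us, the spin components) are allowed to fail to commute:

```latex
% Non-Abelian thermal state, for exchanged quantities Q_a
% that need not commute with each other:
\rho_{\mathrm{NATS}} \;=\; \frac{1}{Z}\,
\exp\!\Big( -\beta \big( H - \textstyle\sum_a \mu_a Q_a \big) \Big),
\qquad
Z \;=\; \mathrm{Tr}\,
\exp\!\Big( -\beta \big( H - \textstyle\sum_a \mu_a Q_a \big) \Big).
```

Here $\beta$ denotes an inverse temperature and the $\mu_a$ are chemical-potential-like constants; the noncommutation of the $Q_a$ is what makes deriving this form from first principles tricky.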

The experimentalists arrayed between six and fifteen ions in a line. Two ions formed the small system, and the rest formed the quantum environment. The ions exchanged the x-, y-, and z-components of their spin angular momentum—stuff that participates in quantum uncertainty relations. The ions began with a fairly well-defined amount of each spin component, as described in another blog post. The ions exchanged stuff for a while, and then the experimentalists measured the small system’s quantum state.

The small system equilibrated to near the non-Abelian thermal state, we found. No conventional thermal state modeled the results as accurately. Score!

My postdoc and numerical-simulation wizard Aleks Lasek modeled the experiment on his computer. The small system, he found, remained farther from the non-Abelian thermal state in his simulation than in the experiment. Aleks plotted the small system’s distance to the non-Abelian thermal state against the ion chain’s length. The points produced experimentally sat lower down than the points produced numerically. Why?

I think I can explain that, I said. The two ions exchange stuff with the rest of the ions, which serve as a quantum environment. But the two ions exchange stuff also with the wider world, such as stray electromagnetic fields. The latter exchanges may push the small system farther toward equilibrium than the extra ions alone do.

Fortunately for the development of my explanatory skills, collaborators prodded me to hone my argument. The wider world, they pointed out, effectively has a very high temperature—an infinite temperature.1 Equilibrating with that environment, the two ions would acquire an infinite temperature themselves. The two ions would approach an infinite-temperature thermal state, which differs from the non-Abelian thermal state we aimed to observe.

Fair, I said. But the extra ions probably have a fairly high temperature themselves. So the non-Abelian thermal state is probably close to the infinite-temperature thermal state. Analogously, if someone cooks goulash similarly to his father, and the father cooks goulash similarly to his grandfather, then the youngest chef cooks goulash similarly to his grandfather. If the wider world pushes the two ions to equilibrate to infinite temperature, then, because the infinite-temperature state lies near the non-Abelian thermal state, the wider world pushes the two ions to equilibrate to near the non-Abelian thermal state.
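The goulash argument is, at bottom, just the triangle inequality for whatever distance measure $D$ one uses between quantum states (the symbols below are mine, chosen for illustration, not notation from the paper):

```latex
% If the ions' state rho(t) nears the infinite-temperature state
% rho_infinity, and rho_infinity lies near the non-Abelian thermal
% state, then rho(t) lies near the non-Abelian thermal state:
D\big(\rho(t),\, \rho_{\mathrm{NATS}}\big)
\;\leq\;
D\big(\rho(t),\, \rho_{\infty}\big)
\;+\;
D\big(\rho_{\infty},\, \rho_{\mathrm{NATS}}\big).
```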

Tasty, tasty thermodynamics

I plugged numbers into a few equations to check that the extra ions do have a high temperature. (Perhaps I should have done so before proposing the argument above, but my collaborators were kind enough not to call me out.) 

Aleks hammered the nail into the problem’s coffin by incorporating into his simulations the two ions’ interaction with an infinite-temperature wider world. His numerical data points dropped to near the experimental data points. The new plot supported my story.

I can explain that! Aleks’s results buoyed me the whole next day; I found myself smiling at random times throughout the afternoon. Not that I’d explained a grand mystery, like the unexpected hiss heard by Arno Penzias and Robert Wilson when they turned on a powerful antenna in 1964. The hiss turned out to come from the cosmic microwave background (CMB), a collection of photons that fill the visible universe. The CMB provided evidence for the then-controversial Big Bang theory of the universe’s origin. Discovering the CMB earned Penzias and Wilson a Nobel Prize. If the noise caused by the CMB was music to cosmologists’ ears, the noise in our experiment is the quiet wailing of a shy banshee. But it’s our experiment’s noise, and we understand it now.

The experience hasn’t weaned me off the romance of proving theorems about thought experiments. Theorems about thermodynamic quantum uncertainty inspired the experiment that yielded the plot that confused us. But I now second Gil’s sentiment. In the throes of an experiment, “I can explain that” can feel like a battle cry.

1Experts: The wider world effectively has an infinite temperature because (i) the dominant decoherence is dephasing relative to the \sigma_z product eigenbasis and (ii) the experimentalists rotate their qubits often, to simulate a rotationally invariant Hamiltonian evolution. So the qubits effectively undergo dephasing relative to the \sigma_x, \sigma_y, and \sigma_z eigenbases.

Space-time and the city

I felt like a gum ball trying to squeeze my way out of a gum-ball machine. 

I was one of 50-ish physicists crammed into the lobby—and in the doorway, down the stairs, and onto the sidewalk—of a Manhattan hotel last December. Everyone had received a COVID vaccine, and the omicron variant hadn’t yet begun chewing up North America. Everyone had arrived on the same bus that evening, feeding on the neon-bright views of Fifth Avenue through dinnertime. Everyone wanted to check in and offload suitcases before experiencing firsthand the reason for the nickname “the city that never sleeps.” So everyone was jumbled together in what passed for a line.

We’d just passed the halfway point of the week during which I was pretending to be a string theorist. I do that whenever my research butts up against black holes, chaos, quantum gravity (the attempt to unify quantum physics with Einstein’s general theory of relativity), and alternative space-times. These topics fall under the heading “It from Qubit,” which calls for understanding puzzling physics (“It”) by analyzing how quantum systems process information (“Qubit”). The “It from Qubit” crowd convenes for one week each December, to share progress and collaborate.1 The group spends Monday through Wednesday at Princeton’s Institute for Advanced Study (IAS), dogged by photographs of Einstein, busts of Einstein, and roads named after Einstein. A bus ride later, the group spends Thursday and Friday at the Simons Foundation in New York City.

I don’t usually attend “It from Qubit” gatherings, as I’m actually a quantum information theorist and quantum thermodynamicist. Having admitted as much during the talk I presented at the IAS, I failed at pretending to be a string theorist. Happily, I adore being the most ignorant person in a roomful of experts, as the experience teaches me oodles. At lunch and dinner, I’d plunk down next to people I hadn’t spoken to and ask what they see as trending in the “It from Qubit” community. 

One buzzword (replicas) I’d first picked up on shortly before the pandemic began; having lived a frenetic life, that trend seemed to be declining. Rising buzzwords (factorization and islands) I hadn’t heard in black-hole contexts before. People were still tossing around terms from when I’d first forayed into “It from Qubit” (scrambling and out-of-time-ordered correlator), but differently from then. Five years ago, the terms identified the latest craze. Now, they sounded entrenched, as though everyone expected everyone else to know and accept their significance.

One buzzword labeled my excuse for joining the workshops: complexity. Complexity wears as many meanings as the stereotypical New Yorker wears items of black clothing. Last month, guest blogger Logan Hillberry wrote about complexity that emerges in networks such as brains and social media. To “It from Qubit,” complexity quantifies the difficulty of preparing a quantum system in a desired state. Physicists have conjectured that a certain quantum state’s complexity parallels properties of gravitational systems, such as the length of a wormhole that connects two black holes. The wormhole’s length grows steadily for a time exponentially large in the gravitational system’s size. So, to support the conjecture, researchers have been trying to prove that complexity typically grows similarly. Collaborators and I proved that it does, as I explained in my talk and as I’ll explain in a future blog post. Other speakers discussed experimental complexities, as well as the relationship between complexity and a simplified version of Einstein’s equations for general relativity.

Inside the Simons Foundation on Fifth Avenue in Manhattan

I learned a bushel of physics, moonlighting as a string theorist that week. The gum-ball-machine lobby, though, retaught me something I’d learned long before the pandemic. Around the time I squeezed inside the hotel, a postdoc struck up a conversation with the others of us who were clogging the doorway. We had a decent fraction of an hour to fill; so we chatted about quantum thermodynamics, grant applications, and black holes. I asked what the postdoc was working on, he explained a property of black holes, and it reminded me of a property of thermodynamics. I’d nearly reached the front desk when I realized that, out of the sheer pleasure of jawing about physics with physicists in person, I no longer wanted to reach the front desk. The moment dangles in my memory like a crystal ornament from the lobby’s tree—pendant from the pandemic, a few inches from the vaccines suspended on one side and from omicron on the other. For that moment, in a lobby buoyed by holiday lights, wrapped in enough warmth that I’d forgotten the December chill outside, I belonged to the “It from Qubit” community as I hadn’t belonged to any community in 22 months.

Happy new year.

Presenting at the IAS was a blast. Photo credit: Jonathan Oppenheim.

1In person or virtually, pandemic-dependently.

Thanks to the organizers of the IAS workshop—Ahmed Almheiri, Adam Bouland, Brian Swingle—for the invitation to present and to the organizers of the Simons Foundation workshop—Patrick Hayden and Matt Headrick—for the invitation to attend.

Balancing the tradeoff

So much to do, so little time. Tending to one task inevitably comes at the cost of another, so how does one decide how to spend one’s time? In the first few years of my PhD, I balanced problem sets, literature reviews, and group meetings, but to the detriment of my hobbies. I have played drums my entire life, but I largely fell out of practice in graduate school. Recently, I made time to play with a group of musicians, even landing a couple of gigs in downtown Austin, Texas, the “live music capital of the world.” I have found that attending to my non-physics interests makes my research hours more productive and less taxing. Finding the right balance of on- versus off-time has been key to my success as my PhD enters its final year.

Of course, life within physics is also full of tradeoffs. My day job is as an experimentalist. I use tightly focused laser beams, known as optical tweezers, to levitate micrometer-sized glass spheres. I monitor a single microsphere’s motion as it undergoes collisions with air molecules, and I study the system as an environmental sensor of temperature, fluid flow, and acoustic waves; however, by night I am a computational physicist. I code simulations of interacting qubits subject to kinetic constraints, so-called quantum cellular automata (QCA). My QCA work started a few years ago for my Master’s degree, but my interest in the subject persists. I recently co-authored one paper summarizing the work so far and another detailing an experimental implementation.

The author doing his part to “keep Austin weird” by playing the drums dressed as a grackle (note the beak), the central-Texas bird notorious for overrunning grocery store parking lots.
Balancing research interests: Trapping a glass microsphere with optical tweezers.
Balancing research interests: Visualizing the time evolution of four different QCA rules.

QCA, the subject of this post, are themselves tradeoff-aware systems. To see what I mean, first consider their classical counterparts, cellular automata. In their simplest construction, the system is a one-dimensional string of bits. Each bit takes a value of 0 or 1 (white or black). The bitstring changes in discrete time steps based on a simultaneously applied local update rule: Each bit, together with its two nearest neighbors, determines the next state of the central bit. Put another way, over a timestep, a bit either flips (changes 0 to 1 or 1 to 0) or remains unchanged, depending on the state of that bit’s local neighborhood. Thus, by choosing a particular rule, one encodes a tradeoff between activity (bit flips) and inactivity (bit remains unchanged). Despite their simple construction, cellular automata dynamics are diverse; they can produce fractals and encryption-quality random numbers. One rule even has the ability to run arbitrary computer algorithms, a property known as universal computation.
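The update rule just described is simple enough to simulate in a few lines. Here is a minimal sketch in Python (the rule-numbering convention is Wolfram’s standard one; the function names are mine):

```python
# Minimal elementary cellular automaton, as described above.
# The rule number's 8-bit binary expansion assigns the next state of the
# center bit to each of the 8 possible (left, center, right) neighborhoods.

def step(bits, rule):
    """One synchronous update of an elementary CA (periodic boundaries)."""
    n = len(bits)
    table = [(rule >> k) & 1 for k in range(8)]  # neighborhood index -> next bit
    return [table[(bits[(i - 1) % n] << 2) | (bits[i] << 1) | bits[(i + 1) % n]]
            for i in range(n)]

def run(width, steps, rule, seed=None):
    """Evolve from a single live cell (or a given seed) and return all rows."""
    row = seed if seed is not None else [0] * width
    if seed is None:
        row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

if __name__ == "__main__":
    # Rule 90 grows the fractal pattern mentioned above (Sierpinski triangle).
    for row in run(width=31, steps=15, rule=90):
        print("".join(".#"[b] for b in row))
```

With `rule=90` and a single live cell, the printout traces the Sierpiński triangle mentioned above; `rule=30` and `rule=110` reproduce the random-number generator and the universal computer.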

Classical cellular automata. Left: rule 90 producing the fractal Sierpiński’s triangle. Middle: rule 30 can be used to generate random numbers. Right: rule 110 is capable of universal computation.

In QCA, bits are promoted to qubits. Instead of being just 0 or 1 like a bit, a qubit can be a continuous mixture of both 0 and 1, a property called superposition. In QCA, whether a qubit’s two neighbors are 0 or 1 determines whether the qubit changes. For example, when in an active neighborhood configuration, a qubit can be coded to change from 0 to “0 plus 1” or from 1 to “0 minus 1”. This is already a head-scratcher, but things get even weirder. If a qubit’s neighbors are in a superposition, then the center qubit can become entangled with those neighbors. Entanglement correlates qubits in a way that is not possible with classical bits.

Do QCA support the emergent complexity observed in their classical cousins? What are the effects of a continuous state space, superposition, and entanglement? My colleagues and I attacked these questions by re-examining many-body physics tools through the lens of complexity science. Singing the lead, we have a workhorse of quantum and solid-state physics: two-point correlations. Singing harmony we have the bread-and-butter of network analysis: complex-network measures. The duet between the two tells the story of structured correlations in QCA dynamics.

In a bit more detail, at each QCA timestep we calculate the mutual information between each qubit i and each other qubit j. Doing so reveals how much there is to learn about one qubit by measuring another, including effects of quantum entanglement. Visualizing each qubit as a node, we can depict the mutual information as weighted links between nodes: the more correlated two qubits are, the more strongly they are linked. The collection of nodes and links makes a network. Some QCA form unstructured, randomly linked networks, while others are highly structured.
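As an illustration of this recipe (a sketch of the general construction, not our paper’s actual analysis pipeline; the function names and conventions are mine), one can compute the mutual-information matrix of a small pure state directly:

```python
# Sketch: pairwise mutual information I(i:j) = S(i) + S(j) - S(ij)
# between qubits of a pure state, arranged as a weighted adjacency matrix.
import numpy as np

def partial_trace(psi, keep, n):
    """Reduced density matrix of the qubits listed (sorted) in `keep`."""
    psi = np.asarray(psi, dtype=complex).reshape([2] * n)
    drop = [q for q in range(n) if q not in keep]
    # Contract the dropped indices of |psi> against those of <psi|.
    rho = np.tensordot(psi, psi.conj(), axes=(drop, drop))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def entropy(rho):
    """Von Neumann entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def mutual_information_network(psi, n):
    """Symmetric matrix M with M[i, j] = I(i : j); diagonal left at zero."""
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            info = (entropy(partial_trace(psi, [i], n))
                    + entropy(partial_trace(psi, [j], n))
                    - entropy(partial_trace(psi, [i, j], n)))
            M[i, j] = M[j, i] = info
    return M
```

Feeding in a Bell pair returns the maximal value, two bits, between the entangled qubits, while a product state returns zeros; thresholding or weighting this matrix yields the networks described above.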

Complex-network measures are designed to highlight certain structural patterns within a network. Historically, these measures have been used to study diverse networked systems like friend groups on Facebook, biomolecule pathways in metabolism, and functional connectivity in the brain. Remarkably, the most structured QCA networks we observed quantitatively resemble those of the complex systems just mentioned, despite their simple construction and quantum unitary dynamics.

Visualizing mutual information networks. Left: a Goldilocks-QCA-generated network. Right: a random network.

What’s more, the particular QCA that generate the most complex networks are those that balance the activity-inactivity trade-off. From this observation, we formulate what we call the Goldilocks principle: QCA that generate the most complexity are those that change a qubit if and only if the qubit’s neighbors contain an equal number of 1’s and 0’s. The Goldilocks rules are neither too inactive nor too active, balancing the tradeoff to be “just right.”  We demonstrated the Goldilocks principle for QCA with nearest-neighbor constraints as well as QCA with nearest-and-next-nearest-neighbor constraints.

To my delight, the scientific conclusions of my QCA research resonate with broader lessons-learned from my time as a PhD student: Life is full of trade-offs, and finding the right balance is key to achieving that “just right” feeling.

Cutting the quantum mustard

I had a relative to whom my parents referred, when I was little, as “that great-aunt of yours who walked into a glass door at your cousin’s birthday party.” I was a small child in a large family that mostly lived far away; little else distinguished this great-aunt from other relatives, in my experience. She’d intended to walk from my grandmother’s family room to the back patio. A glass door stood in the way, but she didn’t see it. So my great-aunt whammed into the glass; spent part of the party on the couch, nursing a nosebleed; and earned the epithet via which I identified her for years.

After growing up, I came to know this great-aunt as a kind, gentle woman who adored her family and was adored in return. After growing into a physicist, I came to appreciate her as one of my earliest instructors in necessary and sufficient conditions.

My great-aunt’s intended path satisfied one condition necessary for her to reach the patio: Nothing visible obstructed the path. But the path failed to satisfy a sufficient condition: The invisible obstruction—the glass door—had been neither slid nor swung open. Sufficient conditions, my great-aunt taught me, mustn’t be overlooked.

Her lesson underlies a paper I published this month, with coauthors from the Cambridge other than mine—Cambridge, England: David Arvidsson-Shukur and Jacob Chevalier Drori. The paper concerns, rather than pools and patios, quasiprobabilities, which I’ve blogged about many times [1,2,3,4,5,6,7]. Quasiprobabilities are quantum generalizations of probabilities. Probabilities describe everyday, classical phenomena, from Monopoly to March Madness to the weather in Massachusetts (and especially the weather in Massachusetts). Probabilities are real numbers (not dependent on the square root of −1); they’re at least zero; and they compose in certain ways (the probability of sun or hail equals the probability of sun plus the probability of hail). Also, the probabilities that form a distribution, or a complete set, sum to one (if there’s a 70% chance of rain, there’s a 30% chance of no rain).

In contrast, quasiprobabilities can be negative and nonreal. We call such values nonclassical, as they’re unavailable to the probabilities that describe classical phenomena. Quasiprobabilities represent quantum states: Imagine some clump of particles in a quantum state described by some quasiprobability distribution. We can imagine measuring the clump however we please. We can calculate the possible outcomes’ probabilities from the quasiprobability distribution.

Not from my grandmother’s house, although I wouldn’t mind if it were.

My favorite quasiprobability is an obscure fellow unbeknownst even to most quantum physicists: the Kirkwood-Dirac distribution. John Kirkwood defined it in 1933, and Paul Dirac defined it independently in 1945. Then, quantum physicists forgot about it for decades. But the quasiprobability has undergone a renaissance over the past few years: Experimentalists have measured it to infer particles’ quantum states in a new way. Also, colleagues and I have generalized the quasiprobability and discovered applications of the generalization across quantum physics, from quantum chaos to metrology (the study of how we can best measure things) to quantum thermodynamics to the foundations of quantum theory.
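For the numerically inclined, here is a minimal sketch of a Kirkwood-Dirac distribution for a single qubit. The state and bases are illustrative choices of mine (Pauli-Z and Pauli-X eigenbases, a Pauli-Y eigenstate), not an example drawn from any particular paper; the point is only that the entries can be nonreal while the distribution still sums to one, like a well-behaved probability distribution.

```python
import numpy as np

# Kirkwood-Dirac quasiprobability Q_ij = <b_j|a_i> <a_i|rho|b_j>
# for a state rho and two orthonormal bases {|a_i>}, {|b_j>}.

# Basis A: eigenvectors of Pauli-Z; basis B: eigenvectors of Pauli-X.
a = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
b = [np.array([1, 1], dtype=complex) / np.sqrt(2),
     np.array([1, -1], dtype=complex) / np.sqrt(2)]

# A state that fails to commute with both bases: an eigenstate of Pauli-Y.
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

Q = np.array([[(b[j].conj() @ a[i]) * (a[i].conj() @ rho @ b[j])
               for j in range(2)] for i in range(2)])

print(Q)        # some entries have nonzero imaginary parts
print(Q.sum())  # yet the distribution sums to 1, as probabilities do
```

Swapping in a state or bases that commute with each other makes every entry real and nonnegative, recovering an ordinary joint probability distribution.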

In some applications, nonclassical quasiprobabilities enable a system to achieve a quantum advantage—to usefully behave in a manner impossible for classical systems. Examples include metrology: Imagine wanting to measure a parameter that characterizes some piece of equipment. You’ll perform many trials of an experiment. In each trial, you’ll prepare a system (for instance, a photon) in some quantum state, send it through the equipment, and measure one or more observables of the system. Say that you follow the protocol described in this blog post. A Kirkwood-Dirac quasiprobability distribution describes the experiment.1 From each trial, you’ll obtain information about the unknown parameter. How much information can you obtain, on average over trials? Potentially more information if some quasiprobabilities are negative than if none are. The quasiprobabilities can be negative only if the state and observables fail to commute with each other. So noncommutation—a hallmark of quantum physics—underlies exceptional metrological results, as shown by Kirkwood-Dirac quasiprobabilities.

Exceptional results are useful, and we might aim to design experiments that achieve them. We can do so by designing experiments described by nonclassical Kirkwood-Dirac quasiprobabilities. When can the quasiprobabilities become nonclassical? Whenever the relevant quantum state and observables fail to commute, the quantum community used to believe. This belief turns out to mirror the expectation that one could access my grandmother’s back patio from the family room whenever no visible barriers obstructed the path. As a lack of visible barriers was necessary for patio access, noncommutation is necessary for Kirkwood-Dirac nonclassicality. But noncommutation doesn’t suffice, according to my paper with David and Jacob. We identified a sufficient condition, sliding back the metaphorical glass door on Kirkwood-Dirac nonclassicality. The condition depends on simple properties of the system, state, and observables. (Experts: Examples include the Hilbert space’s dimensionality.) We also quantified, and upper-bounded, the amount of nonclassicality that a Kirkwood-Dirac quasiprobability can contain.

From an engineering perspective, our results can inform the design of experiments intended to achieve certain quantum advantages. From a foundational perspective, the results help illuminate the sources of certain quantum advantages. To achieve certain advantages, noncommutation doesn’t cut the mustard—but we now know a condition that does.

For another take on our paper, check out this news article in Physics Today.  

1Really, a generalized Kirkwood-Dirac quasiprobability. But that phrase contains a horrendous number of syllables, so I’ll elide the “generalized.”

Peeking into the world of quantum intelligence

Intelligent beings have the ability to receive, process, and store information and, based on the processed information, to predict what will happen in the future and act accordingly.

An illustration of receiving, processing, and storing information. Based on the processed information, one can make predictions about the future.
[Credit: Claudia Cheng]

We, as intelligent beings, receive, process, and store classical information. The information comes from vision, hearing, smell, and tactile sensing. The data are encoded as analog classical information in the electrical pulses sent through our nerve fibers. Our brain processes this information classically through neural circuits (at least that is our current understanding, but one should check out this blogpost). We then store the processed classical information in the hippocampus, which allows us to retrieve it later and combine it with future information that we obtain. Finally, we use the stored classical information to make predictions about the future (imagining the outcomes of performing certain actions) and choose the action that would most likely be in our favor.

Such abilities have enabled us to make remarkable accomplishments: soaring in the sky by constructing accurate models of how air flows around objects, or building weak forms of intelligent beings capable of holding basic conversations and playing various board games. Instead of receiving/processing/storing classical information, one could imagine some form of quantum intelligence that deals with quantum information instead. These quantum beings could receive quantum information through quantum sensors built from tiny photons and atoms. They would then process this quantum information with quantum-mechanical evolutions (such as quantum computers) and store the processed qubits in a quantum memory (protected with a surface code or a toric code).

A caricature of human intelligence, dating to long before 1950; artificial intelligence, which began in the ’50s; and the emergence of quantum intelligence.
[Credit: Claudia Cheng]

It is natural to wonder what a world of quantum intelligence would be like. While we have never encountered such a strange creature in the real world (yet), the mathematics of quantum mechanics, machine learning, and information theory allows us to peek into what such a fantastic world would be like. The physical world we live in is intrinsically quantum. So one may imagine that a quantum being is capable of making more powerful predictions than a classical being. Maybe he/she/they could better predict events happening far away, such as telling us how a distant black hole is engulfing another? Or perhaps he/she/they could improve our lives, for example by presenting us with an entirely new approach for capturing energy from sunlight?

One may be skeptical about finding quantum intelligent beings in nature (and rightfully so). But it may not be so absurd to synthesize a weak form of quantum (artificial) intelligence in an experimental lab, or enhance our classical human intelligence with quantum devices to approximate a quantum-mechanical being. Many famous companies, like Google, IBM, Microsoft, and Amazon, as well as many academic labs and startups have been building better quantum machines/computers day by day. By combining the concepts of machine learning on classical computers with these quantum machines, the future of us interacting with some form of quantum (artificial) intelligence may not be so distant.

Before that day comes, could we peek into the world of quantum intelligence? And could we better understand how much more powerful it could be than classical intelligence?

A cartoon depiction of me (Left), Richard Kueng (Middle), and John Preskill (Right).
[Credit: Claudia Cheng]

In a recent publication [1], my advisor John Preskill, my good friend Richard Kueng, and I made some progress toward these questions. We consider a quantum-mechanical world in which classical beings can obtain classical information by measuring the world (performing POVM measurements). In contrast, quantum beings can retrieve quantum information through quantum sensors and store the data in a quantum memory. We study how much better quantum beings could learn from the physical world than classical beings could, so as to accurately predict the outcomes of unseen events (with the focus on the number of interactions with the physical world rather than on computation time). We cast these problems in a rigorous mathematical framework and utilize high-dimensional probability and quantum information theory to understand the models’ respective prediction powers. Rigorously, one refers to a classical/quantum being as a classical/quantum model, algorithm, protocol, or procedure. This is because the actions of these classical/quantum beings are the center of the mathematical analysis.

Formally, we consider the task of learning an unknown physical evolution described by a CPTP map \mathcal{E} that takes in an n-qubit state and outputs an m-qubit state. The classical model can select an arbitrary classical input to the CPTP map and measure the output state of the CPTP map with some POVM. The quantum model can access the CPTP map coherently and obtain quantum data from each access, which is equivalent to composing multiple CPTP maps with quantum computations to learn about the CPTP map. The task is to predict a property of the output state \mathcal{E}(\lvert x \rangle\!\langle x \rvert), given by \mathrm{Tr}(O \mathcal{E}(\lvert x \rangle\!\langle x \rvert)), for a new classical input x \in \{0, 1\}^n. The goal is to achieve the task while accessing \mathcal{E} as few times as possible (i.e., with fewer interactions or experiments in the physical world). We denote the numbers of interactions needed by the classical and quantum models as N_{\mathrm{C}} and N_{\mathrm{Q}}.
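To make the classical model’s side of this task concrete, here is a toy single-qubit sketch of my own (the channel, observable, and shot count are illustrative assumptions, not a construction from [1]): take \mathcal{E} to be a fixed rotation, take O to be Pauli-Z, and estimate \mathrm{Tr}(O \mathcal{E}(\lvert x \rangle\!\langle x \rvert)) by repeatedly measuring the output in the eigenbasis of O and averaging the outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = np.pi / 8
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # the "unknown" channel: a rotation
Z = np.diag([1.0, -1.0])                         # observable O = Pauli-Z

def run_experiment(x, shots):
    """Prepare |x>, send it through the channel, measure in the
    eigenbasis of O (a two-outcome projective POVM), and average."""
    ket = np.zeros(2)
    ket[x] = 1.0
    out = U @ ket                   # output state is pure here: U|x>
    p_plus = out[0] ** 2            # Born-rule probability of outcome +1
    samples = rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1 - p_plus])
    return samples.mean()

estimate = run_experiment(0, shots=20000)
exact = float(np.cos(2 * theta))    # Tr(Z U|0><0|U†) = cos(2θ)
```

Each trial yields one classical bit; driving the estimate’s error down to \epsilon costs roughly 1/\epsilon^2 repetitions, which is the kind of interaction count that N_{\mathrm{C}} tallies.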

In general, quantum models could learn from fewer interactions with the physical world (or experiments in the physical world) than classical models. This is because coherent quantum information can facilitate better information synthesis with information obtained from previous experiments. Nevertheless, in [1], we show that there is a fundamental limit to how much more efficient quantum models can be. In order to achieve a prediction error

\mathbb{E}_{x \sim \mathcal{D}} |h(x) -  \mathrm{Tr}(O \mathcal{E}(\lvert x \rangle\!\langle x \rvert))| \leq \mathcal{O}(\epsilon),

where h(x) is the hypothesis learned from the classical/quantum model and \mathcal{D} is an arbitrary distribution over the input space \{0, 1\}^n, we found that the speed-up N_{\mathrm{C}} / N_{\mathrm{Q}} is upper bounded by m / \epsilon, where m > 0 is the number of qubits each experiment provides (the output number of qubits in the CPTP map \mathcal{E}), and \epsilon > 0 is the desired prediction error (smaller \epsilon means we want to predict more accurately).

In contrast, when we want to accurately predict all unseen events, we prove that quantum models could use exponentially fewer experiments than classical models. We give a construction for predicting properties of quantum systems showing that quantum models could substantially outperform classical models. These rigorous results show that quantum intelligence shines when we seek stronger prediction performance.

We have only scratched the surface of what is possible with quantum intelligence. As the future unfolds, I am hopeful that we will discover more that can be done only by quantum intelligence, through mathematical analysis, rigorous numerical studies, and physical experiments.

Further information:

  • A classical model that can be used to accurately predict properties of quantum systems is the classical shadow formalism [2] that we proposed a year ago. In many tasks, this model can be shown to be one of the strongest rivals that quantum models have to surpass.
  • Even if a quantum model only receives and stores classical data, the ability to process the data using a quantum-mechanical evolution can still be advantageous [3]. However, obtaining a large advantage will be harder in this case, as the computational power of data can slightly boost classical machines/intelligence [3].
  • Another nice paper, by Dorit Aharonov, Jordan Cotler, and Xiao-Liang Qi [4], also proved advantages of quantum models over classical ones in some classification tasks.


[1] Huang, Hsin-Yuan, Richard Kueng, and John Preskill. “Information-Theoretic Bounds on Quantum Advantage in Machine Learning.” Physical Review Letters 126: 190505 (2021). https://doi.org/10.1103/PhysRevLett.126.190505

[2] Huang, Hsin-Yuan, Richard Kueng, and John Preskill. “Predicting many properties of a quantum system from very few measurements.” Nature Physics 16: 1050-1057 (2020). https://doi.org/10.1038/s41567-020-0932-7

[3] Huang, Hsin-Yuan, et al. “Power of data in quantum machine learning.” Nature communications 12.1 (2021): 1-9. https://doi.org/10.1038/s41467-021-22539-9

[4] Aharonov, Dorit, Jordan Cotler, and Xiao-Liang Qi. “Quantum Algorithmic Measurement.” arXiv preprint arXiv:2101.04634 (2021).

Learning about learning

The autumn of my sophomore year of college was mildly hellish. I took the equivalent of three semester-long computer-science and physics courses, atop other classwork; co-led a public-speaking self-help group; and coordinated a celebrity visit to campus. I lived at my desk and in office hours, always declining my flatmates’ invitations to watch The West Wing.

Hard as I studied, my classmates enjoyed greater facility with the computer-science curriculum. They saw immediately how long an algorithm would run, while I hesitated and then computed the run time step by step. I felt behind. So I protested when my professor said, “You’re good at this.” 

I now see that we were focusing on different facets of learning. I rued my lack of intuition. My classmates had gained intuition by exploring computer science in high school, then slow-cooking their experiences on a mental back burner. Their long-term exposure to the material provided familiarity—the ability to recognize a new problem as belonging to a class they’d seen examples of. I was cooking course material in a mental microwave set on “high,” as a semester’s worth of material was crammed into ten weeks at my college.

My professor wasn’t measuring my intuition. He only saw that I knew how to compute an algorithm’s run time. I’d learned the material required of me—more than I realized, being distracted by what I hadn’t learned that difficult autumn.

We can learn a staggering amount when pushed far from our comfort zones—and not only we humans can. So can simple collections of particles.

Examples include a classical spin glass. A spin glass is a collection of particles that shares some properties with a magnet. Both a magnet and a spin glass consist of tiny mini-magnets called spins. Although I’ve blogged about quantum spins before, I’ll focus on classical spins here. We can imagine a classical spin as a little arrow that points upward or downward.  A bunch of spins can form a material. If the spins tend to point in the same direction, the material may be a magnet of the sort that’s sticking the faded photo of Fluffy to your fridge.

The spins may interact with each other, similarly to how electrons interact with each other. Not entirely similarly, though—electrons push each other away. In contrast, a spin may coax its neighbors into aligning or anti-aligning with it. Suppose that the interactions are random: Any given spin may force one neighbor into alignment, gently ask another neighbor to align, entreat a third neighbor to anti-align, and have nothing to say to neighbors four and five.

The spin glass can interact with the external world in two ways. First, we can stick the spins in a magnetic field, as by placing magnets above and below the glass. If aligned with the field, a spin has negative energy; and, if antialigned, positive energy. We can sculpt the field so that it varies across the spin glass. For instance, spin 1 can experience a strong upward-pointing field, while spin 2 experiences a weak downward-pointing field.

Second, say that the spins occupy a fixed-temperature environment, as I occupy a 74-degree-Fahrenheit living room. The spins can exchange heat with the environment. If releasing heat to the environment, a spin flips from having positive energy to having negative—from antialigning with the field to aligning.

Let’s perform an experiment on the spins. First, we design a magnetic field using random numbers. Whether the field points upward or downward at any given spin is random, as is the strength of the field experienced by each spin. We sculpt three of these random fields and call the trio a drive.

Let’s randomly select a field from the drive and apply it to the spin glass for a while; again, randomly select a field from the drive and apply it; and continue many times. The energy absorbed by the spins from the fields spikes, then declines.
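The driving protocol above can be sketched in a few lines of Metropolis dynamics. Everything here—the system size, temperature, coupling and field statistics, and number of driving steps—is an illustrative assumption of mine, not the setup from the paper; the sketch only shows the mechanics of applying randomly chosen fields from a drive to a spin glass in a thermal environment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Spin-glass energy: E(s) = -(1/2) s·J·s - h·s, with random symmetric
# couplings J (zero diagonal) and a field pattern h drawn from a drive.
n, T = 50, 1.0
J = rng.normal(size=(n, n))
J = np.triu(J, 1)
J = J + J.T                                   # symmetric couplings, J_ii = 0
drive = [rng.normal(size=n) for _ in range(3)]  # a drive: three random field patterns

def energy(s, h):
    return -0.5 * s @ J @ s - h @ s

def sweep(s, h):
    """One Metropolis sweep at temperature T: attempt one flip per spin."""
    for i in range(n):
        dE = 2 * s[i] * (J[i] @ s + h[i])     # energy cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
    return s

s = rng.choice([-1, 1], size=n)               # random initial configuration
energies = []
for _ in range(200):                          # apply randomly chosen fields from the drive
    h = drive[rng.integers(3)]
    s = sweep(s, h)
    energies.append(energy(s, h))
```

Tracking the work absorbed across many such steps—for this drive, then a fresh one, then this one again—is how the “fingerprint” experiments detect the glass’s memory.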

Now, let’s create another drive of three random fields. We’ll randomly pick a field from this drive and apply it; again, randomly pick a field from this drive and apply it; and so on. Again, the energy absorbed by the spins spikes, then tails off.

Here comes the punchline. Let’s return to applying the initial fields. The energy absorbed by the glass will spike—but not as high as before. The glass responds differently to a familiar drive than to a new drive. The spin glass recognizes the original drive—has learned the first fields’ “fingerprint.” This learning happens when the fields push the glass far from equilibrium,1 as I learned when pushed during my mildly hellish autumn.

So spin glasses learn drives that push them far from equilibrium. So do many other simple, classical, many-particle systems: polymers, viscous liquids, crumpled sheets of Mylar, and more. Researchers have predicted such learning and observed it experimentally. 

Scientists have detected many-particle learning by measuring thermodynamic observables. Examples include the energy absorbed by the spin glass—what thermodynamicists call work. But thermodynamics developed during the 1800s, to describe equilibrium systems, not to study learning. 

One study of learning—the study of machine learning—has boomed over the past two decades. As described by the MIT Technology Review, “[m]achine-learning algorithms use statistics to find patterns in massive amounts of data.” Users don’t tell the algorithms how to find those patterns.


It seems natural and fitting to use machine learning to learn about the learning by many-particle systems. That’s what I did with collaborators from the group of Jeremy England, a GlaxoSmithKline physicist who studies complex behaviors of many-particle systems. Weishun Zhong, Jacob Gold, Sarah Marzen, Jeremy, and I published our paper last month.

Using machine learning, we detected and measured many-particle learning more reliably and precisely than thermodynamic measures seem able to. Our technique works on multiple facets of learning, analogous to the intuition and the computational ability I encountered in my computer-science course. We illustrated our technique on a spin glass, but one can apply our approach to other systems, too. I’m exploring such applications with collaborators at the University of Maryland.

The project pushed me far from my equilibrium: I’d never worked with machine learning or many-body learning. But it’s amazing, what we can learn when pushed far from equilibrium. I first encountered this insight sophomore fall of college—and now, we can quantify it better than ever.

1Equilibrium is a quiet, restful state in which the glass’s large-scale properties change little. No net flow of anything—such as heat or particles—enters or leaves the system.

One if by land minus two if by sea, over the square-root of two

Happy National Poetry Month! The United States salutes word and whimsy in April, and Quantum Frontiers is continuing its tradition of celebrating. As a resident of Cambridge, Massachusetts and as a quantum information scientist, I have trouble avoiding the poem “Paul Revere’s Ride.” 

Henry Wadsworth Longfellow wrote the poem, as well as others in the American canon, during the 1800s. Longfellow taught at Harvard in Cambridge, and he lived a few blocks away from the university, in what’s now a national historic site. Across the street from the house, a bust of the poet gazes downward, as though lost in thought, in Longfellow Park. Longfellow wrote one of his most famous poems about an event staged a short drive from—and, arguably, partially in—Cambridge.

Longfellow Park

The event took place “on the eighteenth of April, in [Seventeen] Seventy-Five,” as related by the narrator of “Paul Revere’s Ride.” Revere was a Boston silversmith and a supporter of the American colonies’ independence from Britain. Revolutionaries anticipated that British troops would set out from Boston sometime during the spring. The British planned to seize revolutionaries’ weapons in the nearby town of Concord and to jail revolutionary leaders in Lexington. The troops departed Boston during the night of April 18th. 

Upon learning of their movements, sexton Robert Newman sent a signal from Boston’s old North Church to Charlestown. Revere and the physician William Dawes rode out from Charlestown to warn the people of Lexington and the surrounding areas. A line of artificial hoof prints, pressed into a sidewalk a few minutes from the Longfellow house, marks part of Dawes’s trail through Cambridge. The initial riders galvanized more riders, who stirred up colonial militias that resisted the troops’ advance. The Battles of Lexington and Concord ensued, initiating the Revolutionary War.

Longfellow took liberties with the facts he purported to relate. But “Paul Revere’s Ride” has blown the dust off history books for generations of schoolchildren. The reader shares Revere’s nervous excitement as he fidgets, awaiting Newman’s signal: 

Now he patted his horse’s side, 
Now gazed on the landscape far and near, 
Then impetuous stamped the earth, 
And turned and tightened his saddle-girth;
But mostly he watched with eager search 
The belfry-tower of the old North Church.

The moment the signal arrives, that excitement bursts its seams, and Revere leaps astride his horse. The reader comes to gallop with the silversmith through the night, the poem’s clip-clop-clip-clop rhythm evoking a horse’s hooves on cobblestones.

The author, outside Longfellow House, on the eighteenth of April in…Twenty Twenty.

Not only does “Paul Revere’s Ride” revitalize history, but it also offers a lesson in information theory. While laying plans, Revere instructs Newman: 

He said to his friend, “If the British march
By land or sea from the town to-night,
Hang a lantern aloft in the belfry-arch
Of the North-Church-tower, as a signal light.

Then comes one of the poem’s most famous lines: “One if by land, and two if by sea.” The British could have left Boston by foot or by boat, and Newman had to communicate which. Specifying one of two options, he related one bit, or one basic unit of information. Newman thereby exemplifies a cornerstone of information theory: the encoding of a bit of information—an abstraction—in a physical system that can be in one of two possible states—a light that shines from one or two lanterns.
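Newman’s signal can even be quantified in a couple of lines of code: selecting one of N equally likely alternatives conveys log2(N) bits, so one-lantern-versus-two conveys exactly one bit. (The helper function below is mine, for illustration.)

```python
import math

def bits(num_options):
    """Information conveyed by selecting one of equally likely options."""
    return math.log2(num_options)

print(bits(2))  # one lantern or two: 1 bit
print(bits(4))  # had the British had four possible routes: 2 bits
```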

Benjamin Schumacher and Michael Westmoreland point out the information-theoretic interpretation of Newman’s action in their quantum-information textbook. I used their textbook in my first quantum-information course, as a senior in college. Before reading the book, I’d never felt that I could explain what information is or how it can be quantified. Information is an abstraction and a Big Idea, like consciousness, life, and piety. But, Schumacher and Westmoreland demonstrated, most readers already grasp the basics of information theory; some readers even studied the basics while memorizing a poem in elementary school. So I doff my hat—or, since we’re discussing the 1700s, my mobcap—to the authors.

Reading poetry enriches us more than we realize. So read a poem this April. You can find Longfellow’s poem here or ride off wherever your fancy takes you.