# Glass beads and weak-measurement schemes

Richard Feynman fiddled with electronics in a home laboratory, growing up. I fiddled with arts and crafts.1 I glued popsicle sticks, painted plaques, braided yarn, and designed greeting cards. Of the supplies in my family’s crafts box, I adored the beads most. Of the beads, I favored the glass ones.

I would pour them on the carpet, some weekend afternoons. I’d inherited a hodgepodge: The beads’ sizes, colors, shapes, trimmings, and craftsmanship varied. No property divided the beads into families whose members looked like they belonged together. But divide the beads I tried. I might classify them by color, then subdivide classes by shape. The color and shape groupings precluded me from grouping by size. But, by loosening my original classification and combining members from two classes, I might incorporate trimmings into the categorization. I’d push my classification scheme as far as I could. Then, I’d rake the beads together and reorganize them according to different principles.

Why have I pursued theoretical physics? many people ask. I have many answers. They include “Because I adored organizing craft supplies at age eight.” I craft and organize ideas.

I’ve blogged about the out-of-time-ordered correlator (OTOC), a signature of how quantum information spreads throughout a many-particle system. Experimentalists want to measure the OTOC, to learn how information spreads. But measuring the OTOC requires tight control over many quantum particles.

I proposed a scheme for measuring the OTOC, with help from Chapman University physicist Justin Dressel. The scheme involves weak measurements. Weak measurements barely disturb the systems measured. (Most measurements of quantum systems disturb the measured systems. So intuited Werner Heisenberg when formulating his uncertainty principle.)

I had little hope for the weak-measurement scheme’s practicality. Consider the stereotypical experimentalist’s response to a stereotypical experimental proposal by a theorist: Oh, sure, we can implement that—in thirty years. Maybe. If the pace of technological development doubles. I expected to file the weak-measurement proposal in the “unfeasible” category.

But experimentalists started collaring me. The scheme sounds reasonable, they said. How many trials would one have to perform? Did the proposal require ancillas, helper systems used to control the measured system? Must each ancilla influence the whole measured system, or could an ancilla interact with just one particle? How did this proposal compare with alternatives?

I met with a cavity-QED2 experimentalist and a cold-atoms expert. I talked with postdocs over Skype, with heads of labs at Caltech, with grad students in Taiwan, and with John Preskill in his office. I questioned an NMR3 experimentalist over lunch and fielded superconducting-qubit4 questions in the sunshine. I read papers, reread papers, and powwowed with Justin.

I wouldn’t have managed half so well without Justin and without Brian Swingle. Brian and coauthors proposed the first OTOC-measurement scheme. He reached out after finding my first OTOC paper.

According to that paper, the OTOC is a moment of a quasiprobability.5 How, we wondered, does that quasiprobability look? How does it behave? What properties does it have? Our answers appear in a paper that we released with Justin this month. We calculate the quasiprobability in two examples, prove properties of the quasiprobability, and argue that the OTOC motivates generalizations of quasiprobability theory. We also enhance the weak-measurement scheme and analyze it.

Amidst that analysis, in a 10 × 6 table, we classify glass beads.

We inventoried our experimental conversations and distilled them. We culled measurement-scheme features analogous to bead size, color, and shape. Each property labels a row in the table. Each measurement scheme labels a column. Each scheme has, I learned, gold flecks and dents, hues and mottling, an angle at which it catches the light.

I’ve kept most of the glass beads that fascinated me at age eight. Some of the beads have dispersed to necklaces, picture frames, and eyeglass leashes. I moved the remnants, a few years ago, to a compartmentalized box. Doesn’t it resemble the table?

That’s why I work at the IQIM.

1I fiddled in a home laboratory, too, in a garage. But I lived across the street from that garage. I lived two rooms from an arts-and-crafts box.

2Cavity QED consists of light interacting with atoms in a box.

3Lots of nuclei manipulated with magnetic fields. “NMR” stands for “nuclear magnetic resonance.” MRI machines, used to scan brains, rely on NMR.

4Superconducting circuits are tiny, cold quantum circuits.

5A quasiprobability resembles a probability but behaves more oddly: Probabilities range between zero and one; quasiprobabilities can dip below zero. Think of a moment as like an average.

With thanks to all who questioned me; to all who answered questions of mine; to my wonderful coauthors; and to my parents, who stocked the crafts box.

# The entangled fabric of space

We live in the information revolution. We translate everything into vast sequences of ones and zeroes. From our personal email to our work documents, from our heart rates to our credit rates, from our preferred movies to our movie preferences, all things information are represented using this minimal {0,1} alphabet which our digital helpers “understand” and process. Many of us physicists are now taking this information revolution to heart and embracing the “It from qubit” motto. Our dream: to understand space, time and gravity as emergent features in a world made of information – quantum information.

Over the past two years, I have been obsessively trying to understand this profound perspective more rigorously. John Preskill and I have taken a further step in this direction in our recent paper, “Quantum code properties from holographic geometries.” In it, we make progress in interpreting features of the holographic approach to quantum gravity in terms of quantum information constructs.

In this post I would like to present some context for this work through analogies that, I hope, convey the general ideas intuitively. While still containing some technical content, this post is unlikely to satisfy readers seeking a precise, in-depth presentation. To you I can only recommend the masterfully delivered lecture notes on gravity and entanglement by Mark Van Raamsdonk.

## Entanglement as a cat’s cradle

A cat’s cradle serves as a crude metaphor for quantum mechanical entanglement. The full image provides a complete description of the string and how it is laced in a stable configuration around the two hands. However, this lacing does not describe a stable configuration of half the string on one hand. The string would become disentangled and fall if we were to suddenly remove one of the hands or cut through the middle.

Of all the concepts needed to explain emergent spacetime, maybe the most difficult is that of quantum entanglement. While the word seems to convey some kind of string wound up in a complicated way, it is actually a quality which may describe information in quantum mechanical systems. In particular, it applies to a system for which we have a complete description as a whole, but are only capable of describing certain statistical properties of its parts. In other words, our knowledge of the whole loses predictive power when we are only concerned with the parts. Entanglement entropy is a measure of information which quantifies this.
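
The loss of predictive power about the parts can be made concrete in the simplest case, two qubits. The sketch below (the function name and example states are my own illustrative choices) shows that a product state carries zero entanglement entropy, while a maximally entangled Bell pair carries one full bit: the whole is known perfectly, yet each half is maximally unpredictable.

```python
import numpy as np

def entanglement_entropy(state, dims=(2, 2)):
    """Von Neumann entropy (in bits) of the first subsystem of a pure state."""
    # Singular values of the reshaped state vector are the Schmidt coefficients.
    s = np.linalg.svd(np.reshape(state, dims), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]          # drop zero Schmidt weights (log 0 is undefined)
    return max(0.0, float(-(p * np.log2(p)).sum()))  # clamp -0.0 to 0.0

product = np.array([1.0, 0.0, 0.0, 0.0])     # |0>|0>: the whole determines the parts
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

print(round(entanglement_entropy(product), 6))  # 0.0
print(round(entanglement_entropy(bell), 6))     # 1.0
```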

While our metaphor for entanglement is quite crude, it will serve the purpose of this post. Namely, to illustrate one of the driving premises for the holographic approach to quantum gravity, that the very structure of spacetime is emergent and built up from entanglement entropy.

## Knit and crochet your way into the manifolds

But let us return to our metaphors and try to convey the content of this identification. For this, we resort to the unlikely worlds of knitting and crochet. Indeed, by a planned combination of individual loops and stitches, these traditional crafts can approximate any kind of surface (2D Riemannian surface would be the technical term).

Here I have presented some examples with uniform curvature R: flat in green, positive curvature (ball) in yellow and negative curvature (coral reef) in purple. While actual practitioners may be more interested in getting the shape right on hats and socks for loved ones, for us the point is that if we take a step back, these objects built of simple loops, hooks and stitches could end up looking a lot like the smooth surfaces that a physicist might like to use to describe 2D space. This is cute, but can we push this metaphor even further?

Well, first of all, although the pictures above are only representing 2D surfaces, we can expect that a similar approach should allow approximating 3D and even higher dimensional objects (again the technical term is Riemannian manifolds). It would just make things much harder to present in a picture. These woolen structures are, in fact, quite reminiscent of tensor networks, a modern mathematical construct widely used in the field of quantum information. There too, we combine basic building blocks (tensors) through simple operations (tensor index contraction) to build a more complex composite object. In the tensor network world, the structure of the network (how its nodes are connected to other nodes) generically defines the entanglement structure of the resulting object.
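
For readers who like to see the stitch itself: tensor index contraction is just a summation over a shared index. A minimal NumPy sketch (the shapes and index names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "tensors" (nodes) joined along one shared index j (an edge of the
# network). Contracting j fuses them into one composite tensor, the way a
# stitch joins two loops of yarn.
A = rng.normal(size=(2, 3))       # indices (i, j)
B = rng.normal(size=(3, 4))       # indices (j, k)
C = np.einsum("ij,jk->ik", A, B)  # sum over the shared index j

print(C.shape)                 # (2, 4): only the dangling indices i and k remain
print(np.allclose(C, A @ B))   # True: this particular contraction is a matrix product
```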

This regular tensor-network layout was used to describe hyperbolic space, which is similar to the purple crochet. However, the two look quite dissimilar a priori, owing to the use of the Poincaré disk model, in which tensors farther from the center look smaller. Another difference is that the high degree of regularity is achieved at the expense of having very few tensors per curvature radius (as compared with its purple crochet cousin). However, planarity and regularity don’t seem to be essential, so the crochet probably provides a better intuitive picture.

Roughly speaking, tensor networks are ingenious ways of encoding (quantum) inputs into (quantum) outputs. In particular, if you enter some input at the boundary of your tensor network, the tensors do the work of processing that information throughout the network so that if you ask for an output at any one of the nodes in the bulk of the tensor network, you get the right encoded answer. In other words, the information we input into the tensor network begins its journey at the dangling edges found at the boundary of the network and travels through the bulk edges by exploiting them as information bridges between the nodes of the network.

In the figure representing the cat’s cradle, these dangling input edges can be thought of as the fingers holding the wool. Now, if we partition these edges into two disjoint sets (say, the fingers on the left hand and the fingers on the right hand, respectively), there will be some amount of entanglement between them. How much? In general, we cannot say, but under certain assumptions we find that it is proportional to the minimum cut through the network. Imagine you had an incredible number of fingers holding your wool structure. Now separate these fingers arbitrarily into two subsets $L$ and $R$ (we may call them left hand and right hand, although there is nothing particularly right- or left-handed about them). By pulling the left hand and the right hand apart, the wool might stretch until at some point it breaks. How many threads will break? The question is analogous to the entanglement one. We might expect that a minimal number of threads break, such that each hand can go its own way. This is what we call the minimal cut. In tensor networks, entanglement entropy is always bounded above by such a minimal cut, and it has been confirmed that under certain conditions entanglement also reaches, or approximates, this bound. In this respect, our wool analogy seems to be working out.
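
The minimal-cut bound is easy to see in a toy computation. Below is a brute-force sketch (the five-node network and its node names are my own illustrative choices, not from any paper): the entanglement entropy across an $L$/$R$ split is bounded above by the number of cut edges times $\log_2$ of the bond dimension.

```python
from itertools import combinations

# Toy tensor network as an undirected graph. Boundary nodes l1, l2 are the
# "left-hand fingers"; r1, r2 the right-hand ones; a and b are bulk tensors.
edges = [("l1", "a"), ("l2", "a"), ("a", "b"), ("b", "r1"), ("b", "r2")]
left_boundary = {"l1", "l2"}
internal = ["a", "b"]

def min_cut(edges, left, internal):
    """Brute-force minimal cut: fewest edges whose removal separates the
    left-boundary nodes from everything not placed on their side."""
    best = len(edges)
    for k in range(len(internal) + 1):
        for subset in combinations(internal, k):
            side = left | set(subset)   # right-boundary nodes never join this side
            crossing = sum((u in side) != (v in side) for u, v in edges)
            best = min(best, crossing)
    return best

cut = min_cut(edges, left_boundary, internal)
print(cut)  # 1: only the single a-b "thread" must break
```

With bond dimension 2, each cut edge contributes at most one bit, so this network can hold at most one bit of entanglement across the split.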

## Holography

Holography, in the context of black holes, was sparked by a profound observation of Jacob Bekenstein and Stephen Hawking, which identified the surface area of a black hole horizon (in Planck units) with its entropy, or information content:

$S_{BH} = \frac{k A_{BH}}{4\ell_p^2}$.

Here, $S_{BH}$ is the entropy associated to the black hole, $A_{BH}$ is its horizon area, $\ell_p$ is the Planck length and $k$ is Boltzmann’s constant.
Why is this equation such a big deal? Well, there are many reasons, but let me emphasize one. For theoretical physicists, it is common to get rid of physical units by relating them through universal constants. For example, the theory of special relativity allows us to identify units of distance with units of time through the equation $d=ct$, using the speed of light $c$. Special relativity also allows us to identify mass and energy through the famous $E=mc^2$. By considering the Bekenstein-Hawking entropy, units of area are being swept away altogether! They are being identified with dimensionless units of information (one square meter is roughly $1.4\times10^{69}$ bits according to the Bousso bound).
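
The parenthetical figure can be checked in a few lines (using the CODATA value of the Planck length; the arithmetic, not the physics, is the point here):

```python
import math

# One square meter in Bekenstein-Hawking "units of information":
# S = A / (4 l_p^2) in nats (setting k = 1), divided by ln 2 to get bits.
l_p = 1.616255e-35        # Planck length, meters (CODATA)
area = 1.0                # one square meter
nats = area / (4 * l_p**2)
bits = nats / math.log(2)

print(f"{bits:.2e}")      # 1.38e+69, i.e. roughly 1.4e69 bits
```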

Initially, the identification of area and information was proposed to reconcile black holes with the laws of thermodynamics. However, it has turned out to be the main hint leading to the holographic principle, which states that the states describing a certain volume of space in a theory of quantum gravity can also be thought of as being represented at the lower-dimensional boundary of the given volume. This idea, put forth by Gerard ‘t Hooft, was later given a more precise interpretation by Leonard Susskind and subsequently by Juan Maldacena through the celebrated AdS/CFT correspondence. I will not dwell on the details of the AdS/CFT correspondence, as I am not an expert myself. However, this correspondence gave S. Ryu and T. Takayanagi (RT) a setting in which to vastly generalize the identification of area as an information quantity. They proposed identifying the area of minimal surfaces in the bulk (remember the minimal cut?) with entanglement entropy in the boundary theory.

Roughly speaking, if we were to split the boundary into two regions, left $L$ and right $R$, it should be possible to also partition the bulk in a way that each piece of the bulk has either $L$ or $R$ in its boundary. Ryu and Takayanagi proposed that the area of the smallest surface $\chi_R=\chi_L$ which splits the bulk in this way would be proportional to the entanglement entropy between the two parts

$S_L = S_R = \frac{|\chi_L|}{4G} =\frac{|\chi_R|}{4G}$.

It turns out that some quantum field theory states admit such a geometric interpretation. Many high energy theory colleagues have ideas about when this is possible and what the necessary conditions are. By far the best studied setting for this holographic duality is AdS/CFT, where Ryu and Takayanagi first checked their proposal. Here, the entanglement features of the lowest energy state of a conformal field theory are matched to surfaces in a hyperbolic space (like the purple crochet and the tensor network presented). However, other geometries have been shown to match the RT prediction with respect to the entanglement properties of different states. The key point here is that the boundary states do not have any geometry per se. They just manifest different amounts of entanglement when partitioned in different ways.

## Emergence

The holographic program suggests that bulk geometry emerges from the entanglement properties of the boundary state. Spacetime materializes from the information structure of the boundary instead of being a fundamental structure, as in general relativity. Am I saying that we should strip everything physical, including space, in favor of ones and zeros? Well, first of all, it is not just me who is pushing this approach. Secondly, no one is claiming that we should start doing all our physical reasoning in terms of ones and zeros.

Let me give an example. We know that the sea is composed mostly of water molecules. The observation of waves that travel, superpose and break can be labeled as an emergent phenomenon. However, to a surfer, a wave is much more real than the water molecules composing it, and the fact that it is emergent is of no practical consequence when trying to predict where a wave will break. A proficient physicist, armed with tools from statistical mechanics (there are more than $10^{25}$ molecules per liter), could probably derive a macroscopic model for waves from the microscopic theory of particles. In the process of learning what the surfer already understood, he would identify elements of the microscopic theory which become irrelevant for such questions. Such details could be whether the sea has an odd or even number of molecules or the presence of a few fish.

In the case of holography, each square meter corresponds to $1.4\times10^{69}$ bits of entanglement. We don’t even have words to describe anything close to this outrageously large exponent, which leaves plenty of room for emergence. All the information on the internet, estimated at $10^{22}$ bits (10 zettabits), doesn’t come close to matching the area equivalent of the smallest known particle. The fact that there are so many orders of magnitude makes it difficult to extrapolate our understanding of the geometric domain to the information domain and vice versa. This is precisely the realm where techniques such as those from statistical mechanics successfully get rid of irrelevant details.

High energy theorists and people with a background in general relativity tend to picture things in a continuum language. For example, part of their daily bread and butter are Riemannian or Lorentzian manifolds, which are respectively used to describe space and spacetime. In contrast, most of information theory is usually applied to discrete elements such as bits, elementary circuit gates, etc. Nevertheless, I believe it is fruitful to straddle this cultural divide to the benefit of both parties. In a way, the convergence we are seeking is analogous to the one achieved by the kinetic theory of gases, which allowed the unification of thermodynamics with classical mechanics.

## So what did we do?

The remarkable success of the geometric RT prediction for different bulk geometries, such as the BTZ black holes, and the generality of the entanglement result for its random-tensor-network cousins emboldened us to take the RT prescription beyond its usual domain of application. We considered applying it to arbitrary space-like Riemannian manifolds that can be approximated by a smoothly knit fabric.

Furthermore, we went on to consider the implications that such assumptions would have when the corresponding geometries are interpreted as error-correcting codes. In fact, our work elaborates on the perspective of A. Almheiri, X. Dong and D. Harlow (ADH) where quantum error-correcting code properties of AdS/CFT were laid out; it is hard to overemphasize the influence of this work. Our work considers general geometries and identifies properties a code associated to a specific holographic geometry should satisfy.

In the cat’s-cradle/fabric metaphor for holography, the fingers at the boundary constitute the boundary theory without gravity, and the resulting fabric represents a bulk geometry in the corresponding bulk gravitational theory. Bulk observables may be represented in different ways on the boundary, but not arbitrarily. This raises the question of which parts of the bulk correspond to which parts of the boundary. In general, there is no one-to-one mapping. However, if we partition the boundary into two parts $L$ and $R$, we expect to be able to split the bulk into two corresponding regions ${\mathcal E}[L]$ and ${\mathcal E}[R]$. This is the content of the entanglement wedge hypothesis, which is our other main assumption. In our metaphor, one could imagine that we pull the left fingers up and the right fingers down (taking care not to get hurt). At some point, the fabric breaks through $\chi_R$ into two pieces. In the setting we are concerned with, these pieces maintain part of the original structure, which tells us which bulk information was available in one piece of the boundary and which was available in the other.

Although we do not produce new explicit examples of such codes, we worked our way towards developing a language which translates between the holographic/geometric perspective and the coding theory perspective. We specifically build upon the language of operator algebra quantum error correction (OAQEC) which allows individually focusing on different parts of the logical message. In doing so we identified several coding theoretic bounds and quantities, some of which we found to be applicable beyond the context of holography. A particularly noteworthy one is a strengthening of the quantum Singleton bound, which defines a trade-off between how much logical information can be packed in a code, how much physical space is used for encoding this information and how well-protected the information is from erasures.
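
To fix notation, it is worth recording the standard quantum Singleton bound (the textbook statement, which our result refines). A code embedding $k$ logical qubits into $n$ physical qubits with distance $d$ must satisfy

$k \leq n - 2(d-1)$.

In words: the more erasures a code tolerates (larger $d$), the fewer logical qubits it can carry for a fixed number of physical qubits.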

One of the central observations of ADH highlights how quantum codes have properties from both classical error-correcting codes and secret sharing schemes. On the one hand, logical encoded information should be protected from loss of small parts of the carrier, a property quantified by the code distance. On the other hand, the logical encoded information should not become accessible until a sufficiently large part of the carrier is available to us. This is quantified by the threshold of a corresponding secret sharing scheme. We call this quantity price as it identifies how much of the carrier we would need before someone could reconstruct the message faithfully. In general, it is hard to balance these two competing requirements; a statement which can be made rigorous. This kind of complementarity has long been recognized in quantum cryptography. However, we found that according to holographic predictions, codes admitting a geometric interpretation achieve a remarkable optimality in the trade-off between these features.

Our exploration of alternative geometries is rewarded with the following guidelines:

In uberholography, bulk observables are accessible in a Cantor-type, fractal-shaped subregion of the boundary. This is illustrated on the Poincaré disk presentation of a negatively curved bulk.

• Hyperbolic geometries predict a fixed polynomial scaling for code distance. This is illustrated by a feature we call uberholography. We use this name because there is an excess of holography, wherein bulk observables can be represented on intricate subsets of the boundary whose fractal dimension is even smaller than that of the boundary itself.
• Hyperbolic geometries suggest the possibility of decoding procedures which are local on the boundary geometry. This property may be connected to the locality of the corresponding boundary field theory.
• Flat and positive curvature geometries may lead to codes with better parameters in terms of distance and rates (ratio of logical information to physical information). A hemisphere reaches optimum parameters, saturating coding bounds.

Seven iterations of a ternary Cantor set (dark line) on the unit interval. Each iteration is obtained by punching holes from the previous one and the set obtained in the limit is a fractal.

Current-day quantum computers are far from the number of qubits required to invoke an emergent geometry. Nevertheless, it is exhilarating to take a step back and consider how the properties of codes, and of information in general, may be interpreted geometrically. Conversely, I find that the quantum-code language we adapt to the context of holography might eventually serve as a useful tool in distinguishing which boundary features are relevant or irrelevant to the emergent properties of the holographic dual. Ours is but one contribution in a very active field. However, the one thing I am certain about is that these are exciting times to be doing physics.

# Local operations and Chinese communications

The workshop spotlighted entanglement. It began in Shanghai, paused as participants hopped the Taiwan Strait, and resumed in Taipei. We discussed quantum operations and chaos, thermodynamics and field theory.1 I planned to return from Taipei to Shanghai to Los Angeles.

Quantum thermodynamicist Nelly Ng and I drove to the Taipei airport early. News from Air China curtailed our self-congratulations: China’s military was running an operation near Shanghai. Commercial planes couldn’t land. I’d miss my flight to LA.

Two quantum thermodynamicists in Shanghai

An operation?

Quantum information theorists use a mindset called operationalism. We envision experimentalists in separate labs. Call the experimentalists Alice, Bob, and Eve (ABE). We tell stories about ABE to formulate and analyze problems. Which quantum states do ABE prepare? How do ABE evolve, or manipulate, the states? Which measurements do ABE perform? Do they communicate about the measurements’ outcomes?

Operationalism concretizes ideas. The outlook keeps us from drifting into philosophy and into abstractions difficult to apply physics tools to.2 Operationalism infuses our language, our framing of problems, and our mathematical proofs.

Experimentalists can perform some operations more easily than others. Suppose that Alice controls the magnets, lasers, and photodetectors in her lab; Bob controls the equipment in his; and Eve controls the equipment in hers. Each experimentalist can perform local operations (LO). Suppose that Alice, Bob, and Eve can talk on the phone and send emails. They exchange classical communications (CC).

You can’t generate entanglement using LOCC. Entanglement consists of strong correlations that quantum systems can share and that classical systems can’t. A quantum system in Alice’s lab can hold more information about a quantum system of Bob’s than any classical system could. We must create and control entanglement to operate quantum computers. Creating and controlling entanglement poses challenges. Hence quantum information scientists often model easy-to-perform operations with LOCC.

Suppose that some experimentalist Charlie loans entangled quantum systems to Alice, Bob, and Eve. How efficiently can ABE compute some quantity, exchange quantum messages, or perform other information-processing tasks, using that entanglement? Such questions underlie quantum information theory.

Taipei’s night market. Or Caltech’s neighborhood?

Local operations.

Nelly and I performed those, trying to finagle me to LA. I inquired at Air China’s check-in desk in English. Nelly inquired in Mandarin. An employee smiled sadly at each of us.

We branched out into classical communications. I called Expedia (“No, I do not want to fly to Manila”), United Airlines (“No flights for two days?”), my credit-card company, Air China’s American reservations office, Air China’s Chinese reservations office, and Air China’s Taipei reservations office. I called AT&T to ascertain why I couldn’t reach Air China (“Yes, please connect me to the airline. Could you tell me the number first? I’ll need to dial it after you connect me and the call is then dropped”).

As I called, Nelly emailed. She alerted Bob, aka Janet (Ling-Yan) Hung, who hosted half the workshop at Fudan University in Shanghai. Nelly emailed Eve, aka Feng-Li Lin, who hosted half the workshop at National Taiwan University in Taipei. Janet twiddled the magnets in her lab (investigated travel funding), and Feng-Li cooled a refrigerator in his.

ABE can process information only so efficiently, using LOCC. The time crept from 1:00 PM to 3:30.

Nelly Ng uses classical communications.

What could we have accomplished with quantum communication? Using LOCC, Alice can manipulate quantum states (like an electron’s orientation) in her lab. She can send nonquantum messages (like “My flight is delayed”) to Bob. She can’t send quantum information (like an electron’s orientation).

Alice and Bob can ape quantum communication, given entanglement. Suppose that Charlie strongly correlates two electrons. Suppose that Charlie gives Alice one electron and gives Bob the other. Alice can send one qubit (one unit of quantum information) to Bob. We call that protocol quantum teleportation.
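
The bookkeeping behind teleportation fits in a short simulation. The sketch below is a standard textbook rendering written from scratch (the qubit ordering and variable names are my own): qubit 0 holds Alice’s message, qubits 1 and 2 form the shared Bell pair, and Bob’s correction depends on the two classical bits Alice sends.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit gates
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def op(gate, qubit):
    """Embed a single-qubit gate on one qubit of a 3-qubit register."""
    mats = [I2, I2, I2]
    mats[qubit] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

# CNOT with control qubit 0 and target qubit 1 (qubit 2 untouched)
CNOT01 = np.zeros((8, 8))
for i in range(8):
    q0, q1, q2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
    CNOT01[(q0 << 2) | ((q1 ^ q0) << 1) | q2, i] = 1.0

# Random message |psi> on qubit 0; Bell pair on qubits 1 and 2
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice's operations: CNOT(0 -> 1), then H on qubit 0
state = op(H, 0) @ (CNOT01 @ state)

# Loop over Alice's four possible measurement outcomes (m0, m1)
fidelities = []
for m0 in (0, 1):
    for m1 in (0, 1):
        idx = [(m0 << 2) | (m1 << 1) | b for b in (0, 1)]
        bob = state[idx]
        bob = bob / np.linalg.norm(bob)   # collapse onto this branch
        if m1:                            # Bob's corrections, conditioned
            bob = X @ bob                 # on the two classical bits
        if m0:
            bob = Z @ bob
        fidelities.append(abs(np.vdot(psi, bob)) ** 2)

print([round(f, 6) for f in fidelities])  # [1.0, 1.0, 1.0, 1.0]
```

In every measurement branch Bob ends up holding exactly Alice’s original state, even though only two classical bits traveled between them; the entanglement did the rest.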

Suppose that air-traffic control had loaned entanglement to Janet, Feng-Li, and me. Could we have finagled me to LA quickly?

Quantum teleportation differs from human teleportation.

xkcd.com/465

We didn’t need teleportation. Feng-Li arranged for me to visit Taiwan’s National Center for Theoretical Sciences (NCTS) for two days. Air China agreed to return me to Shanghai afterward. United would fly me to LA, thanks to help from Janet. Nelly rescued my luggage from leaving on the wrong flight.

Would I rather have teleported? I would have avoided a bushel of stress. But I wouldn’t have learned from Janet about Chinese science funding, wouldn’t have heard Feng-Li’s views about gravitational waves, wouldn’t have glimpsed Taiwanese countryside flitting past the train we rode to the NCTS.

According to some metrics, classical resources outperform quantum.

At Taiwan’s National Center for Theoretical Sciences

The workshop organizers have generously released videos of the lectures. My lecture about quantum chaos and fluctuation relations appears here and here. More talks appear here.

With gratitude to Janet Hung, Feng-Li Lin, and Nelly Ng; to Fudan University, National Taiwan University, and Taiwan’s National Center for Theoretical Sciences for their hospitality; and to Xiao Yu for administrative support.

Glossary and other clarifications:

1Field theory describes subatomic particles and light.

2Physics and philosophy enrich each other. But I haven’t trained in philosophy. I benefit from differentiating physics problems that I’ve equipped to solve from philosophy problems that I haven’t.

# It’s CHAOS!

My brother and I played the video game Sonic the Hedgehog on a Sega Dreamcast. The hero has spiky electric-blue fur and can run at the speed of sound.1 One of us, then the other, would battle monsters. Monster number one oozes onto a dark city street as an aquamarine puddle. The puddle spreads, then surges upward to form limbs and claws.2 The limbs splatter when Sonic attacks: Aqua globs rain onto the street.

The monster’s master, Dr. Eggman, has ginger mustachios and a body redolent of his name. He scoffs as the heroes congratulate themselves.

“Fools!” he cries, the pauses in his speech heightening the drama. “[That monster is] CHAOS…the GOD…of DE-STRUC-TION!” His cackle could put a Disney villain to shame.

Dr. Eggman’s outburst comes to mind when anyone asks what topic I’m working on.

“Chaos! And the flow of time, quantum theory, and the loss of information.”

Alexei Kitaev, a Caltech physicist, hooked me on chaos. I TAed his spring-2016 course. The registrar calls the course Ph 219c: Quantum Computation. I call the course Topics that Interest Alexei Kitaev.

“What do you plan to cover?” I asked at the end of winter term.

Topological quantum computation, Alexei replied. How you simulate Hamiltonians with quantum circuits. Or maybe…well, he was thinking of discussing black holes, information, and chaos.

If I’d had a tail, it would have wagged.

Sonic’s best friend, Tails the fox.

I fwumped down on the couch in Alexei’s office, and Alexei walked to his whiteboard. Scientists first noticed chaos in classical systems. Consider a double pendulum—a pendulum that hangs from the bottom of a pendulum that hangs from, say, a clock face. Imagine pulling the bottom pendulum far to one side, then releasing. The double pendulum will swing, bend, and loop-the-loop like a trapeze artist. Imagine freezing the trapeze artist after an amount $t$ of time.

What if you pulled another double pendulum a hair’s breadth less far? You could let the pendulum swing, wait for a time $t$, and freeze this pendulum. This pendulum would probably lie far from its brother. This pendulum would probably have been moving at a different speed than its brother, in a different direction, just before the freeze. The double pendulum’s motion changes loads if the initial conditions change slightly. This sensitivity to initial conditions characterizes classical chaos.
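
This sensitivity is easy to demonstrate numerically. Simulating a double pendulum takes an ODE solver, so the sketch below uses a stand-in with the same qualitative behavior, the chaotic logistic map $x \mapsto 4x(1-x)$ (the choice of map and the initial conditions are mine):

```python
# Two trajectories of the chaotic logistic map x -> 4x(1 - x), started a
# "hair's breadth" (1e-10) apart, diverge to an order-one separation.
def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2, 100)
b = trajectory(0.2 + 1e-10, 100)
gaps = [abs(x - y) for x, y in zip(a, b)]

print(f"{gaps[1]:.1e}")    # still tiny after one step
print(f"{max(gaps):.2f}")  # order-one separation within a few dozen steps
```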

A mathematical object $F(t)$ reflects quantum systems’ sensitivities to initial conditions. [Experts: $F(t)$ can evolve as an exponential governed by a Lyapunov-type exponent: $\sim 1 - ({\rm const.})e^{\lambda_{\rm L} t}$.] $F(t)$ encodes a hypothetical process that snakes back and forth through time. This snaking earned $F(t)$ the name “the out-of-time-ordered correlator” (OTOC). The snaking prevents experimentalists from measuring quantum systems’ OTOCs easily. But experimentalists are trying, because $F(t)$ reveals how quantum information spreads via entanglement. Such entanglement distinguishes black holes, cold atoms, and specially prepared light from everyday, classical systems.
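For concreteness (my notation — the post keeps $F(t)$ abstract): one common definition is the infinite-temperature correlator $F(t) = {\rm Tr}[W(t)\, V\, W(t)\, V]/d$, with $W(t) = e^{iHt} W e^{-iHt}$. A minimal numerical sketch for a small spin chain, with my own illustrative choices of Hamiltonian and operators:

```python
import numpy as np

# Infinite-temperature OTOC F(t) = Tr[W(t) V W(t) V] / d for 3 qubits.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site, n=3):
    """Embed a single-qubit operator at `site` in an n-qubit space."""
    mats = [single if i == site else I2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Transverse-field Ising chain: H = sum Z_i Z_{i+1} + g sum X_i
g = 1.05
H = op(Z, 0) @ op(Z, 1) + op(Z, 1) @ op(Z, 2) + g * sum(op(X, i) for i in range(3))
evals, U = np.linalg.eigh(H)

def heisenberg(Wmat, t):
    """W(t) = e^{iHt} W e^{-iHt}, via the eigendecomposition of H."""
    eiHt = U @ np.diag(np.exp(1j * evals * t)) @ U.conj().T
    return eiHt @ Wmat @ eiHt.conj().T

W, V = op(Z, 0), op(X, 2)   # operators on opposite ends of the chain

def otoc(t, d=8):
    Wt = heisenberg(W, t)
    return np.trace(Wt @ V @ Wt @ V) / d

# At t = 0, W and V act on different qubits, so they commute and F = 1.
# As t grows, W(t) spreads along the chain and F decays.
print(otoc(0).real)
```

The decay of $F(t)$ from 1 tracks how the initially local operator $W$ spreads — the quantum analogue of the pendulums flying apart.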

Alexei illustrated, on his whiteboard, the sensitivity to initial conditions.

“In case you’re taking votes about what to cover this spring,” I said, “I vote for chaos.”

We covered chaos. A guest attended one lecture: Beni Yoshida, a former IQIM postdoc. Beni and colleagues had devised quantum error-correcting codes for black holes.3 Beni’s foray into black-hole physics had led him to $F(t)$. He’d written an OTOC paper that Alexei presented about. Beni presented about a follow-up paper. If I’d had another tail, it would have wagged.

Sonic’s friend has two tails.

Alexei’s course ended. My research shifted to many-body localization (MBL), a quantum phenomenon that stymies the spread of information. OTOC talk burbled beyond my office door.

At the end of the summer, IQIM postdoc Yichen Huang posted on Facebook, “In the past week, five papers (one of which is ours) appeared . . . studying out-of-time-ordered correlators in many-body localized systems.”

I looked down at the MBL calculation I was performing. I looked at my computer screen. I set down my pencil.

“Fine.”

I marched to John Preskill’s office.

The bosses. Of different sorts, of course.

The OTOC kept flaring on my radar, I reported. Maybe the time had come for me to try contributing to the discussion. What might I contribute? What would be interesting?

We kicked around ideas.

“Well,” John ventured, “you’re interested in fluctuation relations, right?”

Something clicked like the “power” button on a video-game console.

Fluctuation relations are equations derived in nonequilibrium statistical mechanics. They describe systems driven far from equilibrium, like a DNA strand whose ends you’ve yanked apart. Experimentalists use fluctuation theorems to infer a difficult-to-measure quantity, a difference $\Delta F$ between free energies. Fluctuation relations imply the Second Law of Thermodynamics. The Second Law relates to the flow of time and the loss of information.

Time…loss of information…Fluctuation relations smelled like the OTOC. The two had to join together.

I spent the next four days sitting, writing, obsessed. I’d read a paper, three years earlier, that casts a fluctuation relation in terms of a correlator. I unearthed the paper and redid the proof. Could I deform the proof until the paper’s correlator became the out-of-time-ordered correlator?

Apparently. I presented my argument to my research group. John encouraged me to clarify a point: I’d defined a mathematical object $A$, a probability amplitude. Did $A$ have physical significance? Could anyone measure it? I consulted measurement experts. One identified $A$ as a quasiprobability, a quantum generalization of a probability, used to model light in quantum optics. With the experts’ assistance, I devised two schemes for measuring the quasiprobability.

The result is a fluctuation-like relation that contains the OTOC. The OTOC, the theorem reveals, is a combination of quasiprobabilities. Experimentalists can measure quasiprobabilities with weak measurements, gentle probings that barely disturb the probed system. The theorem suggests two experimental protocols for inferring the difficult-to-measure OTOC, just as fluctuation relations suggest protocols for inferring the difficult-to-measure $\Delta F$. Just as fluctuation relations cast $\Delta F$ in terms of a characteristic function of a probability distribution, this relation casts $F(t)$ in terms of a characteristic function of a (summed) quasiprobability distribution. Quasiprobabilities reflect entanglement, as the OTOC does.

Collaborators and I are extending this work theoretically and experimentally. How does the quasiprobability look? How does it behave? What mathematical properties does it have? The OTOC is motivating questions not only about our quasiprobability, but also about quasiprobabilities and weak measurements in general. We’re pushing toward measuring the OTOC quasiprobability with superconducting qubits or cold atoms.

Chaos has evolved from an enemy to a curiosity, from a god of destruction to an inspiration. I no longer play the electric-blue hedgehog. But I remain electrified.

1I hadn’t started studying physics, ok?

2Don’t ask me how the liquid’s surface tension rises enough to maintain the limbs’ shapes.

3Black holes obey quantum mechanics. Quantum systems can solve certain problems more quickly than ordinary (classical) computers. Computers make mistakes. We fix mistakes using error-correcting codes. The codes required by quantum computers differ from the codes required by ordinary computers. Systems that contain black holes, we can regard as performing quantum computations. Black-hole systems’ mistakes admit of correction via the code constructed by Beni & co.

# Quantum Chess

Two years ago, as a graduate student in Physics at USC, I began work on a game whose mechanics were based on quantum mechanics. When I had a playable version ready, my graduate adviser, Todd Brun, put me in contact with IQIM’s Spiros Michalakis, who had already worked with Google to design qCraft, a mod introducing quantum mechanics into Minecraft. Spiros must have seen potential in my clunky prototype, and our initial meeting turned into weekly brainstorming lunches at Caltech’s Chandler cafeteria. More than a year later, the game had evolved into Quantum Chess, and we began talking about including a video showing some gameplay at an upcoming Caltech event celebrating Feynman’s quantum legacy. The next few months were a whirlwind. Somehow this video turned into a Quantum Chess battle for the future of humanity, between Stephen Hawking and Paul Rudd. And it was being narrated by Keanu Reeves! The video, called Anyone Can Quantum, and directed by Alex Winter, premiered at Caltech’s One Entangled Evening on January 26, 2016, and has since gone viral. If you haven’t watched it, now would be a good time to do so (if you are at work, be prepared to laugh quietly).

So, what exactly is Quantum Chess and how does it make use of quantum physics? It is a modern take on the centuries-old game of strategy that endows each chess piece with quantum powers. You don’t need to know quantum mechanics to play the game. On the other hand, understanding the rules of chess might help [1].  But if you already know the basics of regular chess, you can just start playing. Over time, your brain will get used to some of the strange quantum behavior of the chess pieces and the battles you wage in Quantum Chess will make regular chess look like tic-tac-toe [2].

In this post, I will discuss the concept of quantum superposition and how it plays a part in the game. There will be more posts to follow that will discuss entanglement, interference, and quantum measurement [3].

In Quantum Chess, players have the ability to perform quantum moves in addition to the standard chess moves. Each time a player chooses to move a piece, they can indicate whether they want to perform a standard move or a quantum move. A quantum move creates a superposition of boards. If any of you ever saw Star Trek 3D Chess, you can think of this in a similar way.

Star Trek 3D Chess

There are multiple boards on which pieces exist. However, in Quantum Chess, the number of possible boards is not fixed; it can increase or decrease. All possible boards exist in a superposition. The player is presented with a single board that represents the entire superposition. In Quantum Chess, any individual move will act on all boards at the same time. Each time a player makes a quantum move, the number of possible boards present in the superposition doubles. Let’s look at some pictures that might clarify things.

The Quantum Chess board begins in the same configuration as standard chess.

All pawns move the same as they would in standard chess, but all other pieces get a choice of two movement types, standard or quantum. Standard moves act exactly as they would in standard chess. Quantum moves, however, create superpositions. Let’s look at an example of a quantum move for the white queen.

In this diagram, we see what happens when we perform a quantum move of the white queen from D1 to D3. We get two possible boards. On one board the queen did not move at all. On the other, the queen did move. Each board has a 50% chance of “existence”. Showing every possible board, though, would get quite complicated after just a few moves. So, the player view of the game is a single board. After the same quantum queen move, the player sees this:

The teal colored “fill” of each queen shows the probability of finding the queen in that space; the same queen, existing in different locations on the board. The queen is in a superposition of being in two places at once. On their next turn, the player can choose to move any one of their pieces.

So, let’s talk about moving the queen, again. You may be wondering, “What happens if I want to move a piece that is in a superposition?” The queen exists in two spaces. You choose which of those two positions you would like to move from, and you can perform the same standard or quantum moves from that space. Let’s look at trying to perform a standard move, instead of a quantum move, on the queen that now exists in a superposition. The result would be as follows:

The move acts on all boards in the superposition. On any board where the queen is in space D3, it will be moved to B5. On any board where the queen is still in space D1, it will not be moved. There is a 50% chance that the queen is still in space D1 and a 50% chance that it is now located in B5. The player view, as illustrated below, would again be a 50/50 superposition of the queen’s position. This was just an example of a standard move on a piece in a superposition, but a quantum move would work similarly.
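The bookkeeping above can be sketched in a few lines of code. Here the “board” is reduced to just the queen’s square — a toy model of my own, far simpler than the real engine. A quantum move splits each board’s amplitude between source and target; a standard move relocates the queen on every board where she occupies the source square:

```python
import math

# Toy model of Quantum Chess superpositions: the "board" is just the
# queen's square, and the state maps each square to a complex amplitude.
state = {"D1": 1.0}

def quantum_move(state, src, dst):
    """Split each board's amplitude 50/50 between staying and moving."""
    new = {}
    for square, amp in state.items():
        if square == src:
            new[src] = new.get(src, 0) + amp / math.sqrt(2)
            new[dst] = new.get(dst, 0) + amp / math.sqrt(2)
        else:
            new[square] = new.get(square, 0) + amp
    return new

def standard_move(state, src, dst):
    """Move the queen on every board where she occupies src."""
    new = {}
    for square, amp in state.items():
        target = dst if square == src else square
        new[target] = new.get(target, 0) + amp
    return new

def probabilities(state):
    return {sq: abs(amp) ** 2 for sq, amp in state.items()}

state = quantum_move(state, "D1", "D3")    # queen in two places at once
state = standard_move(state, "D3", "B5")   # acts on every board
print(probabilities(state))                # 50% at D1, 50% at B5
```

Squaring the amplitudes recovers the 50/50 probabilities described above; tracking amplitudes rather than probabilities is what later allows interference effects.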

Some of you might have noticed the quantum move basically gives you a 50% chance to pass your turn. Not a very exciting thing to do for most players. That’s why I’ve given the quantum move an added bonus. With a quantum move, you can choose a target space that is up to two standard moves away! For example, the queen could choose a target that is forward two spaces and then left two spaces. Normally, this would take two turns: The first turn to move from D1 to D3 and the second turn to move from D3 to B3. A quantum move gives you a 50% chance to move from D1 to B3 in a single turn!

Let’s look at a quantum queen move from D1 to B3.

Just like the previous quantum move we looked at, we get a 50% probability that the move was successful and a 50% probability that nothing happened. As a player, we would see the board below.

There is a 50% chance the queen completed two standard moves in one turn! Don’t worry though, things are not just random. The fact that the board is a superposition of boards and that movement is unitary (just a fancy word for how quantum things evolve) can lead to some interesting effects. I’ll end this post here. Now, I hope I’ve given you some idea of how superposition is present in Quantum Chess. In the next post I’ll go into entanglement and a bit more on the quantum move!

Notes:

[1] For those who would like to know more about chess, here is a good link.

[2] If you would like to see a public release of Quantum Chess (and get a copy of the game), consider supporting the Kickstarter campaign.

[3] I am going to be describing aspects of the game in terms of probability and multiple board states. For those with a scientific or technical understanding of how quantum mechanics works, this may not appear to be very quantum. I plan to go into a more technical description of the quantum aspects of the game in a later post. Also, a reminder to the non-scientific audience. You don’t need to know quantum mechanics to play this game. In fact, you don’t even need to know what I’m going to be describing here to play! These posts are just for those with an interest in how concepts like superposition, entanglement, and interference can be related to how the game works.

# Toward physical realizations of thermodynamic resource theories

The thank-you slide of my presentation remained onscreen, and the question-and-answer session had begun. I was presenting a seminar about thermodynamic resource theories (TRTs), models developed by quantum-information theorists for small-scale exchanges of heat and work. The audience consisted of condensed-matter physicists who studied graphene and photonic crystals. I was beginning to regret my topic’s abstractness.

The question-asker pointed at a listener.

“This is an experimentalist,” he continued, “your arch-nemesis. What implications does your theory have for his lab? Does it have any? Why should he care?”

I could have answered better. I apologized that quantum-information theorists, reared on the rarefied air of Dirac bras and kets, had developed TRTs. I recalled the baby steps with which science sometimes migrates from theory to experiment. I could have advocated for bounding, with idealizations, efficiencies achievable in labs. I should have invoked the connections being developed with fluctuation results, statistical mechanical theorems that have withstood experimental tests.

The crowd looked unconvinced, but I scored one point: The experimentalist was not my arch-nemesis.

“My new friend,” I corrected the questioner.

His question has burned in my mind for two years. Experiments have inspired, but not guided, TRTs. TRTs have yet to drive experiments. Can we strengthen the connection between TRTs and the natural world? If so, what tools must resource theorists develop to predict outcomes of experiments? If not, are resource theorists doing physics?

A Q&A more successful than mine.

I explore answers to these questions in a paper released today. Ian Durham and Dean Rickles were kind enough to request a contribution for a book of conference proceedings. The conference, “Information and Interaction: Eddington, Wheeler, and the Limits of Knowledge,” took place at the University of Cambridge (including a graveyard thereof), thanks to FQXi (the Foundational Questions Institute).

“Proceedings are a great opportunity to get something off your chest,” John said.

That seminar Q&A had sat on my chest, like a pet cat who half-smothers you while you’re sleeping, for two years. Theorists often justify TRTs with experiments.* Experimentalists, an argument goes, are probing limits of physics. Conventional statistical mechanics describe these regimes poorly. To understand these experiments, and to apply them to technologies, we must explore TRTs.

Does that argument not merit testing? If experimentalists observe the extremes predicted with TRTs, then the justifications for, and the timeliness of, TRT research will grow.

Something to get off your chest. Like the contents of a conference-proceedings paper, according to my advisor.

You’ve read the paper’s introduction, the first eight paragraphs of this blog post. (Who wouldn’t want to begin a paper with a mortifying anecdote?) Later in the paper, I introduce TRTs and their role in one-shot statistical mechanics, the analysis of work, heat, and entropies on small scales. I discuss whether TRTs can be realized and whether physicists should care. I identify eleven opportunities for shifting TRTs toward experiments. Three opportunities concern what merits realizing and how, in principle, we can realize it. Six adjustments to TRTs could improve TRTs’ realism. Two more-out-there opportunities, though less critical to realizations, could diversify the platforms with which we might realize TRTs.

One opportunity is the physical realization of thermal embezzlement. TRTs, like thermodynamic laws, dictate how systems can and cannot evolve. Suppose that a state $R$ cannot transform into a state $S$: $R \not\mapsto S$. An ancilla $C$, called a catalyst, might facilitate the transformation: $R + C \mapsto S + C$. Catalysts act like engines used to extract work from a pair of heat baths.

Engines degrade, so a realistic transformation might yield $S + \tilde{C}$, wherein $\tilde{C}$ resembles $C$. For certain definitions of “resembles,”** TRTs imply, one can extract arbitrary amounts of work by negligibly degrading $C$. Detecting the degradation—the work extraction’s cost—is difficult. Extracting arbitrary amounts of work at a difficult-to-detect cost contradicts the spirit of thermodynamic law.
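To make the footnoted definition concrete (standard notation, my addition): “resembles” is typically cashed out with the trace distance, and embezzlement means extracting work while degrading the catalyst by an amount that can be made arbitrarily small:

```latex
% C-tilde resembles C if the two states are epsilon-close in trace distance:
D(\tilde{C}, C) \;:=\; \tfrac{1}{2} \bigl\| \tilde{C} - C \bigr\|_1 \;\le\; \epsilon .
% Embezzlement: for every epsilon > 0, some catalyst C admits
% R + C \mapsto S + \tilde{C}, with D(\tilde{C}, C) \le \epsilon,
% while the transformation outputs a large amount of work.
```

The smaller the trace distance, the harder any measurement finds distinguishing $\tilde{C}$ from $C$ — which is why detecting the work extraction’s cost is so difficult.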

The spirit, not the letter. Embezzlement seems physically realizable, in principle. Detecting embezzlement could push experimentalists’ abilities to distinguish between close-together states $C$ and $\tilde{C}$. I hope that that challenge, and the chance to violate the spirit of thermodynamic law, attracts researchers. Alternatively, theorists could redefine “resembles” so that $C$ doesn’t rub the law the wrong way.

The paper’s broadness evokes a caveat of Arthur Eddington’s. In 1927, Eddington presented Gifford Lectures entitled The Nature of the Physical World. Being a physicist, he admitted, “I have much to fear from the expert philosophical critic.” Specializing in TRTs, I have much to fear from the expert experimental critic. The paper is intended to point out, and to initiate responses to, the lack of physical realizations of TRTs. Some concerns are practical; some, philosophical. I expect and hope that the discussion will continue…preferably with more cooperation and charity than during that Q&A.

If you want to continue the discussion, drop me a line.

*So do theorists-in-training. I have.

**A definition that involves the trace distance.

# The mentors that shape us

I was in awe of Wheeler. Some students thought he sucked.

I immediately changed it to…

I was in awe of Wheeler. Some students thought less of him.

And next, when I saw John write about himself,

Though I’m 59, few students seemed awed. Some thought I sucked. Maybe I did sometimes.

I massaged it into…

Though I’m 59, few students seemed awed. Some thought I was not as good. Maybe I wasn’t sometimes.

When John published the post, I read it again for any typos I might have missed. There were no typos. I felt useful! But when I saw that all mentions of sucked had been restored to their rightful place, I felt like an idiot. John did not fire a strongly worded email back my way asking for an explanation as to my taking liberties with his own writing. He simply trusted that I would get the message in the comfort of my own awkwardness. It worked beautifully. John had set the tone for Quantum Frontiers’ authentic voice with his very first post. It was to be personal, even if the subject matter was as scientifically hardcore as it got.

So when the time came for me to write my first post, I made it personal. I wrote about my time in Los Alamos as a postdoc, working on a problem in mathematical physics that almost broke me. It was Matt Hastings, an intellectual tornado, who helped me through these hard times. As my mentor, he didn’t say things like Well done! Great progress! Good job, Spiro! He said, You can do this. And when I finally did it, when I finally solved that damn problem, Matt came back to me and said: Beyond some typos, I cannot find any mistakes. Good job, Spiro. And it meant the world to me. The sleepless nights, the lonely days up in the Pajarito mountains of New Mexico, the times I had resolved to go work for my younger brother as a waiter in his first restaurant… those were the times that I had come upon a fork in the road and my mentor had helped me choose the path less traveled.

When the time came for me to write my next post, I ended by offering two problems for the readers to solve, with the following text as motivation:

This post is supposed to be an introduction to the insanely beautiful world of problem solving. It is not a world ruled by Kings and Queens. It is a world where commoners like you and me can become masters of their domain and even build an empire.

Doi-Inthananon temple in Chiang Mai, Thailand. A breathtaking city, host of this year’s international math olympiad.

It has been way too long since my last “problem solving” post, so I leave you with a problem from this year’s International Math Olympiad, which took place in gorgeous Chiang Mai, Thailand. FiveThirtyEight‘s recent article about the dominance of the US math olympic team in this year’s competition gives some context about the degree of difficulty of this problem:

Determine all triples (a, b, c) of positive integers such that each of the numbers ab-c, bc-a, ca-b is a power of two.

Like Fermat’s Last Theorem, this problem is easy to describe and hard to solve. Only 5 percent of the competitors got full marks on this question, and nearly half (44 percent) got no points at all.

But, on the triumphant U.S. squad, four of the six team members nailed it.

In other words, only 1 in 20 kids in the competition solved this problem correctly and about half of the kids didn’t even know where to begin. For more perspective, each national team comprises the top 6 math prodigies in that country. In China, that means 6 out of something like 100 million kids. And only 3-4 of these kids solved the problem.

The coach of the US national team, Po-Shen Loh, a Caltech alum and an associate professor of mathematics at Carnegie Mellon University (give him tenure already) deserves some serious props. If you think this problem is too hard, I have this to say to you: Yes, it is. But, who cares? You can do this.

Note: I will work out the solution in detail in an upcoming post, unless one of you solves it in the comments section before then!

Update: Solution posted in comments below (in response to Anthony’s comment). Thank you all who posted some of the answers below. The solution is far from trivial, but I still wonder if an elegant solution exists that gives all four triples. Maybe the best solution is geometric? I hope one of you geniuses can figure that out!
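For the impatient, a quick brute-force search (my addition, and no substitute for the proof) turns up the triples with a ≤ b ≤ c below a modest bound; the known solution shows these four are, up to permutation, all that exist:

```python
def is_power_of_two(n):
    """True iff n is 1, 2, 4, 8, ... (a nonnegative power of 2)."""
    return n >= 1 and (n & (n - 1)) == 0

# Search a <= b <= c with each of ab - c, bc - a, ca - b a power of two.
# The bound 60 is my choice; the actual proof rules out larger triples.
triples = [
    (a, b, c)
    for a in range(1, 61)
    for b in range(a, 61)
    for c in range(b, 61)
    if is_power_of_two(a * b - c)
    and is_power_of_two(b * c - a)
    and is_power_of_two(c * a - b)
]
print(triples)  # [(2, 2, 2), (2, 2, 3), (2, 6, 11), (3, 5, 7)]
```

Note that 1 = 2^0 counts as a power of two here, which is what admits (2, 2, 3) and (2, 6, 11).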