“What’s that?” I asked.

“A second edition of Galileo’s *Sidereus nuncius*. Here,” she added, thrusting the book into my hands. “Take it.”

So began my internship at the Smithsonian Institution’s Dibner Library for the History of Science and Technology.

Many people know the Smithsonian for its museums. The Smithsonian, they know, houses the ruby slippers worn by Dorothy in *The Wizard of Oz*. The Smithsonian houses planes constructed by Orville and Wilbur Wright, the dresses worn by First Ladies on presidential inauguration evenings, a space shuttle, and a *Tyrannosaurus Rex* skeleton. Smithsonian museums line the National Mall in Washington, D.C.—the United States’ front lawn—and march beyond.

Most people don’t know that the Smithsonian has 21 libraries.

Lilla heads the Smithsonian Libraries’ Special-Collections Department. She also directs a library tucked into a corner of the National Museum of American History. I interned at that library—the Dibner—in college. Images of Benjamin Franklin, of inventor Eli Whitney, and of astronomical instruments line the walls. The reading room contains styrofoam cushions on which scholars lay crumbling rare books. Lilla and the library’s technician, Morgan Aronson, find references for researchers, curate exhibitions, and introduce students to science history. They also care for the vault.

The vault. How I’d missed the vault.

The vault contains manuscripts and books from the past ten centuries. We handle the items without gloves, which reduce our fingers’ sensitivities: Interpose gloves between yourself and a book, and you’ll raise your likelihood of ripping a page. A temperature of 65°F inhibits mold from growing. Red rot mars some leather bindings, though, and many crowns—tops of books’ spines—have collapsed. Aging carries hazards.

But what the ages have carried to the Dibner! We^{1} have a survey filled out by Einstein and a first edition of Newton’s *Principia mathematica*. We have Euclid’s *Geometry* in Latin, Arabic, and English, from between 1482 and 1847. We have a note, handwritten by quantum physicist Erwin Schrödinger, about why students shouldn’t fear exams.

I returned to the Dibner one day this spring. Lilla and I fetched out manuscripts and books related to quantum physics and thermodynamics. “Hermann Weyl” labeled one folder.

Weyl contributed to physics and mathematics during the early 1900s. I first encountered his name when studying particle physics. The Dibner, we discovered, owns a proof for part of his 1928 book *Gruppentheorie und Quantenmechanik*. Weyl appears to have corrected a typed proof by hand. He’d also handwritten spin matrices.

Electrons have a property called “spin.” Spin resembles a property of yours, your position relative to the Earth’s center. We represent your position with three numbers: your latitude, your longitude, and your distance above the Earth’s surface. We represent electron spin with three blocks of numbers, three matrices. Today’s physicists write the matrices as^{2}

$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

We needn’t write these matrices. We could represent electron spin with different matrices, so long as the matrices obey certain properties. But most physicists choose the above matrices, in my experience. We call our choice “a convention.”
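For readers who’d like to poke at the convention themselves, here is a minimal sketch in Python (NumPy assumed). It writes down the standard Pauli matrices and checks the algebra that any valid choice must satisfy; the alternative convention at the end is hypothetical, chosen only to show that permuting the matrices and negating one leaves the algebra intact:

```python
import numpy as np

# Standard convention for the three spin matrices (factors of hbar/2 omitted)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def commutator(a, b):
    return a @ b - b @ a

# Any convention must satisfy [s_i, s_j] = 2i eps_{ijk} s_k
assert np.allclose(commutator(sx, sy), 2j * sz)

# A hypothetical alternative convention: permute two matrices, negate one.
# The commutation relations survive (up to relabeling), which is why
# different conventions can coexist harmlessly.
tx, ty, tz = sy, sx, -sz
assert np.allclose(commutator(tx, ty), 2j * tz)
```

The assertions, not the particular entries, carry the physics: the entries are a choice, and the algebra is the invariant.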

Weyl chose a different convention:

The difference surprised me. Perhaps it shouldn’t have: Conventions change. Approaches to quantum physics change. Weyl’s matrices differ little from ours: Permute our matrices and negate one matrix, and you recover Weyl’s.

But the electron-spin matrices play a role, in quantum physics, like the role played by *T. Rex* in paleontology exhibits: All quantum scientists recognize electron spin. We illustrate with electron spin in examples. Students memorize spin matrices in undergrad classes. Homework problems feature electron spin. Physicists have known of electron spin’s importance for decades. I didn’t expect such a bedrock to have changed its trappings.

How did scientists’ convention change? When did it? Why? Or did the convention not change—did Weyl’s contemporaries use today’s convention, and did Weyl stand out?

I intended to end this article with these questions. I sent a draft to John Preskill, proposing to post soon. But he took up the questions like a knight taking up arms.

Wolfgang Pauli, John emailed, appears to have written the matrices first. (Physicists call the matrices “Pauli matrices.”) A 1927 paper by Pauli contains the notation used today. Paul Dirac copied the notation in a 1928 paper, acknowledging Pauli. Weyl’s book appeared the same year. The following year, Weyl used Pauli’s notation in a paper.

No document we know of, apart from the Dibner proof, contains the Dibner-proof notation. Did the notation change between the proof-writing and publication? Does the Dibner hold the only anomalous electron-spin matrices? What accounts for the anomaly?

If you know, feel free to share. If you visit DC, drop Lilla and Morgan a line. Bring a research project. Bring a class. Bring zeal for the past. You might find yourself holding a time capsule by Galileo.

*With thanks to Lilla and Morgan for their hospitality, time, curiosity, and expertise. With thanks to John for burrowing into the Pauli matrices’ history.*

^{1}I continue to count myself as part of the Dibner community. Part of me refuses to leave.

^{2}I’ll omit factors of ℏ/2.


Sara doesn’t usually study ants. She trained in physics, information theory, and astrobiology. (Astrobiology is the study of life; life’s origins; and conditions amenable to life, on Earth and anywhere else life may exist.) Sara analyzes how information reaches, propagates through, and manifests in the swarm.

Some ants inspect one nest; some, the other. Few ants encounter both choices. Yet most of the ants choose simultaneously. (How does Gabriele know when an ant chooses? Decided ants carry other ants toward the chosen nest. Undecided ants don’t.)

Gabriele and Sara plotted each ant’s status (decided or undecided) at each instant. All the ants’ lines start in the “undecided” region, high up in the graph. Most lines drop to the “decided” region together. Physicists call such dramatic, large-scale changes in many-particle systems “phase transitions.” The swarm transitions from the “undecided” phase to the “decided,” as moisture transitions from vapor to downpour.
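The flavor of such a collective plunge fits in a few lines of Python. This is a toy model invented here for illustration, not Gabriele and Sara’s analysis: each undecided ant decides with a probability that grows as more nestmates decide, and the social feedback sharpens the drop.

```python
import random

def simulate_swarm(n_ants=100, steps=200, base_rate=0.005, feedback=0.2, seed=1):
    """Toy model: at each step, an undecided ant decides with probability
    base_rate + feedback * (fraction of the swarm already decided)."""
    random.seed(seed)
    decided = [False] * n_ants
    history = []  # fraction of decided ants at each instant
    for _ in range(steps):
        frac = sum(decided) / n_ants
        p = base_rate + feedback * frac
        decided = [d or (random.random() < p) for d in decided]
        history.append(sum(decided) / n_ants)
    return history

history = simulate_swarm()
# Early on, almost no ants have decided; once a few do, the feedback
# term pulls the rest down the plot together.
```

Plot `history` against time, and you get a curve that lingers near “undecided” and then slumps, much like the lines described above.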

Look from afar, and you’ll see evidence of a hive mind: The lines clump and slump together. Look more closely, and you’ll find lags between ants’ decisions. Gabriele and Sara grouped the ants according to their behaviors. Sara explained the grouping at a workshop this spring.

The green lines, she said, are undecided ants.

My stomach dropped like Gabriele and Sara’s ant lines.

People call data “cold” and “hard.” Critics lambast scientists for not appealing to emotions. Politicians weave anecdotes into their numbers, to convince audiences to care.

But when Sara spoke, I looked at her green lines and thought, “That’s me.”

I’ve blogged about my indecisiveness. Postdoc Ning Bao and I formulated a quantum voting scheme in which voters can superpose—form quantum combinations of—options. Usually, when John Preskill polls our research group, I abstain from voting. Politics, and questions like “Does building a quantum computer require only engineering or also science?”,^{1} have many facets. I want to view such questions from many angles, to pace around the questions as around a sculpture, to hear other onlookers, to test my impressions on them, and to cogitate before choosing.^{2} However many perspectives I’ve gathered, I’m missing others worth seeing. I commiserated with the green-line ants.

Sara presented about ants at a workshop hosted by the Beyond Center for Fundamental Concepts in Science at Arizona State University (ASU). The organizers, Paul Davies of Beyond and Andrew Briggs of Oxford, entitled the workshop “The Power of Information.” Participants represented information theory, thermodynamics and statistical mechanics, biology, and philosophy.

Paul and Andrew posed questions to guide us: What status does information have? Is information “a real thing” “out there in the world”? Or is information only a mental construct? What roles can information play in causation?

We paced around these questions as around a Chinese viewing stone. We sat on a bench in front of those questions, stared, debated, and cogitated. We taught each other about ants, artificial atoms, nanoscale machines, and models for information processing.

I wonder if I’ll acquire opinions about Paul and Andrew’s questions. Maybe I’ll meander from “undecided” to “decided” over a career. Maybe I’ll phase-transition like Sara’s ants. Maybe I’ll remain near the top of her diagram, a green holdout.

I know little about information’s power. But Sara’s plot revealed one power of information: Information can move us—from homeless to belonging, from ambivalent to decided, from a plot’s top to its bottom, from passive listener to finding yourself in a green curve.

*With thanks to Sara Imari Walker, Paul Davies, Andrew Briggs, Katherine Smith, and the Beyond Center for their hospitality and thoughts.*

^{1}By “only engineering,” I mean not “merely engineering” pejoratively, but “engineering and no other discipline.”

^{2}I feel compelled to perform these activities before choosing. I try to. Psychological experiments, however, suggest that I might decide before realizing that I’ve decided.


But at the end of all that, what do we know about modern physics? Certainly we all took a class called ‘modern physics’. Or should I say ‘“modern” physics’? Because, I’m guessing, the modern-physics class heavily featured the Stern-Gerlach experiment (1922) and mentioned de Broglie, Bohr, and Dirac quite often. Don’t get me wrong: great physics, and essential. But modern?

So what would be modern physics? What should we teach that does not predate 1960? By far the biggest development in my neck of the woods is easy access to computing power. Even I can run simulations of a Schrödinger equation (SE) with hundreds of sites, even constantly driven ones. Even I can diagonalize a gigantic matrix corresponding to a Mott-Hubbard model of 15 or maybe even 20 particles. What’s more, new approximate algorithms capture the many-body quantum dynamics and ground states of chains with hundreds of sites. These are DMRG (density-matrix renormalization group) and MPS (matrix product states) (see https://arxiv.org/abs/cond-mat/0409292 for a review of DMRG, and https://arxiv.org/pdf/1008.3477.pdf for a review of MPS, both by the inspiring Uli Schollwoeck).
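To make “even I can” concrete: diagonalizing a one-particle Schrödinger (tight-binding) Hamiltonian on hundreds of sites is a few lines of plain NumPy, no DMRG required. The 300-site chain and the hopping amplitude here are invented for illustration:

```python
import numpy as np

n_sites = 300          # hundreds of lattice sites
t = 1.0                # hopping amplitude

# One-particle tight-binding Hamiltonian: H[i, i+1] = H[i+1, i] = -t
H = np.zeros((n_sites, n_sites))
for i in range(n_sites - 1):
    H[i, i + 1] = H[i + 1, i] = -t

energies, states = np.linalg.eigh(H)  # full spectrum in milliseconds

# Check against the known open-chain spectrum:
# E_m = -2 t cos(pi m / (n_sites + 1)), m = 1, ..., n_sites
m = np.arange(1, n_sites + 1)
exact = -2 * t * np.cos(np.pi * m / (n_sites + 1))
assert np.allclose(energies, np.sort(exact))
```

A constantly driven version just means rebuilding `H` at each time step; the workhorse call stays the same.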

Should we teach that? Isn’t it complicated? Yes and no. Respectively – not simultaneously. We should absolutely teach it. And no – it is really not complicated. That’s the point – it is simpler than Schroedinger’s equation! How do we teach it? I am not sure yet, but certainly there is a junior level time slot for computational quantum mechanics somewhere.

What else? Once we think about it, the floodgates open. Condensed matter just gave us a whole new paradigm for semiconductors: topological insulators. Definitely need to teach that – and it is pure 21st century! Tough? Not at all, just solving the SE on a lattice. Not tough? Well, maybe not trivial, but is it any tougher than finding the orbitals of hydrogen? (At the risk of giving you nightmares, remember Laguerre polynomials? Oh – right – you won’t get any nightmares, because, most likely, you don’t remember!)
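“Just solving the SE on a lattice” can be demonstrated with the simplest topological model in the textbooks, the SSH chain (my example; the parameters are invented for illustration): alternate the hopping strengths, and zero-energy states pinned to the edges appear in one phase but not the other.

```python
import numpy as np

def ssh_hamiltonian(n_cells, t1, t2):
    """SSH chain: two sites per unit cell, alternating hoppings
    t1 (intracell) and t2 (intercell), open boundaries."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = -(t1 if i % 2 == 0 else t2)
    return H

# Topological phase: intercell hopping dominates (t2 > t1)
E_topo = np.linalg.eigvalsh(ssh_hamiltonian(40, t1=0.5, t2=1.0))
# Trivial phase: intracell hopping dominates (t1 > t2)
E_triv = np.linalg.eigvalsh(ssh_hamiltonian(40, t1=1.0, t2=0.5))

# Count states deep inside the bulk gap (the gap edge sits at |t1 - t2| = 0.5)
n_gap_topo = int(np.sum(np.abs(E_topo) < 0.25))
n_gap_triv = int(np.sum(np.abs(E_triv) < 0.25))
# The topological chain hosts two near-zero-energy edge states; the trivial one hosts none.
```

Same machinery as the hydrogen atom’s radial equation, arguably fewer nightmares.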

With that, let me take a shot at the standard way quantum mechanics is taught. Roughly, a quantum class goes like this: wave-matter duality; SE; free particle; box; harmonic oscillator; spin; angular momentum; hydrogen atom. This is a good program for atomic physics, and possibly field theory. But by and large, this is the quantum mechanics of vacuum. What about the quantum mechanics of matter? Is the Feynman path integral really more important than electron waves in solids? All physics is beautiful. But can’t Feynman wait while we teach tight-binding models?

And I’ll stop here, before I get started on hands-on labs, as well as the fragmented nature of our programs.

Question to you all out there: Suppose we go and modernize (no quotes) our physics program. What should we add? What should we take away? And we all agree – all physics is Beautiful! I’m sure I have my blind spots, so please comment!


I would pour them on the carpet, some weekend afternoons. I’d inherited a hodgepodge: The beads’ sizes, colors, shapes, trimmings, and craftsmanship varied. No property divided the beads into families whose members looked like they belonged together. But divide the beads I tried. I might classify them by color, then subdivide classes by shape. The color and shape groupings precluded me from grouping by size. But, by loosening my original classification and combining members from two classes, I might incorporate trimmings into the categorization. I’d push my classification scheme as far as I could. Then, I’d rake the beads together and reorganize them according to different principles.

Why have I pursued theoretical physics? many people ask. I have many answers. They include “Because I adored organizing craft supplies at age eight.” I craft and organize ideas.

I’ve blogged about the *out-of-time-ordered correlator* (OTOC), a signature of how quantum information spreads throughout a many-particle system. Experimentalists want to measure the OTOC, to learn how information spreads. But measuring the OTOC requires tight control over many quantum particles.
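One common form of the OTOC is $F(t) = \langle W(t)^\dagger V^\dagger W(t) V \rangle$, with $W(t) = e^{iHt} W e^{-iHt}$. For a toy system, computing it takes a few lines; the four-spin Ising chain and the infinite-temperature average below are invented stand-ins for a real experiment:

```python
import numpy as np
from functools import reduce

def kron_all(ops):
    return reduce(np.kron, ops)

n = 4  # spins
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, j):
    """Embed a single-spin operator at site j of the chain."""
    return kron_all([op if k == j else I2 for k in range(n)])

# A nonintegrable Ising chain: ZZ couplings plus transverse and longitudinal fields
H = sum(site_op(sz, j) @ site_op(sz, j + 1) for j in range(n - 1))
H = H + sum(1.05 * site_op(sx, j) + 0.5 * site_op(sz, j) for j in range(n))

vals, vecs = np.linalg.eigh(H)

def otoc(t, W, V):
    """Infinite-temperature OTOC F(t) = Tr[W(t) V W(t) V] / 2^n,
    for Hermitian, unitary W and V (e.g. Pauli operators)."""
    U = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T
    Wt = U.conj().T @ W @ U
    return np.trace(Wt @ V @ Wt @ V).real / 2**n

W, V = site_op(sz, 0), site_op(sz, n - 1)
# At t = 0 the operators act on different sites, so they commute and F(0) = 1;
# as information spreads, the commutator grows and F(t) decays.
assert abs(otoc(0.0, W, V) - 1.0) < 1e-10
```

The experimental challenge is precisely that nature doesn’t hand you `Wt`: measuring $F(t)$ requires effectively reversing the dynamics, which is what the measurement schemes above compete to accomplish.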

I proposed a scheme for measuring the OTOC, with help from Chapman University physicist Justin Dressel. The scheme involves weak measurements. Weak measurements barely disturb the systems measured. (Most measurements of quantum systems disturb the measured systems. So intuited Werner Heisenberg when formulating his uncertainty principle.)

I had little hope for the weak-measurement scheme’s practicality. Consider the stereotypical experimentalist’s response to a stereotypical experimental proposal by a theorist: *Oh, sure, we can implement that—in thirty years. Maybe. If the pace of technological development doubles.* I expected to file the weak-measurement proposal in the “unfeasible” category.

But experimentalists started collaring me. The scheme sounds reasonable, they said. How many trials would one have to perform? Did the proposal require *ancillas*, helper systems used to control the measured system? Must each ancilla influence the whole measured system, or could an ancilla interact with just one particle? How did this proposal compare with alternatives?

I met with a cavity-QED^{2} experimentalist and a cold-atoms expert. I talked with postdocs over Skype, with heads of labs at Caltech, with grad students in Taiwan, and with John Preskill in his office. I questioned an NMR^{3} experimentalist over lunch and fielded superconducting-qubit^{4} questions in the sunshine. I read papers, reread papers, and powwowed with Justin.

I wouldn’t have managed half so well without Justin and without Brian Swingle. Brian and coauthors proposed the first OTOC-measurement scheme. He reached out after finding my first OTOC paper.

According to that paper, the OTOC is a moment of a quasiprobability.^{5} How does that quasiprobability look, we wondered? How does it behave? What properties does it have? Our answers appear in a paper we released with Justin this month. We calculate the quasiprobability in two examples, prove properties of the quasiprobability, and argue that the OTOC motivates generalizations of quasiprobability theory. We also enhance the weak-measurement scheme and analyze it.

Amidst that analysis, in a 10 x 6 table, we classify glass beads.

We inventoried our experimental conversations and distilled them. We culled measurement-scheme features analogous to bead size, color, and shape. Each property labels a row in the table. Each measurement scheme labels a column. Each scheme has, I learned, gold flecks and dents, hues and mottling, an angle at which it catches the light.

I’ve kept most of the glass beads that fascinated me at age eight. Some of the beads have dispersed to necklaces, picture frames, and eyeglass leashes. I moved the remnants, a few years ago, to a compartmentalized box. Doesn’t it resemble the table?

*That’s* why I work at the IQIM.

^{1}I fiddled in a home laboratory, too, in a garage. But I lived across the street from that garage. I lived two rooms from an arts-and-crafts box.

^{2}Cavity QED consists of light interacting with atoms in a box.

^{3}Lots of nuclei manipulated with magnetic fields. “NMR” stands for “nuclear magnetic resonance.” MRI machines, used to scan brains, rely on NMR.

^{4}Superconducting circuits are tiny, cold quantum circuits.

^{5}A quasiprobability resembles a probability but behaves more oddly: Probabilities range between zero and one; quasiprobabilities can dip below zero. Think of a moment as like an average.

*With thanks to all who questioned me; to all who answered questions of mine; to my wonderful coauthors; and to my parents, who stocked the crafts box.*


Over the past two years, I have been obsessively trying to understand this profound perspective more rigorously. Recently, John Preskill and I took a further step in this direction in the paper *Quantum code properties from holographic geometries*. In it, we make progress in interpreting features of the holographic approach to quantum gravity in terms of quantum-information constructs.

In this post I would like to present some context for this work through analogies which hopefully help intuitively convey the general ideas. While still containing some technical content, this post is not likely to satisfy those readers seeking a precise in-depth presentation. To you I can only recommend the masterfully delivered lecture notes on gravity and entanglement by Mark Van Raamsdonk.

Of all the concepts needed to explain emergent spacetime, maybe the most difficult is that of *quantum entanglement*. While the word seems to convey some kind of string wound up in a complicated way, it is actually a quality which may describe information in quantum mechanical systems. In particular, it applies to a system for which we have a complete description as a whole, but are only capable of describing certain statistical properties of its parts. In other words, our knowledge of the whole loses predictive power when we are only concerned with the parts. *Entanglement entropy* is a measure of information which quantifies this.
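The “loss of predictive power” can be computed directly in the smallest possible example. A Bell pair is completely specified as a whole, yet each half alone is a coin flip, carrying one full bit of entanglement entropy (a minimal sketch in NumPy):

```python
import numpy as np

# Bell state |phi+> = (|00> + |11>) / sqrt(2): a complete description of the pair
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Density matrix of the pair, reshaped so each qubit gets its own indices
rho_full = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)

# Reduced state of the first qubit: trace out the second
rho_A = np.einsum('ijkj->ik', rho_full)

# Von Neumann entropy S = -Tr(rho log2 rho), in bits
evals = np.linalg.eigvalsh(rho_A)
S = -sum(p * np.log2(p) for p in evals if p > 1e-12)
# rho_A is maximally mixed (a fair coin), so S = 1 bit exactly
```

The whole state is pure and fully known; the part is as random as a part can be. That gap is what entanglement entropy measures.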

While our metaphor for entanglement is quite crude, it will serve the purpose of this post. Namely, to illustrate one of the driving premises for the holographic approach to quantum gravity, that the very structure of spacetime is *emergent* and built up from entanglement entropy.

But let us bring back our metaphors and try to convey the content of this identification. For this, we resort to the unlikely worlds of knitting and crochet. Indeed, by a planned combination of individual loops and stitches, these traditional crafts are capable of approximating any kind of surface (2D Riemannian surface would be the technical term).

Here I have presented some examples with uniform curvature R: flat in green, positive curvature (ball) in yellow and negative curvature (coral reef) in purple. While actual practitioners may be more interested in getting the shape right on hats and socks for loved ones, for us the point is that if we take a step back, these objects built of simple loops, hooks and stitches could end up looking a lot like the smooth surfaces that a physicist might like to use to describe 2D space. This is cute, but can we push this metaphor even further?

Well, first of all, although the pictures above are only representing 2D surfaces, we can expect that a similar approach should allow approximating 3D and even higher dimensional objects (again the technical term is *Riemannian manifolds*). It would just make things much harder to present in a picture. These woolen structures are, in fact, quite reminiscent of tensor networks, a modern mathematical construct widely used in the field of quantum information. There too, we combine basic building blocks (tensors) through simple operations (tensor index contraction) to build a more complex composite object. In the tensor network world, the structure of the network (how its nodes are connected to other nodes) generically defines the entanglement structure of the resulting object.

Roughly speaking, tensor networks are ingenious ways of encoding (quantum) inputs into (quantum) outputs. In particular, if you enter some input at the *boundary* of your tensor network, the tensors do the work of processing that information throughout the network so that if you ask for an output at any one of the nodes in the *bulk* of the tensor network, you get the right encoded answer. In other words, the information we input into the tensor network begins its journey at the dangling edges found at the boundary of the network and travels through the bulk edges by exploiting them as information bridges between the nodes of the network.

In the figure representing the cat’s cradle, these dangling input edges can be thought of as the fingers holding the wool. Now, if we partition these edges into two disjoint sets (say, the fingers on the left hand and the fingers on the right hand, respectively), there will be some amount of entanglement between them. How much? In general, we cannot say, but under certain assumptions we find that it is proportional to the minimum cut through the network. Imagine you had an incredible number of fingers holding your wool structure. Now separate these fingers arbitrarily into two subsets *L* and *R* (we may call them left hand and right hand, although there is nothing right or left handy about them). By pulling left hand and right hand apart, the wool might stretch until at some point it breaks. How many threads will break? Well, the question is analogous to the entanglement one. We might expect, however, that a minimal number of threads break such that each hand can go its own way. This is what we call the *minimal cut*. In tensor networks, entanglement entropy is always bounded above by such a minimal cut and it has been confirmed that under certain conditions entanglement also reaches, or approximates, this bound. In this respect, our wool analogy seems to be working out.
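The minimal cut is easy to make concrete by brute force. On a toy network (a graph invented purely for illustration), assign every interior node to the left or the right side, and count the fewest threads that must break:

```python
from itertools import product

# A toy tensor network: nodes 0-5, edges are the "threads."
# Nodes 0 and 1 hold the left-hand legs; nodes 4 and 5 the right-hand legs.
edges = [(0, 2), (1, 2), (2, 3), (2, 4), (3, 5), (3, 4)]
left, right = {0, 1}, {4, 5}
free = [v for v in range(6) if v not in left | right]

def min_cut():
    """Brute-force minimal cut separating the left legs from the right legs."""
    best = len(edges)
    for assignment in product([0, 1], repeat=len(free)):
        side = {**{v: 0 for v in left}, **{v: 1 for v in right}}
        side.update(dict(zip(free, assignment)))
        crossing = sum(side[a] != side[b] for a, b in edges)
        best = min(best, crossing)
    return best

# Entanglement entropy between left and right legs is bounded above by
# min_cut() threads, in units of the log of the bond dimension.
```

Pulling the hands apart in every possible way and keeping the gentlest tear: that is all the computation does, and for small networks it is enough.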

*Holography,* in the context of black holes, was sparked by a profound observation of Jacob Bekenstein and Stephen Hawking, which identified the surface area of a black hole horizon (in Planck units) with its entropy, or information content:

$$S_{BH} = \frac{k_B A}{4 l_P^2}.$$

Here, $S_{BH}$ is the entropy associated to the black hole, $A$ is its horizon area, $l_P$ is the Planck length, and $k_B$ is Boltzmann’s constant.

Why is this equation such a big deal? Well, there are many reasons, but let me emphasize one. For theoretical physicists, it is common to get rid of physical units by relating them through universal constants. For example, the theory of special relativity allows us to identify units of distance with units of time through the equation $d = ct$, using the speed of light *c*. General relativity further allows us to identify mass and energy through the famous $E = mc^2$. By considering the Bekenstein-Hawking entropy, units of area are being swept away altogether! They are being identified with dimensionless units of information (one square meter is roughly $10^{69}$ bits according to the Bousso bound).
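The conversion from area to information is one line of arithmetic. A sketch, taking the CODATA value of the Planck length:

```python
import math

l_P = 1.616255e-35        # Planck length, in meters (CODATA)
A = 1.0                   # one square meter

# Bekenstein-Hawking: S = k_B * A / (4 l_P^2), so S / k_B = A / (4 l_P^2) nats
nats = A / (4 * l_P**2)
bits = nats / math.log(2)  # convert nats to bits

# Roughly 1.4e69 bits per square meter
```

Sixty-nine orders of magnitude between a square meter and a bit: that exponent is the playground for everything that follows.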

Initially, the identification of area and information was proposed to reconcile black holes with the laws of thermodynamics. However, this has turned out to be the main hint leading to the *holographic principle, *wherein states that describe a certain volume of space in a theory of quantum gravity can also be thought of as being represented at the lower dimensional boundary of the given volume. This idea, put forth by Gerard ‘t Hooft, was later given a more precise interpretation by Leonard Susskind and subsequently by Juan Maldacena through the celebrated AdS/CFT correspondence. I will not dwell in the details of the AdS/CFT correspondence as I am not an expert myself. However, this correspondence gave S. Ryu and T. Takayanagi (RT) a setting to vastly generalize the identification of area as an information quantity. They proposed identifying the area of minimal surfaces on the bulk (remember the minimal cut?) with *entanglement entropy *in the boundary theory.

Roughly speaking, if we were to split the boundary into two regions, left (*L*) and right (*R*), it should be possible to also partition the bulk in a way that each piece of the bulk has either *L* or *R* in its boundary. Ryu and Takayanagi proposed that the area of the smallest surface which splits the bulk in this way would be proportional to the entanglement entropy between the two parts:

$$S(L) = \frac{k_B \, \mathrm{Area}(\gamma_L)}{4 l_P^2},$$

where $\gamma_L$ is the minimal bulk surface whose boundary matches that of *L*.

It turns out that some quantum field theory states admit such a *geometric interpretation*. Many high energy theory colleagues have ideas about when this is possible and what are the necessary conditions. By far the best studied setting for this holographic duality is AdS/CFT, where Ryu and Takayanagi first checked their proposal. Here, the entanglement features of the lowest energy state of a conformal field theory are matched to surfaces in a hyperbolic space (like the purple crochet and the tensor network presented). However, other geometries have been shown to match the RT prediction with respect to the entanglement properties of different states. The key point here is that the boundary states do not have any geometry *per se*. They just manifest different amounts of entanglement when partitioned in different ways.

The holographic program suggests that bulk geometry *emerges* from the entanglement properties of the boundary state. Spacetime materializes from the information structure of the boundary instead of being a fundamental structure as in general relativity. Am I saying that we should strip everything physical, including space in favor of ones and zeros? Well, first of all, it is not just me who is pushing this approach. Secondly, no one is claiming that we should start making all our physical reasoning in terms of ones and zeros.

Let me give an example. We know that the sea is composed mostly of water molecules. The observation of waves that travel, superpose and break can be labeled as an emergent phenomenon. However, to a surfer, a wave is much more real than the water molecules composing it, and the fact that it is emergent is of no practical consequence when trying to predict where a wave will break. A proficient physicist, armed with tools from statistical mechanics (there are more than $10^{25}$ molecules per liter), could probably derive a macroscopic model for waves from the microscopic theory of particles. In the process of learning what the surfer already understood, he would identify elements of the microscopic theory which become irrelevant for such questions. Such details could be whether the sea has an odd or even number of molecules or the presence of a few fish.

In the case of holography, each square meter corresponds to $10^{69}$ bits of entanglement. We don’t even have words to describe anything close to this outrageously large exponent, which leaves plenty of room for emergence. Even taking all the information on the internet – estimated at $10^{22}$ bits (10 zettabits) – we can’t even match the area equivalent of the smallest known particle. The fact that there are so many orders of magnitude makes it difficult to extrapolate our understanding of the geometric domain to the information domain and vice versa. This is precisely the realm where techniques such as those from statistical mechanics successfully get rid of irrelevant details.

High energy theorists and people with a background in general relativity tend to picture things in a continuum language. For example, part of their daily butter are Riemannian or Lorentzian manifolds which are respectively used to describe space and spacetime. In contrast, most of information theory is usually applied to deal with discrete elements such as bits, elementary circuit gates, etc. Nevertheless, I believe it is fruitful to straddle this cultural divide to the benefit of both parties. In a way, the convergence we are seeking is analogous to the one achieved by the kinetic theory of gases, which allowed the unification of thermodynamics with classical mechanics.

The remarkable success of the geometric RT prediction to different bulk geometries such as the BTZ black holes and the generality of the entanglement result for its random tensor network cousins emboldened us to take the RT prescription beyond its usual domain of application. We considered applying it to *arbitrary Riemannian manifolds* that are space-like and that can be approximated by a smoothly knit fabric.

Furthermore, we went on to consider the implications that such assumptions would have when the corresponding geometries are interpreted as error-correcting codes. In fact, our work elaborates on the perspective of A. Almheiri, X. Dong and D. Harlow (ADH) where quantum error-correcting code properties of AdS/CFT were laid out; it is hard to overemphasize the influence of this work. Our work considers general geometries and identifies properties a code associated to a specific holographic geometry should satisfy.

In the cat’s cradle/fabric metaphor for holography, the fingers at the boundary constitute the boundary theory without gravity and the resulting fabric represents a bulk geometry in the corresponding bulk gravitational theory. Bulk observables may be represented in different ways on the boundary, but not arbitrarily. This raises the question of which parts of the bulk correspond to which parts of the boundary. In general, there is not a one-to-one mapping. However, if we partition the boundary into two parts *L* and *R*, we expect to be able to split the bulk into two corresponding regions *E(L)* and *E(R)*. This is the content of the *entanglement wedge hypothesis*, which is our other main assumption. In our metaphor, one could imagine that we pull the left fingers up and the right fingers down (taking care not to get hurt). At some point, the fabric breaks through into two pieces. In the setting we are concerned with, these pieces maintain part of the original structure, which tells us which bulk information was available in one piece of the boundary and which part was available in the other.

Although we do not produce new explicit examples of such codes, we worked our way towards developing a language which translates between the holographic/geometric perspective and the coding theory perspective. We specifically build upon the language of operator algebra quantum error correction (OAQEC) which allows individually focusing on different parts of the logical message. In doing so we identified several coding theoretic bounds and quantities, some of which we found to be applicable beyond the context of holography. A particularly noteworthy one is a strengthening of the quantum Singleton bound, which defines a trade-off between how much logical information can be packed in a code, how much physical space is used for encoding this information and how well-protected the information is from erasures.

One of the central observations of ADH highlights how quantum codes have properties from both classical error-correcting codes and secret sharing schemes. On the one hand, logical encoded information should be protected from loss of small parts of the carrier, a property quantified by the code *distance*. On the other hand, the logical encoded information should not become accessible until a sufficiently large part of the carrier is available to us. This is quantified by the threshold of a corresponding secret sharing scheme. We call this quantity *price* as it identifies how much of the carrier we would need before someone could reconstruct the message faithfully. In general, it is hard to balance these two competing requirements; a statement which can be made rigorous. This kind of complementarity has long been recognized in quantum cryptography. However, we found that according to holographic predictions, codes admitting a geometric interpretation achieve a remarkable optimality in the trade-off between these features.
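As a reference point for this trade-off (these are the standard bounds, which our paper strengthens): for a quantum code encoding $k$ logical qubits into $n$ physical qubits with distance $d$ and price $p$,

```latex
k \le n - 2(d - 1) \quad \text{(quantum Singleton bound)},
\qquad
p \le n - d + 1 .
```

The second inequality holds because a code of distance $d$ tolerates any $d-1$ erasures, so the message can be reconstructed from any $n - d + 1$ carriers. Large distance and small price pull in the same direction here, while secrecy pulls the other way; balancing the two is the hard part.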

Our exploration of alternative geometries is rewarded by the following guidelines:

- Hyperbolic geometries predict a fixed polynomial scaling for code distance. This is illustrated by a feature we call *uberholography*. We use this name because there is an *excess* of holography, wherein bulk observables can be represented on intricate subsets of the boundary that have fractal dimension even smaller than that of the boundary itself.
- Hyperbolic geometries suggest the possibility of decoding procedures which are local on the boundary geometry. This property may be connected to the locality of the corresponding boundary field theory.
- Flat and positive curvature geometries may lead to codes with better parameters in terms of distance and rates (ratio of logical information to physical information). A hemisphere reaches optimum parameters, saturating coding bounds.

Present-day quantum computers are far from the number of qubits required to invoke an emergent geometry. Nevertheless, it is exhilarating to take a step back and consider how the properties of codes, and of information in general, may be interpreted geometrically. Conversely, I find that the quantum-code language we adapt to the context of holography might eventually serve as a useful tool for distinguishing which boundary features are relevant or irrelevant to the emergent properties of the holographic dual. Ours is but one contribution in a very active field. However, the one thing I am certain about is that these are exciting times to be doing physics.


Almost 10 years ago I visited the Perimeter Institute to attend a conference, and by chance was assigned an office shared with Patrick Hayden. Patrick was a professor at McGill at that time, but I knew him well from his years at Caltech as a Sherman Fairchild Prize Fellow, and deeply respected him. Our proximity that week ignited a collaboration which turned out to be one of the most satisfying of my career.

To my surprise, Patrick revealed he had been thinking about black holes, a long-time passion of mine but not previously a research interest of his, and that he had already arrived at a startling insight which would be central to the paper we later wrote together. Patrick wondered what would happen if Alice possessed a black hole which happened to be highly entangled with a quantum computer held by Bob. He imagined Alice throwing a qubit into the black hole, after which Bob would collect the black hole’s Hawking radiation and feed it into his quantum computer for processing. Drawing on his knowledge about quantum communication through noisy channels, Patrick argued that Bob would only need to grab a few qubits from the radiation in order to salvage Alice’s qubit successfully by doing an appropriate quantum computation.

This idea got my adrenaline pumping, stirring a vigorous dialogue. Patrick had initially assumed that the subsystem of the black hole ejected in the Hawking radiation had been randomly chosen, but we eventually decided (based on a simple picture of the quantum computation performed by the black hole) that it should take a time scaling like M log M (where M is the black hole mass expressed in Planck units) for Alice’s qubit to get scrambled up with the rest of her black hole. Only after this scrambling time would her qubit leak out in the Hawking radiation. This time is actually shockingly short, about a millisecond for a solar mass black hole. The best previous estimate for how long it would take for Alice’s qubit to emerge (scaling like M^{3}), had been about 10^{67} years.
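The two time scales can be checked on the back of an envelope. A quick sketch (physical constants from memory; order-one-to-order-10^4 prefactors are dropped, which is why the M^3 figure here lands a few orders of magnitude below the 10^67 years quoted above):

```python
import math

# Back-of-the-envelope time scales for a solar-mass black hole.
# Only the scalings M*log(M) and M^3 should be trusted; prefactors are dropped.
M_SUN_KG = 1.989e30
PLANCK_MASS_KG = 2.176e-8
PLANCK_TIME_S = 5.391e-44
SECONDS_PER_YEAR = 3.156e7

M = M_SUN_KG / PLANCK_MASS_KG              # solar mass in Planck units, ~9e37

t_scramble_s = M * math.log(M) * PLANCK_TIME_S            # scrambling time
t_cubed_yr = M ** 3 * PLANCK_TIME_S / SECONDS_PER_YEAR    # old M^3 estimate

print(f"scrambling time ~ {t_scramble_s:.0e} s")   # a fraction of a millisecond
print(f"M^3 time scale  ~ {t_cubed_yr:.0e} yr")    # ~10^63 yr before prefactors
```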

This short time scale aroused memories of discussions with Lenny Susskind back in 1993, vividly recreated in Lenny’s engaging book *The Black Hole War*. Because of the black hole’s peculiar geometry, it seemed conceivable that Bob could distill a copy of Alice’s qubit from the Hawking radiation and then leap into the black hole, joining Alice, who could then toss her copy of the qubit to Bob. It disturbed me that Bob would then hold two perfect copies of Alice’s qubit; I was a quantum information novice at the time, but I knew enough to realize that making a perfect clone of a qubit would violate the rules of quantum mechanics. I proposed to Lenny a possible resolution of this “cloning puzzle”: If Bob has to wait outside the black hole for too long in order to distill Alice’s qubit, then when he finally jumps in it may be too late for Alice’s qubit to catch up to Bob inside the black hole before Bob is destroyed by the powerful gravitational forces inside. Revisiting that scenario, I realized that the scrambling time M log M, though short, was just barely long enough for the story to be self-consistent. It was gratifying that things seemed to fit together so nicely, as though a deep truth were being affirmed.

Patrick and I viewed our paper as a welcome opportunity to draw the quantum information and quantum gravity communities closer together, and we wrote it with both audiences in mind. We had fun writing it, adding rhetorical flourishes which we hoped would draw in readers who might otherwise be put off by unfamiliar ideas and terminology.

In their recent work, Juan and his collaborators propose a different way to think about the problem. They stripped down our Hawking radiation decoding scenario to a model so simple that it can be analyzed quite explicitly, yielding a pleasing result. What had worried me so much was that there seemed to be two copies of the same qubit, one carried into the black hole by Alice and the other residing outside the black hole in the Hawking radiation. I was alarmed by the prospect of a rendezvous of the two copies. Maldacena et al. argue that my concern was based on a misconception. There is just one copy, either inside the black hole or outside, but not both. In effect, as Bob extracts his copy of the qubit on the outside, he destroys Alice’s copy on the inside!

To reach this conclusion, several ideas are invoked. First, we analyze the problem in the case where we understand quantum gravity best, the case of a negatively curved spacetime called anti-de Sitter space. In effect, this trick allows us to trap a black hole inside a bottle, which is very advantageous because we can study the physics of the black hole by considering what happens on the walls of the bottle. Second, we envision Bob’s quantum computer as another black hole which is entangled with Alice’s black hole. When two black holes in anti-de Sitter space are entangled, the resulting geometry has a “wormhole” which connects together the interiors of the two black holes. Third, we choose the entangled pair of black holes to be in a very special quantum state, called the “thermofield double” state. This just means that the wormhole connecting the black holes is as short as possible. Fourth, to make the analysis even simpler, we suppose there is just one spatial dimension, which makes it easier to draw a picture of the spacetime. Now each wall of the bottle is just a point in space, with the left wall lying outside Bob’s side of the wormhole, and the right wall lying outside Alice’s side.

An important property of the wormhole is that it is not traversable. That is, when Alice throws her qubit into her black hole and it enters her end of the wormhole, the qubit cannot emerge from the other end. Instead it is stuck inside, unable to get out on either Alice’s side or Bob’s side. Most ways of manipulating the black holes from the outside would just make the wormhole longer and exacerbate the situation, but in a clever recent paper Ping Gao, Daniel Jafferis, and Aron Wall pointed out an exception. We can imagine a quantum wire connecting the left wall and right wall, which simulates a process in which Bob extracts a small amount of Hawking radiation from the right wall (that is, from Alice’s black hole), and carefully deposits it on the left wall (inserting it into Bob’s quantum computer). Gao, Jafferis, and Wall find that this procedure, by altering the trajectories of Alice’s and Bob’s walls, can actually make the wormhole traversable!

This picture gives us a beautiful geometric interpretation of the decoding protocol that Patrick and I had described. It is the interaction between Alice’s wall and Bob’s wall that brings Alice’s qubit within Bob’s grasp. By allowing Alice’s qubit to reach Bob at the other end of the wormhole, that interaction suffices to perform Bob’s decoding task, which is especially easy in this case because Bob’s quantum computer was connected to Alice’s black hole by a short wormhole when she threw her qubit inside.

And what if Bob conducts his daring experiment, in which he decodes Alice’s qubit while still outside the black hole, and then jumps into the black hole to check whether the same qubit is also still inside? The above spacetime diagram contrasts two possible outcomes of Bob’s experiment. After entering the black hole, Alice might throw her qubit toward Bob so he can catch it inside the black hole. But if she does, then the qubit never reaches Bob’s quantum computer, and he won’t be able to decode it from the outside. On the other hand, Alice might allow her qubit to reach Bob’s quantum computer at the other end of the (now traversable) wormhole. But if she does, Bob won’t find the qubit when he enters the black hole. Either way, there is just one copy of the qubit, and no way to clone it. I shouldn’t have been so worried!

Granted, we have only described what happens in an oversimplified model of a black hole, but the lessons learned may be more broadly applicable. The case for broader applicability rests on a highly speculative idea, what Maldacena and Susskind called the ER=EPR conjecture, which I wrote about in this earlier blog post. One consequence of the conjecture is that a black hole highly entangled with a quantum computer is equivalent, after a transformation acting only on the computer, to two black holes connected by a short wormhole (though it might be difficult to actually execute that transformation). The insights of Gao-Jafferis-Wall and Maldacena-Stanford-Yang, together with the ER=EPR viewpoint, indicate that we don’t have to worry about the same quantum information being in two places at once. Quantum mechanics can survive the attack of the clones. Whew!

Thanks to Juan, Douglas, and Lenny for ongoing discussions and correspondence which have helped me to understand their ideas (including a lucid explanation from Douglas at our Caltech group meeting last Wednesday). This story is still unfolding and there will be more to say. These are exciting times!


Quantum thermodynamicist Nelly Ng and I drove to the Taipei airport early. News from Air China curtailed our self-congratulations: China’s military was running an operation near Shanghai. Commercial planes couldn’t land. I’d miss my flight to LA.

An operation?

Quantum information theorists use a mindset called *operationalism*. We envision experimentalists in separate labs. Call the experimentalists Alice, Bob, and Eve (ABE). We tell stories about ABE to formulate and analyze problems. Which quantum states do ABE prepare? How do ABE *evolve*, or manipulate, the states? Which measurements do ABE perform? Do they communicate about the measurements’ outcomes?

Operationalism concretizes ideas. The outlook checks us from drifting into philosophy and into abstractions difficult to apply physics tools to.^{2} Operationalism infuses our language, our framing of problems, and our mathematical proofs.

Experimentalists can perform some operations more easily than others. Suppose that Alice controls the magnets, lasers, and photodetectors in her lab; Bob controls the equipment in his; and Eve controls the equipment in hers. Each experimentalist can perform *local operations* (LO). Suppose that Alice, Bob, and Eve can talk on the phone and send emails. They exchange *classical communications* (CC).

You can’t generate entanglement using LOCC. Entanglement consists of strong correlations that quantum systems can share and that classical systems can’t. A quantum system in Alice’s lab can hold more information about a quantum system of Bob’s than any classical system could. We must create and control entanglement to operate quantum computers. Creating and controlling entanglement poses challenges. Hence quantum information scientists often model easy-to-perform operations with LOCC.

Suppose that some experimentalist Charlie loans entangled quantum systems to Alice, Bob, and Eve. How efficiently can ABE compute some quantity, exchange quantum messages, or perform other information-processing tasks, using that entanglement? Such questions underlie quantum information theory.

Local operations.

Nelly and I performed those, trying to finagle me to LA. I inquired at Air China’s check-in desk in English. Nelly inquired in Mandarin. An employee smiled sadly at each of us.

We branched out into classical communications. I called Expedia (“No, I do not want to fly to Manila”), United Airlines (“No flights for two days?”), my credit-card company, Air China’s American reservations office, Air China’s Chinese reservations office, and Air China’s Taipei reservations office. I called AT&T to ascertain why I couldn’t reach Air China (“Yes, please connect me to the airline. Could you tell me the number first? I’ll need to dial it after you connect me and the call is then dropped”).

As I called, Nelly emailed. She alerted Bob, aka Janet (Ling-Yan) Hung, who hosted half the workshop at Fudan University in Shanghai. Nelly emailed Eve, aka Feng-Li Lin, who hosted half the workshop at National Taiwan University in Taipei. Janet twiddled the magnets in her lab (investigated travel funding), and Feng-Li cooled a refrigerator in his.

ABE can process information only so efficiently, using LOCC. The time crept from 1:00 PM to 3:30.

What could we have accomplished with *quantum communication?* Using LOCC, Alice can manipulate quantum states (like an electron’s orientation) in her lab. She can send nonquantum messages (like “My flight is delayed”) to Bob. She can’t send quantum information (like an electron’s orientation).

Alice and Bob can ape quantum communication, given entanglement. Suppose that Charlie strongly correlates two electrons. Suppose that Charlie gives Alice one electron and gives Bob the other. Alice can send Bob one qubit (one unit of quantum information). We call that protocol *quantum teleportation*.
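The protocol fits in a few lines of linear algebra. Here is a minimal statevector sketch (plain numpy, no quantum library assumed): Alice performs local operations (a CNOT and a Hadamard, then a measurement), sends Bob two classical bits, and Bob's local correction leaves him holding Alice's qubit.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random single-qubit state |psi> for Alice to teleport.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

def apply(gate, qubits, state, n=3):
    """Apply a gate to the given qubits of an n-qubit statevector."""
    v = state.reshape([2] * n)
    v = np.moveaxis(v, qubits, list(range(len(qubits))))
    shape = v.shape
    v = (gate @ v.reshape(2 ** len(qubits), -1)).reshape(shape)
    return np.moveaxis(v, list(range(len(qubits))), qubits).reshape(2 ** n)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

# Qubits: q0 = Alice's message, q1 = Alice's half of the pair, q2 = Bob's half.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # |Phi+> shared by Alice and Bob
state = np.kron(psi, bell)

# Alice's local operations: a Bell-basis measurement via CNOT + Hadamard.
state = apply(CNOT, [0, 1], state)
state = apply(H, [0], state)

# Loop over Alice's four possible outcomes (the two classical bits she sends).
fidelities = []
for m0 in (0, 1):
    for m1 in (0, 1):
        branch = state.reshape(2, 2, 2)[m0, m1]    # Bob's unnormalized state
        bob = branch / np.linalg.norm(branch)
        # Bob's local correction, conditioned on the two classical bits:
        bob = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ bob
        fidelities.append(abs(np.vdot(psi, bob)))
print(fidelities)   # all ~1.0: Bob holds |psi> for every measurement outcome
```

Note the division of labor: the entangled pair does the quantum work, while only two classical bits travel from Alice to Bob.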

Suppose that air-traffic control had loaned entanglement to Janet, Feng-Li, and me. Could we have finagled me to LA quickly?

Quantum teleportation differs from human teleportation.

We didn’t need teleportation. Feng-Li arranged for me to visit Taiwan’s National Center for Theoretical Sciences (NCTS) for two days. Air China agreed to return me to Shanghai afterward. United would fly me to LA, thanks to help from Janet. Nelly rescued my luggage from leaving on the wrong flight.

Would I rather have teleported? I would have avoided a bushel of stress. But I wouldn’t have learned from Janet about Chinese science funding, wouldn’t have heard Feng-Li’s views about gravitational waves, wouldn’t have glimpsed Taiwanese countryside flitting past the train we rode to the NCTS.

According to some metrics, classical resources outperform quantum.

*The workshop organizers have generously released videos of the lectures. My lecture about quantum chaos and fluctuation relations appears here and here. More talks appear here.*

*With gratitude to Janet Hung, Feng-Li Lin, and Nelly Ng; to Fudan University, National Taiwan University, and Taiwan’s National Center for Theoretical Sciences for their hospitality; and to Xiao Yu for administrative support.*

*Glossary and other clarifications:*

^{1}Field theory describes subatomic particles and light.

^{2}Physics and philosophy enrich each other. But I haven’t trained in philosophy. I benefit from differentiating physics problems that I’ve equipped to solve from philosophy problems that I haven’t.


I could recount numerous anecdotes that exemplify my encounter with the frighteningly intelligent and vivid imagination of the people at LIGO with whom I had the great pleasure of working – Prof. Rana X. Adhikari, Maria Okounkova, Eric Quintero, Maximiliano Isi, Sarah Gossan, and Jameson Graef Rollins – but in the end it all boils down to a parable about fish.

Rana’s version, which he recounted to me at our first meeting, goes as follows: “There are these two young fish swimming along, and a scientist approaches the aquarium and proclaims, ‘We’ve finally discovered the true nature of water!’ And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, ‘What the hell is water?’” In David Foster Wallace’s more famous version, the scientist is not a scientist but an old fish, who greets them saying, “Morning, boys. How’s the water?”

The difference is not circumstantial. Foster Wallace’s version is an argument against “unconsciousness, the default setting, the rat race, the constant gnawing sense of having had, and lost, some infinite thing” – personified by the young fish – and an urgent call for awareness – personified by the old fish. But in Rana’s version, the matter is more hard-won: as long as they are fish, they haven’t the faintest apprehension of the very concept of water: even a wise old fish would fail to notice. In this adaptation, gaining awareness of that which is “so real and essential, so hidden in plain sight all around us, all the time” as Foster Wallace describes it, demands much more than just an effort in mindfulness. It demands imagining the unimaginable.

Albert Einstein once said that “Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, while imagination embraces the entire world, and all there ever will be to know and understand.” But the question remains of how far our imagination can reach, and where the radius ends for us in “what there ever will be to know and understand”, versus that which happens to be. My earlier remark about LIGO scientists’ being far-out does not at all refer to a speculative disposition, which would characterise amateur, anything-goes, over-the-edge pseudo-science. Rather, it refers to the high level of creativity that is demanded of physicists today, and to the untiring curiosity that drives them to expand the limits of that radius, despite all odds.

The possibility of imagination has become an increasingly animating thought within my currently ongoing project:

As an independent curator of contemporary art, I travelled to Caltech for a 6-week period of research, towards developing an exhibition that will invite the public to engage with some of the highly challenging implications around the concept of time in physics. In it, I identify LIGO’s breakthrough detection of gravitational waves as an unparalleled incentive by which to acquire – in broad cultural terms – a new sense of time that departs from the old and now wholly inadequate one. After LIGO’s announcement proved that time fluctuation not only happens, but that it happened *here*, to *us*, on a precise date and time, it is finally possible for a broader public to relate, however abstract some of the concepts from the field of physics may remain. More simply put: we can finally sense that the water is moving.[1]

One century after Einstein’s Theory of General Relativity, most people continue to hold a highly impoverished idea of the nature of time, despite it being perhaps the most fundamental element of our existence. For 100 years there was no blame or shame in this. Because within all possible changes to the three main components of the universe – space, time & energy – the fluctuation of time was always the only one that escaped our sensorial capacities, existing exclusively in our minds, and finding its fullest expression in mathematical language. If you don’t speak mathematics, time fluctuation remains impossible to grasp, and painful to imagine.

But on February 11th, 2016, this situation changed dramatically.

On this date, a televised announcement told the world of the first-ever sensory detection of time-fluctuation, made with the aid of the most sensitive machine ever to be built by mankind. Finally, we have sensorial access to variations in all components of the universe as we know it. What is more, we observe the non-static passage of time through *sound*, thereby connecting it to the most affective of our senses.

Of course, LIGO’s detection is limited to time fluctuation and doesn’t yet make other mind-bending behaviours of time observable. But this is only circumstantial. The key point is that we can take this initial leap, and that it loosens our feet from the cramp of Newtonian fixity. Once in this state, gambolling over to ideas about zero time tunnelling, non-causality, or the future determining the present, for instance, is far more plausible, and no longer painful but rather seductive, at least, perhaps, for the playful at heart.

Taking a slight off-road (to be re-routed in a moment): there is a common misconception about children’s allegedly free-spirited creativity. Watching someone aged between around 4 and 15 draw a figure will demonstrate quite clearly just how taut they really are, and that they apply strict schemes that follow reality as they see and learn to see it. Bodies consistently have eyes, mouths, noses, heads, rumps and limbs, correctly placed and in increasingly realistic colours. Ask them to depart from these conventions – “draw one eye on his forehead”, “make her face green” – like masters such as Pablo Picasso and Henri Matisse have done – and they’ll likely become very upset (young adolescents being particularly conservative, reaching the point of panic when challenged to shed consensus).

This is not to compare the lay public (including myself) to children, but to suggest that there’s no inborn capacity – the unaffected, ‘genius’ naïveté that the modernist movements of Primitivism, Art Brut and Outsider Art exalted – for developing a creativity that is of substance. Arriving at a consequential idea, in both art and physics, entails a great deal of acumen and is far from gratuitous, however whimsical the moment in which it sometimes appears. And it’s also to suggest that there’s a necessary process of acquaintance – the knowledge of something through experience – in taking a cognitive leap away from the seemingly obvious nature of reality. If there’s some truth in this, then LIGO’s expansion of our sensorial access to the fluctuation of time, together with artistic approaches that lift the remaining questions and ambiguities of spacetime onto a relational, experiential plane, lay fertile ground on which to begin to foster a new sense of time – on a broad cultural level – however slowly it unfolds.

The first iteration of this project will be an exhibition, to take place in Berlin, in July 2017. It will feature existing and newly commissioned works by established and upcoming artists from Los Angeles and Berlin, working in sound, installation and video, to stage a series of immersive environments that invite the viewers’ bodily interaction.

Though the full selection cannot be disclosed just yet, I would like here to provide a glimpse of two works-in-progress by artist-duo Evelina Domnitch & Dmitry Gelfand, whom I invited to Los Angeles to collaborate in my research with LIGO, and whose contribution has been of great value to the project.

For more details on the exhibition, please stay tuned, and be warmly welcome to visit Berlin in July!

Text & images: courtesy of the artists.

**ORBIHEDRON** | 2017

A dark vortex in the middle of a water-filled basin emits prismatic bursts of rotating light. Akin to a radiant ergosphere surrounding a spinning black hole, *Orbihedron* evokes the relativistic as well as quantum interpretation of gravity – the reconciliation of which is essential for unravelling black hole behaviour and the origins of the cosmos. Descending into the eye of the vortex, a white laser beam reaches an impassable singularity that casts a whirling circular shadow on the basin’s floor. The singularity lies at the bottom of a dimple on the water’s surface, the crown of the vortex, which acts as a concave lens focussing the laser beam along the horizon of the “black hole” shadow. Light is seemingly swallowed by the black hole in accordance with general relativity, yet leaks out as quantum theory predicts.

**ER = EPR** | 2017

Two co-rotating vortices, joined together via a slender vortical bridge, lethargically drift through a body of water. Light hitting the water’s surface transforms the vortex pair into a dynamic lens, projecting two entangled black holes encircled by shimmering halos. As soon as the “wormhole” link between the black holes rips apart, the vortices immediately dissipate, analogously to the collapse of a wave function. Connecting distant black holes or two sides of the same black hole, might wormholes be an example of cosmic-scale quantum entanglement? This mind-bending conjecture of Juan Maldacena and Leonard Susskind can be traced back to two iconoclastic papers from 1935. Previously thought to be unrelated (both by their authors and numerous generations of readers), one article, the legendary EPR (penned by Einstein, Podolsky and Rosen) engendered the concept of quantum entanglement or “spooky action at a distance”; and the second text theorised Einstein-Rosen (ER) bridges, later known as wormholes. Although the widely read EPR paper has led to the second quantum revolution, currently paving the way to quantum simulation and computation, ER has enjoyed very little readership. By equating ER to EPR, the formerly irreconcilable paradigms of physics have the potential to converge: the phenomenon of gravity is imagined in a quantum mechanical context. The theory further implies, according to Maldacena, that the undivided, “reliable structure of space-time is due to the ghostly features of entanglement”.

[1] I am here extending our capacity to sense to that of the technology itself, which indeed measured the warping of spacetime. However, in interpreting gravitational waves from a human frame of reference (moving nowhere near the speed of light at which gravitational waves travel), they would seem to be spatial. In fact, the elongation of space (a longer wavelength) directly implies that time slows down (a longer wave-period), so that the two are indistinguishable.

Isabel de Sena


The monster’s master, Dr. Eggman, has ginger mustachios and a body redolent of his name. He scoffs as the heroes congratulate themselves.

“Fools!” he cries, the pauses in his speech heightening the drama. “[That monster is] CHAOS…the GOD…of DE-STRUC-TION!” His cackle could put a Disney villain to shame.

Dr. Eggman’s outburst comes to mind when anyone asks what topic I’m working on.

“Chaos! And the flow of time, quantum theory, and the loss of information.”

Alexei Kitaev, a Caltech physicist, hooked me on chaos. I TAed his spring-2016 course. The registrar calls the course Ph 219c: Quantum Computation. I call the course Topics that Interest Alexei Kitaev.

“What do you plan to cover?” I asked at the end of winter term.

Topological quantum computation, Alexei replied. How you simulate Hamiltonians with quantum circuits. Or maybe…well, he was thinking of discussing black holes, information, and chaos.

If I’d had a tail, it would have wagged.

“What would you say about black holes?” I asked.

What if you pulled another double pendulum a hair’s breadth less far? You could let the pendulum swing, wait for a time, and freeze this pendulum. This pendulum would probably lie far from its brother. This pendulum would probably have been moving with a different speed than its brother, in a different direction, just before the freeze. The double pendulum’s motion changes loads if the initial conditions change slightly. This sensitivity to initial conditions characterizes classical chaos.
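You can watch this sensitivity in a few lines of code. Simulating a double pendulum takes some work, so here is a far simpler chaotic stand-in, the logistic map, which shows the same effect: two trajectories started a hair's breadth apart end up nowhere near each other.

```python
# Classical chaos in miniature: the logistic map x -> r*x*(1 - x), a much
# simpler system than the double pendulum, shows the same sensitivity.
r = 3.9                       # parameter value in the chaotic regime
x, y = 0.400000, 0.400001     # two initial conditions a hair's breadth apart

sep_max = 0.0
for step in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    sep_max = max(sep_max, abs(x - y))

print(sep_max)   # order-one separation, despite the 1e-6 initial difference
```

The initial gap of one part in a million is amplified exponentially, step by step, until the two trajectories decorrelate completely.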

A mathematical object, the *out-of-time-ordered correlator* (OTOC), reflects quantum systems’ sensitivities to initial conditions. [Experts: the OTOC can evolve as an exponential governed by a Lyapunov-type exponent.] The OTOC encodes a hypothetical process that snakes back and forth through time. This snaking earned the object its name. The snaking prevents experimentalists from measuring quantum systems’ OTOCs easily. But experimentalists are trying, because the OTOC reveals how quantum information spreads via entanglement. Such entanglement distinguishes black holes, cold atoms, and specially prepared light from everyday, classical systems.
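A toy numerical illustration of the quantum butterfly effect (my own sketch, not the models from the literature): take a dense random Hermitian matrix as a stand-in for a chaotic Hamiltonian, and track how the commutator of two initially commuting, far-apart operators grows under time evolution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                         # a tiny chain of 4 qubits
dim = 2 ** n

# A random dense Hermitian matrix as a stand-in for a chaotic Hamiltonian.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
Ham = (A + A.conj().T) / 2

Zop = np.diag([1.0, -1.0])
Iop = np.eye(2)

def site_op(op, site):
    """Embed a single-qubit operator at the given site of the chain."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == site else Iop)
    return out

W = site_op(Zop, 0)        # perturbation at one end of the chain
V = site_op(Zop, n - 1)    # probe at the other end

evals, evecs = np.linalg.eigh(Ham)

def commutator_norm(t):
    """C(t) = <|[W(t), V]|^2> at infinite temperature; 0 at t=0, grows under chaos."""
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    Wt = U.conj().T @ W @ U
    comm = Wt @ V - V @ Wt
    return np.trace(comm.conj().T @ comm).real / dim

print(commutator_norm(0.0))   # ~0: W and V act on different qubits
print(commutator_norm(2.0))   # > 0: the perturbation has spread down the chain
```

The growth of this commutator is one standard diagnostic; the OTOC itself is essentially the cross term inside it.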

Alexei illustrated, on his whiteboard, the sensitivity to initial conditions.

“In case you’re taking votes about what to cover this spring,” I said, “I vote for chaos.”

We covered chaos. A guest attended one lecture: Beni Yoshida, a former IQIM postdoc. Beni and colleagues had devised quantum error-correcting codes for black holes.^{3} Beni’s foray into black-hole physics had led him to the OTOC. He’d written an OTOC paper, which Alexei presented about; Beni presented about a follow-up paper. If I’d had another tail, it would have wagged.

At the end of the summer, IQIM postdoc Yichen Huang posted on Facebook, “In the past week, five papers (one of which is ours) appeared . . . studying out-of-time-ordered correlators in many-body localized systems.”

I looked down at the MBL calculation I was performing. I looked at my computer screen. I set down my pencil.

“Fine.”

I marched to John Preskill’s office.

The OTOC kept flaring on my radar, I reported. Maybe the time had come for me to try contributing to the discussion. What might I contribute? What would be interesting? We kicked around ideas.

“Well,” John ventured, “you’re interested in fluctuation relations, right?”

Something clicked like the “power” button on a video-game console.

Fluctuation relations are equations derived in nonequilibrium statistical mechanics. They describe systems driven far from equilibrium, like a DNA strand whose ends you’ve yanked apart. Experimentalists use fluctuation theorems to infer a difficult-to-measure quantity, a difference between free energies. Fluctuation relations imply the Second Law of Thermodynamics. The Second Law relates to the flow of time and the loss of information.
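The best-known fluctuation relation, Jarzynski's equality, states that the average of exp(-beta*W) over many repetitions of a driving protocol equals exp(-beta*dF). A minimal numerical sketch (a toy Gaussian work distribution, chosen because the equality then holds exactly; no real experiment is modeled):

```python
import numpy as np

rng = np.random.default_rng(2)

# Jarzynski's equality <exp(-beta*W)> = exp(-beta*dF) lets experimentalists
# infer a free-energy difference dF from repeated nonequilibrium work
# measurements W. Toy model: Gaussian work statistics, for which the equality
# holds exactly when mean(W) = dF + beta * var(W) / 2.
beta = 1.0          # inverse temperature
dF = 2.0            # the "difficult-to-measure" free-energy difference
sigma = 1.0         # spread of the work distribution
W = rng.normal(loc=dF + beta * sigma**2 / 2, scale=sigma, size=200_000)

dF_estimate = -np.log(np.mean(np.exp(-beta * W))) / beta
print(dF_estimate)   # close to dF = 2.0
```

The point of the exercise: averaging a function of noisy, far-from-equilibrium work values recovers an equilibrium quantity, which is exactly the trick the OTOC protocols below borrow.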

Time…loss of information…Fluctuation relations smelled like the OTOC. The two *had* to join together.

I spent the next four days sitting, writing, obsessed. I’d read a paper, three years earlier, that casts a fluctuation relation in terms of a correlator. I unearthed the paper and redid the proof. Could I deform the proof until the paper’s correlator became the out-of-time-ordered correlator?

Apparently. I presented my argument to my research group. John encouraged me to clarify a point: I’d defined a mathematical object, a *probability amplitude*. Did the object have physical significance? Could anyone measure it? I consulted measurement experts. One identified the object as a quasiprobability, a quantum generalization of a probability, used to model light in quantum optics. With the experts’ assistance, I devised two schemes for measuring the quasiprobability.

The result is a fluctuation-like relation that contains the OTOC. The OTOC, the theorem reveals, is a combination of quasiprobabilities. Experimentalists can measure quasiprobabilities with weak measurements, gentle probings that barely disturb the probed system. The theorem suggests two experimental protocols for inferring the difficult-to-measure OTOC, just as fluctuation relations suggest protocols for inferring the difficult-to-measure free-energy difference. Just as fluctuation relations cast the free-energy difference in terms of a characteristic function of a probability distribution, this relation casts the OTOC in terms of a characteristic function of a (summed) quasiprobability distribution. Quasiprobabilities reflect entanglement, as the OTOC does.

Collaborators and I are extending this work theoretically and experimentally. How does the quasiprobability look? How does it behave? What mathematical properties does it have? The OTOC is motivating questions not only about our quasiprobability, but also about quasiprobabilities and weak measurements in general. We’re pushing toward measuring the OTOC quasiprobability with superconducting qubits or cold atoms.

Chaos has evolved from an enemy to a curiosity, from a god of destruction to an inspiration. I no longer play the electric-blue hedgehog. But I remain electrified.

^{1}I hadn’t started studying physics, ok?

^{2}Don’t ask me how the liquid’s surface tension rises enough to maintain the limbs’ shapes.

^{3}Black holes obey quantum mechanics. Quantum systems can solve certain problems more quickly than ordinary (classical) computers. Computers make mistakes. We fix mistakes using error-correcting codes. The codes required by quantum computers differ from the codes required by ordinary computers. Systems that contain black holes, we can regard as performing quantum computations. Black-hole systems’ mistakes admit of correction via the code constructed by Beni & co.


The ten shortlisted films were chosen from a total of 203 submissions received during the festival’s 2016 call for entries. Some of the finalists are dramatic, some funny, some abstract. Some are live-action film, some animation. Each is under five minutes long. Find the titles and synopses of the shortlisted films below.

Screenings of the films start 23 February, with confirmed events in Waterloo and Vancouver, Canada (both 23 February); Singapore (25–28 February); Glasgow, UK (17 March); and Brisbane, Australia (24 March).

More details can be found at shorts.quantumlah.org, where viewers can also watch the films online and vote for their favorite to help decide a ‘People’s Choice’ prize. The website also hosts interviews with the filmmakers.

The Quantum Shorts festival is run by the Centre for Quantum Technologies at the National University of Singapore with a constellation of prestigious partners including Scientific American magazine and the journal Nature. The festival’s media partners, scientific partners and screening partners span five countries. The Institute for Quantum Information and Matter at Caltech is a proud sponsor.

For making the shortlist, the filmmakers receive a $250 award, a one-year digital subscription to Scientific American, and certificates.

The festival’s top prize of US $1500 and runner-up prize of US $1000 will now be decided by a panel of eminent judges. The additional People’s Choice prize of $500 will be decided by public vote on the shortlist, with voting open on the festival website until March 26th. Prizes will be announced by the end of March.

**Quantum Shorts 2016: FINALISTS**

*Ampersand*

What unites everything on Earth? That we are all ultimately composed of something that is both matter & wave

Submitted by Erin Shea, United States

*Approaching Reality*

Dancing cats, a watchful observer and a strange co-existence. It’s all you need to understand the essence of quantum mechanics

Submitted by Simone De Liberato, United Kingdom

*Bolero*

The coin is held fast, but is it heads or tails? As long as the fist remains closed, you are a winner – and a loser

Submitted by Ivan D’Antonio, Italy

*Novae*

What happens when a massive star reaches the end of its life? Something that goes way beyond the spectacular, according to this cosmic poem about the infinite beauty of a black hole’s birth

Submitted by Thomas Vanz, France

*The Guardian*

A quantum love triangle, where uncertainty is the only winner

Submitted by Chetan Kotabage, India

*The Real Thing*

Picking up a beverage shouldn’t be this hard. And it definitely shouldn’t take you through the multiverse…

Submitted by Adam Welch, United States

*Together – Parallel Universe*

It’s a tale as old as time: boy meets girl, girl is not as interested as boy hoped. So boy builds spaceship and travels through multi-dimensional reality to find the one universe where they can be together

Submitted by Michael Robertson, South Africa

*Tom’s Breakfast*

This is one of those days when Tom’s morning routine doesn’t go to plan – far from it, in fact. The only question is, can he be philosophical about it?

Submitted by Ben Garfield, United Kingdom

*Triangulation*

Only imagination can show us the hidden world inside of fundamental particles

Submitted by Vladimir Vlasenko, Ukraine

*Whitecap*

Dr. David Long has discovered how to turn matter into waveforms. So why shouldn’t he experiment with his own existence?

Submitted by Bernard Ong, United States
