Topological qubits: Arriving in 2018?

Editor's note: This post was prepared jointly by Ryan Mishmash and Jason Alicea.

Physicists appear to be on the verge of demonstrating proof-of-principle “usefulness” of small quantum computers.  Preskill’s notion of quantum supremacy spotlights a particularly enticing goal: use a quantum device to perform some computation—any computation in fact—that falls beyond the reach of the world’s best classical computers.  Efforts along these lines are being vigorously pursued on many fronts, from academia to large corporations to startups.  IBM’s publicly accessible 16-qubit superconducting device, Google’s pursuit of a 7×7 superconducting qubit array, and the recent synthesis of a 51-qubit quantum simulator using rubidium atoms are a few of many notable highlights.  While the number of qubits obtainable within such “conventional” approaches has steadily risen, synthesizing the first “topological qubit” remains an outstanding goal.  That ceiling may soon crumble, however, vaulting topological qubits into a fascinating new chapter in the quest for scalable quantum hardware.

Why topological quantum computing?

As quantum computing progresses from minimalist quantum supremacy demonstrations to attacking real-world problems, hardware demands will naturally steepen.  In, say, a superconducting-qubit architecture, a major source of overhead arises from quantum error correction needed to combat decoherence.  Quantum-error-correction schemes such as the popular surface-code approach encode a single fault-tolerant logical qubit in many physical qubits, perhaps thousands.  The number of physical qubits required for practical applications can thus rapidly balloon.
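To get a feel for the numbers, here is a back-of-the-envelope sketch. The logical-error heuristic, the assumed threshold, and the qubits-per-logical-qubit count below are common surface-code rules of thumb, not figures for any particular architecture:

```python
# Rough surface-code overhead sketch (rule-of-thumb numbers, not a design).
# Assumes the common heuristic p_logical ~ 0.1 * (p/p_th)^((d+1)/2)
# and ~2*d^2 physical qubits per logical qubit.

def distance_needed(p_phys, p_th, p_logical_target):
    """Smallest odd code distance d meeting the target logical error rate."""
    d = 3
    while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > p_logical_target:
        d += 2
    return d

p_phys, p_th = 1e-3, 1e-2   # assumed physical error rate and threshold
d = distance_needed(p_phys, p_th, 1e-12)
print(d, 2 * d * d)         # on the order of a thousand physical qubits per logical
```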

The dream of topological quantum computing (introduced by Kitaev) is to construct hardware inherently immune to decoherence, thereby mitigating the need for active error correction.  In essence, one seeks physical qubits that by themselves function as good logical qubits.  This lofty objective requires stabilizing exotic phases of matter that harbor emergent particles known as “non-Abelian anyons”.  Crucially, nucleating non-Abelian anyons generates an exponentially large set of ground states that cannot be distinguished from each other by any local measurement.  Topological qubits encode information in those ground states, yielding two key virtues:

(1) Insensitivity to local noise.  For reference, consider a conventional qubit encoded in some two-level system, with the 0 and 1 states split by an energy \hbar \omega.  Local noise sources—e.g., random electric and magnetic fields—cause that splitting to fluctuate stochastically in time, dephasing the qubit.  In practice one can engender immunity against certain environmental perturbations.  One famous example is the transmon qubit (see “Charge-insensitive qubit design derived from the Cooper pair box” by Koch et al.) used extensively at IBM, Google, and elsewhere.  The transmon is a superconducting qubit that cleverly suppresses the effects of charge noise by operating in a regime where Josephson couplings are sizable compared to charging energies.  Transmons remain susceptible, however, to other sources of randomness such as flux noise and critical-current noise.  By contrast, topological qubits embed quantum information in global properties of the system, building in immunity against all local noise sources.  Topological qubits thus realize “perfect” quantum memory.

(2) Perfect gates via braiding.  By exploiting the remarkable phenomenon of non-Abelian statistics, topological qubits further enjoy “perfect” quantum gates: Moving non-Abelian anyons around one another reshuffles the system among the ground states—thereby processing the qubits—in exquisitely precise ways that depend only on coarse properties of the exchange.

Disclaimer: Adjectives like “perfect” should come with the qualifier “up to exponentially small corrections”, a point that we revisit below.

Experimental status

The catch is that systems supporting non-Abelian anyons are not easily found in nature.  One promising topological-qubit implementation exploits exotic 1D superconductors whose ends host “Majorana modes”—novel zero-energy degrees of freedom that underlie non-Abelian-anyon physics.  In 2010, two groups (Lutchyn et al. and Oreg et al.) proposed a laboratory realization that combines semiconducting nanowires, conventional superconductors, and modest magnetic fields.

Since then, the materials-science progress on nanowire-superconductor hybrids has been remarkable.  Researchers can now grow extremely clean, versatile devices featuring various manipulation and readout bells and whistles.  These fabrication advances paved the way for experiments that have reported increasingly detailed Majorana characteristics: tunneling signatures including recent reports of long-sought quantized response, evolution of Majorana modes with system size, mapping out of the phase diagram as a function of external parameters, etc.  Alternative explanations are still being debated, though.  Perhaps the most likely culprits are conventional localized fermionic levels (“Andreev bound states”) that can imitate Majorana signatures under certain conditions; see in particular Liu et al.  Still, the collective experimental effort on this problem over the last 5+ years has provided mounting evidence for the existence of Majorana modes.  Revealing their prized quantum-information properties poses a logical next step.

Validating a topological qubit

Ideally one would like to verify both hallmarks of topological qubits noted above—“perfect” insensitivity to local noise and “perfect” gates via braiding.  We will focus on the former property, which can be probed in simpler device architectures.  Intuitively, noise insensitivity should imply long qubit coherence times.  But how do you pinpoint the topological origin of long coherence times, and in any case what exactly qualifies as “long”?

Here is one way to sharply address these questions (for more details, see our work in Aasen et al.).  As alluded to in our disclaimer above, logical 0 and 1 topological-qubit states aren’t exactly degenerate.  In nanowire devices they’re split by an energy \hbar \omega that is exponentially small in the separation distance L between Majorana modes divided by the superconducting coherence length \xi.  Correspondingly, the qubit states are not quite locally indistinguishable either, and hence not perfectly immune to local noise.  Now imagine pulling apart Majorana modes to go from a relatively poor to a perfect topological qubit.  During this process two things transpire in tandem: The topological qubit’s oscillation frequency, \omega, vanishes exponentially while the dephasing time T_2 becomes exponentially long.  That is,

\omega \propto e^{-L/\xi} \rightarrow 0, \qquad T_2 \propto e^{+L/\xi} \rightarrow \infty.

This scaling relation could in fact be used as a practical definition of a topologically protected quantum memory.  Importantly, mimicking this property in any non-topological qubit would require some form of divine intervention.  For example, even if one fine-tuned conventional 0 and 1 qubit states (e.g., resulting from the Andreev bound states mentioned above) to be exactly degenerate, local noise could still readily produce dephasing.
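Here is a toy numerical illustration of the diagnostic (assuming only the exponential forms quoted above, with arbitrary prefactors): the fitted slopes of \log \omega and \log T_2 versus L should be equal and opposite.

```python
# Toy check of the scaling relation omega ~ exp(-L/xi), T2 ~ exp(+L/xi).
# Prefactors are arbitrary; only the slopes versus L matter.
import numpy as np

xi = 1.0                                   # coherence length (arbitrary units)
L = np.linspace(2.0, 10.0, 9)              # Majorana separations
omega = 1e9 * np.exp(-L / xi)              # qubit oscillation frequency
T2 = 1e-9 * np.exp(+L / xi)                # dephasing time

slope_omega = np.polyfit(L, np.log(omega), 1)[0]      # ~ -1/xi
slope_T2 = np.polyfit(L, np.log(T2), 1)[0]            # ~ +1/xi
print(slope_omega, slope_T2, slope_omega + slope_T2)  # sum ~ 0
```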

As discussed in Aasen et al., this topological-qubit scaling relation can be tested experimentally via Ramsey-like protocols in a setup that might look something like the following:

[Figure: schematic of the proposed device, adapted from Aasen et al.: two Majorana wires with gate-controlled valves.]

This device contains two adjacent Majorana wires (orange rectangles) with couplings controlled by local gates (“valves” represented by black switches).  Incidentally, the design was inspired by a gate-controlled variation of the transmon pioneered in Larsen et al. and de Lange et al.  In fact, if only charge noise were present, we wouldn’t stand to gain much in the way of coherence times: both the transmon and topological qubit would yield exponentially long T_2 times.  But once again, other noise sources can efficiently dephase the transmon, whereas a topological qubit enjoys exponential protection from all sources of local noise.  Mathematically, this distinction occurs because the splitting for transmon qubit states is exponentially flat only with respect to variations in a “gate offset” n_g.  For the topological qubit, the splitting is exponentially flat with respect to variations in all external parameters (e.g., magnetic field, chemical potential, etc.), so long as Majorana modes still survive.  (By “exponentially flat” we mean constant up to exponentially small deviations.)  Plotting the energies of the qubit states in the two respective cases versus external parameters, the situation can be summarized as follows:

[Figure: qubit-state energies versus external parameters, exponentially flat only in n_g for the transmon but exponentially flat in all parameters for the topological qubit.]

Outlook: Toward “topological quantum ascendancy”

These qubit-validation experiments constitute a small stepping stone toward building a universal topological quantum computer.  Explicitly demonstrating exponentially protected quantum information as discussed above would, nevertheless, go a long way toward establishing practical utility of Majorana-based topological qubits.  One might even view this goal as single-qubit-level “topological quantum ascendancy”.  Completion of this milestone would further set the stage for implementing “perfect” quantum gates, which requires similar capabilities albeit in more complex devices.  Researchers at Microsoft and elsewhere have their sights set on bringing a prototype topological qubit to life in the very near future.  It is not unreasonable to anticipate that 2018 will mark the debut of the topological qubit.  We could of course be off target.  There is, after all, still plenty of time in 2017 to prove us wrong.

Taming wave functions with neural networks

Note from Nicole Yunger Halpern: One sunny Saturday this spring, I heard Sam Greydanus present about his undergraduate thesis. Sam was about to graduate from Dartmouth with a major in physics. He had worked with quantum-computation theorist Professor James Whitfield. The presentation — about applying neural networks to quantum computation — so intrigued me that I asked him to share his research on Quantum Frontiers. Sam generously agreed; this is his story.

Wave functions in the wild


The wave function, \psi , is a mixed blessing. At first, it causes unsuspecting undergrads (me) some angst via the Schrödinger’s cat paradox. This angst morphs into full-fledged panic when they encounter concepts such as nonlocality and Bell’s theorem (which, by the way, is surprisingly hard to verify experimentally). The real trouble with \psi , though, is that the information needed to describe it grows exponentially with the number of entangled particles in a system. We couldn’t even hope to write down the wave function of 100 entangled particles, much less perform computations on it…but there’s a lot to gain from doing just that.

The thing is, we (a couple of luckless physicists) love \psi . Manipulating wave functions can give us ultra-precise timekeeping, secure encryption, and polynomial-time factoring of integers (read: break RSA). Harnessing quantum effects can also produce better machine learning, better physics simulations, and even quantum teleportation.

Taming the beast

Though \psi grows exponentially with the number of particles in a system, most physical wave functions can be described with a lot less information. Two algorithms for doing this are the Density Matrix Renormalization Group (DMRG) and Quantum Monte Carlo (QMC).


Density Matrix Renormalization Group (DMRG). Imagine we want to learn about trees, but studying a full-grown, 50-foot tall tree in the lab is too unwieldy. One idea is to keep the tree small, like a bonsai tree. DMRG is an algorithm which, like a bonsai gardener, prunes the wave function while preserving its most important components. It produces a compressed version of the wave function called a Matrix Product State (MPS). One issue with DMRG is that it doesn’t extend particularly well to 2D and 3D systems.
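For readers who want to see the pruning explicitly, here is a minimal sketch of the standard SVD-based compression (a textbook construction, not code from my thesis):

```python
# Minimal MPS compression sketch: repeatedly SVD the state vector and keep
# at most chi Schmidt values per bond (the bonsai-style "pruning").
import numpy as np

def to_mps(psi, n_sites, chi):
    tensors, bond = [], 1
    for _ in range(n_sites - 1):
        psi = psi.reshape(bond * 2, -1)
        U, S, Vh = np.linalg.svd(psi, full_matrices=False)
        keep = min(chi, len(S))                   # prune small Schmidt values
        tensors.append(U[:, :keep].reshape(bond, 2, keep))
        psi = S[:keep, None] * Vh[:keep]          # pass the rest down the chain
        bond = keep
    tensors.append(psi.reshape(bond, 2, 1))
    return tensors

psi = np.random.randn(2 ** 6)
psi /= np.linalg.norm(psi)
print([A.shape for A in to_mps(psi, 6, chi=4)])   # (left bond, spin, right bond)
```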


Quantum Monte Carlo (QMC). Another way to study the concept of “tree” in a lab (bear with me on this metaphor) would be to study a bunch of leaf, seed, and bark samples. Quantum Monte Carlo algorithms do this with wave functions, taking “samples” of a wave function (pure states) and using the properties and frequencies of these samples to build a picture of the wave function as a whole. The difficulty with QMC is that it treats the wave function as a black box. We might ask, “how does flipping the spin of the third electron affect the total energy?” and QMC wouldn’t have much of a physical answer.
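The sampling idea fits in a few lines. A toy sketch, with a uniform-superposition amplitude standing in for a real wave function:

```python
# Toy Quantum Monte Carlo flavor: Metropolis-sample spin configurations with
# probability |psi(s)|^2 and estimate an observable from the samples.
# The amplitude below is a stand-in (uniform superposition), not a real model.
import random

N = 10                                     # number of spins

def amp(s):
    return 1.0                             # toy wavefunction amplitude

s = [random.choice([-1, 1]) for _ in range(N)]
mags = []
for _ in range(10000):
    i = random.randrange(N)
    s_new = s.copy()
    s_new[i] *= -1                         # propose a single spin flip
    if random.random() < (amp(s_new) / amp(s)) ** 2:
        s = s_new                          # Metropolis accept/reject
    mags.append(sum(s) / N)
print(sum(mags) / len(mags))               # <magnetization> ~ 0 here
```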

Brains \gg Brawn

Neural Quantum States (NQS). Some state spaces are far too large for even Monte Carlo to sample adequately. Suppose now we’re studying a forest full of different species of trees. If one type of tree vastly outnumbers the others, choosing samples from random trees isn’t an efficient way to map biodiversity. Somehow, we need to make the sampling process “smarter”. Last year, Google DeepMind used a technique called deep reinforcement learning to do just that – and achieved fame for defeating the world champion human Go player. A recent Science paper by Carleo and Troyer (2017) used the same technique to make QMC “smarter” and effectively compress wave functions with neural networks. This approach, called “Neural Quantum States (NQS)”, produced several state-of-the-art results.
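To make this concrete, here is a bare-bones sketch of the restricted-Boltzmann-machine ansatz in the spirit of Carleo and Troyer, with random (untrained) weights standing in for learned ones:

```python
# Sketch of a Neural Quantum State: a restricted-Boltzmann-machine ansatz
# mapping a spin configuration to an (unnormalized) amplitude.
# The weights here are random placeholders, not trained parameters.
import numpy as np

N, M = 10, 20                            # visible spins, hidden units
rng = np.random.default_rng(0)
a = rng.normal(0, 0.1, N)                # visible biases
b = rng.normal(0, 0.1, M)                # hidden biases
W = rng.normal(0, 0.1, (M, N))           # couplings

def psi(s):
    """RBM amplitude: exp(a.s) * prod_j 2*cosh(b_j + W_j . s)."""
    theta = b + W @ s
    return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

s = rng.choice([-1, 1], N)
print(psi(s))
```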


The general idea of my thesis.

My thesis. My undergraduate thesis centered upon much the same idea. In fact, I had to abandon some of my initial work after reading the NQS paper. I then focused on using machine learning techniques to obtain MPS coefficients. Like Carleo and Troyer, I used neural networks to approximate  \psi . Unlike Carleo and Troyer, I trained my model to output a set of Matrix Product State coefficients which have physical meaning (MPS coefficients always correspond to a certain state and site, e.g. “spin up, electron number 3”).

Cool – but does it work?

Yes – for small systems. In my thesis, I considered a toy system of 4 spin-\frac{1}{2} particles interacting via the Heisenberg Hamiltonian. Solving this system is not difficult so I was able to focus on fitting the two disparate parts – machine learning and Matrix Product States – together.

Success! My model solved for ground states with arbitrary precision. Even more interestingly, I used it to automatically obtain MPS coefficients. Shown below, for example, is a visualization of my model’s coefficients for the GHZ state, compared with coefficients taken from the literature.


A visual comparison of a 4-site Matrix Product State for the GHZ state a) listed in the literature b) obtained from my neural network model. Colored squares correspond to real-valued elements of 2×2 matrices.
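For the curious, the GHZ state admits a textbook bond-dimension-2 MPS, and a few lines of NumPy verify it (this is the standard construction, not necessarily the exact coefficients my network produced):

```python
# Check that the textbook bond-dimension-2 MPS reproduces the 4-site
# GHZ state (|0000> + |1111>)/sqrt(2).
import numpy as np
from itertools import product

A = {0: np.diag([1.0, 0.0]), 1: np.diag([0.0, 1.0])}   # one tensor per spin value
l = np.array([1.0, 1.0]) / np.sqrt(2)                  # boundary vectors
r = np.array([1.0, 1.0])

psi = np.zeros(16)
for idx, bits in enumerate(product([0, 1], repeat=4)):
    M = np.eye(2)
    for s in bits:
        M = M @ A[s]
    psi[idx] = l @ M @ r                               # MPS coefficient

print(psi.round(3))   # amplitude 1/sqrt(2) on |0000> and |1111>, 0 elsewhere
```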

Limitations. The careful reader might point out that, according to the schema of my model (above), I still have to write out the full wave function. To scale my model up, I instead trained it variationally over a subspace of the Hamiltonian (just as the authors of the NQS paper did). Results are decent for larger (10-20 particle) systems, but the training itself is still unstable. I’ll finish ironing out the details soon, so keep an eye on arXiv* :).

Outside the ivory tower


A quantum computer developed by the Joint Quantum Institute, University of Maryland.

Quantum computing is a field that’s poised to take on commercial relevance. Taming the wave function is one of the big hurdles we need to clear before this happens. Hopefully my findings will play a small role in clearing it.

On a more personal note, thank you for reading about my work. As a recent undergrad, I’m still new to research and I’d love to hear constructive comments or criticisms. If you found this post interesting, check out my research blog.

*arXiv is an online library for electronic preprints of scientific papers.

The sign problem(s)

The thirteen-month-old had mastered the word “dada” by the time I met her. Her parents were teaching her to communicate other concepts through sign language. Picture her, dark-haired and bibbed, in a high chair. Banana and mango slices litter the tray in front of her. More fruit litters the floor in front of the tray. The baby lifts her arms and flaps her hands.

Dada looks up from scrubbing the floor.

“Look,” he calls to Mummy, “she’s using sign language! All done.” He performs the gesture that his daughter seems to have aped: He raises his hands and rotates his forearms about his ulnas, axes perpendicular to the floor. “All done!”

The baby looks down, seizes another morsel, and stuffs it into her mouth.

“Never mind,” Dada amends. “You’re not done, are you?”

His daughter had a sign(-language) problem.


So does Dada, MIT professor Aram Harrow. Aram studies quantum information theory. His interests range from complexity to matrices, from resource theories to entropies. He’s blogged for The Quantum Pontiff, and he studies the quantum sign problem, including with IQIM postdoc Elizabeth Crosson.

Imagine calculating properties of a chunk of fermionic quantum matter. The chunk consists of sites, each inhabited by one particle or by none. Translate as “no site can house more than one particle” the jargon “the particles are fermions.”

The chunk can have certain amounts of energy. Each amount E_j corresponds to some particle configuration indexed by j: If the system has some amount E_1 of energy, particles occupy certain sites and might not occupy others. If the system has a different amount E_2 \neq E_1 of energy, particles occupy different sites. A Hamiltonian, a mathematical object denoted by H, encodes the energies E_j and the configurations. We represent H with a matrix, a square grid of numbers.

Suppose that the chunk has a temperature T = \frac{ 1 }{ k_{\rm B} \beta }. We could calculate the system’s heat capacity, the energy required to raise the chunk’s temperature by one Kelvin. We could calculate the free energy, how much work the chunk could perform in powering a motor or lifting a weight. To calculate those properties, we calculate the system’s partition function, Z.

How? We would list the configurations j. With each configuration, we would associate the weight e^{ - \beta E_j }. We would sum the weights: Z = e^{ - \beta E_1 }  +  e^{ - \beta E_2}  +  \ldots  =  \sum_j e^{ - \beta E_j}.

Easier—like feeding a 13-month-old—said than done. Let N denote the number of qubits in the chunk. If N is large, the number of configurations is gigantic. Our computers can’t process so many configurations. This inability underlies quantum computing’s promise of speeding up certain calculations.
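To see the wall concretely, here is the brute-force computation, with a random symmetric matrix standing in for H; it is fine for ten qubits and hopeless for a hundred.

```python
# Brute-force partition function: diagonalize H, then Z = sum_j exp(-beta*E_j).
# A random symmetric matrix stands in for H; the 2^N dimension is the point.
import numpy as np

N, beta = 10, 1.0
rng = np.random.default_rng(1)
H = rng.normal(size=(2 ** N, 2 ** N))
H = (H + H.T) / 2                       # make it a valid (Hermitian) Hamiltonian

E = np.linalg.eigvalsh(H)               # all 2^N energies E_j
Z = np.sum(np.exp(-beta * E))           # the partition function
print(Z)                                # each added qubit doubles the dimension
```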

We don’t have quantum computers, and we can’t calculate Z. Can we  approximate Z?

Yes, if H “lacks the sign problem.” The math that models our system also models a classical system. If our system has D dimensions, the classical system has D+1 dimensions. Suppose, for example, that our sites form a line. The classical system forms a square.

We replace the weights e^{ - \beta E_j } with different weights—numbers formed from a matrix that represents H. If H lacks the sign problem, the new weights are nonnegative and behave like probabilities. Many mathematical tools suit probabilities. Aram and Elizabeth apply such tools to Z, here and here, as do many other researchers.

We call Hamiltonians that lack the sign problem “stoquastic,”1 which I think fanquastic. Stay tuned for a blog post about stoquasticity by Elizabeth.
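Footnote 1 gives the precise condition, which is easy to check in code. A sketch, using single-qubit terms of a transverse-field Ising model as examples:

```python
# Sketch: test whether a Hamiltonian matrix is stoquastic in a given basis,
# i.e., all off-diagonal entries are real and nonpositive (see footnote 1).
import numpy as np

def is_stoquastic(H, tol=1e-12):
    off = H - np.diag(np.diag(H))
    return bool(np.all(np.abs(off.imag) < tol) and np.all(off.real <= tol))

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
# A transverse-field term -X is stoquastic; flip its sign and it is not.
print(is_stoquastic(-X + Z), is_stoquastic(+X + Z))   # True False
```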

What if H has the sign problem? The new weights can assume negative and nonreal values. The weights behave unlike probabilities; we can’t apply those tools. We find ourselves knee-deep in banana and mango chunks.


Solutions to the sign problem remain elusive. Theorists keep trying to mitigate the problem, though. Aram, Elizabeth, and others are improving calculations of properties of sign-problem-free systems. One scientist-in-the-making has achieved a breakthrough: Aram’s daughter now rotates her hands upon finishing meals and when she wants to leave her car seat or stroller.

One sign problem down; one to go.


With gratitude to Aram’s family for its hospitality and to Elizabeth Crosson for sharing her expertise.

1For experts: A local Hamiltonian is stoquastic relative to the computational basis if each local term is represented, relative to the computational basis, by a matrix whose off-diagonal entries are real and nonpositive.

The power of information

Sara Imari Walker studies ants. Her entomologist colleague Gabriele Valentini cultivates ant swarms. Gabriele coaxes a swarm from its nest, hides the nest, and offers two alternative nests. Gabriele observes the ants’ responses, then analyzes their data with Sara.

Sara doesn’t usually study ants. She trained in physics, information theory, and astrobiology. (Astrobiology is the study of life; life’s origins; and conditions amenable to life, on Earth and anywhere else life may exist.) Sara analyzes how information reaches, propagates through, and manifests in the swarm.

Some ants inspect one nest; some, the other. Few ants encounter both choices. Yet most of the ants choose simultaneously. (How does Gabriele know when an ant chooses? Decided ants carry other ants toward the chosen nest. Undecided ants don’t.)

Gabriele and Sara plotted each ant’s status (decided or undecided) at each instant. All the ants’ lines start in the “undecided” region, high up in the graph. Most lines drop to the “decided” region together. Physicists call such dramatic, large-scale changes in many-particle systems “phase transitions.” The swarm transitions from the “undecided” phase to the “decided,” as moisture transitions from vapor to downpour.


Sara versus the ants

Look from afar, and you’ll see evidence of a hive mind: The lines clump and slump together. Look more closely, and you’ll find lags between ants’ decisions. Gabriele and Sara grouped the ants according to their behaviors. Sara explained the grouping at a workshop this spring.

The green lines, she said, are undecided ants.

My stomach dropped like Gabriele and Sara’s ant lines.

People call data “cold” and “hard.” Critics lambast scientists for not appealing to emotions. Politicians weave anecdotes into their numbers, to convince audiences to care.

But when Sara spoke, I looked at her green lines and thought, “That’s me.”

I’ve blogged about my indecisiveness. Postdoc Ning Bao and I formulated a quantum voting scheme in which voters can superpose—form quantum combinations of—options. Usually, when John Preskill polls our research group, I abstain from voting. Politics, and questions like “Does building a quantum computer require only engineering or also science?”,1 have many facets. I want to view such questions from many angles, to pace around the questions as around a sculpture, to hear other onlookers, to test my impressions on them, and to cogitate before choosing.2 However many perspectives I’ve gathered, I’m missing others worth seeing. I commiserated with the green-line ants.


I first met Sara in the building behind the statue. Sara earned her PhD in Dartmouth College’s physics department, with Professor Marcelo Gleiser.

Sara presented about ants at a workshop hosted by the Beyond Center for Fundamental Concepts in Science at Arizona State University (ASU). The organizers, Paul Davies of Beyond and Andrew Briggs of Oxford, entitled the workshop “The Power of Information.” Participants represented information theory, thermodynamics and statistical mechanics, biology, and philosophy.

Paul and Andrew posed questions to guide us: What status does information have? Is information “a real thing” “out there in the world”? Or is information only a mental construct? What roles can information play in causation?

We paced around these questions as around a Chinese viewing stone. We sat on a bench in front of those questions, stared, debated, and cogitated. We taught each other about ants, artificial atoms, nanoscale machines, and models for information processing.


Chinese viewing stone in Yuyuan Garden in Shanghai

I wonder if I’ll acquire opinions about Paul and Andrew’s questions. Maybe I’ll meander from “undecided” to “decided” over a career. Maybe I’ll phase-transition like Sara’s ants. Maybe I’ll remain near the top of her diagram, a green holdout.

I know little about information’s power. But Sara’s plot revealed one power of information: Information can move us—from homeless to belonging, from ambivalent to decided, from a plot’s top to its bottom, from passive listener to finding yourself in a green curve.


With thanks to Sara Imari Walker, Paul Davies, Andrew Briggs, Katherine Smith, and the Beyond Center for their hospitality and thoughts.


1By “only engineering,” I mean not “merely engineering” pejoratively, but “engineering and no other discipline.”

2I feel compelled to perform these activities before choosing. I try to. Psychological experiments, however, suggest that I might decide before realizing that I’ve decided.

Glass beads and weak-measurement schemes

Richard Feynman fiddled with electronics in a home laboratory, growing up. I fiddled with arts and crafts.1 I glued popsicle sticks, painted plaques, braided yarn, and designed greeting cards. Of the supplies in my family’s crafts box, I adored the beads most. Of the beads, I favored the glass ones.

I would pour them on the carpet, some weekend afternoons. I’d inherited a hodgepodge: The beads’ sizes, colors, shapes, trimmings, and craftsmanship varied. No property divided the beads into families whose members looked like they belonged together. But divide the beads I tried. I might classify them by color, then subdivide classes by shape. The color and shape groupings precluded me from grouping by size. But, by loosening my original classification and combining members from two classes, I might incorporate trimmings into the categorization. I’d push my classification scheme as far as I could. Then, I’d rake the beads together and reorganize them according to different principles.

Why have I pursued theoretical physics? many people ask. I have many answers. They include “Because I adored organizing craft supplies at age eight.” I craft and organize ideas.


I’ve blogged about the out-of-time-ordered correlator (OTOC), a signature of how quantum information spreads throughout a many-particle system. Experimentalists want to measure the OTOC, to learn how information spreads. But measuring the OTOC requires tight control over many quantum particles.

I proposed a scheme for measuring the OTOC, with help from Chapman University physicist Justin Dressel. The scheme involves weak measurements. Weak measurements barely disturb the systems measured. (Most measurements of quantum systems disturb the measured systems. So intuited Werner Heisenberg when formulating his uncertainty principle.)

I had little hope for the weak-measurement scheme’s practicality. Consider the stereotypical experimentalist’s response to a stereotypical experimental proposal by a theorist: Oh, sure, we can implement that—in thirty years. Maybe. If the pace of technological development doubles. I expected to file the weak-measurement proposal in the “unfeasible” category.

But experimentalists started collaring me. The scheme sounds reasonable, they said. How many trials would one have to perform? Did the proposal require ancillas, helper systems used to control the measured system? Must each ancilla influence the whole measured system, or could an ancilla interact with just one particle? How did this proposal compare with alternatives?

I met with a cavity-QED2 experimentalist and a cold-atoms expert. I talked with postdocs over Skype, with heads of labs at Caltech, with grad students in Taiwan, and with John Preskill in his office. I questioned an NMR3 experimentalist over lunch and fielded superconducting-qubit4 questions in the sunshine. I read papers, reread papers, and powwowed with Justin.

I wouldn’t have managed half so well without Justin and without Brian Swingle. Brian and coauthors proposed the first OTOC-measurement scheme. He reached out after finding my first OTOC paper.

According to that paper, the OTOC is a moment of a quasiprobability.5 How does that quasiprobability look, we wondered? How does it behave? What properties does it have? Our answers appear in a paper we released with Justin this month. We calculate the quasiprobability in two examples, prove properties of the quasiprobability, and argue that the OTOC motivates generalizations of quasiprobability theory. We also enhance the weak-measurement scheme and analyze it.

Amidst that analysis, in a 10 x 6 table, we classify glass beads.


We inventoried our experimental conversations and distilled them. We culled measurement-scheme features analogous to bead size, color, and shape. Each property labels a row in the table. Each measurement scheme labels a column. Each scheme has, I learned, gold flecks and dents, hues and mottling, an angle at which it catches the light.

I’ve kept most of the glass beads that fascinated me at age eight. Some of the beads have dispersed to necklaces, picture frames, and eyeglass leashes. I moved the remnants, a few years ago, to a compartmentalized box. Doesn’t it resemble the table?


That’s why I work at the IQIM.


1I fiddled in a home laboratory, too, in a garage. But I lived across the street from that garage. I lived two rooms from an arts-and-crafts box.

2Cavity QED consists of light interacting with atoms in a box.

3Lots of nuclei manipulated with magnetic fields. “NMR” stands for “nuclear magnetic resonance.” MRI machines, used to scan brains, rely on NMR.

4Superconducting circuits are tiny, cold quantum circuits.

5A quasiprobability resembles a probability but behaves more oddly: Probabilities range between zero and one; quasiprobabilities can dip below zero. Think of a moment as like an average.

With thanks to all who questioned me; to all who answered questions of mine; to my wonderful coauthors; and to my parents, who stocked the crafts box.

The entangled fabric of space

We live in the information revolution. We translate everything into vast sequences of ones and zeroes. From our personal email to our work documents, from our heart rates to our credit rates, from our preferred movies to our movie preferences, all things information are represented using this minimal {0,1} alphabet which our digital helpers “understand” and process. Many of us physicists are now taking this information revolution to heart and embracing the “It from qubit” motto. Our dream: to understand space, time and gravity as emergent features in a world made of information – quantum information.

Over the past two years, I have been obsessively trying to understand this profound perspective more rigorously. Recently, John Preskill and I took a further step in this direction in the paper “Quantum code properties from holographic geometries.” In it, we make progress in interpreting features of the holographic approach to quantum gravity in terms of quantum information constructs.

In this post I would like to present some context for this work through analogies which hopefully help intuitively convey the general ideas. While still containing some technical content, this post is not likely to satisfy those readers seeking a precise in-depth presentation. To you I can only recommend the masterfully delivered lecture notes on gravity and entanglement by Mark Van Raamsdonk.  

Entanglement as a cat’s cradle


A cat’s cradle serves as a crude metaphor for quantum mechanical entanglement. The full image provides a complete description of the string and how it is laced in a stable configuration around the two hands. However, this lacing does not describe a stable configuration of half the string on one hand. The string would become disentangled and fall if we were to suddenly remove one of the hands or cut through the middle.

Of all the concepts needed to explain emergent spacetime, maybe the most difficult is that of quantum entanglement. While the word seems to convey some kind of string wound up in a complicated way, it is actually a quality which may describe information in quantum mechanical systems. In particular, it applies to a system for which we have a complete description as a whole, but are only capable of describing certain statistical properties of its parts. In other words, our knowledge of the whole loses predictive power when we are only concerned with the parts. Entanglement entropy is a measure of information which quantifies this.

While our metaphor for entanglement is quite crude, it will serve the purpose of this post. Namely, to illustrate one of the driving premises for the holographic approach to quantum gravity, that the very structure of spacetime is emergent and built up from entanglement entropy.

Knit and crochet your way into the manifolds

But let us bring back our metaphors and try to convey the content of this identification. For this, we resort to the unlikely worlds of knitting and crochet. Indeed, by a planned combination of individual loops and stitches, these traditional crafts are capable of approximating any kind of surface (2D Riemannian surface would be the technical term).

Here I have presented some examples with uniform curvature R: flat in green, positive curvature (ball) in yellow and negative curvature (coral reef) in purple. While actual practitioners may be more interested in getting the shape right on hats and socks for loved ones, for us the point is that if we take a step back, these objects built of simple loops, hooks and stitches could end up looking a lot like the smooth surfaces that a physicist might like to use to describe 2D space. This is cute, but can we push this metaphor even further?

Well, first of all, although the pictures above are only representing 2D surfaces, we can expect that a similar approach should allow approximating 3D and even higher dimensional objects (again the technical term is Riemannian manifolds). It would just make things much harder to present in a picture. These woolen structures are, in fact, quite reminiscent of tensor networks, a modern mathematical construct widely used in the field of quantum information. There too, we combine basic building blocks (tensors) through simple operations (tensor index contraction) to build a more complex composite object. In the tensor network world, the structure of the network (how its nodes are connected to other nodes) generically defines the entanglement structure of the resulting object.


This regular tensor network layout was used to describe hyperbolic space, which is similar to the purple crochet. However, they a priori look quite dissimilar due to the use of the Poincaré disk model, where tensors further from the center look smaller. Another difference is that the high degree of regularity is achieved at the expense of having very few tensors per curvature radius (as compared to its purple crochet cousin). However, planarity and regularity don’t seem to be essential, so the crochet probably provides a better intuitive picture.

Roughly speaking, tensor networks are ingenious ways of encoding (quantum) inputs into (quantum) outputs. In particular, if you enter some input at the boundary of your tensor network, the tensors do the work of processing that information throughout the network so that if you ask for an output at any one of the nodes in the bulk of the tensor network, you get the right encoded answer. In other words, the information we input into the tensor network begins its journey at the dangling edges found at the boundary of the network and travels through the bulk edges by exploiting them as information bridges between the nodes of the network.

In the figure representing the cat’s cradle, these dangling input edges can be thought of as the fingers holding the wool. Now, if we partition these edges into two disjoint sets (say, the fingers on the left hand and the fingers on the right hand, respectively), there will be some amount of entanglement between them. How much? In general, we cannot say, but under certain assumptions we find that it is proportional to the minimum cut through the network. Imagine you had an incredible number of fingers holding your wool structure. Now separate these fingers arbitrarily into two subsets L and R (we may call them left hand and right hand, although there is nothing right or left handy about them). By pulling left hand and right hand apart, the wool might stretch until at some point it breaks. How many threads will break? Well, the question is analogous to the entanglement one. We might expect, however, that a minimal number of threads break such that each hand can go its own way. This is what we call the minimal cut. In tensor networks, entanglement entropy is always bounded above by such a minimal cut and it has been confirmed that under certain conditions entanglement also reaches, or approximates, this bound. In this respect, our wool analogy seems to be working out.
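A small numerical illustration of that bound (standard Schmidt-decomposition code, with a random state standing in for the wool): the entanglement entropy across a cut never exceeds the number of threads crossing it.

```python
# Entanglement entropy of a left/right bipartition via the Schmidt (SVD)
# decomposition of a random N-qubit state. For an MPS of bond dimension chi,
# the analogous min-cut bound is log2(chi); here it is min(cut, N - cut) qubits.
import numpy as np

N, cut = 8, 4
psi = np.random.randn(2 ** N)
psi /= np.linalg.norm(psi)

S = np.linalg.svd(psi.reshape(2 ** cut, -1), compute_uv=False)
p = S ** 2                                   # Schmidt probabilities
entropy = -np.sum(p * np.log2(p + 1e-16))    # entanglement entropy in bits
print(entropy)                               # always <= min(cut, N - cut) = 4
```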

Holography

Holography, in the context of black holes, was sparked by a profound observation of Jacob Bekenstein and Stephen Hawking, which identified the surface area of a black hole horizon (in Planck units) with its entropy, or information content:

S_{BH} = \frac{k A_{BH}}{4\ell_p^2} .

Here, S_{BH} is the entropy associated to the black hole, A_{BH} is its horizon area, \ell_p is the Planck length and k is Boltzmann’s constant.
Why is this equation such a big deal? Well, there are many reasons, but let me emphasize one. For theoretical physicists, it is common to get rid of physical units by relating them through universal constants. For example, the theory of special relativity allows us to identify units of distance with units of time through the equation d=ct using the speed of light c. The same theory allows us to identify mass and energy through the famous E=mc^2. By considering the Bekenstein-Hawking entropy, units of area are being swept away altogether! They are being identified with dimensionless units of information (one square meter is roughly 1.4\times10^{69} bits according to the Bousso bound).
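The conversion is a one-liner, if you will forgive the rounded Planck length (dividing by \ln 2 converts nats to bits):

```python
# Back-of-the-envelope check of "one square meter ~ 1.4e69 bits":
# S = A / (4 l_p^2) in nats; divide by ln 2 for bits. l_p is rounded.
import math

l_p = 1.616e-35                            # Planck length in meters
bits_per_m2 = 1 / (4 * l_p ** 2) / math.log(2)
print(f"{bits_per_m2:.2e}")                # ~1.4e69 bits per square meter
```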

Initially, the identification of area and information was proposed to reconcile black holes with the laws of thermodynamics. However, this has turned out to be the main hint leading to the holographic principle, wherein states that describe a certain volume of space in a theory of quantum gravity can also be thought of as being represented at the lower dimensional boundary of the given volume. This idea, put forth by Gerard ‘t Hooft, was later given a more precise interpretation by Leonard Susskind and subsequently by Juan Maldacena through the celebrated AdS/CFT correspondence. I will not dwell on the details of the AdS/CFT correspondence as I am not an expert myself. However, this correspondence gave S. Ryu and T. Takayanagi (RT) a setting to vastly generalize the identification of area as an information quantity. They proposed identifying the area of minimal surfaces in the bulk (remember the minimal cut?) with entanglement entropy in the boundary theory.

Roughly speaking, if we were to split the boundary into two regions, left L and right R, it should be possible to also partition the bulk in a way that each piece of the bulk has either L or R in its boundary. Ryu and Takayanagi proposed that the area of the smallest surface \chi_R=\chi_L which splits the bulk in this way would be proportional to the entanglement entropy between the two parts

S_L = S_R = \frac{|\chi_L|}{4G} =\frac{|\chi_R|}{4G}.

It turns out that some quantum field theory states admit such a geometric interpretation. Many high energy theory colleagues have ideas about when this is possible and what are the necessary conditions. By far the best studied setting for this holographic duality is AdS/CFT, where Ryu and Takayanagi first checked their proposal. Here, the entanglement features of  the lowest energy state of a conformal field theory are matched to surfaces in a hyperbolic space (like the purple crochet and the tensor network presented). However, other geometries have been shown to match the RT prediction with respect to the entanglement properties of different states. The key point here is that the boundary states do not have any geometry per se. They just manifest different amounts of entanglement when partitioned in different ways.

Emergence

The holographic program suggests that bulk geometry emerges from the entanglement properties of the boundary state. Spacetime materializes from the information structure of the boundary instead of being a fundamental structure as in general relativity. Am I saying that we should strip everything physical, including space, in favor of ones and zeros? Well, first of all, it is not just me who is pushing this approach. Secondly, no one is claiming that we should start making all our physical reasoning in terms of ones and zeros.

Let me give an example. We know that the sea is composed mostly of water molecules. The observation of waves that travel, superpose and break can be labeled as an emergent phenomenon. However, to a surfer, a wave is much more real than the water molecules composing it and the fact that it is emergent is of no practical consequence when trying to predict where a wave will break. A proficient physicist, armed with tools from statistical mechanics (there are more than 10^{25} molecules per liter), could probably derive a macroscopic model for waves from the microscopic theory of particles. In the process of learning what the surfer already understood, he would identify elements of the  microscopic theory which become irrelevant for such questions. Such details could be whether the sea has an odd or even number of molecules or the presence of a few fish.

In the case of holography, each square meter corresponds to 1.4\times10^{69} bits of entanglement. We don’t even have words to describe anything close to this outrageously large exponent, which leaves plenty of room for emergence. Even all the information on the internet – estimated at 10^{22} bits (10 zettabits) – doesn’t match the area equivalent of the smallest known particle. The fact that there are so many orders of magnitude makes it difficult to extrapolate our understanding of the geometric domain to the information domain and vice versa. This is precisely the realm where techniques such as those from statistical mechanics successfully get rid of irrelevant details.

High energy theorists and people with a background in general relativity tend to picture things in a continuum language. For example, part of their daily bread and butter are Riemannian or Lorentzian manifolds, which are respectively used to describe space and spacetime. In contrast, most of information theory is usually applied to deal with discrete elements such as bits, elementary circuit gates, etc. Nevertheless, I believe it is fruitful to straddle this cultural divide to the benefit of both parties. In a way, the convergence we are seeking is analogous to the one achieved by the kinetic theory of gases, which allowed the unification of thermodynamics with classical mechanics.

So what did we do?

The remarkable success of the geometric RT prediction to different bulk geometries such as the BTZ black holes and the generality of the entanglement result for its random tensor network cousins emboldened us to take the RT prescription beyond its usual domain of application. We considered applying it to arbitrary Riemannian manifolds that are space-like and that can be approximated by a smoothly knit fabric.

Furthermore, we went on to consider the implications that such assumptions would have when the corresponding geometries are interpreted as error-correcting codes. In fact, our work elaborates on the perspective of A. Almheiri, X. Dong and D. Harlow (ADH) where quantum error-correcting code properties of AdS/CFT were laid out; it is hard to overemphasize the influence of this work. Our work considers general geometries and identifies properties a code associated to a specific holographic geometry should satisfy.

In the cat’s cradle/fabric metaphor for holography, the fingers at the boundary constitute the boundary theory without gravity and the resulting fabric represents a bulk geometry in the corresponding bulk gravitational theory. Bulk observables may be represented in different ways on the boundary, but not arbitrarily. This raises the question of which parts of the bulk correspond to which parts of the boundary. In general, there is not a one-to-one mapping. However, if we partition the boundary in two parts L and R, we expect to be able to split the bulk into two corresponding regions  {\mathcal E}[L]  and  {\mathcal E}[R]. This is the content of the entanglement wedge hypothesis, which is our other main assumption.  In our metaphor, one could imagine that we pull the left fingers up and the right fingers down (taking care not to get hurt). At some point, the fabric breaks through \chi_R into two pieces. In the setting we are concerned with, these pieces maintain part of the original structure, which tells us which bulk information was available in one piece of the boundary and which part was available in the other.

Although we do not produce new explicit examples of such codes, we worked our way towards developing a language which translates between the holographic/geometric perspective and the coding theory perspective. We specifically build upon the language of operator algebra quantum error correction (OAQEC) which allows individually focusing on different parts of the logical message. In doing so we identified several coding theoretic bounds and quantities, some of which we found to be applicable beyond the context of holography. A particularly noteworthy one is a strengthening of the quantum Singleton bound, which defines a trade-off between how much logical information can be packed in a code, how much physical space is used for encoding this information and how well-protected the information is from erasures.
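For reference, the ordinary quantum Singleton bound (the standard statement, not our strengthened one) says that a code encoding k logical qubits into n physical qubits with distance d obeys

k \le n - 2(d-1).

The strengthened version folds the price into this trade-off as well, constraining rate, distance, and price jointly.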

One of the central observations of ADH highlights how quantum codes have properties from both classical error-correcting codes and secret sharing schemes. On the one hand, logical encoded information should be protected from loss of small parts of the carrier, a property quantified by the code distance. On the other hand, the logical encoded information should not become accessible until a sufficiently large part of the carrier is available to us. This is quantified by the threshold of a corresponding secret sharing scheme. We call this quantity price as it identifies how much of the carrier we would need before someone could reconstruct the message faithfully. In general, it is hard to balance these two competing requirements; a statement which can be made rigorous. This kind of complementarity has long been recognized in quantum cryptography. However, we found that according to holographic predictions, codes admitting a geometric interpretation achieve a remarkable optimality in the trade-off between these features.

Our exploration of alternative geometries is rewarded by the following guidelines:

CantorWedge

In uberholography, bulk observables are accessible in a Cantor-type, fractal-shaped subregion of the boundary. This is illustrated in the Poincaré disk presentation of a negatively curved bulk.

  • Hyperbolic geometries predict a fixed polynomial scaling for code distance. This is illustrated by a feature we call uberholography. We use this name because there is an excess of holography wherein bulk observables can be represented on intricate subsets of the boundary which have fractal dimension even smaller than the boundary itself.
  • Hyperbolic geometries suggest the possibility of decoding procedures which are local on the boundary geometry. This property may be connected to the locality of the corresponding boundary field theory.
  • Flat and positive curvature geometries may lead to codes with better parameters in terms of distance and rates (ratio of logical information to physical information). A hemisphere reaches optimum parameters, saturating coding bounds.


    Seven iterations of a ternary Cantor set (dark line) on the unit interval. Each iteration is obtained by punching holes from the previous one and the set obtained in the limit is a fractal.

Current day quantum computers are far from the number of qubits required to invoke an emergent geometry. Nevertheless, it is exhilarating to take a step back and consider how the properties of the codes, and information in general, may be interpreted geometrically. On the other hand, I find that the quantum code language we adapt to the context of holography might eventually serve as a useful tool in distinguishing which boundary features are relevant or irrelevant for the emergent properties of the holographic dual. Ours is but one contribution in a very active field. However, the one thing I am certain about is that these are exciting times to be doing physics.

Here’s one way to get out of a black hole!

Two weeks ago I attended an exciting workshop at Stanford, organized by the It from Qubit collaboration, which I covered enthusiastically on Twitter. Many of the talks at the workshop provided fodder for possible blog posts, but one in particular especially struck my fancy. In explaining how to recover information that has fallen into a black hole (under just the right conditions), Juan Maldacena offered a new perspective on a problem that has worried me for many years. I am eagerly awaiting Juan’s paper, with Douglas Stanford and Zhenbin Yang, which will provide more details.


My cell-phone photo of Juan Maldacena lecturing at Stanford, 22 March 2017.

Almost 10 years ago I visited the Perimeter Institute to attend a conference, and by chance was assigned an office shared with Patrick Hayden. Patrick was a professor at McGill at that time, but I knew him well from his years at Caltech as a Sherman Fairchild Prize Fellow, and deeply respected him. Our proximity that week ignited a collaboration which turned out to be one of the most satisfying of my career.

To my surprise, Patrick revealed he had been thinking about  black holes, a long-time passion of mine but not previously a research interest of his, and that he had already arrived at a startling insight which would be central to the paper we later wrote together. Patrick wondered what would happen if Alice possessed a black hole which happened to be highly entangled with a quantum computer held by Bob. He imagined Alice throwing a qubit into the black hole, after which Bob would collect the black hole’s Hawking radiation and feed it into his quantum computer for processing. Drawing on his knowledge about quantum communication through noisy channels, Patrick argued that  Bob would only need to grab a few qubits from the radiation in order to salvage Alice’s qubit successfully by doing an appropriate quantum computation.


Alice tosses a qubit into a black hole, which is entangled with Bob’s quantum computer. Bob grabs some Hawking radiation, then does a quantum computation to decode Alice’s qubit.

This idea got my adrenaline pumping, stirring a vigorous dialogue. Patrick had initially assumed that the subsystem of the black hole ejected in the Hawking radiation had been randomly chosen, but we eventually decided (based on a simple picture of the quantum computation performed by the black hole) that it should take a time scaling like M log M (where M is the black hole mass expressed in Planck units) for Alice’s qubit to get scrambled up with the rest of her black hole. Only after this scrambling time would her qubit leak out in the Hawking radiation. This time is actually shockingly short, about a millisecond for a solar mass black hole. The best previous estimate for how long it would take for Alice’s qubit to emerge (scaling like M^3) had been about 10^{67} years.
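The millisecond figure is easy to reproduce; here is a sketch that ignores order-one prefactors:

```python
# Order-of-magnitude scrambling time: t ~ (M log M) Planck times, for M the
# black hole mass in Planck units. Order-one prefactors are ignored.
import math

M = 1.989e30 / 2.176e-8       # solar mass in Planck masses, ~9e37
t_planck = 5.391e-44          # Planck time in seconds

t_scramble = M * math.log(M) * t_planck
print(f"{t_scramble:.1e} s")  # ~4e-4 s: roughly a millisecond
```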

This short time scale aroused memories of discussions with Lenny Susskind back in 1993, vividly recreated in Lenny’s engaging book The Black Hole War. Because of the black hole’s peculiar geometry, it seemed conceivable that Bob could distill a copy of Alice’s qubit from the Hawking radiation and then leap into the black hole, joining Alice, who could then toss her copy of the qubit to Bob. It disturbed me that Bob would then hold two perfect copies of Alice’s qubit; I was a quantum information novice at the time, but I knew enough to realize that making a perfect clone of a qubit would violate the rules of quantum mechanics. I proposed to Lenny a possible resolution of this “cloning puzzle”: If Bob has to wait outside the black hole for too long in order to distill Alice’s qubit, then when he finally jumps in it may be too late for Alice’s qubit to catch up to Bob inside the black hole before Bob is destroyed by the powerful gravitational forces inside. Revisiting that scenario, I realized that the scrambling time M log M, though short, was just barely long enough for the story to be self-consistent. It was gratifying that things seemed to fit together so nicely, as though a deep truth were being affirmed.


If Bob decodes the Hawking radiation and then jumps into the black hole, can he acquire two identical copies of Alice’s qubit?

Patrick and I viewed our paper as a welcome opportunity to draw the quantum information and quantum gravity communities closer together, and we wrote it with both audiences in mind. We had fun writing it, adding rhetorical flourishes which we hoped would draw in readers who might otherwise be put off by unfamiliar ideas and terminology.

In their recent work, Juan and his collaborators propose a different way to think about the problem. They stripped down our Hawking radiation decoding scenario to a model so simple that it can be analyzed quite explicitly, yielding a pleasing result. What had worried me so much was that there seemed to be two copies of the same qubit, one carried into the black hole by Alice and the other residing outside the black hole in the Hawking radiation. I was alarmed by the prospect of a rendezvous of the two copies. Maldacena et al. argue that my concern was based on a misconception. There is just one copy, either inside the black hole or outside, but not both. In effect, as Bob extracts his copy of the qubit on the outside, he destroys Alice’s copy on the inside!

To reach this conclusion, several ideas are invoked. First, we analyze the problem in the case where we understand quantum gravity best, the case of a negatively curved spacetime called anti-de Sitter space.  In effect, this trick allows us to trap a black hole inside a bottle, which is very advantageous because we can study the physics of the black hole by considering what happens on the walls of the bottle. Second, we envision Bob’s quantum computer as another black hole which is entangled with Alice’s black hole. When two black holes in anti-de Sitter space are entangled, the resulting geometry has a “wormhole” which connects together the interiors of the two black holes. Third, we chose the entangled pair of black holes to be in a very special quantum state, called the “thermofield double” state. This just means that the wormhole connecting the black holes is as short as possible. Fourth, to make the analysis even simpler, we suppose there is just one spatial dimension, which makes it easier to draw a picture of the spacetime. Now each wall of the bottle is just a point in space, with the left wall lying outside Bob’s side of the wormhole, and the right wall lying outside Alice’s side.

An important property of the wormhole is that it is not traversable. That is, when Alice throws her qubit into her black hole and it enters her end of the wormhole, the qubit cannot emerge from the other end. Instead it is stuck inside, unable to get out on either Alice’s side or Bob’s side. Most ways of manipulating the black holes from the outside would just make the wormhole longer and exacerbate the situation, but in a clever recent paper Ping Gao, Daniel Jafferis, and Aron Wall pointed out an exception. We can imagine a quantum wire connecting the left wall and right wall, which simulates a process in which Bob extracts a small amount of Hawking radiation from the right wall (that is, from Alice’s black hole), and carefully deposits it on the left wall (inserting it into Bob’s quantum computer). Gao, Jafferis, and Wall find that this procedure, by altering the trajectories of Alice’s and Bob’s walls, can actually make the wormhole traversable!


(a) A nontraversable wormhole. Alice’s qubit, thrown into the black hole, never reaches Bob. (b) Stealing some Hawking radiation from Alice’s side and inserting it on Bob’s side makes the wormhole traversable. Now Alice’s qubit reaches Bob, who can easily “decode” it.

This picture gives us a beautiful geometric interpretation of the decoding protocol that Patrick and I had described. It is the interaction between Alice’s wall and Bob’s wall that brings Alice’s qubit within Bob’s grasp. By allowing Alice’s qubit to reach Bob at the other end of the wormhole, that interaction suffices to perform Bob’s decoding task, which is especially easy in this case because Bob’s quantum computer was connected to Alice’s black hole by a short wormhole when she threw her qubit inside.


If, after a delay, Bob jumps into the black hole, he might find Alice’s qubit inside. But if he does, that qubit cannot be decoded by Bob’s quantum computer. Bob has no way to attain two copies of the qubit.

And what if Bob conducts his daring experiment, in which he decodes Alice’s qubit while still outside the black hole, and then jumps into the black hole to check whether the same qubit is also still inside? The above spacetime diagram contrasts two possible outcomes of Bob’s experiment. After entering the black hole, Alice might throw her qubit toward Bob so he can catch it inside the black hole. But if she does, then the qubit never reaches Bob’s quantum computer, and he won’t be able to decode it from the outside. On the other hand, Alice might allow her qubit to reach Bob’s quantum computer at the other end of the (now traversable) wormhole. But if she does, Bob won’t find the qubit when he enters the black hole. Either way, there is just one copy of the qubit, and no way to clone it. I shouldn’t have been so worried!

Granted, we have only described what happens in an oversimplified model of a black hole, but the lessons learned may be more broadly applicable. The case for broader applicability rests on a highly speculative idea, what Maldacena and Susskind called the ER=EPR conjecture, which I wrote about in this earlier blog post. One consequence of the conjecture is that a black hole highly entangled with a quantum computer is equivalent, after a transformation acting only on the computer, to two black holes connected by a short wormhole (though it might be difficult to actually execute that transformation). The insights of Gao-Jafferis-Wall and Maldacena-Stanford-Yang, together with the ER=EPR viewpoint, indicate that we don’t have to worry about the same quantum information being in two places at once. Quantum mechanics can survive the attack of the clones. Whew!

Thanks to Juan, Douglas, and Lenny for ongoing discussions and correspondence which have helped me to understand their ideas (including a lucid explanation from Douglas at our Caltech group meeting last Wednesday). This story is still unfolding and there will be more to say. These are exciting times!