Spooky action nearby: Entangling logical qubits without physical operations

My top 10 ghosts (solo acts and ensembles). If Bruce Willis being a ghost in The Sixth Sense is a spoiler, that’s on you — the movie has been out for 26 years.

Einstein and I have both been spooked by entanglement. Einstein’s experience was more profound: in a 1947 letter to Born, he famously dubbed it spukhafte Fernwirkung (or spooky action at a distance). Mine, more pedestrian. It came when I first learned the cost of entangling logical qubits on today’s hardware.

Logical entanglement is not easy

I recently listened to a talk where the speaker declared that “logical entanglement is easy,” and I have to disagree. You could argue that it looks easy when compared to logical small-angle gates, in much the same way I would look small standing next to Shaquille O’Neal. But that doesn’t mean 6’5” and 240 pounds is small.

To see why it’s not easy, it helps to look at how logical entangling gates are actually implemented. A logical qubit is not a single physical object. It’s an error-resistant qubit built out of several noisy, error-prone physical qubits. A quantum error-correcting (QEC) code with parameters [[n,k,d]] uses n physical qubits to encode k logical qubits in a way that can detect up to d-1 physical errors and correct up to ⌊(d-1)/2⌋ of them.

This redundancy is what makes fault-tolerant quantum computing possible. It’s also what makes logical operations expensive.

On platforms like neutral-atom arrays and trapped ions, the standard approach is a transversal CNOT: you apply two-qubit gates pairwise across the code blocks (qubit i in block A interacts with qubit i in block B). That requires n physical two-qubit gates to entangle the k logical qubits of one code block with the k logical qubits of another.
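To make the counting concrete, here is a minimal sketch (my own illustration, with made-up indices rather than any particular hardware’s API) of the gate schedule such a transversal CNOT requires:

```python
# A minimal sketch of the schedule for a transversal CNOT between two
# n-qubit code blocks: one physical CNOT per corresponding pair of qubits.
# Indices are illustrative; n = 7 would correspond to two Steane-code blocks.
n = 7
block_A = list(range(0, n))        # physical qubits of code block A
block_B = list(range(n, 2 * n))    # physical qubits of code block B

schedule = [("CNOT", a, b) for a, b in zip(block_A, block_B)]
print(len(schedule))               # n physical two-qubit gates in total
```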

To make this less abstract, here’s a QuEra animation showing a transversal CNOT implemented in a neutral-atom array. The animation shows real experimental data, not a schematic idealization.

The idea is simple. The problem is that n can be large, and physical two-qubit gates are among the noisiest operations available on today’s hardware.

Superconducting platforms take a different route. They tend to rely on lattice surgery: you entangle logical qubits by repeatedly measuring joint stabilizers along a boundary. That replaces two-qubit gates with stabilizer measurements over multiple rounds (typically scaling with the code distance). Unfortunately, physical measurements are the other noisiest primitive we have.

Then there are the modern high-rate qLDPC codes, which pack many logical qubits into a single code block. These are excellent quantum memories. But when it comes to computation, they face challenges. Logical entangling gates can require significant circuit depth, and often entire auxiliary code blocks are needed to mediate the interaction.

This isn’t a purely theoretical complaint. In recent state-of-the-art experiments by Google and by the Harvard–QuEra–MIT collaboration, logical entangling gates consumed nearly half of the total error budget.

So no, logical entanglement is not easy. But how easy can we make it?

Phantom codes: Logical entanglement without physical operations

To answer how easy logical entanglement can really be, it helps to start with a slightly counterintuitive observation: logical entanglement can sometimes be generated purely by permuting physical qubits.

Let me show you how this works in the simplest possible setting, and then I’ll explain what’s really going on.

Consider a [[4,2,2]] stabilizer code, which uses 4 physical qubits to encode 2 logical ones; it can detect 1 error but can’t correct any. Below are its logical operators; the arrow indicates what happens when we physically swap qubits 1 and 3 (bars denote logical operators).

\begin{array}{rcl} \bar X_1 & = & XXII \;\rightarrow\; IXXI = \bar X_1 \bar X_2 \\ \bar X_2 & = & XIXI \;\rightarrow\; XIXI = \bar X_2 \\ \bar Z_1 & = & ZIZI \;\rightarrow\; ZIZI = \bar Z_1 \\ \bar Z_2 & = & ZZII \;\rightarrow\; IZZI = \bar Z_1 \bar Z_2 \end{array}

You can check that the logical operators transform exactly as shown, which is the action of a logical CNOT gate. For readers less familiar with stabilizer codes, here’s an explanation of what’s going on. Those familiar can skip ahead.


At the logical level, we identify gates by how they transform logical Pauli operators. This is the same idea used in ordinary quantum circuits: a gate is defined not just by what it does to states, but by how it reshuffles observables.

A CNOT gate has a very characteristic action. If qubit 1 is the control and qubit 2 is the target, then: an X on the control spreads to the target, a Z on the target spreads back to the control, and the other Pauli operators remain unchanged.

That’s exactly what we see above.

To see why this generates entanglement, it helps to switch from operators to states. A canonical example of how to generate entanglement in quantum circuits is the following. First, you put one qubit into a superposition using a Hadamard. Starting from |00⟩, this gives

|00\rangle \rightarrow \frac{1}{\sqrt{2}}(|00\rangle + |10\rangle).

At this point there is still no entanglement — just superposition.

The entanglement appears when you apply a CNOT. The CNOT correlates the two branches of the superposition, producing

\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle),

which is a maximally entangled Bell state. The Hadamard creates superposition; the CNOT turns that superposition into correlation.
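If you like to check such things numerically, here is a tiny sketch (my own illustration in plain numpy, nothing more) of the Hadamard-then-CNOT story:

```python
# A quick numerical check of the Bell-state story above.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # control = qubit 1,
                 [0, 1, 0, 0],                 # target = qubit 2
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi = np.array([1.0, 0, 0, 0])       # |00>
psi = np.kron(H, I) @ psi            # (|00> + |10>)/sqrt(2): superposition only
psi = CNOT @ psi                     # (|00> + |11>)/sqrt(2): a Bell state
print(np.round(psi, 3))              # [0.707 0.    0.    0.707]
```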

The operator transformations above are simply the algebraic version of this story. Seeing

\bar X_1 \rightarrow \bar X_1 \bar X_2 \quad {\rm and} \quad \bar Z_2 \rightarrow \bar Z_1 \bar Z_2

tells us that information on one logical qubit is now inseparable from the other.


In other words, in this code,

\bar{\rm CNOT}_{12} = {\rm SWAP}_{13}.
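This is easy to verify mechanically. Here is a minimal sketch (my own illustration, plain Python, with qubits 0-indexed rather than the 1-indexed labels used above) that checks the Pauli-string transformations line by line:

```python
# Check that swapping physical qubits 1 and 3 of the [[4,2,2]] code acts as
# a logical CNOT. Products of same-type Pauli strings carry no phases here,
# so simple string manipulation suffices.

def permute(pauli, perm):
    """Relabel a Pauli string under a permutation of physical qubits."""
    out = ['I'] * len(pauli)
    for old, new in enumerate(perm):
        out[new] = pauli[old]
    return ''.join(out)

def multiply(p, q):
    """Multiply two same-type Pauli strings (X*X = I, X*I = X, ...)."""
    return ''.join('I' if a == b else (a if b == 'I' else b)
                   for a, b in zip(p, q))

X1, X2 = 'XXII', 'XIXI'    # logical X operators
Z1, Z2 = 'ZIZI', 'ZZII'    # logical Z operators
swap13 = [2, 1, 0, 3]      # swap physical qubits 1 and 3 (0-indexed: 0 and 2)

assert permute(X1, swap13) == multiply(X1, X2)   # X1 -> X1 X2
assert permute(X2, swap13) == X2                 # X2 -> X2
assert permute(Z1, swap13) == Z1                 # Z1 -> Z1
assert permute(Z2, swap13) == multiply(Z1, Z2)   # Z2 -> Z1 Z2
print("SWAP(1,3) acts as a logical CNOT(1 -> 2)")
```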

The figure below shows how this logical circuit maps onto a physical circuit. Each horizontal line represents a qubit. On the left is a logical CNOT gate: the filled dot marks the control qubit, and the ⊕ symbol marks the target qubit whose state is flipped if the control is in the state |1⟩. On the right is the corresponding physical implementation, where the logical gate is realized by acting on multiple physical qubits.

At this point, all we’ve done is trade one physical operation for another. The real magic comes next. Physical permutations do not actually need to be implemented in hardware. Because commuting a permutation through any later gate merely relabels the qubits that gate acts on, permutations can be pulled to the very end of a computation and absorbed into a relabelling of the final measurement outcomes. No operator spread. No increase in circuit depth.

This is not true for generic physical gates. It is a unique property of permutations.

To see how this works, consider a slightly larger example using an [[8,3,2]] code. Here the logical operators are a bit more complicated:

\bar{\rm CNOT}_{12} = {\rm SWAP}_{25}\,{\rm SWAP}_{37}, \quad \bar{\rm CNOT}_{23} = {\rm SWAP}_{28}\,{\rm SWAP}_{35}, \quad {\rm and} \quad \bar{\rm CNOT}_{31} = {\rm SWAP}_{36}\,{\rm SWAP}_{45}.

Below is a three-logical-qubit circuit implemented in this code, like the circuit drawn above but with an extra step. Suppose the circuit contains three logical CNOTs, each implemented via a physical permutation.

Instead of executing any of these permutations, we simply keep track of them classically and relabel the outputs at the end. From the hardware’s point of view, nothing happened.
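In software terms, the bookkeeping is almost trivial. Here is a minimal sketch (my own illustration, not the paper’s code; the swap pairs are the [[8,3,2]] identities above, converted to 0-indexing):

```python
# Track permutation-implemented logical CNOTs classically instead of
# executing them, then relabel the final measurement outcomes.
n = 8
pos = list(range(n))   # pos[w]: physical position currently playing wire w

def virtual_swaps(pos, swaps):
    """Apply SWAPs in software only; no physical operation is performed."""
    for a, b in swaps:
        pos[a], pos[b] = pos[b], pos[a]
    return pos

for swaps in [[(1, 4), (2, 6)],    # logical CNOT_12 = SWAP_25 SWAP_37
              [(1, 7), (2, 4)],    # logical CNOT_23 = SWAP_28 SWAP_35
              [(2, 5), (3, 4)]]:   # logical CNOT_31 = SWAP_36 SWAP_45
    pos = virtual_swaps(pos, swaps)

raw = [0, 1, 1, 0, 1, 0, 0, 1]               # hypothetical measurement record
outcomes = [raw[pos[w]] for w in range(n)]   # relabelled outcomes, for free
```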

If you prefer a more physical picture, imagine this implemented with atoms in an array. The atoms never move. No gates fire. The entanglement is there anyway.

This is the key point. Because no physical gates are applied, the logical entangling operation has zero overhead. And for the same reason, it has perfect fidelity. We’ve reached the minimum possible cost of a logical entangling gate. You can’t beat free.

To be clear, not all codes are amenable to logical entanglement through relabeling. This is a very special feature that exists in some codes.

Motivated by this observation, my collaborators and I defined a new class of QEC codes. I’ll state the definition first, and then unpack what it really means.

Phantom codes are stabilizer codes in which logical entangling gates between every ordered pair of logical qubits can be implemented solely via physical qubit permutations.

The phrase “every ordered pair” is a strong requirement. For three logical qubits, it means the code must support logical CNOTs between qubits (1,2), (2,1), (1,3), (3,1), (2,3), and (3,2). More generally, a code with k logical qubits must support all k(k-1) possible directed CNOTs. This isn’t pedantry. Without access to every directed pair, you can’t freely build arbitrary entangling circuits — you’re stuck with a restricted gate set.

The phrase “solely via physical qubit permutations” is just as demanding. If all but one of those CNOTs could be implemented via permutations, but the last one required even a single physical gate — say, a one-qubit Clifford — the code would not be phantom. That condition is what buys you zero overhead and perfect fidelity. Permutations can be compiled away entirely; any additional physical operation cannot.

Together, these two requirements carve out a very special class of codes. All in-block logical entangling gates are free. Logical entangling gates between phantom code blocks are still available — they’re simply implemented transversally.

After settling on this definition, we went back through the literature to see whether any existing codes already satisfied it. We found two: the [[12,2,4]] Carbon code and the [[2^D,D,2]] hypercube codes. The former enabled repeated rounds of quantum error correction in trapped-ion experiments, while the latter underpinned recent neutral-atom experiments achieving logical-over-physical performance gains in quantum circuit sampling.

Both are genuine phantom codes. Both are also limited. With distance d=2, the hypercube codes can detect errors but not correct them; with only k=2 logical qubits, the Carbon code supports a limited class of CNOT circuits. Which raises the questions: Do other phantom codes exist? Can these codes have advantages that persist for scalable applications under realistic noise conditions? What structural constraints do they obey (parameters, other gates, etc.)?

Before getting to that, a brief note for the even more expert reader on four things phantom codes are not. Phantom codes are not a form of logical Pauli-frame tracking: the phantom property survives in the presence of non-Clifford gates. They are not strictly confined to a single code block: because they are CSS codes, multiple blocks can be stitched together using physical CNOTs in linear depth. They are not automorphism gates, which rely on single-qubit Cliffords and therefore do not achieve zero overhead or perfect fidelity. And they are not codes like SHYPS, Gross, or Tesseract codes, which allow only products of CNOTs via permutations rather than individually addressable ones. All of those codes are interesting. They’re just not phantom codes.

In a recent preprint, we set out to answer the three questions above. This post isn’t about walking through all of those results in detail, so here’s the short version. First, we find many more phantom codes — hundreds of thousands of additional examples, along with infinite families that allow both k and d to scale. We study their structural properties and identify which other logical gates they support beyond their characteristic phantom ones.

Second, we show that phantom codes can be practically useful for the right kinds of tasks — essentially, those that are heavy on entangling gates. In end-to-end noisy simulations, we find that phantom codes can outperform the surface code, reducing logical infidelity by one to two orders of magnitude for resource-state preparation (GHZ-state preparation) and many-body simulation, at comparable qubit overhead and with a modest preselection acceptance rate of about 24%.

If you’re interested in the details, you can read more in our preprint.

Larger space of codes to explore

This is probably a good moment to zoom out and ask the referee question: why does this matter?

I was recently updating my CV and realized I’ve now written my 40th referee report for APS. After a while, refereeing trains a reflex. No matter how clever the construction or how clean the proof, you keep coming back to the same question: what does this actually change?

So why do phantom codes matter? At least to me, there are two reasons: one about how we think about QEC code design, and one about what these codes can already do in practice.

The first reason is the one I’m most excited about. It has less to do with any particular code and more to do with how the field implicitly organizes the space of QEC codes. Most of that space is organized around familiar structural properties: encoding rate, distance, stabilizer weight, LDPC-ness. These are the axes that make a code a good memory. And they matter, a lot.

But computation lives on a different axis. Logical gates cost something, and that cost is sometimes treated as downstream—something to be optimized after a code is chosen, rather than something to design for directly. As a result, the cost of logical operations is usually inherited, not engineered.

One way to make this tension explicit is to think of code design as a multi-dimensional space with at least two axes. One axis is memory cost: how efficiently a code stores information. High rate, high distance, low-weight stabilizers, efficient decoding — all the usual virtues. The other axis is computational cost: how expensive it is to actually do things with the encoded qubits. Low computational cost means many logical gates can be implemented with little overhead; it is what makes computation easy.

Why focus on extreme points in this space? Because extremes are informative. They tell you what is possible, what is impossible, and which tradeoffs are structural rather than accidental.

Phantom codes sit precisely at one such extreme: they minimize the cost of in-block logical entanglement. That zero-logical-cost extreme comes with tradeoffs. The phantom codes we find tend to have high stabilizer weights, and for families with scalable k, the number of physical qubits grows exponentially. These are real costs, and they matter.

Still, the important lesson is that even at this extreme point, codes can outperform LDPC-based architectures on well-chosen tasks. That observation motivates an approach to QEC code design in which the logical gates of interest are placed at the centre of the design process, rather than treated as an afterthought. This is my first takeaway from this work.

Second is that phantom codes are naturally well suited to circuits that are heavy on logical entangling gates. Some interesting applications fall into this category, including fermionic simulation and correlated-phase preparation. Combined with recent algorithmic advances that reduce the overhead of digital fermionic simulation, these code-level ideas could potentially improve near-term experimental feasibility.

Back to being spooked

The space of QEC codes is massive. Perhaps two axes are not enough. Stabilizer weight might deserve its own. Perhaps different applications demand different projections of this space. I don’t yet know the best way to organize it.

The size of this space is a little spooky — and that’s part of what makes it exciting to explore, and to see what these corners of code space can teach us about fault-tolerant quantum computation.

Identical twins and quantum entanglement

“If I had a nickel for every unsolicited and very personal health question I’ve gotten at parties, I’d have paid off my medical school loans by now,” my doctor friend complained. As a physicist, I can somewhat relate. I occasionally find myself nodding along politely to people’s eccentric theories about the universe. A gentleman once explained to me how twin telepathy (the phenomenon where, for example, one twin feels the other’s pain despite being in separate countries) comes from twins’ brains being entangled in the womb. Entanglement is a nonclassical correlation that can exist between spatially separated systems. If two objects are entangled, it’s possible to know everything about both of them together but nothing about either one. Entangling two particles (let alone full brains) over tens of kilometres (let alone full countries) is incredibly challenging. “Using twins to study entanglement, that’ll be the day,” I thought. Well, my last paper did something like that. 

In theory, a twin study consists of two people who are as identical as possible in every way except for one. What that allows you to do is isolate the effect of that one thing on something else. Aleksander Lasek (postdoc at QuICS), David Huse (professor of physics at Princeton), Nicole Yunger Halpern (NIST physicist and Quantum Frontiers blogger), and I were interested in isolating the effects of quantities’ noncommutation (explained below) on entanglement. To do so, we first built a pair of twins and then compared them.

Consider a well-insulated thermos filled with soup. The heat and the number of “soup particles” inside the thermos are conserved. So the energy and the number of “soup particles” are conserved quantities. In classical physics, conserved quantities commute. This means that we can simultaneously measure the amount of each conserved quantity in our system, like the energy and number of soup particles. However, in quantum mechanics, this needn’t be true. Measuring one property of a quantum system can change another measurement’s outcome.

Conserved quantities’ noncommutation in thermodynamics has led to some interesting results. For example, it’s been shown that conserved quantities’ noncommutation can decrease the rate of entropy production. For the purposes of this post, entropy production is something that limits engine efficiency—how well engines can convert fuel to useful work. For example, if your car engine had zero entropy production (which is impossible), it would convert 100% of the energy in your car’s fuel into work that moved your car along the road. Current car engines can convert about 30% of this energy, so it’s no wonder that people are excited about the prospective application of decreasing entropy production. Other results (like this one and that one) have connected noncommutation to potentially hindering thermalization—the phenomenon where systems interact until they have similar properties, like when a cup of coffee cools. Thermalization limits memory storage and battery lifetimes. Thus, learning how to resist thermalization could also potentially lead to better technologies, such as longer-lasting batteries. 

One can measure the amount of entanglement within a system, and as quantum particles thermalize, they entangle. Given the above results about thermalization, we might expect that noncommutation would decrease entanglement. Testing this expectation is where the twins come in.

Say we built a pair of twins that were identical in every way except for one. Nancy, the noncommuting twin, has some features that don’t commute, say, her hair colour and height. This means that if we measure her height, we’ll have no idea what her hair colour is. For Connor, the commuting twin, his hair colour and height commute, so we can determine them both simultaneously. Which twin has more entanglement? It turns out it’s Nancy.

Disclaimer: This paragraph is written for an expert audience. Our actual models consist of 1D chains of pairs of qubits. Each model has three conserved quantities (“charges”), which are sums over local charges on the sites. In the noncommuting model, the three local charges are tensor products of Pauli matrices with the identity (XI, YI, ZI). In the commuting model, the three local charges are tensor products of the Pauli matrices with themselves (XX, YY, ZZ). The paper explains in what sense these models are similar. We compared these models numerically and analytically in different settings suggested by conventional and quantum thermodynamics. In every comparison, the noncommuting model had more entanglement on average.
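The commutation claims in that paragraph are easy to verify directly. Here is a small numpy sketch (my own illustration, not the paper’s code):

```python
# The noncommuting model's local charges (XI, YI, ZI) fail to commute with
# one another, while the commuting model's (XX, YY, ZZ) all commute.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

nancy  = [np.kron(X, I), np.kron(Y, I), np.kron(Z, I)]  # noncommuting charges
connor = [np.kron(X, X), np.kron(Y, Y), np.kron(Z, Z)]  # commuting charges

def commutes(a, b):
    return np.allclose(a @ b, b @ a)

print(all(commutes(a, b) for a in connor for b in connor))  # True
print(all(commutes(a, b) for a in nancy for b in nancy))    # False
```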

Our result thus suggests that noncommutation increases entanglement. So does charges’ noncommutation promote or hinder thermalization? Frankly, I’m not sure. But I’d bet the answer won’t be in the next eccentric theory I hear at a party.

John Preskill and the dawn of the entanglement frontier

Editor’s Note: John Preskill’s recent election to the National Academy of Sciences generated a lot of enthusiasm among his colleagues and students. In an earlier post today, famed Stanford theoretical physicist Leonard Susskind paid tribute to John’s early contributions to physics ranging from magnetic monopoles to the quantum mechanics of black holes. In this post, Daniel Gottesman, a faculty member at the Perimeter Institute, takes us back to the formative years of the Institute for Quantum Information at Caltech, the precursor to IQIM and a world-renowned incubator for quantum information and quantum computation research. Though John shies away from the spotlight, we, at IQIM, believe that the integrity of his character and his role as a mentor and catalyst for science are worthy of attention and set a good example for current and future generations of theoretical physicists.

Preskill’s legacy may well be the incredible number of preeminent research scientists in quantum physics he has mentored throughout his extraordinary career.

When someone wins a big award, it has become traditional on this blog for John Preskill to write something about them. The system breaks down, though, when John is the one winning the award. Therefore I’ve been brought in as a pinch hitter (or should it be pinch lionizer?).

The award in this case is that John has been elected to the National Academy of Sciences, along with Charlie Kane and a number of other people who don’t work on quantum information. Lenny Susskind has already written about John’s work on other topics; I will focus on quantum information.

On the research side of quantum information, John is probably best known for his work on fault-tolerant quantum computation, particularly topological fault tolerance. John jumped into the field of quantum computation in 1994 in the wake of Shor’s algorithm, and brought me and some of his other grad students with him. It was obvious from the start that error correction was an important theoretical challenge (emphasized, for instance, by Unruh), so that was one of the things we looked at. We couldn’t figure out how to do it, but some other people did. John and I embarked on a long drawn-out project to get good bounds on the threshold error rate. If you can build a quantum computer with an error rate below the threshold value, you can do arbitrarily large quantum computations. If not, then errors will eventually overwhelm you. Early versions of my project with John suggested that the threshold should be about 10^{-4}, and the number began floating around (somewhat embarrassingly) as the definitive word on the threshold value. Our attempts to bound the higher-order terms in the computation became rather grotesque, and the project proceeded very slowly until a new approach and the recruitment of Panos Aliferis finally let us finish a paper with a rigorous proof of a slightly lower threshold value.

Meanwhile, John had also been working on topological quantum computation. John has already written about his excitement when Kitaev visited Caltech and talked about the toric code. The two of them, plus Eric Dennis and Andrew Landahl, studied the application of this code for fault tolerance. If you look at the citations of this paper over time, it looks rather … exponential. For a while, topological things were too exotic for most quantum computer people, but over time, the virtues of surface codes have become obvious (apparently high threshold, convenient for two-dimensional architectures). It’s become one of the hot topics in recent years and there are no signs of flagging interest in the community.

John has also made some important contributions to security proofs for quantum key distribution, known to the cognoscenti just by its initials. QKD allows two people (almost invariably named Alice and Bob) to establish a secret key by sending qubits over an insecure channel. If the eavesdropper Eve tries to live up to her name, her measurements of the qubits being transmitted will cause errors revealing her presence. If Alice and Bob don’t detect the presence of Eve, they conclude that she is not listening in (or at any rate hasn’t learned much about the secret key) and therefore they can be confident of security when they later use the secret key to encrypt a secret message. With Peter Shor, John gave a security proof of the best-known QKD protocol, known as the “Shor-Preskill” proof. Sometimes we scientists lack originality in naming. It was not the first proof of security, but earlier ones were rather complicated. The Shor-Preskill proof was conceptually much clearer and made a beautiful connection between the properties of quantum error-correcting codes and QKD. The techniques introduced in their paper got adopted into much later work on quantum cryptography.

Collaborating with John is always an interesting experience. Sometimes we’ll discuss some idea or some topic and it will be clear that John does not understand the idea clearly or knows little about the topic. Then, a few days later we discuss the same subject again and John is an expert, or at least he knows a lot more than me. I guess this ability to master topics quickly is why he was always able to answer Steve Flammia’s random questions after lunch. And then when it comes time to write the paper … John will do it. It’s not just that he will volunteer to write the first draft — he keeps control of the whole paper and generally won’t let you edit the source, although of course he will incorporate your comments. I think this habit started because of incompatibilities between the TeX editor he was using and any other program, but he maintains it (I believe) to make sure that the paper meets his high standards of presentation quality.

This also explains why John has been so successful as an expositor. His lecture notes for the quantum computation class at Caltech are well-known. Despite being incomplete and not available on Amazon, they are probably almost as widely read as the standard textbook by Nielsen and Chuang.

Before IQIM, there was IQI, and before that was QUIC.

He apparently is also good at writing grants. Under his leadership and Jeff Kimble’s, Caltech has become one of the top places for quantum computation. In my last year of graduate school, John and Jeff, along with Steve Koonin, secured the QUIC grant, and all of a sudden Caltech had money for quantum computation. I got a research assistantship and could write my thesis without having to worry about TAing. Postdocs started to come — first Chris Fuchs, then a long stream of illustrious others. The QUIC grant grew into IQI, and that eventually sprouted an M and drew in even more people. When I was a student, John’s group was located in Lauritsen with the particle theory group. We had maybe three grad student offices (and not all the students were working on quantum information), plus John’s office. As the Caltech quantum effort grew, IQI acquired territory in another building, then another, and then moved into a good chunk of the new Annenberg building. Without John’s efforts, the quantum computing program at Caltech would certainly be much smaller and maybe completely lacking a theory side. It’s also unlikely this blog would exist.

The National Academy has now elected John a member, probably more for his research than his twitter account (@preskill), though I suppose you never know. Anyway, congratulations, John!

-D. Gottesman

Of magnetic monopoles and fast-scrambling black holes

Editor’s Note: On April 29th, 2014, the National Academy of Sciences announced the new electees to the prestigious organization. This was an especially happy occasion for everyone here at IQIM, since the new members included our very own John Preskill, Richard P. Feynman Professor of Theoretical Physics and regular blogger on this site. A request was sent to Leonard Susskind, a close friend and collaborator of John’s, to take a trip down memory lane and give the rest of us a glimpse of some of John’s early contributions to Physics. John, congratulations from all of us here at IQIM.

John Preskill was elected to the National Academy of Sciences, an event long overdue. Perhaps it took longer than it should have because there is no way to pigeon-hole him; he is a theoretical physicist, and that’s all there is to it.

John has long been one of my heroes in theoretical physics. There is something very special about his work. It has exceptional clarity, it has vision, it has integrity—you can count on it. And sometimes it has another property: it can surprise. The first time I heard his name come up, sometime around 1979, I was not only surprised; I was dismayed. A student whose name I had never heard of had uncovered a serious clash between two things, both of which I deeply wanted to believe in. One was the Big-Bang theory and the other was the discovery of grand unified particle theories. Unification led to the extraordinary prediction that Dirac’s magnetic monopoles must exist, at least in principle. The Big-Bang theory said they must exist in fact. The extreme conditions at the beginning of the universe were exactly what was needed to create loads of monopoles; so many that they would flood the universe with too much mass. John, the unknown graduate student, did a masterful analysis. It left no doubt that something had to give. Cosmology gave. About a year later, inflationary cosmology was discovered by Guth, who was in part motivated by Preskill’s monopole puzzle.

John’s subsequent career as a particle physicist was marked by a number of important insights which often had that surprising quality. The cosmology of the invisible axion was one. Others had to do with very subtle and counterintuitive features of quantum field theory, like the existence of “Alice strings”. In the very distant past, Roger Penrose and I had a peculiar conversation about possible generalizations of the Aharonov-Bohm effect. We speculated on all sorts of things that might happen when something is transported around a string. I think it was Roger who got excited about the possibilities that might result if a topological defect could change gender. Alice strings were not quite that exotic, only electric charge flips, but nevertheless it was very surprising.

John of course had a long-standing interest in the quantum mechanics of black holes: I will quote a passage from a visionary 1992 review paper, “Do Black Holes Destroy Information?”:

“I conclude that the information loss paradox may well presage a revolution in fundamental physics.”

At that time no one knew the answer to the paradox, although a few of us, including John, thought the answer was that information could not be lost. But almost no one saw the future as clearly as John did. Our paths crossed in 1993 in a very exciting discussion about black holes and information. We were both thinking about the same thing, now called black hole complementarity. We were concerned about quantum cloning if information is carried by Hawking radiation. We thought we knew the answer: it takes too long to retrieve the information to then be able to jump into the black hole and discover the clone. This is probably true, but at that time we had no idea how close a call this might be.

It took until 2007 to properly formulate the problem. Patrick Hayden and John Preskill utterly surprised me, and probably everyone else who had been thinking about black holes, with their now-famous paper “Black Holes as Mirrors.” In a sense, this paper started a revolution in applying the powerful methods of quantum information theory to black holes.

We live in the age of entanglement. From quantum computing to condensed matter theory, to quantum gravity, entanglement is the new watchword. Preskill was in the vanguard of this revolution, but he was also the teacher who made the new concepts available to physicists like myself. We can now speak about entanglement, error correction, fault tolerance, tensor networks and more. The Preskill lectures were the indispensable source of knowledge and insight for us.

Congratulations John. And congratulations NAS.

-L. S.

Entanglement = Wormholes

One of the most enjoyable and inspiring physics papers I have read in recent years is this one by Mark Van Raamsdonk. Building on earlier observations by Maldacena and by Ryu and Takayanagi, Van Raamsdonk proposed that quantum entanglement is the fundamental ingredient underlying spacetime geometry.* Since my first encounter with this provocative paper, I have often mused that it might be a Good Thing for someone to take Van Raamsdonk’s idea really seriously.

Now someone has.

I love wormholes. (Who doesn’t?) Picture two balls, one here on earth, the other in the Andromeda galaxy. It’s a long trip from one ball to the other on the background space, but there’s a shortcut: you can walk into the ball on earth and moments later walk out of the ball in Andromeda. That’s a wormhole.

I’ve mentioned before that John Wheeler was one of my heroes during my formative years. Back in the 1950s, Wheeler held a passionate belief that “everything is geometry,” and one particularly intriguing idea he called “charge without charge.” There are no pointlike electric charges, Wheeler proclaimed; rather, electric field lines can thread the mouth of a wormhole. What looks to you like an electron is actually a tiny wormhole mouth. If you were small enough, you could dive inside the electron and emerge from a positron far away. In my undergraduate daydreams, I wished this idea could be true.

But later I found out more about wormholes, and learned about “topological censorship.” It turns out that if energy is nonnegative, Einstein’s gravitational field equations prevent you from traversing a wormhole — the throat always pinches off (or becomes infinitely long) before you get to the other side. It has sometimes been suggested that quantum effects might help to hold the throat open (which sounds like a good idea for a movie), but today we’ll assume that wormholes are never traversable no matter what you do.

Love in a wormhole throat: Alice and Bob are in different galaxies, but each lives near a black hole, and their black holes are connected by a wormhole. If both jump into their black holes, they can enjoy each other’s company for a while before meeting a tragic end.

Are wormholes any fun if we can never traverse them? The answer might be yes if two black holes are connected by a wormhole. Then Alice on earth and Bob in Andromeda can get together quickly if each jumps into a nearby black hole. For solar mass black holes, Alice and Bob will have only 10 microseconds to get acquainted before meeting their doom at the singularity. But if the black holes are big enough, Alice and Bob might have a fulfilling relationship before their tragic end.

This observation is exploited in a recent paper by Juan Maldacena and Lenny Susskind (MS) in which they reconsider the AMPS puzzle (named for Almheiri, Marolf, Polchinski, and Sully). I wrote about this puzzle before, so I won’t go through the whole story again. Here’s the short version: while classical correlations can easily be shared by many parties, quantum correlations are harder to share. If Bob is highly entangled with Alice, that limits his ability to entangle with Carrie, and if he entangles with Carrie instead he can’t entangle with Alice. Hence we say that entanglement is “monogamous.” Now, if, as most of us are inclined to believe, information is “scrambled” but not destroyed by an evaporating black hole, then the radiation emitted by an old black hole today should be highly entangled with radiation emitted a long time ago. And if, as most of us are inclined to believe, nothing unusual happens (at least not right away) to an observer who crosses the event horizon of a black hole, then the radiation emitted today should be highly entangled with stuff that is still inside the black hole. But we can’t have it both ways without violating the monogamy of entanglement!

The AMPS puzzle invites audacious responses, and AMPS were suitably audacious. They proposed that an old black hole has no interior — a freely falling observer meets her doom right at the horizon rather than at a singularity deep inside.

MS are also audacious, but in a different way. They helpfully summarize their key point succinctly in a simple equation:

ER = EPR

Here, EPR means Einstein-Podolsky-Rosen, whose famous paper highlighted the weirdness of quantum correlations, while ER means Einstein-Rosen (sorry, Podolsky), who discovered wormhole solutions to the Einstein equations. (Both papers were published in 1935.) MS (taking Van Raamsdonk very seriously) propose that whenever any two quantum subsystems are entangled they are connected by a wormhole. In many cases, these wormholes are highly quantum mechanical, but in some cases (where the quantum system under consideration has a weakly coupled “gravitational dual”), the wormhole can have a smooth geometry like the one ER described. That wormholes are not traversable is important for the consistency of ER = EPR: just as Alice cannot use their shared entanglement to send a message to Bob instantaneously, so she is unable to send Bob a message through their shared wormhole.

AMPS imagined that Alice could distill qubit C from the black hole’s early radiation and carry it back to the black hole, successfully verifying its entanglement with another qubit B distilled from the recent radiation. Monogamy then ensures that qubit B cannot be entangled with qubit A behind the horizon. Hence when Alice falls through the horizon she will not observe the quiescent vacuum state in which A and B are entangled; instead she encounters a high-energy particle. MS agree with this conclusion.

AMPS go on to say that Alice’s actions before entering the black hole could not have created that energetic particle; it must have been there all along, one of many such particles constituting a seething firewall.

Here MS disagree. They argue that the excitation encountered by Alice as she crosses the horizon was actually created by Alice herself when she interacted with qubit C. How could Alice’s actions, executed far, far away from the black hole, dramatically affect the state of the black hole’s interior? Because C and A are connected by a wormhole!

The ER = EPR conjecture seems to allow us to view the early radiation with which the black hole is entangled as a complementary description of the black hole interior. It’s not clear yet whether this picture works in detail, and even if it does there could still be firewalls; maybe in some sense the early radiation is connected to the black hole via a wormhole, yet this wormhole is wildly fluctuating rather than a smooth geometry. Still, MS provide a promising new perspective on a deep problem.

As physicists we often rely on our sense of smell in judging scientific ideas, and earlier proposed resolutions of the AMPS puzzle (like firewalls) did not smell right. At first whiff, ER = EPR may smell fresh and sweet, but it will have to ripen on the shelf for a while. If this idea is on the right track, there should be much more to say about it. For now, wormhole lovers can relish the possibilities.

Eventually, Wheeler discarded “everything is geometry” in favor of an ostensibly deeper idea: “everything is information.” It would be a fitting vindication of Wheeler’s vision if everything in the universe, including wormholes, is made of quantum correlations.

*Update: Commenter JM reminded me to mention Brian Swingle’s beautiful 2009 paper, which preceded Van Raamsdonk’s and proposed a far-reaching connection between quantum entanglement and spacetime geometry.

A Public Lecture on Quantum Information

Sooner or later, most scientists are asked to deliver a public lecture about their research specialties. When successful, lecturing about science to the lay public can give one a feeling of deep satisfaction. But preparing the lecture is a lot of work!

Caltech sponsors the Earnest C. Watson lecture series (named after the same Earnest Watson mentioned in my post about Jane Werner Watson), which attracts very enthusiastic audiences to Beckman Auditorium nine times a year. I gave a Watson lecture on April 3 about Quantum Entanglement and Quantum Computing, which is now available from iTunes U and also on YouTube:

I did a Watson lecture once before, in 1997. That occasion precipitated some big changes in my presentation style. To prepare for the lecture, I acquired my first laptop computer and learned to use PowerPoint. This was still the era when a typical physics talk was handwritten on transparencies and displayed using an overhead projector, so I was sort of a pioneer. And I had many anxious moments in the late 1990s worrying about whether my laptop would be able to communicate with the projector — that can still be a problem even today, but was a more common problem then.

I invested an enormous amount of time in preparing that 1997 lecture, an investment still yielding dividends today. Aside from figuring out what computer to buy (an IBM ThinkPad) and how to do animation in PowerPoint, I also learned to draw using Adobe Illustrator under the tutelage of Caltech’s digital media expert Wayne Waller. And apart from all that technical preparation, I had to figure out the content of the lecture!

That was when I first decided to represent a qubit as a box with two doors, which contains a ball that can be either red or green, and I still use some of the drawings I made then.

Entanglement, illustrated with balls in boxes.

This choice of colors was unfortunate, because people with red-green color blindness cannot tell the difference. I still feel bad about that, but I don’t have editable versions of the drawings anymore, so fixing it would be a big job …

I also asked my nephew Ben Preskill (then 10 years old, now a math PhD candidate at UC Berkeley) to make a drawing for me illustrating weirdness.

The desire to put weirdness to work has driven the emergence of quantum information science.

I still use that, for sentimental reasons, even though it would be easier to update.

The turnout at the lecture was gratifying (you can’t really see the audience with the spotlight shining in your eyes, but I sensed that the main floor of the Auditorium was mostly full), and I have gotten a lot of positive feedback (including from the people who came up to ask questions afterward — we might have been there all night if the audio-visual staff had not forced us to go home).

I did make a few decisions about which I have had second thoughts. I was told I had the option of giving a 45 minute talk with a public question period following, or a 55 minute talk with only a private question period, and I opted for the longer talk. Maybe I should have pushed back and insisted on allowing some public questions even after the longer talk — I like answering questions. And I was told that I should stay in the spotlight, to ensure good video quality, so I decided to stand behind the podium the whole time to curb my tendency to pace across the stage. But maybe I would have seemed more dynamic if I had done some pacing.

I got some gentle criticism from my wife, Roberta, who suggested I could modulate my voice more. I have heard that before, particularly in teaching evaluations that complain about my “soporific” tone. I recall that Mike Freedman once commented after watching a video of a public lecture I did at the KITP in Santa Barbara — he praised its professionalism and “newscaster quality”. But that cuts two ways, doesn’t it? Paul Ginsparg listened to a podcast of that same lecture while doing yardwork, and then sent me a compliment by email, with a characteristic Ginspargian twist. Noting that my sentences were clear, precise, and grammatical, Paul asked: “is this something that just came naturally at some early age, or something that you were able to acquire at some later stage by conscious design (perhaps out of necessity, talks on quantum computing might not go over as well without the reassuring smoothness)?”

Another criticism stung more. To illustrate the monogamy of entanglement, I used a slide describing the frustration of Bob, who wants to entangle with both Alice and Carrie, but finds that he can increase his entanglement with Carrie only by sacrificing some of his entanglement with Alice.

Entanglement is monogamous. Bob is frustrated to find that he cannot be fully entangled with both Alice and Carrie.

This got a big laugh. But I used the same slide in a talk at the APS Denver meeting the following week (at a session celebrating the 100th anniversary of Niels Bohr’s atomic model), and a young woman came up to me after that talk to complain. She suggested that my monogamy metaphor was offensive and might discourage women from entering the field!

After discussing the issue with Roberta, I decided to address the problem by swapping the gender roles. The next day, during the question period following Stephen Hawking’s Public Lecture, I spoke about Betty’s frustration over her inability to entangle fully with both Adam and Charlie. But is that really an improvement, or does it reflect negatively on Betty’s morals? I would appreciate advice about this quandary in the comments.

In case you watch the video, there are a couple of things you should know. First, in his introduction, Tom Soifer quotes from a poem about me, but neglects to name the poet. It is former Caltech postdoc Patrick Hayden. And second, toward the end of the lecture I talk about some IQIM outreach activities, but neglect to name our Outreach Director Spiros Michalakis, without whose visionary leadership these things would not have happened.

The most touching feedback I received came from my Caltech colleague Oskar Painter. I joked in the lecture about how mild mannered IQIM scientists can unleash the superpower of quantum information at a moment’s notice.

Mild mannered professor unleashes the superpower of quantum information.

After watching the video, Oskar shot me an email:

“I sent a link to my son [Ewan, age 11] and daughter [Quinn, age 9], and they each watched it from beginning to end on their iPads, without interruption.  Afterwards, they had a huge number of questions for me, and were dreaming of all sorts of “quantum super powers” they imagined for the future.”

Jeff Kimble stands tall

Jeff Kimble played college basketball. I conjecture that he is more than two meters tall, though being a theorist I have never measured him. Jeff certainly stands tall in the Pantheon of outstanding physicists, and we at Quantum Frontiers were thrilled to hear that Jeff has received the 2013 Herbert Walther Award, which is very well deserved.

About four years ago, Jeff gave a public lecture at Caltech about “The Quantum Internet,” and I had the honor of introducing him. The video of Jeff’s lecture and my introduction are embedded at the end of this post. You’ll have to watch the video to hear all the Buddy Holly references in my introduction (Jeff and Buddy come from the same county in Texas). Jeff’s lecture was memorable, too, featuring a dance performance by his research group.

One of my most annoying quirks is that I like to use poems to introduce people, so I wrote one to fit the topic of the lecture. Among many other achievements, Jeff’s group has done pioneering experiments distributing quantum entanglement among multiple nodes in a quantum network, which is probably all you need to know to understand the poem.

Fluorescence image of four laser-cooled atomic ensembles, each used as a quantum node in an entanglement distribution experiment by the Kimble group.

Listening to one of my poems tends to make the audience uncomfortable (which I’ve been told is a sign that it’s good poetry). But Jeff did not seem to mind the poem too much, so I will seize the opportunity to post it here to commemorate the occasion.

Congratulations, Jeff!

The Quantum Internet

Professor HJ Kimble
Is much larger than a thimble
And a veritable symbol
Of the physicist today.

Could it be prodigious height
Explains his knack for squeezing light
Or is Jeff’s mind extremely bright?
I guess that’s hard to say…

Jeff wants to build a quantum net
It seems quite hard, but still I bet
Someday we’ll get there, just not yet.
There’ll be a slight delay.

At least they’ve made a quantum node,
That’s a start along the road.
They showed a photon could be stowed
And then released. Okay?

Jeff’s students stay up very late
And try to share a quantum state
Between two nodes. But when you wait
Entanglement decays.

Once entanglement is strong
And they can make it last quite long
One node could be inside Hong Kong
The other in Bombay.

And once the quantum net’s begun
We’re going to have a lot of fun
Exploiting work that Jeff has done
Hear what he has to say!

Is Alice burning? The black hole firewall controversy

Quantum correlations are monogamous. Bob can be highly entangled with Alice or with Carrie, but not both.

Back in the early 1990s, I was very interested in the quantum physics of black holes and devoted much of my research effort to thinking about how black holes process quantum information. That effort may have prepared me to appreciate Peter Shor’s spectacular breakthrough — the discovery of a quantum algorithm for factoring integers efficiently. I told the story here of how I secretly struggled to understand Shor’s algorithm while attending a workshop on black holes in 1994.

Since the mid-1990s, quantum information has been the main focus of my research. I hope that some of the work I’ve done can help to hasten the onset of a new era in which quantum computers are used routinely to perform super-classical tasks. But I have always had another motivation for working on quantum information science — a conviction that insights gained by thinking about quantum computation can illuminate deep issues in other areas of physics, especially quantum condensed matter and quantum gravity. In recent years quantum information concepts have begun penetrating into other fields, and I expect that trend to continue.

How to build a teleportation machine: Intro to qubits

A match made in heaven.

If a tree falls in a forest, and nobody is there to hear it, does it make a sound? The answer was obvious to my 12-year-old self — of course it made a sound. More specifically, something ranging from a thud to a thump. There doesn’t need to be an animal present for the tree to jiggle air molecules. Classical physics for the win! Around the same time I was exposed to this thought experiment, I read Michael Crichton’s Timeline. The premise is simple, but not necessarily feasible: archeologists use ‘quantum technology’ (many-worlds interpretation and quantum teleportation) to travel to the Dordogne region of France in the mid 1300s. Blood, guts, action, drama, and plot twists ensue. I haven’t returned to this book since I was thirteen, so I’m guaranteed to have the plot wrong, but for better or worse, I credit this book with planting the seeds of a misconception about what ‘quantum teleportation’ actually entails. This is the first of a multi-part post that will introduce readers to the one and only way we know of to make teleportation work.