Back in the early 1990s, I was very interested in the quantum physics of black holes and devoted much of my research effort to thinking about how black holes process quantum information. That effort may have prepared me to appreciate Peter Shor’s spectacular breakthrough — the discovery of a quantum algorithm for factoring integers efficiently. I told the story here of how I secretly struggled to understand Shor’s algorithm while attending a workshop on black holes in 1994.

Since the mid-1990s, quantum information has been the main focus of my research. I hope that some of the work I’ve done can help to hasten the onset of a new era in which quantum computers are used routinely to perform super-classical tasks. But I have always had another motivation for working on quantum information science — a conviction that insights gained by thinking about quantum computation can illuminate deep issues in other areas of physics, especially quantum condensed matter and quantum gravity. In recent years quantum information concepts have begun penetrating into other fields, and I expect that trend to continue.

The study of quantum black holes has continued to be a very active and fruitful research area in recent years. I’ve not been much involved myself, though, except for one foray (well, also this one). But my friend Lenny Susskind encouraged me to attend a workshop on black holes at Stanford this past weekend, and I’m glad I did. It was fun, and it was gratifying to see that quantum information concepts were prominently featured in many of the talks.

The goal of the workshop was to clarify a question raised in this paper by Almheiri, Marolf, Polchinski, and Sully (AMPS): if a black hole is highly entangled with its surroundings, does a freely falling observer who crosses the event horizon burn to a crisp *right at the horizon*? We have always believed that if Alice foolishly enters a black hole she will be just fine for a while, but will gradually encounter stronger and stronger gravitational forces which will eventually tear her to pieces. AMPS argued that under the right circumstances, Alice’s horrible death comes much earlier than expected, and without any warning. Joe Polchinski wrote a nice explanation of the AMPS argument over at Cosmic Variance, but I’ll give my own version here.

To understand the AMPS puzzle, one needs to appreciate that quantum correlations are different from classical correlations: classical correlations can be “polygamous,” while quantum correlations are “monogamous.”

If Alice and Bob both have copies of the same newspaper, then Alice and Bob become correlated because both can access the same information. But Carrie can acquire a copy of that same newspaper; Bob’s correlation with Alice does not prevent him from becoming just as strongly correlated with Carrie. For that matter, anyone else can buy a newspaper to join the party.

A quantum newspaper is different, because you can read it in two (or more) complementary ways, and we say that two newspapers are “maximally entangled” (have the strongest possible quantum correlations) if both newspapers have the same content when both are read in the same way. In that case, if Alice reads her paper held right-side up she finds only random gibberish, but if Bob reads his newspaper right-side up he sees exactly the same gibberish as Alice. If on the other hand Alice had chosen to read the paper turned sideways, she would have found some other random gibberish, but again Bob would see the same gibberish as Alice if he read his paper sideways, too. Because there is just one way to read a classical newspaper, and lots of ways to read a quantum newspaper, the quantum correlations are stronger than classical ones.
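The two-way reading can be checked directly in a toy model. Here is a minimal numpy sketch of my own (not from the post itself): a two-qubit Bell state plays the role of the pair of quantum newspapers, and a Hadamard rotation on both qubits stands in for turning the paper sideways.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): the pair of quantum newspapers
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)

# "Right-side up" reading: measure both qubits in the computational (Z) basis.
probs_z = np.abs(phi) ** 2            # P(00), P(01), P(10), P(11)

# "Sideways" reading: rotate both qubits into the X basis with Hadamards,
# then measure in the computational basis.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
probs_x = np.abs(np.kron(H, H) @ phi) ** 2

print(probs_z)   # [0.5, 0, 0, 0.5] -- each reader sees gibberish, but the same gibberish
print(probs_x)   # [0.5, 0, 0, 0.5] -- same perfect agreement in the other basis
```

In both bases each outcome is individually random, yet Alice and Bob always agree — which is exactly what a classical newspaper, readable in only one way, cannot match.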

So strong, in fact, that Bob’s entanglement with Alice limits his ability to entangle with Carrie. Bob’s entropy S(B), a measure of his capacity to entangle with others, is an upper bound on the sum of Bob’s entanglement E(A,B) with Alice and his entanglement E(B,C) with Carrie. If Bob is highly entangled with Alice then he can entangle with Carrie only by sacrificing some of his entanglement with Alice. That’s why we say that entanglement is monogamous.
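To make the contrast concrete, here is a small numerical sketch of my own, using quantum mutual information I(X:Y) = S(X) + S(Y) − S(XY) as a stand-in for the entanglement measures above. Three "classical newspapers" (three copies of one random bit) are pairwise correlated with everyone, while a Bell pair between Alice and Bob leaves nothing at all for Carrie.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace(rho, keep, dims):
    """Partial trace keeping the subsystems listed (ascending) in `keep`."""
    n = len(dims)
    rho = rho.reshape(list(dims) * 2)
    cur = n
    for i in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=i, axis2=i + cur)
        cur -= 1
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

def mutual_info(rho, x, y, dims):
    return (entropy(ptrace(rho, [x], dims)) + entropy(ptrace(rho, [y], dims))
            - entropy(ptrace(rho, [x, y], dims)))

dims = (2, 2, 2)   # qubits A (Alice), B (Bob), C (Carrie)

# Classical newspapers: A, B, C all hold the same random bit.
rho_cl = np.zeros((8, 8))
rho_cl[0, 0] = rho_cl[7, 7] = 0.5

# Quantum: B maximally entangled with A, C left out: |Phi+>_AB (x) |0>_C.
psi = np.zeros(8)
psi[0] = psi[6] = 1 / np.sqrt(2)      # (|000> + |110>)/sqrt(2)
rho_q = np.outer(psi, psi)

print(mutual_info(rho_cl, 0, 1, dims), mutual_info(rho_cl, 1, 2, dims))  # 1.0 1.0
print(mutual_info(rho_q, 0, 1, dims), mutual_info(rho_q, 1, 2, dims))    # 2.0 0.0
```

The classical state shares one bit of correlation with every pair, while Bob's two bits of correlation with Alice exhaust his entropy S(B) = 1, leaving his mutual information with Carrie exactly zero.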

Following AMPS, imagine a black hole which is maximally entangled with another quantum system C outside the black hole. Like any black hole, this one evaporates by emitting Hawking radiation. Also following AMPS, assume that the evaporation is unitary, i.e., conserves quantum information. There is strong evidence that unitarity is an inviolable principle of physics, and we don’t really know how to make sense of quantum mechanics without it. Unitarity implies that as a system B is emitted by the black hole in the form of Hawking radiation, this system B, like the black hole from which it emerged, must be maximally entangled with C. And monogamy of entanglement means that B cannot be entangled with anything else besides C.

But this spells trouble for Alice, the brave soul who dares to fall into the black hole. If Alice’s passage through the event horizon were uneventful then she would fall through space that is nearly devoid of particles. But if we cut the empty space seen by Alice into the inside and outside of the black hole at the event horizon, then the particles in system B seen by an observer who stays outside are paired with particles on the inside — B is entangled with a system A inside the horizon, violating the monogamy of entanglement. Something’s wrong.

The AMPS proposal is that what Alice encounters at the horizon does not look like empty space at all — rather B and A are unentangled, which means that Alice sees many energetic particles. Monogamy of entanglement is rescued, but not poor Alice. She is incinerated by an intense wall of fire as she attempts to pass through the event horizon.

If a black hole forms from a collapsing star and then radiates for a long, long, long time until it has shed more than half its initial entropy, we expect the black hole to become maximally entangled with the radiation already emitted, and hence (if AMPS are right) for a firewall to appear. It is as though the singularity, which we expected to find deep inside the black hole, has crept right up to the event horizon when the black hole is very old.

Like many other physicists, I distrust this conclusion. The black hole could be very large, so that as Alice approaches the horizon she experiences only very weak tidal gravitational forces. It seems terribly unjust for Alice, unaware of the black hole’s age and with no indication that anything is amiss, to suddenly fry without any warning at all.

My first reaction to the AMPS paper was that we should think very carefully about whether, if there are no firewalls, the putative violation of monogamy of entanglement really has a clear operational meaning. We might be willing to tolerate polygamous entanglement if no observer can ever detect the crime! We must ask whether it is possible, at least in principle, for Alice to verify the entanglement between B and C, and then test the entanglement between B and A by plunging into the black hole. AMPS discuss this issue in their paper, but I don’t consider it to be settled. One consideration, mentioned at the workshop by both Patrick Hayden and Daniel Harlow, is that verifying the BC entanglement requires a quantum computation that might be infeasible as a matter of principle, at least for a large black hole.

For now, it seems appropriate to assume both information conservation and no firewalls, seeking some way of reconciling the two. This might involve truly radical revisions in the foundations of quantum mechanics, or bizarre nonlocal dynamics outside the black hole. If we are forced to accept that firewalls really exist, then we will need a deeper understanding of their dynamical origin than the indirect argument AMPS provided.

The workshop was invigorating because nearly everyone seemed confused. Paradoxes are always welcome in physics, as they can help to point us toward revolutionary advances. While no consensus has yet emerged about what the AMPS puzzle is teaching us, I’m hoping that the outcome will be a big stride forward in our understanding of quantum information in gravitational systems.

I was hoping you’d return from the workshop with a unified theory of everything… but an inspiring blog post is a decent consolation. Anyways, would you please clarify these sentences:

“Unitarity implies that as a system B is emitted by the black hole in the form of Hawking radiation, this system B, like the black hole from which it emerged, must be maximally entangled with C. And monogamy of entanglement means that B cannot be entangled with anything else besides C.”

In particular, is system B maximally entangled with BOTH C AND the black hole? Or is B only entangled with C at this point? If the latter, then doesn’t this mean that C needs to become less entangled with the black hole (due to monogamy)? Thanks!

I meant that B is only entangled with C. The crude picture is that initially the black hole is an n-qubit system, maximally entangled with the n-qubit system C. Then the black hole emits k qubits of Hawking radiation (system B), leaving behind an (n-k)-qubit black hole with reduced mass (call that system H). After B is emitted, both B and H are maximally entangled with subsystems of C, and not at all entangled with one another.
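In symbols, the bookkeeping of this crude picture reads as follows (my own sketch; the clean split of C into subsystems C_B and C_H paired with B and H is an idealization of "subsystems of C" and not notation from the discussion above):

```latex
% Before the emission: the n-qubit hole is maximally entangled with C.
E(BH,\, C) = n .
% After emitting the k-qubit system B, with C = C_B \otimes C_H:
E(B,\, C_B) = k, \qquad E(H,\, C_H) = n - k, \qquad E(B,\, H) = 0 .
% Consistency with monogamy: S(B) = k bounds B's total entanglement,
% and it is saturated by C_B alone, leaving nothing for H (or for A).
E(B,\, C_B) + E(B,\, H) = k = S(B) .
```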

Thanks for the reply! I asked the question because my neanderthal-level interpretation of Hawking radiation was as the creation of an entangled pair of virtual particles, one of which (negative energy) enters the black hole while the other (positive energy) escapes. I can’t reconcile this with your comment, so I think I need to revisit these concepts someday. Here is Hawking’s comment (and note of caution) which gave me this intuition (from “Particle Creation by Black Holes”):

“One might picture this negative energy flux in the following way. Just outside the event horizon there will be virtual pairs of particles, one with negative energy and one with positive energy. The negative particle is in a region which is classically forbidden but it can tunnel through the event horizon to the region inside the black hole where the Killing vector which represents time translations is spacelike. In this region the particle can exist as a real particle with a timelike momentum vector even though its energy relative to infinity as measured by the time translation Killing vector is negative. The other particle of the pair, having a positive energy, can escape to infinity where it constitutes a part of the thermal emission described above… …It should be emphasized that these pictures of the mechanism responsible for the thermal emission and area decrease are heuristic only and should not be taken too literally.”

You’re right. The two pictures cannot be reconciled, and that is the AMPS puzzle. Hawking’s heuristic picture is a description of the entanglement between B and A that occurs in a state that looks like vacuum to the freely falling Alice. But AMPS say that if the black hole is maximally entangled with another system (as is the case for a sufficiently old black hole), then unitarity and monogamy imply that Hawking’s picture does not apply: B and A are not entangled. Therefore, the state is not such that freely falling Alice sees vacuum; she sees a firewall instead.

Could Susskind’s stretched horizon compromise be a misinterpretation of fecund universes, and thus, the problem is resolved?

Dear John, there are some really excellent follow-up papers to AMPS which are much better than AMPS itself – a paper that is no good and should only be praised for the provoking tone that led to a new wave of activity in these conceptual matters. The fresh Raju-Papadodimas paper is a real gem that clarifies many things. But most of the papers say pretty much the same thing, and they conclude that the AMPS argument is flawed.

The briefest way to describe where your version of AMPS goes wrong is the assumption that Alice and Carrie are two distinct women. They’re not. They’re two sets of degrees of freedom expressed in two totally different bases, preferring two different sets of natural, local, coarse-grained observables (fields inside black hole; fields outside black hole) – but they carry the same information. More precisely, A is a subset of C in the usual “more inclusive” definitions of C. The black hole interior is correlated with some degrees of freedom in the exterior at the a priori Hilbert space level, so the maximal entanglement between A-B and B-C is really the same thing.

Raju and Papadodimas – two great students I knew from Harvard, now higher up the academic ladder – are extremely explicit in showing why the constraints allowing you to say that the interior is just a reshuffled exterior don’t cause any violations of locality that could be measured by n-point functions with a finite n not scaling with the black hole size, and with accuracy worse than an exponentially fine one. The interior degrees of freedom are linked to an analytical continuation of some exterior ones, but this relationship between them may only be decoded when one does some superfine experiments that are impossible in reality.

Some other anti-AMPS papers, such as the fuzzball-based ones, still say that Alice won’t burn, but they say that the effective local theory will break down “somewhat earlier than you would like”. None of these statements is ever supported by watertight arguments, and Raju and Papadodimas’ careful calculation actually shows that the effective field theory only breaks down in the most extreme conditions. I think that Raju and Papadodimas don’t really show that fuzzballs are wrong as a description – they just invalidate a qualitative conclusion that was associated with the fuzzball theory although it doesn’t follow from it.

Nomura and Varela wrote some papers that are simpler and more conceptual, but I agree with them – and wrote the same things independently; see some of my texts (especially the end of the article “Raphael converts”)

http://motls.blogspot.com/search?q=black+hole+firewall&m=1&by-date=true

that Polchinski et al. seem to misunderstand some basic principles of quantum mechanics in this context, however unexpected I would find this statement about a prominent group led by a top theorist a decade ago. The problem is that AMPS assume that A and C “have to be” two independent sets of degrees of freedom – although the black hole complementarity has always been about the very statement that this independence is wrong. AMPS justify this independence by saying that before crossing the horizon, Alice may decide whether she will jump into the black hole or stay out, so she must have some “information in the wave function” that is ready to answer questions about her later measurements both inside and outside, according to her decision, and they assume that these must be two different subsets of the degrees of freedom describing the wave function.

But that’s just wrong. Depending on whether Alice decides to fall into the hole or stay out, the *same* degrees of freedom in the wave function she uses get interpreted in one (internal) way or another, completely different (external) way. Before her decision, her wave function is a linear superposition of two macroscopically different worlds (like in Schrodinger’s cat: most wave functions in the Hilbert space are these cat-like states!). In a part of the wave function, with some probability amplitudes and inclusive probability, she will jump inside; in another part, she stays out. These two basic terms in the wave function are tensor multiplied by some wave functions of the “detailed degrees of freedom”, but the composition of these “detailed degrees of freedom” (the natural way to describe them) differs in the two terms of the wave function; it depends on whether she jumps in or not (so it’s not strictly a tensor product; it’s a “fibration” of a sort).

There’s nothing wrong about it because e.g. decoherence operates in a Hamiltonian- and state-dependent way. The preferred basis isn’t determined a priori; it emerges out of the dynamics that requires us to calculate how the Hamiltonian acts on the initial state. The Hamiltonian and initial state are needed to find out which observables “self-replicate” into the environmental degrees of freedom, and therefore become “classical-like” and able to define coarse-grained states.

The myth (an invalid consequence of a classical way of looking at the wave function) about the “a priori preferred bases” is the most widespread misunderstanding of quantum mechanics among physicists who are making far-reaching but wrong claims about QM, and AMPS now belong to this set, too.

I think your blog entry is fair and balanced but, given the fact that I’ve considered you a top-ten world expert on black hole quantum information for years – although partly because of your bet victory against Hawking – I am disappointed by its being somewhat superficial and uncritical.

It’s people like Raju and Papadodimas whose work should be celebrated, because they actually crack the formulae needed to learn how things may actually work and where the actual limits of measurement that restrict locality lie – these local expectations only break down at the last possible moment, so to speak, they show. It’s frustrating if even prominent physics bloggers such as you only promote the part of this research that is visible because it makes manifestly wrong claims.

All the best

Lubos

I agree that regarding A to be a complementary description of the subsystem of C that is entangled with B would be a pleasing resolution of the puzzle. For that point of view to be consistent, there should be an obstruction that prevents any single observer from verifying both the AB entanglement and the BC entanglement. That’s what I had in mind when I said we should examine whether the violation of monogamy of entanglement has a clear operational meaning.

An older puzzle is that unitarity seems to imply that the same quantum information can be in two places on the same time slice (both encoded in the emitted Hawking radiation and behind the black hole horizon), a putative violation of the principle that quantum information cannot be cloned. That puzzle can be resolved by noting that there is a delay between when quantum information falls into the black hole and when it is released, so that by the time one copy of a quantum state is available in the Hawking radiation it is too late to verify that the other copy is still intact behind the horizon.

Resolving the AMPS puzzle seems to be trickier because there is no such delay, and Alice’s verification task is easy — she just falls through the horizon and either encounters a firewall or doesn’t. Still, I might feel satisfied by a compelling argument that a single observer can verify either the AB entanglement or the BC entanglement, but not both. I meant to say that in my post, but I wasn’t very clear.

Dear John,

right, very true – there is an obstruction preventing a single observer from checking both entanglements at the same moment. At a basic level, it really boils down to the fact that the same observer can’t simultaneously measure degrees of freedom in A and C. But one may say much more.

I really recommend you the Papadodimas-Raju paper

http://motls.blogspot.com/2012/12/hawking-radiation-pure-and-thermal.html?m=1

which makes all these things extremely quantitative (you may want to jump more or less directly to sections 4, 5, 6, to skip some heavy AdS/CFT dictionaries with a lower density of new concepts). Also, I just wrote a summary of a small but still very valuable subsection of that paper here:

http://motls.blogspot.com/2012/12/hawking-radiation-pure-and-thermal.html?m=1

It’s an explanation of why pure density matrices and mixed density matrices are close to each other on very large Hilbert spaces (of Hawking radiation). Exponentially small corrections to Hawking’s precisely mixed density matrix – corrections of order exp(-K/hbar) per matrix element – are enough to purify the density matrix. This exponential is zero to all orders in perturbative expansions in hbar.
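A quick numerical illustration of this closeness (my own sketch of the generic-basis plausibility argument, not the Raju-Papadodimas calculation): for a Haar-random pure state of the radiation together with an environment, each matrix element of the radiation's reduced density matrix deviates from the exactly mixed value only by an amount that shrinks exponentially in the number of qubits.

```python
import numpy as np

rng = np.random.default_rng(0)

def radiation_state(d_rad, d_env):
    """Reduced density matrix of the 'radiation' factor for a random pure
    state on H_rad (x) H_env (Gaussian entries, i.e. Haar-random after
    normalization)."""
    psi = rng.normal(size=(d_rad, d_env)) + 1j * rng.normal(size=(d_rad, d_env))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T          # Tr_env |psi><psi|

devs = []
for n in (4, 6, 8):                    # qubits of radiation (= qubits of environment)
    d = 2 ** n
    rho = radiation_state(d, d)
    # largest per-element deviation from the exactly mixed matrix I/d
    devs.append(np.max(np.abs(rho - np.eye(d) / d)))

print(devs)   # deviations shrink rapidly with n, roughly like 2**(-3n/2)
```

The global state is exactly pure by construction, yet its radiation marginal is indistinguishable from the maximally mixed one unless you resolve exponentially small matrix elements — the spirit of the exp(-K/hbar) corrections above.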

One may show that this is possible for “generic bases” (plausibility arguments are easy) but the two guys get much further – they pretty much explicitly calculate what all these corrections are in AdS/CFT etc. So they know the full unitary/pure answer as well as its mixed/perturbative/information-destroying approximation and can show the difference is tiny.

They find out that if you want to determine that the entanglement is maximal etc., you need to make very many observations, or exponentially precise observations. In the former case, i.e. measuring too many Hawking quanta, you will find out that you have defined your coarse-grained tensor factor of the Hilbert space to be too large in dimension and it’s precisely this situation that will prevent you from defining the “doubled degrees of freedom” beneath the horizon (inside the hole). So exactly when you approach the accuracy/number of measurements that are needed to verify the maximal entanglements of both things, the geometric picture of the spacetime with the independent regions beneath/above the horizon breaks down. But it (and effective local field theory) always holds as long as one is modest and only chooses a small enough number of the coarse-grained degrees of freedom so that the microstates of the remaining, fine-grained degrees of freedom may be assigned to each of the coarse-grained microstates.

Best regards

Lubos

given enough time, will everything in the observable universe be entangled with something inside a black hole eventually?

What always puzzles me about this description is the lack of discussion of simultaneity issues, which is the usual resolution of GR ‘paradoxes’. As Alice approaches the horizon her Kerr time coordinate is in our very distant future. I don’t understand the technicalities of Hawking radiation but it seems to me that if we are generating it arbitrarily close to the horizon then as seen by a distant observer it only appears long after the stellar collapse (the null geodesics that the Hawking radiation follows reach the outside at large future t). So I worry when you say “maximally entangled with another system (as is the case for a sufficiently old black hole)” – old in which time coordinate? Can you clarify for me?

.. or is that the resolution, that Alice arrives at the horizon at large future t, at which point the hole HAS ALREADY EXPLODED in the final Hawking blast and that wave of Hawking photons is the thing that incinerates her? So she never reaches the horizon?

Though in classical general relativity the horizon is a surface with infinite red shift, the Hawking radiation still manages to escape. Another point emphasized in AMPS, which for the sake of brevity I skipped in my discussion, is that if B is entangled with C once it propagates far from the horizon, then it was already entangled with C while still close to the horizon, e.g. of order a Planck length away. (They call this the local effective quantum field theory assumption.) It is this entanglement of B with C when B is still close to the horizon which seems to imply firewalls.

What _do_ you think of the paper to which Lubos Motl referred you?

thanks,

Raghu.

It is not quite clear to me whether monogamy can be used in some universal way. E.g., there is an analogue of the Bell pair for three spin-1 particles (qutrits) – the completely antisymmetric entangled state of such particles. Why not consider ABC in such a state?

http://rmp.aps.org/abstract/RMP/v81/i2/p865_1 ch. XVI “Aharonov state”

Like any three-part state, the Aharonov state obeys a monogamy relation like the one I stated in my post, if we use an entanglement measure such as distillable entanglement or squashed entanglement. The key point is that nothing prevents us from considering a state in which B and C are nearly maximally entangled, and in fact we expect such a state to arise when the black hole is old enough. At that point, monogamy implies that A and B are nearly unentangled, hence a firewall.

The RMP article above claims that only squashed entanglement may be appropriate for such a state, and the subtleties with this measure (e.g., cf. the same RMP paper) may also not be coincidental. Either unitarity or “irreversible” measures…

This post encourages me to think over what happens when half of a Popescu-Rohrlich box is hidden inside a black hole. :)

More about quantum monogamy, if anybody is interested:

http://qbnets.wordpress.com/2012/12/07/synergetic-ears/

My thoughts have turned to randomness and the nature of asking a black hole questions. If you had an infinite-mass black hole, and one thought (wrongly) that all information coming out of the black hole was random, then after a sufficient number of queries one would get a string of information that would appear to be a history of Alice conducting experiments after jumping into the hole. However, a black hole is finite in mass and contains finite information; if information coming out of the black hole is correlated with information going in, then I think we all agree that Alice’s post-fall information must be inaccessible. However, that inaccessibility is largely associated with the apparent macroscopic properties of the black hole, which impose limits on the ultimate size of the measurable data. The question then is: when does the information about Alice’s activities become destroyed?

The information extracted from the evaporating black hole must be better than random. However, accepting a complexity argument, Alice’s signal must be quantitatively closer to random to make it inaccessible, if it is sent at all. If we accept that Alice’s post-fall information is inaccessible, yet still representable as part of the pure state, then I think we can partition the Hilbert space accordingly. These in my mind represent information pockets that are effectively outside any accessible decryption schemes. I would be tempted to suggest these represent background noise. Whether these can also be tied to the idea of pocket universes is intriguing.

Beyond some of the concerns about firewalls that have already been stated, I have an additional one: if we accept the firewall scenario, the wall of radiation encountered by the in-falling observer (Alice) could be interpreted as the black hole measuring Alice. The wall of particles encountered would not be entirely random. I am stuck on this idea that the proposal of firewalls is effectively a claim that black holes are part of a type of quantum teleportation scheme, but it seems that the concept breaks down at our assumptions about EPR pairs. I will have to think a little more on this.

Pingback: When You Fall Into a Black Hole, How Long Have You Got? - WebsOutFit | Experts Mania

What happens if Alice and Bob are entangled and Alice enters the black hole? I understand it this way. Say Alice and Bob are in the first Bell state, Alice being the first qubit. As she enters the black hole, at some point she gets fried. This means the state of the first qubit changes. In that case, the state of the second qubit at Bob’s end outside the black hole also changes. According to Landauer’s principle, the change in energy at Bob’s end is kT · ln 2. Can I say that there is also a complementary change of the same amount of energy inside the black hole?

Pingback: Firewalls! | Sean Carroll

Pingback: Alice and Bob Meet the Wall of Fire « Tracing Knowledge … Στα ίχνη της Γνώσης

Pingback: East Coast versus West Coast | Not Even Wrong

I had thought that the inside of a black hole, i.e. inside the event horizon, mostly consisted of photons orbiting the singularity, much like planets & such orbit a sun, but losing energy rather than velocity the further out the photons travelled. Particles would mostly be found near the singularity, spiraling in on a one-way trip to being converted to photons. So when Alice crosses the event horizon, gravitationally there would be negligible difference, but she would smack into the “holeopause” of extremely low-level radiation lurking just below – the firewall of AMPS. But would this extremely long-wavelength radiation, however intense, be sufficient to disrupt any quantum entanglement? It’s as if water drops hit a cold skillet and started dancing over the surface as if it were hot.

Pingback: Apocalypses, Firewalls, and Boltzmann Brains | The Quantum Pontiff

What do you think of this argument?

http://arxiv.org/abs/1212.6944

Jacobson asserts that his description is “dual” to the AdS/CFT description by Papadodimas and Raju, which Lubos referenced in a previous comment. In the notation of my post above (as Lubos explained), it seems that we should think of A and C as complementary descriptions of the same system, rather than separate subsystems. Monogamy of entanglement is not violated — it’s possible for B to be maximally entangled with A and also maximally entangled with C if A and C are the same system!

Perhaps that’s right (I’m not sure), but it seems quite bizarre because the map between A and C is so highly nonlocal. A is behind the black hole horizon and C, a system of Hawking radiation emitted eons ago, is distributed in intergalactic space zillions of light years from the black hole. Bob, hovering just outside the horizon, should be able to influence the state of A by dropping something inside the horizon. Hence he would be able to influence C, which is far, far away.

I guess we accepted long ago that quantum gravity is intrinsically nonlocal — that strict locality is an emergent property of semiclassical physics — so the question is how nonlocal can it be? The nonlocality in the above scenario seems extreme, but could it be true? Maybe.

Dear Prof Preskill,

the maps in the black hole complementarity have always been nonlocal – for two decades. I wonder why you (and others) have never protested that the black hole complementarity looked strange to you since the early 1990s and why the explosion of the anti-complementarity revolt was waiting for the year 2012. It has never looked strange to me. What looks strange to me is the sudden overgrowth of irrational complaints and flawed arguments that suddenly want to claim that complementarity is wrong.

In fact, the map’s being heavily nonlocal and complicated – Raju and Papadodimas also tell you how analytic continuation to complex values of time is important for a natural transformation of the degrees of freedom that undo some of the complementary code – is actually needed for the effective local field theory description to work sufficiently well in all the regions.

For example, the Hawking-style perturbative treatment of QFT on a black hole background gives a mixed state to all orders in the perturbative expansion; nevertheless, the exact result for the Hawking radiation is pure (assuming a pure initial state). This is no contradiction because pure and mixed density matrices may be insanely (exponentially) close to each other in high-dimensional Hilbert spaces:

http://motls.blogspot.com/2012/12/hawking-radiation-pure-and-thermal.html?m=1

What’s necessary for this proximity, however, is that the matrix form of the “natural observables” for the observer inside the black hole is heavily off-diagonal and diagonalized in a pretty much “random basis” relative to the exterior observer. In other words, a necessary condition for Nature to be able to preserve the approximate validity of all these points of view and locality in all the regions related by complementarity *is* that the map between the degrees of freedom that are identified is very complicated, essentially chaotic. But it surely is. In Raju-Papadodimas’ language, the emergence of the two regions / complementary descriptions is a special property of pure states that are close to mixed ones (and such pure states are complicated and boast lots of random phases etc.). There’s a lot of other evidence or proofs that the map is complicated. For example, black holes are the “fastest scramblers”, using the jargon of Susskind and a collaborator. This really means that they’re able to mix up the complex amplitudes in state vectors very quickly and very thoroughly. So even the ordinary “waiting” operation produces chaotic rearrangements of the whole accessible Hilbert space of the black hole microstates. It shouldn’t be surprising that if we “leave” the black hole at two different moments, in two different directions, the map between the information as seen by the natural degrees of freedom on the two “exits” will be impenetrable.

Maybe you didn’t mean that the complementarity map is “complicated” and you really meant that it’s nonlocal in the sense that it relates degrees of freedom we associate with different regions – but this is true really by definition. If a map related degrees of freedom in the same region but at different points, by some simple enough map, then the locality in that region would be heavily violated, wouldn’t it? But in practice, the fact that we’re relating “two different spaces/regions” is no more mysterious than the fact that the Fourier transform (relevant for ordinary Bohr complementarity) relates two different spaces – the coordinate space and the momentum space.

Best regards

Lubos

It seems to me that the nonlocality we are now contemplating in the Papadodimas-Raju scenario goes beyond the previously accepted nonlocality in the black hole complementarity hypothesis, at least as I had understood it. This relates to our exchange on Dec. 4 regarding whether the A-B entanglement and the B-C entanglement are both verifiable. If, as you claimed (but I did not fully understand), it is not possible to verify both types of entanglement, then the Papadodimas-Raju picture may hold together fairly well.

In the picture I thought I had previously understood (explained in this paper with Hayden), there was a nonlocal map relating observables inside the black hole to observables in the exterior. But this nonlocality could not be discerned by the outside observer Bob. From his point of view, stuff could fall into the black hole, be thermalized at the stretched horizon after the scrambling time, and then be re-emitted in the Hawking radiation in a highly scrambled form. In contrast, the A=C hypothesis seems to suggest that Bob can influence C without waiting for scrambling and re-emission. Maybe I have that wrong.

Anyway, the A=C hypothesis seems more plausible to me than firewalls or violations of unitarity.

Fwiw…the paper by Jacobson is probably my favorite on this so far. The concept of boundary unitarity is worth thinking about. http://arxiv.org/pdf/1212.6944v1.pdf

On the monogamy of entanglement. I’ve confused myself about the following: Suppose you have a system A that’s composed of 2n spin states forming n pairwise entangled pairs, and another system B with 2m spin states forming m pairwise entangled pairs. Can A and B be entangled? I am thinking, yes, they can if the states are distinguishable, for example by location. You could have two different pairings in A and two different pairings in B and entangle them much like you’d entangle states rather than pairs of states. Does that make sense?

Dear Sabine,

note that your scenario only contains two letters A,B, so it isn’t in conflict with monogamy which bans two entanglements among the 3 groups of degrees of freedom (e.g. 3 groups of qubits) A,B,C: it says that A-B and B-C maximum entanglements are impossible. You don’t have any C so it’s no contradiction with monogamy.

In your scenario, you may divide A into A1,A2 – the two groups of spins that are up for A1 and down for A2, or vice versa. Similarly, B may be split into the maximally entangled collections B1,B2. Then monogamy says that A1 can’t be maximally entangled e.g. with B1. It’s important that in the monogamy statement, the A1 or B1 that are prohibited from having two spouses are fixed collections of qubits. So if you discuss “two different pairings” in A, you are apparently dividing A into A1,A2 or A3,A4 by two different cuts. But it’s like changing the rules of the game. The monogamy law says something about what A1 isn’t allowed to do; or A2 isn’t allowed to do. What A3 or A4 does is an entirely different issue.

There’s no contradiction between your state of the qubits in A,B and the monogamy law. You just constructed something that “vaguely feels” like it has “too much entanglement” or “several types of entanglement”. But the monogamy law isn’t a wide assertion about all vague sloppy statements one could make. It is a very particular claim about A,B,C that you haven’t violated.

Incidentally, the key claim in your firewall paper – that the early and late Hawking radiation isn’t entangled – is totally indefensible given the current knowledge of black hole information. The entanglement between “already emitted radiation” and the “remaining black hole” is demonstrably increasing up to some point because the Hawking pairs (one outgoing, one incoming) are clearly entangled with each other. This increases the remaining_black_hole – old_radiation entanglement, and the remaining black hole is then unitarily evaporated into the late Hawking radiation, so the late Hawking radiation must be entangled with the early one, too.

Your last paper about gravity not being quantum is wrong as well but it’s arguably “off-topic” – it’s a claim that would make all the discussions above totally irrelevant but be sure they’re not irrelevant.

Best regards

Lubos

Dear Lubos,

Thanks for the explanation. I wasn’t trying to construct any “contradiction” as you seem to believe; I was simply trying to understand what monogamy is about. Neither was I thinking in particular about the ABC subsystems in the firewall paper. So let me summarize it like this: If I have a system A with objects a_i, each of which is entangled with a partner in A, then A can still be entangled with another system B because, as you put it, “different cuts change the game.”

Re firewall: I didn’t claim that the early and late radiation is not entangled. I said, if it is not, then there’s no problem. To be more precise, they can be entangled, just not too much. You should know that I don’t believe in the statistical interpretation of the BH entropy anyway, so the whole “firewall” problem, as far as I am concerned, is a non-problem.

Best,

Sabine

Dear Sabine, you say that you were trying to understand what monogamy was about – you wrote it now – but you also discussed, in the very same discussion, some particular collections of qubits divided into pairs of qubits that are entangled etc.

So one would think that the reason why you talked about this collection of qubits is that you thought that this thought experiment about qubits was relevant for the monogamy theorem, and I explained to you why it wasn’t. But if you already discussed your thought experiment because it had nothing to do with the question and you think it’s logical to discuss thought experiments that have nothing to do with the question you are trying to understand, then I apologize I can’t help you because I am confined into the narrow-minded and obsolete male logic that tries to discuss arguments that are relevant, not irrelevant.

The early and late Hawking radiation is not only entangled; when one makes the cut in the middle of the radiated entropy, it is nearly maximally entangled.

I hear you don’t believe the statistical interpretation of entropy. What can I say about that except for a capitalized WOW? The fact that entropy is always interpreted statistically in a complete microscopic theory has been known since the 19th-century insights of people like Boltzmann. Is it fashionable in contemporary Northern European physics to dismiss such basic things and be proud of it?

All the best

LM

It is quite amusing that you write you are “confined into the narrow-minded and obsolete male logic that tries to discuss arguments that are relevant, not irrelevant” while you are the one bringing up one irrelevant distraction after the other, including your attempt to guess reasons for my question and misinterpret my motivation to seek understanding. I have zero interest in continuing this discussion – experience tells me it will be a waste of time. Ha det så bra (take care), Sabine

Let me be even more clear now. You wrote that you wanted to understand monogamy and in the very same comment, you presented a situation involving two groups of qubits that you wanted to be explained.

I have explained that it didn’t obey the conditions of monogamy so it had no relevance. You apparently didn’t like the fact that I pointed out that your scenario wasn’t a scenario relevant for monogamy – Prof Preskill told you the same thing.

So you started to mask why you were writing about this wrong – irrelevant for monogamy – thought experiment. But there simply isn’t any conceivable way for you to suggest that you haven’t done anything silly. You were demonstrably thinking about a situation that isn’t a realization of monogamy. There are only two possibilities: either you didn’t know that your scenario wasn’t a valid scenario to discuss monogamy, because you didn’t have three groups of quantum degrees of freedom in it at all; or you did realize that your scenario was irrelevant.

In the first case, it just means that you didn’t know what monogamy claims and you tried to guess what it is and your guess wasn’t quite right, so you were corrected and you had a chance to learn something. In the second case, it follows that you lack basics of logical reasoning because you are deliberately proposing arguments that you know to be irrelevant for the points you are trying to discuss.

At any rate, it has to be one of these two things, and if you try to spread fog claiming that it was none of the things above, you are just being demonstrably dishonest.

You implicitly divide A into subsystems A1-A2 and B into subsystems B1-B2, and consider a superposition in which the “pairing” of A1 with A2 is correlated with the pairing of B1 with B2. You are asking, I think, whether the monogamy of entanglement is violated because A1-A2 is maximally entangled yet A2 has some entanglement with B.

In this case, though, A1-A2 is not maximally entangled — rather its state is a mixture of distinguishable maximally entangled states. In this mixture, the A1-A2 entanglement is reduced sufficiently for monogamy of entanglement to be satisfied.
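This can be checked directly. Below is a small NumPy sketch (the particular four-qubit state is my own illustrative construction, not from the thread): correlating the A-pairing with the B-pairing leaves the A1-A2 pair in an equal mixture of two Bell states, which for two qubits even turns out to be separable – it passes the Peres-Horodecki partial-transpose test – so monogamy is comfortably satisfied.

```python
import numpy as np

def ket(bits):
    # computational-basis vector for a bit string
    v = np.zeros(2 ** len(bits))
    v[int(bits, 2)] = 1.0
    return v

phi = (ket('00') + ket('11')) / np.sqrt(2)   # |Phi+> on a pair
psi = (ket('01') + ket('10')) / np.sqrt(2)   # |Psi+> on a pair

# 4-qubit pure state in which the A1-A2 pairing is correlated with the B1-B2 pairing
state = (np.kron(phi, phi) + np.kron(psi, psi)) / np.sqrt(2)

# reduce to A = (A1, A2) by tracing out B
m = state.reshape(4, 4)
rho_a = m @ m.conj().T

# eigenvalues (0, 0, 0.5, 0.5): an equal mixture of the two Bell states, not pure
print(np.round(np.linalg.eigvalsh(rho_a), 3))

# Peres-Horodecki test: partial transpose on A2 has no negative eigenvalues,
# so for two qubits the A1-A2 state is actually separable
pt = rho_a.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
print(np.linalg.eigvalsh(pt).min())
```

So the global state carries A-B entanglement, while the internal A1-A2 entanglement of each branch has been destroyed by the mixing – exactly the monogamy trade-off.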

Yes, thanks, that’s what I was wondering. Is there a way to quantify how non-maximal the entanglement is? Consider A as an ordered chain of its constituents a_i, with i=1..2n, which you can pairwise entangle in all possible orders. Same for B with b_j, j=1..2m. How would I figure out what entanglement is possible between A and B?

I believe what you are asking is how to measure entanglement between two systems A,B. A quick reference is found in Structured Quantum Programming ( //tph.tuwien.ac.at/~oemer/doc/structquprog.pdf ) under 1.2.3.4 Composite Systems. Another good discussion is found in Exploring the Quantum, Haroche and Raimond, Section 2.4.3 Schmidt Expansion and entropy of entanglement. In any case, remember that in the simplest cases, entanglement is a statement about the idempotency of the density matrix. The trace of the squared density matrix, Tr(rho^2), will take a value between 1/n and 1, where n is the size of the matrix.

Dear Hal,

The idempotency of the density matrix is a measure of purity (how close the quantum state is to a pure state). You are probably thinking of how pure the reduced density matrix of a pure state is, along a particular cut. The more pure the reduced density matrix, the closer to an unentangled (product) state the original state will be.
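A minimal NumPy sketch of this distinction (the toy states are my own choices, not from the thread): the purity Tr(rho^2) of the *global* state only says whether the state is pure, while the purity of the *reduced* state along a cut diagnoses the entanglement of a pure state across that cut.

```python
import numpy as np

def purity(rho):
    # Tr(rho^2): 1 for a pure state, 1/d for the maximally mixed state
    return float(np.real(np.trace(rho @ rho)))

def reduced_a(psi, da, db):
    # reduced density matrix of subsystem A from a pure state on A (x) B
    m = psi.reshape(da, db)
    return m @ m.conj().T

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)              # (|00> + |11>)/sqrt(2)
prod = np.kron([1, 0], [1 / np.sqrt(2), 1 / np.sqrt(2)])  # |0> (x) |+>: a product state

print(purity(np.outer(bell, bell.conj())))  # 1.0: the global Bell state is pure
print(purity(reduced_a(bell, 2, 2)))        # 0.5: maximal entanglement across the cut
print(purity(reduced_a(prod, 2, 2)))        # 1.0: no entanglement across the cut
```

Both global states are equally pure; only the reduced-state purity tells the entangled one from the product one.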

Thank you. I have a horrible tendency of over-generalization due to a tremendous lack of time. The three thoughts in my head at that moment were:

1. Density matrices at n = infinity

2. All “classical” statistical mixtures are composed of something that could only be described by representations of “pseudo-pure” states. IOW, as we talk about pure states, coherence and entanglement, we have to recognize that at some point the notion of separable pure states breaks down since off-diagonal components of the density matrix can never really vanish.

3. In certain constructions of the density matrix, such as the reduced density matrix, and because of the close relationship between entanglement and coherence, we can generalize the degree of entanglement into a type of phase factor that for lack of a better word can be understood as idempotency.

Thanks :) I’ll look up these references.

My pleasure. One last comment for clarity. As you probably know, von Neumann entropy is also used as a measure of entanglement; however, it is only based on the diagonal components of the density matrix, e.g. S = -tr(rho ln rho). So this measure, by definition, disregards information about the off-diagonal components. The test for purity (what may be viewed as idempotency) relies upon the square of the density matrix, and thus retains information about the off-diagonal components; because of the close relationship between coherence and entanglement, one can capture information about entanglement in this measure. However, at a fundamental level one needs to keep the two concepts separate. That said, in certain circumstances I would argue the purity measure (tr(rho^2)) can serve as an entanglement measure, and, as mentioned above, in reality the off-diagonal components never truly vanish even though it is convenient to assume they do when performing calculations.

Dear Hal,

I am afraid that I must once again play the role of the entanglement police :)

First thing I would like to clarify is that the von Neumann entropy depends on all elements of the matrix, not just the diagonal ones. As you noted, the entropy is given by S = -Tr(rho ln rho), and even though the trace of a matrix only depends on the diagonal elements, because of the multiplication by the logarithm of the density matrix inside the trace, it is no longer true that the entropy S depends only on the diagonal elements of rho. What is always true is that the entropy (as well as the purity) is a function of the eigenvalues (only) of the density matrix in question. In other words, your intuition is correct, but only once we have rotated our viewpoint to the basis that diagonalizes the density matrix.
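A quick numerical check of this point (a NumPy sketch; the state and rotation angle are arbitrary choices of mine): the von Neumann entropy computed from the eigenvalues is unchanged under a basis rotation, while a naive “diagonal-only” entropy is not.

```python
import numpy as np

def vn_entropy(rho):
    # S = -Tr(rho ln rho), evaluated via the eigenvalues of rho
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())

rho = np.diag([0.9, 0.1]).astype(complex)    # diagonal density matrix of one qubit
theta = 0.7                                  # an arbitrary basis rotation
u = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
rho_rot = u @ rho @ u.conj().T               # same state, now with off-diagonal terms

s_true = vn_entropy(rho)
s_rot = vn_entropy(rho_rot)                  # equal to s_true: S is basis-independent

d = np.real(np.diag(rho_rot))
s_diag = float(-(d * np.log(d)).sum())       # "diagonal-only" guess: overestimates S

print(s_true, s_rot, s_diag)
```

The diagonal-only value exceeds the true entropy whenever the off-diagonal terms are nonzero, which is exactly why the eigenvalue (diagonalizing-basis) formulation is the right one.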

The second thing to clarify is that purity and entropy are not measures of entanglement in the sense that you think: coherence and entanglement are not connected to each other. You can have a fully coherent state that has no entanglement (product state of two coherent states) and another coherent state with maximal entanglement (Bell-like state). It is the choice of a cut of the state into two parts and the subsequent use of either entropy or purity on the *reduced* density matrix that determines the amount of entanglement present in the initial pure state. If the initial state is mixed, then things become even more complicated and one may need to use different measures of entanglement like Entanglement of Formation, or Squashed Entanglement to see how much entanglement can be extracted from the initial state.

[Note: In the discussion about coherence vs entanglement above, I have made the assumption that your use of the term coherence means purity, but it may be that you really mean Glauber coherent states - that is, states of least uncertainty in position and momentum (Gaussian states). Please specify what coherence you were referring to, in case I assumed incorrectly.]

Thanks, definitely appropriate corrections, and they certainly add clarity. As you surmised, I was only thinking about coherence in a general sense. In any case, in the case of firewalls, one wonders whether it is appropriate to start with a pure state, propose its unitary evolution, and then inject another classical entity into the mix arbitrarily. I am not sure why that procedure makes sense, or whether it is a given that it is mechanically possible without qualification.

If anyone is interested, here is an updated line of thinking. I was thinking along the lines of the issue of auto-synchronization of oscillators (like metronomes) in classical physics, which at some level becomes a question of how a massive fermion knows what its mass is supposed to be when it crosses the black hole horizon. Of course we know now that the Higgs field and spontaneous symmetry breaking leads us to a Higgs condensate which can then give particles mass. So the situation of no firewalls implicitly assumes the Higgs field has the same effect inside the horizon as outside the horizon. If we look at the firewall scenario, and the injection of an arbitrary third entity, if they have not co-evolved with the other parties, then it seems reasonable to question why we would expect them to share the same broken symmetry as it relates to their mass.

So the question is whether there are any analogous scenarios that one can reference, and after a while one thinks more about the metronome situation, and Dynamical Mean Field Theory and Ising models start coming into view. What also comes into view is the problem of domain wall formation and other topological solitons. The firewall proposal seems very similar, since there is an energy and temperature that must be associated with the firewall. It turns out there are a few arXiv articles ( http://arxiv.org/pdf/hep-ph/0204154v1.pdf ) that discuss the possibility of EPR correlations in the Higgs field explaining away domain wall formation. To quote the referenced paper: “just the large energy concentrated in the domain walls turns out to be the factor substantially suppressing the probability of their formation.”

This sort of non-locality in the Higgs field is not prohibited I think in the firewall scenario. It seems that there is no reason for us to think that the physics inside the horizon is substantially different from outside the horizon given that the extra entity (Carrie for instance) shares the same physics as Alice and Bob before Alice jumps into the black hole. In the Higgs field, we have an example of a global means of coupling physics in some sense a priori to the situation in question. This is especially important I think when we talk about massive objects, and the very mass we are talking about we now know requires a mechanism like symmetry breaking to occur very early in evolution in order for mass to even be quantifiable.

Maybe my thinking is wrong on this.

In a paper published in Z. Naturforschung 56a, 889, 2001, I had explained the following: The w^3 spectrum of the zero-point vacuum energy, obtained by quantum mechanics with E = 1/2 hw, multiplied by 4pi x w^2 dw, proportional to w^3, assumes the only form which is Lorentz invariant. Therefore, one can say the Minkowski space-time is “generated” by the zero-point vacuum energy. But if cut off at the Planck energy, it is Lorentz invariant only up to this energy, generating a distinguished reference system in which the zero-point energy is at rest. According to the pre-Einstein theory of relativity by Lorentz and Poincare, for velocities below the velocity of light (in this reference system), objects are held together in a static equilibrium by electrostatic forces (or forces acting like them), by a solution of an elliptic partial differential equation derived from Maxwell’s equations. In approaching the velocity of light, this differential equation goes over, through the Euler-Tricomi equation, into a hyperbolic differential equation where there is no such equilibrium. In gas dynamics this is analogous to the transition from subsonic to supersonic flow. But this is the same sort of transition which happens in approaching the event horizon, where the flow against the zero-point energy reaches the velocity of light. For astronomical objects with velocities small compared to the velocity of light in this reference system, special and general relativity remain good approximations, but not for objects (or elementary particles) approaching the event horizon. Now, for a collapsing large spherical mass, the event horizon first appears in the center of the mass as a point, and in reaching this point the energy of particles can reach the Planck energy, whereby the particles decay into leptons and photons, with the result that the entire mass is converted into a gamma ray burst.
This result is in agreement with the observed gamma ray bursters, where a solar mass m is fully converted into radiation according to E = mc^2. Under these circumstances unitarity is not violated, and there can be no particle entanglement crossing the event horizon. As it turns out, a black hole is the best particle accelerator, reaching particle energies 15 orders of magnitude larger than the LHC. But it need not always lead to a gamma ray burst. I believe it was Jeans or Eddington who noticed that there is a flow of hydrogen coming from the center of the galaxy. This then might happen for a very large black hole, where in a slow collapse towards the event horizon in the center, gamma rays accelerate the hydrogen out of the center.

Pingback: QIP 2013 from the perspective of a greenhorn (grad student) | Quantum Frontiers


Dear Dr. Preskill,

I have noticed that in the upcoming April 2013 APS meeting in Denver there will be an invited session on black hole firewalls. I have informed all the speakers of my 2001 paper in Z. f. Naturforschung, electronically sending them copies of my paper. I am wondering if they will acknowledge my much earlier work. My paper gives a plausible explanation of the observed gamma ray bursters. The decay of matter into gamma photons should begin a Planck length away from the event horizon, where the elliptic differential equation merges into the parabolic Euler-Tricomi equation. I know string theory but never believed in it. I once had told Witten that supersymmetry can exist without string theory but not the other way around. And I believe much less in the fuzzball model of a black hole.

Yours Friedwardt Winterberg

Pingback: Shtetl-Optimized » Blog Archive » John Preskill: My Lodestar of Awesomeness

Pingback: Big ball of fire | Cuentos Cuánticos

If we demand an operational meaning for states, then Alice and Bob are now in an asymmetric situation: Alice can have information about Bob’s particle but not vice versa. So according to Bob, he no longer ascribes an entangled state to the particle pair if one of them is inside the event horizon.

But to Alice they are still entangled, so no equivalence principle is violated here.

In this way, state assignment fundamentally depends on whether the particles are causally reachable. Since in normal usage of QM all particles are “causally transparent”, state assignments can be user independent. But in the case where spacetime regions are divided into causally unconnected parts, state assignment depends on the spacetime region of the particles.

Pingback: Black hole firewall paradox

Pingback: A Public Lecture on Quantum Information | Quantum Frontiers

Pingback: Entanglement = Wormholes | Quantum Frontiers

Pingback: Kβαντική σύμπλεξη = Σκουληκότρυπες | physicsgg

Pingback: Qué pasa al entrar en un agujero negro | Francis (th)E mule Science's News

Pingback: Wormholes May Save Physics From Black Hole Infernos | RocketNews

Pingback: Shtetl-Optimized » Blog Archive » Firewalls

Pingback: What’s inside a black hole? | Quantum Frontiers

Pingback: Update on the Amplituhedron | 4 gravitons and a grad student

Pingback: Reporting from the ‘Frontiers of Quantum Information Science’ | Quantum Frontiers

Pingback: No, Hawking Isn’t Saying There Are No Black Holes | Whiskey…Tango…Foxtrot?

Pingback: Snow and “Brick Wall” Firewall Precursors; t’Hooft’s primacy | The Furloff

Pingback: Making predictions in the multiverse | Quantum Frontiers

Pingback: The Complexity Horizon | An Island in Theoryspace