# Inflation on the back of an envelope

Last Monday was an exciting day!

After following the BICEP2 announcement via Twitter, I had to board a transcontinental flight, so I had 5 uninterrupted hours to think about what it all meant. Without Internet access or references, and having not thought seriously about inflation for decades, I wanted to reconstruct a few scraps of knowledge needed to interpret the implications of r ~ 0.2.

I did what any physicist would have done … I derived the basic equations without worrying about niceties such as factors of 3 or $2 \pi$. None of what I derived was at all original —  the theory has been known for 30 years — but I’ve decided to turn my in-flight notes into a blog post. Experts may cringe at the crude approximations and overlooked conceptual nuances, not to mention the missing references. But some mathematically literate readers who are curious about the implications of the BICEP2 findings may find these notes helpful. I should emphasize that I am not an expert on this stuff (anymore), and if there are serious errors I hope better informed readers will point them out.

By tradition, careless estimates like these are called “back-of-the-envelope” calculations. There have been times when I have made notes on the back of an envelope, or a napkin or place mat. But in this case I had the presence of mind to bring a notepad with me.

Notes from a plane ride

According to inflation theory, a nearly homogeneous scalar field called the inflaton (denoted by $\phi$)  filled the very early universe. The value of $\phi$ varied with time, as determined by a potential function $V(\phi)$. The inflaton rolled slowly for a while, while the dark energy stored in $V(\phi)$ caused the universe to expand exponentially. This rapid cosmic inflation lasted long enough that previously existing inhomogeneities in our currently visible universe were nearly smoothed out. What inhomogeneities remained arose from quantum fluctuations in the inflaton and the spacetime geometry occurring during the inflationary period.

Gradually, the rolling inflaton picked up speed. When its kinetic energy became comparable to its potential energy, inflation ended, and the universe “reheated” — the energy previously stored in the potential $V(\phi)$ was converted to hot radiation, instigating a “hot big bang”. As the universe continued to expand, the radiation cooled. Eventually, the energy density in the universe came to be dominated by cold matter, and the relic fluctuations of the inflaton became perturbations in the matter density. Regions that were more dense than average grew even more dense due to their gravitational pull, eventually collapsing into the galaxies and clusters of galaxies that fill the universe today. Relic fluctuations in the geometry became gravitational waves, which BICEP2 seems to have detected.

Both the density perturbations and the gravitational waves have been detected via their influence on the inhomogeneities in the cosmic microwave background. The 2.726 K photons left over from the big bang have a nearly uniform temperature as we scan across the sky, but there are small deviations from perfect uniformity that have been precisely measured. We won’t worry about the details of how the size of the perturbations is inferred from the data. Our goal is to achieve a crude understanding of how the density perturbations and gravitational waves are related, which is what the BICEP2 results are telling us about. We also won’t worry about the details of the shape of the potential function $V(\phi)$, though it’s very interesting that we might learn a lot about that from the data.

Exponential expansion

Einstein’s field equations tell us how the rate at which the universe expands during inflation is related to the energy density stored in the scalar field potential. If $a(t)$ is the “scale factor” which describes how lengths grow with time, then roughly

$\left(\frac{\dot a}{a}\right)^2 \sim \frac{V}{m_P^2}$.

Here $\dot a$ means the time derivative of the scale factor, and $m_P = 1/\sqrt{8 \pi G} \approx 2.4 \times 10^{18}$ GeV is the Planck scale associated with quantum gravity. ($G$ is Newton’s gravitational constant.) I’ve left out a factor of 3 on purpose, and I used the symbol ~ rather than = to emphasize that we are just trying to get a feel for the order of magnitude of things. I’m using units in which Planck’s constant $\hbar$ and the speed of light $c$ are set to one, so mass, energy, and inverse length (or inverse time) all have the same dimensions. 1 GeV means one billion electron volts, about the mass of a proton.
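As a sanity check on the quoted number, here is a minimal sketch that computes the reduced Planck mass from standard SI values of $\hbar$, $c$, and $G$:

```python
import math

# Standard SI values (approximate)
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
G = 6.67430e-11          # m^3 / (kg s^2)
eV = 1.602176634e-19     # J

# Reduced Planck mass m_P = 1/sqrt(8 pi G) in natural units,
# i.e. sqrt(hbar c / (8 pi G)) in kg; convert to GeV via E = m c^2.
m_P_kg = math.sqrt(hbar * c / (8 * math.pi * G))
m_P_GeV = m_P_kg * c**2 / (1e9 * eV)
print(f"m_P ~ {m_P_GeV:.2e} GeV")   # ~2.4e18 GeV
```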

(To persuade yourself that this is at least roughly the right equation, you should note that a similar equation applies to an expanding spherical ball of radius a(t) with uniform mass density V. But in the case of the ball, the mass density would decrease as the ball expands. The universe is different — it can expand without diluting its mass density, so the rate of expansion $\dot a / a$ does not slow down as the expansion proceeds.)

During inflation, the scalar field $\phi$ and therefore the potential energy $V(\phi)$ were changing slowly; it’s a good approximation to assume $V$ is constant. Then the solution is

$a(t) \sim a(0) e^{Ht},$

where $H$, the Hubble constant during inflation, is

$H \sim \frac{\sqrt{V}}{m_P}.$

To explain the smoothness of the observed universe, we require at least 50 “e-foldings” of inflation before the universe reheated — that is, inflation should have lasted for a time at least $50 H^{-1}$.
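To get a feel for how dramatic 50 e-foldings is, note the factor by which lengths stretch:

```python
import math

# 50 e-foldings stretches lengths by a factor of e^50
growth = math.exp(50)
print(f"e^50 ~ {growth:.1e}")   # ~5.2e21
```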

Slow rolling

During inflation the inflaton $\phi$ rolls slowly, so slowly that friction dominates inertia — this friction results from the cosmic expansion. The speed of rolling $\dot \phi$ is determined by

$H \dot \phi \sim -V'(\phi).$

Here $V'(\phi)$ is the slope of the potential, so the right-hand side is the force exerted by the potential, which matches the frictional force on the left-hand side. The coefficient of $\dot \phi$ has to be $H$ on dimensional grounds. (Here I have blown another factor of 3, but let’s not worry about that.)

Density perturbations

The trickiest thing we need to understand is how inflation produced the density perturbations which later seeded the formation of galaxies. There are several steps to the argument.

Quantum fluctuations of the inflaton

As the universe inflates, the inflaton field is subject to quantum fluctuations, where the size of the fluctuation depends on its wavelength. Due to inflation, the wavelength increases rapidly, like $e^{Ht}$, and once the wavelength gets large compared to $H^{-1}$, there isn’t enough time for the fluctuation to wiggle — it gets “frozen in.” Much later, long after the reheating of the universe, the oscillation period of the wave becomes comparable to the age of the universe, and then it can wiggle again. (We say that the fluctuations “cross the horizon” at that stage.) Observations of the anisotropy of the microwave background have determined how big the fluctuations are at the time of horizon crossing. What does inflation theory say about that?

Well, first of all, how big are the fluctuations when they leave the horizon during inflation? Then the wavelength is $H^{-1}$ and the universe is expanding at the rate $H$, so $H$ is the only thing the magnitude of the fluctuations could depend on. Since the field $\phi$ has the same dimensions as $H$, we conclude that fluctuations have magnitude

$\delta \phi \sim H.$

From inflaton fluctuations to density perturbations

Reheating occurs abruptly when the inflaton field reaches a particular value. Because of the quantum fluctuations, some horizon volumes have larger than average values of $\phi$ and some have smaller than average values; hence different regions reheat at slightly different times. The energy density in regions that reheat earlier starts to be reduced by expansion (“red shifted”) earlier, so these regions have a smaller than average energy density. Likewise, regions that reheat later start to red shift later, and wind up having larger than average density.

When we compare different regions of comparable size, we can find the typical (root-mean-square) fluctuations $\delta t$ in the reheating time, knowing the fluctuations in $\phi$ and the rolling speed $\dot \phi$:

$\delta t \sim \frac{\delta \phi}{\dot \phi} \sim \frac{H}{\dot\phi}.$

Small fractional fluctuations in the scale factor $a$ right after reheating produce comparable small fractional fluctuations in the energy density $\rho$. The expansion rate right after reheating roughly matches the expansion rate $H$ right before reheating, and so we find that the characteristic size of the density perturbations is

$\delta_S\equiv\left(\frac{\delta \rho}{\rho}\right)_{hor} \sim \frac{\delta a}{a} \sim \frac{\dot a}{a} \delta t\sim \frac{H^2}{\dot \phi}.$

The subscript hor serves to remind us that this is the size of density perturbations as they cross the horizon, before they get a chance to grow due to gravitational instabilities. We have found our first important conclusion: The density perturbations have a size determined by the Hubble constant $H$ and the rolling speed $\dot \phi$ of the inflaton, up to a factor of order one which we have not tried to keep track of. Insofar as the Hubble constant and rolling speed change slowly during inflation, these density perturbations have a strength which is nearly independent of the length scale of the perturbation. From here on we will denote this dimensionless scale of the fluctuations by $\delta_S$, where the subscript $S$ stands for “scalar”.

Perturbations in terms of the potential

Putting together $\dot \phi \sim -V' / H$ and $H^2 \sim V/{m_P}^2$ with our expression for $\delta_S$, we find

$\delta_S^2 \sim \frac{H^4}{\dot\phi^2}\sim \frac{H^6}{V'^2} \sim \frac{1}{{m_P}^6}\frac{V^3}{V'^2}.$
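If you’d like to verify the algebra without trusting my in-flight arithmetic, here is a small symbolic check (a sketch using sympy, with all O(1) factors dropped as in the text):

```python
import sympy as sp

V, Vp, mP = sp.symbols("V V' m_P", positive=True)

# Slow-roll relations from the text, O(1) factors dropped
H = sp.sqrt(V) / mP      # H ~ sqrt(V)/m_P
phidot = -Vp / H         # from H * phidot ~ -V'

# delta_S^2 ~ H^4 / phidot^2 should reduce to V^3 / (m_P^6 V'^2)
delta_S_sq = H**4 / phidot**2
assert sp.simplify(delta_S_sq - V**3 / (mP**6 * Vp**2)) == 0
print("chain of estimates checks out")
```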

The observed density perturbations are telling us something interesting about the scalar field potential during inflation.

Gravitational waves and the meaning of r

The gravitational field as well as the inflaton field is subject to quantum fluctuations during inflation. We call these tensor fluctuations to distinguish them from the scalar fluctuations in the energy density. The tensor fluctuations have an effect on the microwave anisotropy which can be distinguished in principle from the scalar fluctuations. We’ll just take that for granted here, without worrying about the details of how it’s done.

While a scalar field fluctuation with wavelength $\lambda$ and strength $\delta \phi$ carries energy density $\sim \delta\phi^2 / \lambda^2$, a fluctuation of the dimensionless gravitation field $h$ with wavelength $\lambda$ and strength $\delta h$ carries energy density $\sim m_P^2 \delta h^2 / \lambda^2$. Applying the same dimensional analysis we used to estimate $\delta \phi$ at horizon crossing to the rescaled field $h/m_P$, we estimate the strength $\delta_T$ of the tensor fluctuations as

$\delta_T^2 \sim \frac{H^2}{m_P^2}\sim \frac{V}{m_P^4}.$

From observations of the CMB anisotropy we know that $\delta_S\sim 10^{-5}$, and now BICEP2 claims that the ratio

$r = \frac{\delta_T^2}{\delta_S^2}$

is about $r\sim 0.2$ at an angular scale on the sky of about one degree. The conclusion (being a little more careful about the O(1) factors this time) is

$V^{1/4} \sim 2 \times 10^{16}~GeV \left(\frac{r}{0.2}\right)^{1/4}.$

This is our second important conclusion: The energy density during inflation defines a mass scale, which turns out to be $2 \times 10^{16}~GeV$ for the observed value of $r$. This is a very interesting finding because this mass scale is not so far below the Planck scale, where quantum gravity kicks in, and is in fact pretty close to theoretical estimates of the unification scale in supersymmetric grand unified theories. If this mass scale were a factor of 2 smaller, then $r$ would be smaller by a factor of 16, and hence much harder to detect.
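Plugging in numbers gives a crude estimate (with all O(1) factors dropped, so it lands within a factor of a few of the more careful $2\times 10^{16}$ GeV quoted above):

```python
m_P = 2.4e18      # reduced Planck mass in GeV
delta_S = 1e-5    # observed scalar fluctuation strength
r = 0.2           # BICEP2's claimed tensor-to-scalar ratio

delta_T_sq = r * delta_S**2          # r = delta_T^2 / delta_S^2
V_quarter = delta_T_sq**0.25 * m_P   # from delta_T^2 ~ V / m_P^4
print(f"V^(1/4) ~ {V_quarter:.1e} GeV")   # ~5e15 GeV
```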

Rolling, rolling, rolling, …

Using $\delta_S^2 \sim H^4/\dot\phi^2$, we can express $r$ as

$r = \frac{\delta_T^2}{\delta_S^2}\sim \frac{\dot\phi^2}{m_P^2 H^2}.$

It is convenient to measure time in units of the number $N = H t$ of e-foldings of inflation, in terms of which we find

$\frac{1}{m_P^2} \left(\frac{d\phi}{dN}\right)^2\sim r.$

Now, we know that for inflation to explain the smoothness of the universe we need $N$ larger than 50, and if we assume that the inflaton rolls at a roughly constant rate during $N$ e-foldings, we conclude that, while rolling, the change in the inflaton field is

$\frac{\Delta \phi}{m_P} \sim N \sqrt{r}.$

This is our third important conclusion — the inflaton field had to roll a long, long, way during inflation — it changed by much more than the Planck scale! Putting in the O(1) factors we have left out reduces the required amount of rolling by about a factor of 3, but we still conclude that the rolling was super-Planckian if $r\sim 0.2$. That’s curious, because when the scalar field strength is super-Planckian, we expect the kind of effective field theory we have been implicitly using to be a poor approximation because quantum gravity corrections are large. One possible way out is that the inflaton might have rolled round and round in a circle instead of in a straight line, so the field strength stayed sub-Planckian even though the distance traveled was super-Planckian.
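Numerically (dropping O(1) factors as usual):

```python
import math

# Delta(phi)/m_P ~ N * sqrt(r), with N = 50 e-foldings and r = 0.2
N, r = 50, 0.2
excursion = N * math.sqrt(r)
print(f"Delta(phi) ~ {excursion:.0f} m_P")   # ~22 Planck masses
```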

Spectral tilt

As the inflaton rolls, the potential energy, and hence also the Hubble constant $H$, change during inflation. That means that both the scalar and tensor fluctuations have a strength which is not quite independent of length scale. We can parametrize the scale dependence in terms of how the fluctuations change per e-folding of inflation, which is equivalent to the change per logarithmic length scale and is called the “spectral tilt.”

To keep things simple, let’s suppose that the rate of rolling is constant during inflation, at least over the length scales for which we have data. Using $\delta_S^2 \sim H^4/\dot\phi^2$, and assuming $\dot\phi$ is constant, we estimate the scalar spectral tilt as

$-\frac{1}{\delta_S^2}\frac{d\delta_S^2}{d N} \sim - \frac{4 \dot H}{H^2}.$

Using $\delta_T^2 \sim H^2/m_P^2$, we conclude that the tensor spectral tilt is half as big.

From $H^2 \sim V/m_P^2$, we find

$\dot H \sim \frac{1}{2} \dot \phi \frac{V'}{V} H,$

and using $\dot \phi \sim -V'/H$ we find

$-\frac{1}{\delta_S^2}\frac{d\delta_S^2}{d N} \sim \frac{V'^2}{H^2V}\sim m_P^2\left(\frac{V'}{V}\right)^2\sim \left(\frac{V}{m_P^4}\right)\left(\frac{m_P^6 V'^2}{V^3}\right)\sim \delta_T^2 \delta_S^{-2}\sim r.$

Putting in the numbers more carefully we find a scalar spectral tilt of $r/4$ and a tensor spectral tilt of $r/8$.

This is our last important conclusion: A relatively large value of $r$ means a significant spectral tilt. In fact, even before the BICEP2 results, the CMB anisotropy data already supported a scalar spectral tilt of about 0.04, which suggested something like $r \sim 0.16$. The BICEP2 detection of the tensor fluctuations (if correct) has confirmed that suspicion.

Summing up

If you have stuck with me this far, and you haven’t seen this stuff before, I hope you’re impressed. Of course, everything I’ve described can be done much more carefully. I’ve tried to convey, though, that the emerging story seems to hold together pretty well. Compared to last week, we have stronger evidence now that inflation occurred, that the mass scale of inflation is high, and that the scalar and tensor fluctuations produced during inflation have been detected. One prediction is that the tensor fluctuations, like the scalar ones, should have a notable spectral tilt, though a lot more data will be needed to pin that down.

I apologize to the experts again, for the sloppiness of these arguments. I hope that I have at least faithfully conveyed some of the spirit of inflation theory in a way that seems somewhat accessible to the uninitiated. And I’m sorry there are no references, but I wasn’t sure which ones to include (and I was too lazy to track them down).

It should also be clear that much can be done to sharpen the confrontation between theory and experiment. A whole lot of fun lies ahead.

Okay, here’s a good reference, a useful review article by Baumann. (I found out about it on Twitter!)

From Baumann’s lectures I learned a convenient notation. The rolling of the inflaton can be characterized by two “potential slow-roll parameters” defined by

$\epsilon = \frac{m_P^2}{2}\left(\frac{V'}{V}\right)^2,\quad \eta = m_P^2\left(\frac{V''}{V}\right).$

Both parameters are small during slow rolling, but the relationship between them depends on the shape of the potential. My crude approximation ($\epsilon = \eta$) would hold for a quadratic potential.

We can express the spectral tilt (as I defined it) in terms of these parameters, finding $2\epsilon$ for the tensor tilt, and $6 \epsilon - 2\eta$ for the scalar tilt. To derive these formulas it suffices to know that $\delta_S^2$ is proportional to $V^3/V'^2$, and that $\delta_T^2$ is proportional to $H^2$; we also use

$3H\dot \phi = -V', \quad 3H^2 = V/m_P^2,$

keeping factors of 3 that I left out before. (As a homework exercise, check these formulas for the tensor and scalar tilt.)
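If you’d rather not do the homework exercise by hand, here is a symbolic sketch (using sympy; I take $\delta_S^2 \propto V^3/V'^2$, $\delta_T^2 \propto V$, and $d\phi/dN = -m_P^2 V'/V$, which follows from the two slow-roll equations above):

```python
import sympy as sp

phi, mP = sp.symbols("phi m_P", positive=True)
V = sp.Function("V")(phi)
Vp, Vpp = sp.diff(V, phi), sp.diff(V, phi, 2)

eps = mP**2 / 2 * (Vp / V)**2   # potential slow-roll parameter epsilon
eta = mP**2 * Vpp / V           # potential slow-roll parameter eta

# From 3 H phidot = -V' and 3 H^2 = V/m_P^2: dphi/dN = phidot/H = -m_P^2 V'/V
dphi_dN = -mP**2 * Vp / V

# Tilt = -d ln(delta^2)/dN, with delta_S^2 prop. to V^3/V'^2, delta_T^2 prop. to V
scalar_tilt = -sp.diff(sp.log(V**3 / Vp**2), phi) * dphi_dN
tensor_tilt = -sp.diff(sp.log(V), phi) * dphi_dN

assert sp.simplify(scalar_tilt - (6*eps - 2*eta)) == 0
assert sp.simplify(tensor_tilt - 2*eps) == 0
print("scalar tilt = 6*eps - 2*eta, tensor tilt = 2*eps")
```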

It is also easy to see that $r$ is proportional to $\epsilon$; it turns out that $r = 16 \epsilon$. To get that factor of 16 we need more detailed information about the relative size of the tensor and scalar fluctuations than I explained in the post; I can’t think of a handwaving way to derive it.

We see, though, that the conclusion that the tensor tilt is $r/8$ does not depend on the details of the potential, while the relation between the scalar tilt and $r$ does depend on the details. Nevertheless, it seems fair to claim (as I did) that, already before we knew the BICEP2 results, the measured nonzero scalar spectral tilt indicated a reasonably large value of $r$.

Once again, we’re lucky. On the one hand, it’s good to have a robust prediction (for the tensor tilt). On the other hand, it’s good to have a handle (the scalar tilt) for distinguishing among different inflationary models.

One last point is worth mentioning. We have set Planck’s constant $\hbar$ equal to one so far, but it is easy to put the powers of $\hbar$ back in using dimensional analysis (we’ll continue to assume the speed of light c is one). Since Newton’s constant $G$ has the dimensions of length/energy, and the potential $V$ has the dimensions of energy/volume, while $\hbar$ has the dimensions of energy times length, we see that

$\delta_T^2 \sim \hbar G^2V.$

Thus the production of gravitational waves during inflation is a quantum effect, which would disappear in the limit $\hbar \to 0$. Likewise, the scalar fluctuation strength $\delta_S^2$ is also $O(\hbar)$, and hence also a quantum effect.
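As a quick check that the combination $\hbar G^2 V$ in the formula above is indeed dimensionless, use the dimensions stated earlier (with $c = 1$): $[\hbar] = E \cdot L$, $[G] = L/E$, and $[V] = E/L^3$, so that

$[\hbar G^2 V] \sim (E \cdot L)\left(\frac{L}{E}\right)^2\left(\frac{E}{L^3}\right) = 1.$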

Therefore the detection of primordial gravitational waves by BICEP2, if correct, confirms that gravity is quantized just like the other fundamental forces. That shouldn’t be a surprise, but it’s nice to know.

# Oh, the Places You’ll Do Theoretical Physics!

I won’t run lab tests in a box.
I won’t run lab tests with a fox.
But I’ll prove theorems here or there.
Yes, I’ll prove theorems anywhere…

Physicists occupy two camps. Some—theorists—model the world using math. We try to predict experiments’ outcomes and to explain natural phenomena. Others—experimentalists—gather data using supermagnets, superconductors, the world’s coldest atoms, and other instruments deserving of superlatives. Experimentalists confirm that our theories deserve trashing or—for this we pray—might not model the world inaccurately.

Theorists, people say, can work anywhere. We need no million-dollar freezers. We need no multi-pound magnets.* We need paper, pencils, computers, and coffee. Though I would add “quiet,” colleagues would add “iPods.”

Theorists’ mobility reminds me of the book Green Eggs and Ham. Sam-I-am, the antagonist, drags the protagonist to spots as outlandish as our workplaces. Today marks the author’s birthday. Since Theodor Geisel stimulated imaginations, and since imagination drives physics, Quantum Frontiers is paying its respects. In honor of Oh, the Places You’ll Go!, I’m spotlighting places you can do theoretical physics. You judge whose appetite for exotica exceeds whose: Dr. Seuss’s or theorists’.

I’ve most looked out-of-place doing physics by a dirt road between sheep-populated meadows outside Lancaster, UK. Lancaster, the War of the Roses victor, is a city in northern England. The year after graduating from college, I worked at Lancaster University as a research assistant. I studied a crystal that resembles graphene, a material whose superlatives include “superstrong,” “supercapacitor,” and “superconductor.” From morning to evening, I’d submerse in math till it poured out my ears. Then I’d trek from “uni,” as Brits say, to the “city centre,” as they write.

The trek wound between trees; fields; and, because I was in England, puddles. Many evenings, a rose or a sunset would arrest me. Other evenings, physics would. I’d realize how to solve an equation, or that I should quit banging my head against one. Stepping off the road, I’d fish out a notebook and write. Amidst the puddles and lambs. Cyclists must have thought me the queerest sight since a cloudless sky.

A colleague loves doing theory in the sky. On planes, he explained, hardly anyone interrupts his calculations. And who minds interruptions by pretzels and coffee?

“A mathematician is a device for turning coffee into theorems,” some have said, and theoretical physicists live down the block from mathematicians in the neighborhood of science. Turn a Pasadena café upside-down and shake it, and out will fall theorists. Since Hemingway’s day, the romanticism has faded from the penning of novels in cafés. But many a theorist trumpets about an equation derived on a napkin.

Trumpeting filled my workplace in Oxford. One of Clarendon Lab’s few theorists, I neighbored lasers, circuits, and signs that read “DANGER! RADIATION.” Though radiation didn’t leak through our walls (I hope), what did leak contributed more to that office’s eccentricity than radiation would have. As early as 9:10 AM, the experimentalists next door blasted “Born to Be Wild” and Animal House tunes. If you can concentrate over there, you can concentrate anywhere.

One paper I concentrated on had a Crumple-Horn Web-Footed Green-Bearded Schlottz of an acknowledgements section. In a physics paper’s last paragraph, one thanks funding agencies and colleagues for support and advice. “The authors would like to thank So-and-So for insightful comments,” papers read. This paper referenced a workplace: “[One coauthor] is grateful to the Half Moon Pub.” Colleagues of the coauthor confirmed the acknowledgement’s aptness.

Though I’ve dwelled on theorists’ physical locations, our minds roost elsewhere. Some loiter in atoms; others, in black holes; some, on four-dimensional surfaces; others, in hypothetical universes. I hobnob with particles in boxes. As Dr. Seuss whisks us to a Bazzim populated by Nazzim, theorists tell of function spaces populated by Rényi entropies.

The next time you see someone standing in a puddle, or in a ditch, or outside Buckingham Palace, scribbling equations, feel free to laugh. You might be seeing a theoretical physicist. You might be seeing me. To me, physics has relevance everywhere. Scribbling there and here should raise eyebrows no more than any setting in a Dr. Seuss book.

The author would like to thank this emporium of Seussoria. And Java & Co.

*We need for them to confirm that our theories deserve trashing, but we don’t need them with us. Just as, when considering quitting school to break into the movie business, you need for your mother to ask, “Are you sure that’s a good idea, dear?” but you don’t need for her to hang on your elbow. Except experimentalists don’t say “dear” when crushing theorists’ dreams.

# Guns versus butter in quantum information

while(not_dead){
    sleep--;
    time--;
    awesome++;
}

/*There’s a reason we can’t hang out with you…*/

The message is written in Java, a programming language. Even if you’ve never programmed, you likely catch the drift: CS majors are the bees’ knees because, at the expense of sleep and social lives, they code. I disagree with part of said drift: CS majors hung out with me despite being awesome.

The rest of the drift—you have to give some to get some—synopsizes the physics I encountered this fall. To understand tradeoffs, you needn’t study quantum information (QI). But what trades off with what, according to QI, can surprise us.

The T-shirt haunted me at the University of Nottingham, where researchers are blending QI with Einstein’s theory of relativity. Relativity describes accelerations, gravity, and space-time’s curvature. In other sources, you can read about physicists’ attempts to unify relativity and quantum mechanics, the Romeo and Tybalt of modern physics, into a theory of quantum gravity. In this article, relativity tangos with quantum mechanics in relativistic quantum information (RQI). If I move my quantum computer, RQIers ask, how do I change its information processing? How does space-time’s curvature affect computation? How can motion affect measurements?

Nottingham researchers kindly tolerating a seminar by me

For example, acceleration entangles particles. Decades ago, physicists learned that acceleration creates particles. Say you’re gazing into a vacuum—not empty space, but nearly empty space, the lowest-energy system that can exist. Zooming away on a rocket, I accelerate relative to you. From my perspective, more particles than you think—and higher-energy particles—surround us.

Have I created matter? Have I violated the Principle of Conservation of Energy (and Mass)? I created particles in a sense, but at the expense of rocket fuel. You have to give some to get some:

Fuel--;
Particles++;

The math that describes my particles relates to the math that describes entanglement.* Entanglement is a relationship between quantum systems. Say you entangle two particles, then separate them. If you measure one, you instantaneously affect the other, even if the other occupies another city.

Say we encode information in quantum particles stored in a box.** Just as you encode messages by writing letters, we write messages in the ink of quantum particles. Say the box zooms off on a rocket. Just as acceleration led me to see particles in a vacuum, acceleration entangles the particles in our box. Since entanglement facilitates computation, you can process information by shaking a box. And performing another few steps.

When an RQIer told me so, she might as well have added that space-time has $10^6$ dimensions and the US would win the World Cup. Then my T-shirt came to mind. To get some, you have to give some. When you give something, you might get something. Giving fuel gets you entanglement. To prove that statement, I need to do and interpret math. Till I have time to,

Fuel--;
Entanglement++;

offers intuition.

After cropping up in Nottingham, my T-shirt reared its head (collar?) in physics problem after physics problem. By “consuming entanglement”—forfeiting that ability to affect the particle in another city—you can teleport quantum information.

Entanglement--;
Quantum teleportation++;

My research involves tradeoffs between information and energy. As the Hungarian physicist Leó Szilárd showed, you can exchange information for work. Say you learn which half of a box*** a particle occupies, and you trap the particle in that half. Upon freeing the particle—forfeiting your knowledge about its location—you can lift a weight, charge a battery, or otherwise store energy.

Information--;
Energy++;

If you expend energy, Rolf Landauer showed, you can gain knowledge.

Energy--;
Information++;
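The exchange rate in both directions is set by the same quantity, $k_B T \ln 2$ per bit: Szilárd’s engine extracts at most this much work per bit of knowledge, and Landauer’s bound says erasing a bit costs at least this much. A rough number at room temperature, sketched with standard constants:

```python
import math

k_B = 1.380649e-23   # Boltzmann's constant, J/K
T = 300.0            # room temperature, K

# Landauer/Szilard exchange rate: k_B * T * ln(2) of energy per bit
E_bit = k_B * T * math.log(2)
print(f"~{E_bit:.1e} J per bit")   # ~2.9e-21 J
```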

No wonder my computer-science friends joked about sleep deprivation. But information can energize. For fuel, I forage in the blending of fields like QI and relativity, and in physical intuitions like those encapsulated in the pseudo-Java above. Much as Szilard’s physics enchants me, I’m glad that the pursuit of physics contradicts his conclusion:

while(not_dead){
    Information++;
    Energy++;
}

The code includes awesome++ implicitly.

*Bogoliubov transformations, to readers familiar with the term.

**In the fields in a cavity, to readers familiar with the terms.

***Physicists adore boxes, you might have noticed.

With thanks to Ivette Fuentes and the University of Nottingham for their hospitality and for their introduction to RQI.

# Making predictions in the multiverse

I am a theoretical physicist at the University of California, Berkeley. Last month, I attended a very interesting conference organized by the Foundational Questions Institute (FQXi) in Puerto Rico, and presented a talk about making predictions in cosmology, especially in the eternally inflating multiverse. I very much enjoyed discussions with people at the conference, and I was invited to post a non-technical account of the issue, as well as my own view of it. So here I am.

I find it quite remarkable that some of us in the physics community are thinking with some “confidence” that we live in the multiverse, more specifically one of the many universes in which low-energy physical laws take different forms. (For example, these universes have different elementary particles with different properties, possibly different spacetime dimensions, and so on.) This idea of the multiverse, as we currently think, is not simply a result of random imagination by theorists, but is based on several pieces of observational and theoretical evidence.

Observationally, we have learned more and more that we live in a highly special universe—it seems that the “physical laws” of our universe (summarized in the form of the standard models of particle physics and cosmology) take such a special form that if their structure were varied slightly, then there would be no interesting structure in the universe, let alone intelligent life. It is hard to understand this fact unless there are many universes with varying “physical laws,” and we simply happen to emerge in a universe which allows for intelligent life to develop (which seems to require special conditions). With multiple universes, we can understand the “specialness” of our universe precisely as we understand the “specialness” of our planet Earth (e.g. the ideal distance from the sun), which is only one of the many planets out there.

Perhaps more nontrivial is the fact that our current theory of fundamental physics leads to this picture of the multiverse in a very natural way. Imagine that at some point in the history of the universe, space is exponentially expanding. This expansion—called inflation—occurs when space is filled with a “positive vacuum energy” (which happens quite generally). We have known since the ’80s that such inflation is generically eternal. During inflation, various non-inflating regions called bubble universes—of which our own universe could be one—may form, much like bubbles in boiling water. Since ambient space expands exponentially, however, these bubbles do not percolate; rather, the process of creating bubble universes lasts forever in an eternally inflating background. Now, recent progress in string theory suggests that the low-energy theories describing physics in these bubble universes (such as the elementary particle content and particle properties) may differ bubble by bubble. This is precisely the setup needed to understand the “specialness” of our universe because of the selection effect associated with our own existence, as described above.

A schematic depiction of the eternally inflating multiverse. The horizontal and vertical directions correspond to spatial and time directions, respectively, and various regions with the inverted triangle or argyle shape represent different universes. While regions closer to the upper edge of the diagram look smaller, this is an artifact of the rescaling made to fit the large spacetime into a finite drawing—the fractal structure near the upper edge actually corresponds to an infinite number of large universes.

This particular version of the multiverse—called the eternally inflating multiverse—is very attractive. It is theoretically motivated and has the potential to explain various features seen in our universe. The eternal nature of inflation, however, raises a serious issue of predictivity. Because the process of creating bubble universes occurs infinitely many times, “In an eternally inflating universe, anything that can happen will happen; in fact, it will happen an infinite number of times,” as phrased in an article by Alan Guth. Suppose we want to calculate the relative probability for (any) events $A$ and $B$ to happen in the multiverse. Following the standard notion of probability, we might define it as the ratio of the numbers of times events $A$ and $B$ happen throughout the whole spacetime:

$P = \frac{N_A}{N_B}$.

In the eternally inflating multiverse, however, both $A$ and $B$ occur infinitely many times: $N_A, N_B = \infty$. The expression is therefore ill-defined. One might think this is merely a technical problem—we simply need to “regularize,” making both $N_{A,B}$ finite at an intermediate stage of the calculation, and then obtain a well-defined answer. This is, however, not the case. One finds that, depending on the details of the regularization procedure, one can obtain any “prediction” one wants, and there is no a priori reason to prefer one procedure over another—the predictivity of physical theory seems lost!
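As a toy illustration of this regularization dependence (the model is mine, drastically simplified, and not the actual cosmological setup): suppose bubbles nucleate at a rate growing like $e^{3t}$, tracking exponential volume growth, and each bubble produces event $A$ one time unit after nucleating and event $B$ two units after. Counting events before a global time cutoff weights young bubbles enormously and gives $N_A/N_B \to e^3$, while counting whole bubbles before the cutoff gives 1:

```python
import math

# Toy model: bubbles nucleate at a rate proportional to exp(3t).
# Each bubble yields event A at age 1 and event B at age 2.
RATE = 3.0  # hypothetical nucleation-rate exponent

def events_before(cutoff, delay):
    """Number of events (up to normalization) occurring before the
    global time cutoff, for events lagging nucleation by `delay`:
    the integral of exp(RATE * t) dt from 0 to cutoff - delay."""
    if cutoff <= delay:
        return 0.0
    return (math.exp(RATE * (cutoff - delay)) - 1.0) / RATE

def ratio_proper_time_cutoff(cutoff):
    """Regularization 1: count only events occurring before the cutoff."""
    return events_before(cutoff, 1.0) / events_before(cutoff, 2.0)

def ratio_bubble_counting(cutoff):
    """Regularization 2: count bubbles nucleated before the cutoff,
    then include every event each bubble will ever produce
    (one A and one B per bubble), so the ratio is exactly 1."""
    return 1.0

for T in (10.0, 20.0, 40.0):
    print(T, ratio_proper_time_cutoff(T), ratio_bubble_counting(T))
# The proper-time cutoff gives N_A/N_B -> exp(3) ~ 20, while bubble
# counting gives 1: the "answer" depends on the regularization.
```

Both prescriptions look equally reasonable, yet they disagree by a factor of $e^3$; that, in miniature, is the measure problem.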

Over the past decades, some physicists and cosmologists have been thinking about many aspects of this so-called measure problem in eternal inflation. (There are indeed many aspects to the problem, and I’m omitting most of them in my simplified presentation above.) Many of the people who contributed were in the session at the conference, including Aguirre, Albrecht, Bousso, Carroll, Guth, Page, Tegmark, and Vilenkin. My own view, which I think is shared by some others, is that this problem offers a window into deep issues associated with spacetime and gravity. In my 2011 paper I suggested that quantum mechanics plays a crucial role in understanding the multiverse, even at the largest distance scales. (A similar idea was also discussed here around the same time.) In particular, I argued that the eternally inflating multiverse and quantum mechanical many worlds a la Everett are the same concept:

Multiverse = Quantum Many Worlds

in a specific, and literal, sense. In this picture, the global spacetime of general relativity appears only as a derived concept at the cost of overcounting true degrees of freedom; in particular, infinitely large space associated with eternal inflation is a sort of “illusion.” A “true” description of the multiverse must be “intrinsically” probabilistic in a quantum mechanical sense—probabilities in cosmology and quantum measurements have the same origin.

To illustrate the basic idea, let us first consider an (apparently unrelated) system containing a black hole. Suppose we drop a book $A$ into the black hole and observe the subsequent evolution of the system from a distance. The book will be absorbed into (the horizon of) the black hole, which will then eventually evaporate, leaving Hawking radiation. Now, let us consider another process: dropping a different book $B$, instead of $A$, and seeing what happens. The subsequent evolution in this case is similar to the case with $A$, and we will again be left with Hawking radiation. However, the final-state Hawking radiation arising from $B$ is (believed by many to be) different from that arising from $A$ in its subtle quantum correlation structure, so that if we had perfect knowledge of the final-state radiation, we could reconstruct what the original book was. This property is called unitarity, and recent theoretical progress suggests it provides the correct picture of black hole dynamics. To recap: the information about the original book is not lost—it is simply distributed in the final-state Hawking radiation in a highly scrambled form.

A puzzling thing occurs, however, if we observe the same phenomenon from the viewpoint of an observer who is falling into the black hole with a book. In this case, the equivalence principle says that the book does not feel gravity (except for the tidal force which is tiny for a large black hole), so it simply passes through the black hole horizon without any disruption. (Recently, this picture was challenged by the so-called firewall argument—the book might hit a collection of higher energy quanta called a firewall, rather than freely fall. Even if so, it does not affect our basic argument below.) This implies that all the information about the book (in fact, the book itself) will be inside the horizon at late times. On the other hand, we have just argued that from a distant observer’s point of view, the information will be outside—first on the horizon and then in Hawking radiation. Which is correct?

One might think that the information is simply duplicated: one copy inside and the other outside. This, however, cannot be the case. Quantum mechanics prohibits faithful copying of full quantum information, the so-called no-cloning theorem. Therefore, it seems that the two pictures by the two observers cannot both be correct.
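The argument behind the no-cloning theorem takes only a few lines; here is a standard sketch. Suppose a single unitary $U$ could clone arbitrary states, $U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle$. Since unitaries preserve inner products, for any two states $|\psi\rangle$ and $|\varphi\rangle$:

```latex
\langle\psi|\varphi\rangle
  = \bigl(\langle\psi|\langle 0|\bigr)\, U^\dagger U \,\bigl(|\varphi\rangle|0\rangle\bigr)
  = \bigl(\langle\psi|\langle\psi|\bigr)\bigl(|\varphi\rangle|\varphi\rangle\bigr)
  = \langle\psi|\varphi\rangle^2 .
```

The only solutions of $x = x^2$ are $x = 0$ and $x = 1$, so any two clonable states must be either orthogonal or identical: no device can copy a generic, unknown quantum state.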

The proposed solution to this puzzle is interesting—both pictures are correct, but not at the same time. The point is that one cannot be both a distant observer and a falling observer at the same time. If you are a distant observer, the information will be outside, and the interior spacetime must be viewed as non-existent, since you can never access it even in principle (because of the existence of the horizon). On the other hand, if you are a falling observer, then you have the interior spacetime in which the information (the book itself) will fall, but this comes at the cost of losing the part of spacetime in which the Hawking radiation lies, which you can never access since you yourself are falling into the black hole. There is no inconsistency in either of these two pictures; only if you artificially “patch” the two pictures together, which you cannot physically do, does the apparent inconsistency of information duplication occur. This somewhat surprising aspect of a system with gravity is called black hole complementarity, pioneered by ‘t Hooft, Susskind, and their collaborators.

What does this discussion of black holes have to do with cosmology, and, in particular, the eternally inflating multiverse? In cosmology our space is surrounded by a cosmological horizon. (For example, imagine that space is expanding exponentially; this makes it impossible for us to receive any signal from regions beyond some distance, because objects in those regions recede faster than the speed of light. The definition of appropriate horizons in general cases is more subtle, but it can be made.) The situation, therefore, is the “inside out” version of the black hole case viewed from a distant observer. As in the case of the black hole, quantum mechanics requires that spacetime on the other side of the horizon—in this case, the exterior of the cosmological horizon—must be viewed as non-existent. (In the paper I made this claim based on some simple supporting calculations.) In more technical terms, a quantum state describing the system represents only the region within the horizon—there is no infinite space in any single, consistent description of the system!

If a quantum state represents only space within the horizon, then where is the multiverse, which we thought existed in an eternally inflating space beyond our own horizon? The answer is—probability! The process of creating bubble universes is probabilistic in the quantum mechanical sense—it occurs through quantum mechanical tunneling. This implies that, starting from some initially inflating space, we can end up with different universes probabilistically. All the different universes—including our own—live in probability space. In more technical terms, a state representing eternally inflating space evolves into a superposition of terms—or branches—representing different universes, with each term representing only the region within its own horizon. Note that there is no concept of infinitely large space here, the concept that led to the ill-definedness of probability. The picture of an infinitely large multiverse, naively suggested by general relativity, appears only after “patching” together pictures based on different branches; but this vastly overcounts the true degrees of freedom, as would be the case if we included both the interior spacetime and the Hawking radiation in our description of a black hole.

The description of the multiverse presented here provides a complete unification of the eternally inflating multiverse and the many worlds interpretation of quantum mechanics. Suppose the multiverse starts from some initial state $|\Psi(t_0)\rangle$. This state evolves into a superposition of states in which various bubble universes nucleate in various locations. As time passes, the state representing each universe further evolves into a superposition of states representing various possible cosmic histories, including different outcomes of “experiments” performed within that universe. (These “experiments” may, but need not, be scientific experiments—they can be any physical processes.) At late times, the multiverse state $|\Psi(t)\rangle$ will thus contain an enormous number of terms, each of which represents a possible world that may arise from $|\Psi(t_0)\rangle$ consistently with the laws of physics. Probabilities in cosmology and in microscopic processes are then both given by quantum mechanical probabilities, in the same manner. The multiverse and quantum many worlds are really the same thing—they simply refer to the same phenomenon occurring at (vastly) different scales.

A schematic picture for the evolution of the multiverse state. As t increases, the state evolves into a superposition of states in which various bubble universes nucleate in various locations. Each of these states then evolves further into a superposition of states representing various possible cosmic histories, including different outcomes of experiments performed within that universe.

The picture presented here does not solve all the problems of eternally inflating cosmology. What is the actual quantum state of the multiverse? What are its “initial conditions”? What is time? How does it emerge? The picture, however, does provide a framework for addressing these further, deep questions, and I have recently made some progress: the basic idea is that the state of the multiverse (which may be selected uniquely by a normalizability condition) never changes, and yet time appears as an emergent concept locally in branches, as physical correlations among objects (along the lines of an old idea of DeWitt’s). Given the length of this post already, I will not elaborate on this new development here. If you are interested, you might want to read my paper.

It is fascinating that physicists can talk about big and deep questions like the ones discussed here based on concrete theoretical progress. Nobody really knows where these explorations will finally lead us. It seems clear, however, that we live in an exciting era in which our scientific explorations reach beyond what we thought to be the entire physical world—our universe.

# Reporting from the ‘Frontiers of Quantum Information Science’

What am I referring to with this title? It is similar to the name of this blog, but that’s not where this particular title comes from, although there is a common denominator. Frontiers of Quantum Information Science was the theme for the 31st Jerusalem winter school in theoretical physics, which takes place annually at the Israeli Institute for Advanced Studies, located on the Givat Ram campus of the Hebrew University of Jerusalem. The school ran from December 30, 2013 through January 9, 2014, but some of the attendees are still trickling back to their home institutions. The common denominator is that our very own John Preskill was the director of this school, co-directed by Michael Ben-Or and Patrick Hayden. John mentioned in a previous post, and reiterated during his opening remarks, that this is the first time the IIAS has chosen quantum information as the topic for its prestigious advanced school—another sign of quantum information’s emergence as an important sub-field of physics. In this blog post, I’m going to do my best to recount these festivities while John protects his home from forest fires, prepares a talk for the Simons Institute’s workshop on Hamiltonian complexity, teaches his quantum information course, and celebrates his 60+1st birthday.

The school was mainly targeted at physicists, but it was diversely represented. Proof of the value of this diversity came in an interaction between a computer scientist and a physicist, which led to one of the school’s most memorable moments. Both of my most memorable moments started with the talent show. (I was surprised that so many talents were on display at a physics conference…) Anyway, towards the end of the show, Mateus Araújo Santos, a PhD student in Vienna, entered the stage and mentioned that he could channel “the ghost of Feynman” to serve as an oracle for NP-complete decision problems. After he made this claim, people naturally turned to Scott Aaronson, hoping that he’d be able to break the oracle. In order for this to happen, however, we had to wait until Scott’s third lecture, about linear optics and boson sampling, the next day. You can watch Scott bombard the oracle with decision problems from 1:00-2:15 in the video of his third lecture.

Scott Aaronson grilling the oracle with a string of NP-complete decision problems! From 1:00-2:15 during this video.

The other most memorable moment was when John briefly danced Gangnam style during Soonwon Choi‘s talent show performance. Unfortunately, I thought I had this on video, but the video didn’t record. If anyone has video evidence of this, then please share!

# Jostling the unreal in Oxford

So wrote Philip Pullman, author of The Golden Compass and its sequels. In the series, a girl wanders from the Oxford in another world to the Oxford in ours.

I’ve been honored to wander Oxford this fall. Visiting Oscar Dahlsten and Jon Barrett, I’ve been moonlighting in Vlatko Vedral’s QI group. We’re interweaving 21st-century knowledge about electrons and information with a Victorian fixation on energy and engines. This research program, quantum thermodynamics, should open a window onto our world.

A new world. At least, a world new to the author.

To study our world from another angle, Oxford researchers are jostling the unreal. Oscar, Jon, Andrew Garner, and others are studying generalized probabilistic theories, or GPTs.

What’s a specific probabilistic theory, let alone a generalized one? In everyday, classical contexts, probabilities combine according to rules you know. Suppose you have a 90% chance of arriving in London-Heathrow Airport at 7:30 AM next Sunday. Suppose that, if you arrive in Heathrow at 7:30 AM, you’ll have a 70% chance of catching the 8:05 AM bus to Oxford. You have a probability 0.9 * 0.7 = 0.63 of arriving in Heathrow at 7:30 and catching the 8:05 bus. Why 0.9 * 0.7? Why not 0.9^0.7, or 0.9/(2 * 0.7)? How might probabilities combine, GPT researchers ask, and why do they combine as they do?

Not that, in GPTs, probabilities combine as in 0.9/(2 * 0.7). Consider the 0.9/(2 * 0.7) merely a possibility plucked from a daydream inspired by this City of Dreaming Spires. But probabilities do combine in ways we wouldn’t expect. By entangling two particles, separating them, and measuring one, you immediately change the probability that a measurement of Particle 2 yields some outcome. John Bell explored, and experimentalists have checked, the statistics generated by entanglement. These statistics disobey the rules that govern Heathrow-and-bus statistics. So do the effects of quantum phenomena like discord, negative Wigner functions, and weak measurements. Quantum theory, and its contrast with classicality, forces us to reconsider probability.
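To see how far entanglement statistics stray from Heathrow-and-bus rules, here is a sketch of the CHSH test in Python. It uses the textbook singlet-state correlation $E(a,b) = -\cos(a-b)$ and one standard choice of analyzer angles (any rotation of these angles works equally well); any classical, locally-caused model obeys $|S| \le 2$, while the entangled pair reaches $2\sqrt{2}$:

```python
import math

# CHSH test: local hidden-variable models obey |S| <= 2.
# For two spins in the singlet state, the measured correlation at
# analyzer angles a and b is E(a, b) = -cos(a - b) (textbook result).

def E(a, b):
    return -math.cos(a - b)

# One standard set of angles maximizing the quantum value:
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, -math.pi / 4

S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

print(abs(S))  # 2*sqrt(2) ~ 2.828, beating the classical bound of 2
```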

# Polarizer: Rise of the Efficiency

How should a visitor to Zürich spend her weekend?

Launch this question at a Swiss lunchtable, and you split diners into two camps. To take advantage of Zürich, some say, visit Geneva, Lucerne, or another spot outside Zürich. Other locals suggest museums, the lake, and the 19th-century ETH building.

The 19th-century ETH building

ETH, short for a German name I’ve never pronounced, is the polytechnic from which Einstein graduated. The polytechnic houses a quantum-information (QI) theory group that’s pioneering ideas I’ve blogged about: single-shot information, epsilonification, and small-scale thermodynamics. While visiting the group this August, I triggered an avalanche of tourism advice. Caught between two camps, I chose Option Three: Contemplate polar codes.

Polar codes compress information into the smallest space possible. Imagine you write a message (say, a Zürich travel guide) and want to encode it in the fewest possible symbols (so it fits in my camera bag). The longer the message, the fewer code symbols you need per message symbol: the more punch each code letter can pack. As the message grows, the ratio of encoded length to original length decreases. The lowest possible ratio is a number, denoted by H, called the Shannon entropy.

So established Claude E. Shannon in 1948. But Shannon didn’t know how to code at efficiency H. Not for another 60 years did we know how.
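To make H concrete, here is a quick sketch in Python (the distributions are invented for illustration): the Shannon entropy of a source, in bits per symbol, is the average length an optimal code needs.

```python
import math

def shannon_entropy(probs):
    """Bits per symbol needed, on average, by an optimal code."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin costs a full bit per symbol...
print(shannon_entropy([0.5, 0.5]))      # 1.0
# ...but a heavily biased source compresses far below one bit.
print(shannon_entropy([0.9, 0.1]))      # ~0.469
# A 26-letter alphabet used uniformly costs log2(26) ~ 4.70 bits.
print(shannon_entropy([1 / 26] * 26))
```

The more lopsided the distribution, the smaller H, and the tighter the guide squeezes into the camera bag.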

I learned how, just before that weekend. ETH student David Sutter walked me through polar codes as though down Zürich’s Bahnhofstrasse.

The Bahnhofstrasse, one of Zürich’s trendiest streets, early on a Sunday.

Say you’re encoding n copies of a random variable. When I say, “random variable,” think, “character in the travel guide.” Just as each character is one of 26 letters, each variable has one of many possible values.

Suppose the variables are independent and identically distributed. Even if you know some variables’ values, you can’t guess others’. Cryptoquote players might object that we can infer unknown from known letters. For example, a three-letter word that begins with “th” likely ends with “e.” But our message lacks patterns.

Think of the variables as diners at my lunchtable. Asking how to fill a weekend in Zürich—splitting the diners—I resembled the polarizer.

The polarizer is a mathematical object that sounds like an Arnold Schwarzenegger film and acts on the variables. Just as some diners pointed me outside Zürich, the polarizer gives some variables one property. Just as other diners pointed me to spots within Zürich, the polarizer gives other variables another property. Just as I pointed myself at polar codes, the polarizer gives the remaining variables a third property.

These properties involve entropy. Entropy quantifies uncertainty about a variable’s value—about which of the 26 letters a character represents. Even if you know the early variables’ values, you can’t guess the later variables’. But we can guess some polarized variables’ values. Call the first polarized variable u1, the second u2, etc. If we can guess the value of some ui, that ui has low entropy. If we can’t guess the value, ui has high entropy. The Nicole-esque variables have entropies like the earnings of Terminator Salvation: noteworthy but not chart-topping.

To recap: We want to squeeze a message into the tiniest space possible. Even if we know the early variables’ values, we can’t infer the later variables’. Applying the polarizer, we split the variables into low-, high-, and middling-entropy flocks. We can guess the value of each low-entropy ui, if we know the foregoing u’s.
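The smallest polarizer acts on just two variables, and we can watch it work in a few lines of Python. Take two independent bits, each equal to 1 with probability p (my choice of p = 0.1 is purely illustrative), and map them to u1 = x1 XOR x2 and u2 = x2. The first output becomes more uncertain than a source bit, the second (given the first) less, while the total entropy is conserved; iterating this transform drives the outputs toward the all-or-nothing extremes the polar code exploits.

```python
import math

def h(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

p = 0.1  # illustrative bias of each source bit

# Two i.i.d. bits x1, x2; the two-variable polar transform sends them
# to u1 = x1 XOR x2 and u2 = x2.
joint = {}  # (u1, u2) -> probability
for x1 in (0, 1):
    for x2 in (0, 1):
        pr = (p if x1 else 1 - p) * (p if x2 else 1 - p)
        key = (x1 ^ x2, x2)
        joint[key] = joint.get(key, 0.0) + pr

p_u1 = [sum(v for (u1, _), v in joint.items() if u1 == b) for b in (0, 1)]
h_x = h([1 - p, p])                              # entropy of one source bit
h_u1 = h(p_u1)                                   # entropy of the first output
h_u2_given_u1 = h(list(joint.values())) - h_u1   # H(U2|U1) = H(U1,U2) - H(U1)

# Polarization: u1 is more uncertain than a source bit, u2 (given u1)
# is less uncertain, and the invertible transform conserves total entropy.
print(h_u1, h_x, h_u2_given_u1)
```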

Almost finished!

In your camera-size travel guide, transcribe the high-entropy ui’s. These ui’s suggest the values of the low-entropy ui’s. When you want to decode the guide, guess the low-entropy ui’s. Then reverse the polarizer to reconstruct much of the original text.

The longer the original travel guide, the fewer errors you make while decoding, and the smaller the ratio of the encoded guide’s length to the original guide’s length. That ratio shrinks, as the guide’s length grows, to H. You’ve compressed a message maximally efficiently. As the Swiss say: Glückwünsche.

How does compression relate to QI? Quantum states form messages. Polar codes, ETH scientists have shown, compress quantum messages maximally efficiently. Researchers are exploring decoding strategies and relationships among (quantum) polar codes. With their help, Shannon-coded travel guides might fit not only in my camera bag, but also on the tip of my water bottle.

Should you need a Zürich travel guide, I recommend Grossmünster Church. Not only does the name fulfill your daily dose of umlauts. Not only did Ulrich Zwingli channel the Protestant Reformation into Switzerland there. Climbing a church tower affords a panorama of Zürich. After oohing over the hills and ahhing over the lake, you can shift your gaze toward ETH. The worldview being built there bewitches as much as the vista from any tower.

A tower with a view.

With gratitude to ETH’s QI-theory group (particularly to Renato Renner) for its hospitality. And for its travel advice. With gratitude to David Sutter for his explanations and patience.

The author and her neue Freunde.

# The cost and yield of moving from (quantum) state to (quantum) state

In ten days, I’d move from Florida, where I’d spent the summer with family, to Caltech. Unfolded boxes leaned against my dresser, and suitcases yawned on the floor. I was working on a paper. Even if I’d turned around from my desk, I wouldn’t have seen the stacked books and folded sheets. I’d have seen Lorenz curves, because I’d drawn Lorenz curves all week, and the curves seemed imprinted on my eyeballs.

Using Lorenz curves, we illustrate how much we know about a quantum state. Say you have an electron, you’ll measure it using a magnet, and you can’t predict any measurement’s outcome. Whether you orient the magnet up-and-down, left-to-right, etc., you haven’t a clue what number you’ll read out. We represent this electron’s state by a straight line from (0, 0) to (1, 1).

Say you know the electron’s state. Say you know that, if you orient the magnet up-and-down, you’ll read out +1. This state, we call “pure.” We represent it by a tented curve.

The more you know about a state, the more the state’s Lorenz curve deviates from the straight line.

If Curve A fails to dip below Curve B, we know at least as much about State A as about State B. We can transform State A into State B by manipulating and/or discarding information.
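Those curves fit in a few lines of code. Represent a state by its spectrum (its list of eigenvalues); the Lorenz curve plots cumulative sums of the eigenvalues sorted in decreasing order, and “A’s curve never dips below B’s” becomes a majorization check. This is a minimal sketch, and the function names are mine, not standard-library calls:

```python
def lorenz(spectrum):
    """Heights of the Lorenz curve at x = 0, 1/d, 2/d, ..., 1."""
    eigs = sorted(spectrum, reverse=True)
    heights = [0.0]
    for e in eigs:
        heights.append(heights[-1] + e)
    return heights

def can_transform(a, b):
    """A -> B by manipulating and/or discarding information iff A's
    Lorenz curve never dips below B's (a majorizes b)."""
    return all(x >= y - 1e-12 for x, y in zip(lorenz(a), lorenz(b)))

unknown = [0.5, 0.5]   # total ignorance: the straight line
pure    = [1.0, 0.0]   # full knowledge: the tented curve

print(can_transform(pure, unknown))   # True: just discard information
print(can_transform(unknown, pure))   # False: we know too little
```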

By the time I’d drawn those figures, I’d listed the items that needed packing. A coauthor had moved from North America to Europe during the same time. If he could hop continents without impeding the paper, I could hop states. I unzipped the suitcases, packed a box, and returned to my desk.

Say Curve A dips below Curve B. We know too little about State A to transform it into State B. But we might combine State A with a state we know lots about. The latter state, C, might be pure. We can have so much information about A + C that the amalgam can turn into B.

What’s the least amount of information we need about C to ensure that A + C can turn into B? That number, we call the “cost of transforming State A into State B.”

We call it that usually. But late in the evening, after I’d miscalculated two transformation costs and deleted four curves, days before my flight, I didn’t type the cost’s name into emails to coauthors. I typed “the cost of turning A into B” or “the cost of moving from state to state.”

# The million dollar conjecture you’ve never heard of…

Curating a blog like this one and writing about imaginary stuff like Fermat’s Lost Theorem means that you get the occasional comment of the form: I have a really short proof of a famous open problem in math. Can you check it for me? Usually, the answer is no. But, about a week ago, a reader of the blog who had caught an omission in a proof contained within one of my previous posts asked me to do just that: check out a short proof of Beal’s Conjecture. Many of you probably haven’t heard of billionaire Mr. Beal and his \$1,000,000 conjecture, so here it is:

Let $a,b,c$ be positive integers and let $x,y,z > 2$ be integers satisfying $a^x+b^y=c^z$. Then $\gcd(a,b,c) > 1$; that is, the numbers $a,b,c$ have a common factor.

After reading the “short proof” of the conjecture, I realized that this was a pretty cool conjecture! Also, the short proof was wrong, though the ideas within it were non-trivial. But partial progress had been made by others, so I thought I would take a crack at it on the 10-hour flight from Athens to Philadelphia. In particular, I convinced myself that if I could prove the conjecture for all even exponents $x,y,z$, then I could claim half the prize. Well, I didn’t quite get there, but I made some progress using knowledge found in these two blog posts: Redemption: Part I and Fermat’s Lost Theorem. In particular, one can show that the conjecture holds true for $x=y=2n$ and $z = 2k$, for $n \ge 3, k \ge 1$. Moreover, the general case of even exponents can be reduced to the cases $x=y=p \ge 3$ and $y=z=q \ge 3$, for $p,q$ primes. Which makes one wonder whether the general case admits a similar reduction, where two of the three exponents can be assumed equal.

The proof is pretty trivial, since most of the heavy lifting is done by Fermat’s Last Theorem (which itself has a rather elegant, short proof I wanted to post in the margins – alas, WordPress has a no-writing-on-margins policy). Moreover, it turns out that the general case of even exponents follows from a combination of results obtained by others over the past two decades (see the Partial Results section of the Wikipedia article on the conjecture linked above – in particular, the (n,n,2) case). So why am I even bothering to write about my efforts? Because it’s math! And math equals magic. Also, in case this proof is not known and in the off chance that some of the ideas can be used in the general case. Okay, here we go…

Proof. The idea is to assume that the numbers $a,b,c$ have no common factor and then reach a contradiction. We begin by noting that $a^{2m}+b^{2n}=c^{2k}$ is equivalent to $(a^m)^2+(b^n)^2=(c^k)^2$. In other words, the triplet $(a^m,b^n,c^k)$ is a Pythagorean triple (the sides of a right triangle), so, assuming without loss of generality that $a^m$ is the even leg, we must have $a^m=2rs$, $b^n=r^2-s^2$, $c^k =r^2+s^2$, for some positive integers $r,s$ with no common factors (otherwise, our assumption that $a,b,c$ have no common factor would be violated). There are two cases to consider now:

Case I: $r$ is even. This implies that $2r=a_0^m$ and $s=a_1^m$, where $a=a_0\cdot a_1$ and $a_0,a_1$ have no factors in common. Moreover, since $b^n=r^2-s^2=(r+s)(r-s)$ and $r,s$ have no common factors, then $r+s,r-s$ have no common factors either (why?) Hence, $r+s = b_0^n, r-s=b_1^n$, where $b=b_0\cdot b_1$ and $b_0,b_1$ have no factors in common. But, $a_0^m = 2r = (r+s)+(r-s)=b_0^n+b_1^n$, implying that $a_0^m=b_0^n+b_1^n$, where $b_0,b_1,a_0$ have no common factors.

Case II: $s$ is even. This implies that $2s=a_1^m$ and $r=a_0^m$, where $a=a_0\cdot a_1$ and $a_0,a_1$ have no factors in common. As in Case I, $r+s = b_0^n, r-s=b_1^n$, where $b=b_0\cdot b_1$ and $b_0,b_1$ have no factors in common. But, $a_1^m = 2s = (r+s)-(r-s)=b_0^n-b_1^n$, implying that $a_1^m+b_1^n=b_0^n$, where $b_0,b_1,a_1$ have no common factors.

We have shown, then, that if Beal’s conjecture holds for the exponents $(x,y,z)=(n,n,m)$ and $(x,y,z)=(m,n,n)$, then it holds for $(x,y,z)=(2m,2n,2k)$, for arbitrary $k \ge 1$. As it turns out, when $m=n$, Beal’s conjecture becomes Fermat’s Last Theorem, implying that the conjecture holds for all exponents $(x,y,z)=(2n,2n,2k)$, with $n\ge 3$ and $k\ge 1$.
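Since this is a conjecture about explicit integers, it is easy (and reassuring) to brute-force a small box of candidates; this sketch is mine, with arbitrary bounds, and of course proves nothing beyond the box it searches:

```python
from math import gcd

# Brute-force sanity check of Beal's conjecture in a small box:
# every solution of a^x + b^y = c^z with exponents >= 3 found here
# should have gcd(a, b, c) > 1.
LIMIT, EXPS = 30, range(3, 6)

powers = {}  # value -> list of (base, exponent) with base**exponent == value
for c in range(1, LIMIT + 1):
    for z in EXPS:
        powers.setdefault(c ** z, []).append((c, z))

solutions = []
for a in range(1, LIMIT + 1):
    for x in EXPS:
        for b in range(a, LIMIT + 1):  # b >= a: addition is symmetric
            for y in EXPS:
                for (c, z) in powers.get(a ** x + b ** y, []):
                    solutions.append((a, x, b, y, c, z))

for a, x, b, y, c, z in solutions:
    print(f"{a}^{x} + {b}^{y} = {c}^{z}, gcd = {gcd(gcd(a, b), c)}")
```

Every hit in this range, such as $2^3 + 2^3 = 2^4$ and $3^3 + 6^3 = 3^5$, indeed carries a common factor.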

Open Problem: Are there any solutions to $a^p+b^p= c\cdot (a+b)^q$, for $a,b,c$ positive integers and primes $p,q\ge 3$?

PS: If you find a mistake in the proof above, please let everyone know in the comments. I would really appreciate it!

# What’s inside a black hole?

I have a multiple choice question for you.

What’s inside a black hole?

(A) An unlimited amount of stuff.
(B) Nothing at all.
(C) A huge but finite amount of stuff, which is also outside the black hole.
(D) None of the above.

The first three answers all seem absurd, boosting the credibility of (D). Yet … at the “Rapid Response Workshop” on black holes I attended last week at the KITP in Santa Barbara (and which continues this week), most participants were advocating some version of (A), (B), or (C), with varying degrees of conviction.

When physicists get together to talk about black holes, someone is bound to draw a cartoon like this one:

Part of a Penrose diagram depicting the causal structure of a black hole spacetime.

I’m sure I’ve drawn and contemplated some version of this diagram hundreds of times over the past 25 years in the privacy of my office, and many times in public discussions (including at least five times during the talk I gave at the KITP). This picture vividly captures the defining property of a black hole, found by solving Einstein’s classical field equations for gravitation: once you go inside there is no way out. Instead you are unavoidably drawn to the dreaded singularity, where known laws of physics break down (and the picture can no longer be trusted). If taken seriously, the picture says that whatever falls into a black hole is gone forever, at least from the perspective of observers who stay outside.

But for nearly 40 years now, we have known that black holes can shed their mass by emitting radiation, and presumably this process continues until the black hole disappears completely. If we choose to, we can maintain the black hole for as long as we please by feeding it new stuff at the same rate that radiation carries energy away. What I mean by option (A) is that the radiation is completely featureless, carrying no information about what kind of stuff fell in. That means we can hide as much information as we please inside a black hole of a given mass.

On the other hand, the beautiful theory of black hole thermodynamics indicates that the entropy of a black hole is determined by its mass. For all other systems we know of besides black holes, the entropy of the system quantifies how much information we can hide in the system. If (A) is the right answer, then black holes would be fundamentally different in this respect, able to hide an unlimited amount of information even though their entropy is finite. Maybe that’s possible, but it would be rather disgusting, a reason to dislike answer (A).
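To put a number on “entropy determined by mass”: the Bekenstein-Hawking formula gives $S/k_B = 4\pi G M^2/\hbar c$, which for a solar-mass black hole works out to roughly $10^{77}$, vastly exceeding the entropy of the star that collapsed to form it. A quick sketch, using standard constants:

```python
import math

# Bekenstein-Hawking entropy S = kB * A * c^3 / (4 * G * hbar), with
# horizon area A = 16 * pi * G^2 * M^2 / c^4, which simplifies to
# S / kB = 4 * pi * G * M^2 / (hbar * c).
G     = 6.674e-11   # m^3 kg^-1 s^-2
hbar  = 1.055e-34   # J s
c     = 2.998e8     # m / s
M_sun = 1.989e30    # kg

def bh_entropy_in_kB(mass):
    return 4 * math.pi * G * mass ** 2 / (hbar * c)

print(f"{bh_entropy_in_kB(M_sun):.2e}")  # ~1e77
```

Finite, enormous, and fixed by the mass alone; that is the tension with answer (A).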

There is another way to argue that (A) is not the right answer, based on what we call AdS/CFT duality. AdS just describes a consistent way to put a black hole in a “bottle,” so we can regard the black hole together with the radiation outside it as a closed system. Now, in gravitation it is crucial to focus on properties of spacetime that do not depend on the observer’s viewpoint; otherwise we can easily get very confused. The best way to be sure we have a solid way of describing things is to pay attention to what happens at the boundary of the spacetime, the walls of the bottle — that’s what CFT refers to. AdS/CFT provides us with tools for describing what happens when a black hole forms and evaporates, phrased entirely in terms of what happens on the walls of the bottle. If we can describe the physics perfectly by sticking to the walls of the bottle, always staying far away from the black hole, there doesn’t seem to be anyplace to hide an unlimited amount of stuff.

At the KITP, both Bill Unruh and Bob Wald argued forcefully for (A). They acknowledge the challenge of understanding the meaning of black hole entropy and of explaining why the AdS/CFT argument is wrong. But neither is willing to disavow the powerful message conveyed by that telling diagram of the black hole spacetime. As Bill said: “There is all that stuff that fell in and it crashed into the singularity and that’s it. Bye-bye.”

Adherents of (B) and (C) like to think about black hole physics from the perspective of an observer who stays outside the black hole. From that viewpoint, they say, the black hole behaves like any other system with a temperature and a finite entropy. Stuff falling in sticks to the black hole’s outer edge and gets rapidly mixed in with other stuff the black hole absorbed previously. For a black hole of a given mass, though, there is a limit to how much stuff it can hold. Eventually, what fell in comes out again, but in a form so highly scrambled as to be nearly unrecognizable.

The (B) and (C) camps differ over what happens to a brave observer who falls into a black hole. According to (C), an observer falling in crosses from the outside to the inside of a black hole peacefully, which poses a puzzle I discussed here. The puzzle arises because an uneventful crossing implies strong quantum entanglement between the region A just inside the black hole and the region B just outside. On the other hand, as information leaks out of a black hole, region B should be strongly entangled with the radiation system R emitted by the black hole long ago. Entanglement can’t be shared, so it does not make sense for B to be entangled with both A and R. What’s going on? Answer (C) resolves the puzzle by positing that A and R are not really different systems, but rather two ways to describe the same system, as I discussed here. That seems pretty crazy, because R could be far, far away from the black hole.
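The statement that entanglement can’t be shared — the monogamy of entanglement — can be made precise in one line. If region $B$ is maximally entangled with $A$, then the joint state of $AB$ is pure, so the full state must factorize:

$\rho_{ABR} = |\varphi\rangle\langle\varphi|_{AB} \otimes \rho_R.$

That leaves $B$ with no correlation with $R$ at all, let alone the strong entanglement that the leaking of information into the radiation would require.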

Answer (B) resolves the puzzle differently, by positing that region A does not actually exist, because the black hole has no interior. An observer who attempts to fall in gets a very rude surprise, striking a seething “firewall” at the last moment before passing to the inside. That seems pretty crazy, because no firewall is predicted by Einstein’s trusty equations, which are normally very successful at describing spacetime geometry.

At the workshop, Don Marolf and Raphael Bousso gave some new arguments supporting (B). Both acknowledge that we still lack a concrete picture of how firewalls are created as black holes form, but Bousso insisted that “It is time to constrain and construct the dynamics of firewalls.” Joe Polchinski emphasized that, while AdS/CFT provides a very satisfactory description of physics outside a black hole, it has not yet been able to tell us enough about the black hole interior to settle whether there are firewalls or not, at least for generic black holes formed from collapsing matter.

Lenny Susskind, Juan Maldacena, Ted Jacobson, and I all offered different perspectives on how (C) could turn out to be the right answer. We all told different stories, but perhaps each of us had at least part of the right answer. I’m not at KITP this week, but there have been further talks supporting (C) by Raju, Nomura, and the Verlindes.

I had a fun week at the KITP. If you watch the videos of the talks, you might get an occasional glimpse of me typing furiously on my laptop. It looks like I’m doing my email, but actually that’s how I take notes, which helps me to pay attention. Every once in a while I was inspired to tweet.

I have felt for a while that ideas from quantum information can help us to grasp the mysteries of quantum gravity, so I appreciated that quantum information concepts came up in many of the talks. Susskind invoked quantum error-correcting codes in discussing how sensitively the state of the Hawking radiation depends on the information it encodes, and Maldacena used tensor networks to explain how to build spacetime geometry from quantum entanglement. Scott Aaronson proposed the appropriate acronym HARD for HAwking Radiation Decoding, and argued (following Harlow and Hayden) that this task is as hard as inverting an injective one-way function, something we don’t expect quantum computers to be able to do.

In the organizational session that launched the meeting, Polchinski remarked regarding firewalls that “Nobody has the slightest idea what is going on,” and Gary Horowitz commented that “I’m still getting over the shock over how little we’ve learned in the past 30 years.” I guess that’s fair. Understanding what’s inside black holes has turned out to be remarkably subtle, making the problem more and more tantalizing. Maybe the current state of confusion regarding black hole information means that we’re on the verge of important discoveries about quantum gravity, or maybe not. In any case, invigorating discussions like those I heard last week are bound to facilitate progress.