# Inflation on the back of an envelope

Last Monday was an exciting day!

After following the BICEP2 announcement via Twitter, I had to board a transcontinental flight, so I had 5 uninterrupted hours to think about what it all meant. Without Internet access or references, and having not thought seriously about inflation for decades, I wanted to reconstruct a few scraps of knowledge needed to interpret the implications of r ~ 0.2.

I did what any physicist would have done … I derived the basic equations without worrying about niceties such as factors of 3 or $2 \pi$. None of what I derived was at all original —  the theory has been known for 30 years — but I’ve decided to turn my in-flight notes into a blog post. Experts may cringe at the crude approximations and overlooked conceptual nuances, not to mention the missing references. But some mathematically literate readers who are curious about the implications of the BICEP2 findings may find these notes helpful. I should emphasize that I am not an expert on this stuff (anymore), and if there are serious errors I hope better informed readers will point them out.

By tradition, careless estimates like these are called “back-of-the-envelope” calculations. There have been times when I have made notes on the back of an envelope, or a napkin or place mat. But in this case I had the presence of mind to bring a notepad with me.

*Notes from a plane ride*

According to inflation theory, a nearly homogeneous scalar field called the inflaton (denoted by $\phi$)  filled the very early universe. The value of $\phi$ varied with time, as determined by a potential function $V(\phi)$. The inflaton rolled slowly for a while, while the dark energy stored in $V(\phi)$ caused the universe to expand exponentially. This rapid cosmic inflation lasted long enough that previously existing inhomogeneities in our currently visible universe were nearly smoothed out. What inhomogeneities remained arose from quantum fluctuations in the inflaton and the spacetime geometry occurring during the inflationary period.

Gradually, the rolling inflaton picked up speed. When its kinetic energy became comparable to its potential energy, inflation ended, and the universe “reheated” — the energy previously stored in the potential $V(\phi)$ was converted to hot radiation, instigating a “hot big bang”. As the universe continued to expand, the radiation cooled. Eventually, the energy density in the universe came to be dominated by cold matter, and the relic fluctuations of the inflaton became perturbations in the matter density. Regions that were more dense than average grew even more dense due to their gravitational pull, eventually collapsing into the galaxies and clusters of galaxies that fill the universe today. Relic fluctuations in the geometry became gravitational waves, which BICEP2 seems to have detected.

Both the density perturbations and the gravitational waves have been detected via their influence on the inhomogeneities in the cosmic microwave background. The 2.726 K photons left over from the big bang have a nearly uniform temperature as we scan across the sky, but there are small deviations from perfect uniformity that have been precisely measured. We won’t worry about the details of how the size of the perturbations is inferred from the data. Our goal is to achieve a crude understanding of how the density perturbations and gravitational waves are related, which is what the BICEP2 results are telling us about. We also won’t worry about the details of the shape of the potential function $V(\phi)$, though it’s very interesting that we might learn a lot about that from the data.

## Exponential expansion

Einstein’s field equations tell us how the rate at which the universe expands during inflation is related to energy density stored in the scalar field potential. If a(t) is the “scale factor” which describes how lengths grow with time, then roughly

$\left(\frac{\dot a}{a}\right)^2 \sim \frac{V}{m_P^2}$.

Here $\dot a$ means the time derivative of the scale factor, and $m_P = 1/\sqrt{8 \pi G} \approx 2.4 \times 10^{18}$ GeV is the Planck scale associated with quantum gravity. (G is Newton’s gravitational constant.) I’ve left out a factor of 3 on purpose, and I used the symbol ~ rather than = to emphasize that we are just trying to get a feel for the order of magnitude of things. I’m using units in which Planck’s constant $\hbar$ and the speed of light c are set to one, so mass, energy, and inverse length (or inverse time) all have the same dimensions. 1 GeV means one billion electron volts, about the mass of a proton.

(To persuade yourself that this is at least roughly the right equation, you should note that a similar equation applies to an expanding spherical ball of radius a(t) with uniform mass density V. But in the case of the ball, the mass density would decrease as the ball expands. The universe is different — it can expand without diluting its mass density, so the rate of expansion $\dot a / a$ does not slow down as the expansion proceeds.)
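To flesh out the ball analogy with one line of Newtonian mechanics: a test mass on the surface of a marginally bound ball of radius $a$ satisfies

$\frac{1}{2}\dot a^2 \sim \frac{G M}{a} \sim G V a^2, \qquad \text{hence} \qquad \left(\frac{\dot a}{a}\right)^2 \sim G V \sim \frac{V}{m_P^2},$

using $M \sim V a^3$ for the mass inside and $G \sim 1/m_P^2$, with the usual factors of $8\pi/3$ dropped.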

During inflation, the scalar field $\phi$ and therefore the potential energy $V(\phi)$ were changing slowly; it’s a good approximation to assume $V$ is constant. Then the solution is

$a(t) \sim a(0) e^{Ht},$

where $H$, the Hubble constant during inflation, is

$H \sim \frac{\sqrt{V}}{m_P}.$

To explain the smoothness of the observed universe, we require at least 50 “e-foldings” of inflation before the universe reheated — that is, inflation should have lasted for a time at least $50 H^{-1}$.
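To get a feel for the numbers, here is a quick numerical sketch (a rough order-of-magnitude check, not a careful calculation), using the energy scale $V^{1/4} \sim 2 \times 10^{16}$ GeV that the post infers below from $r \sim 0.2$:

```python
# Order-of-magnitude sketch: Hubble rate H ~ sqrt(V)/m_P and the minimal
# duration of inflation (50 e-foldings).  Units: hbar = c = 1, energies
# in GeV; all O(1) factors are dropped, as in the text.

m_P = 2.4e18            # Planck scale, GeV
V_quarter = 2e16        # energy scale of inflation, GeV (from r ~ 0.2)
V = V_quarter**4        # potential energy density, GeV^4

H = V**0.5 / m_P        # Hubble constant during inflation, GeV
GeV_inv_to_sec = 6.58e-25           # hbar in GeV*s: converts 1/GeV to seconds
t_min = (50 / H) * GeV_inv_to_sec   # duration of 50 e-foldings, in seconds

print(f"H ~ {H:.1e} GeV, so 50 e-foldings last ~ {t_min:.0e} s")
```

Inflation ends, on this estimate, within a tiny fraction of a second; the exponential growth is what makes such a brief episode so consequential.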

## Slow rolling

During inflation the inflaton $\phi$ rolls slowly, so slowly that friction dominates inertia — this friction results from the cosmic expansion. The speed of rolling $\dot \phi$ is determined by

$H \dot \phi \sim -V'(\phi).$

Here $V'(\phi)$ is the slope of the potential, so the right-hand side is the force exerted by the potential, which matches the frictional force on the left-hand side. The coefficient of $\dot \phi$ has to be $H$ on dimensional grounds. (Here I have blown another factor of 3, but let’s not worry about that.)

## Density perturbations

The trickiest thing we need to understand is how inflation produced the density perturbations which later seeded the formation of galaxies. There are several steps to the argument.

### Quantum fluctuations of the inflaton

As the universe inflates, the inflaton field is subject to quantum fluctuations, where the size of the fluctuation depends on its wavelength. Due to inflation, the wavelength increases rapidly, like $e^{Ht}$, and once the wavelength gets large compared to $H^{-1}$, there isn’t enough time for the fluctuation to wiggle — it gets “frozen in.” Much later, long after the reheating of the universe, the oscillation period of the wave becomes comparable to the age of the universe, and then it can wiggle again. (We say that the fluctuations “cross the horizon” at that stage.) Observations of the anisotropy of the microwave background have determined how big the fluctuations are at the time of horizon crossing. What does inflation theory say about that?

Well, first of all, how big are the fluctuations when they leave the horizon during inflation? Then the wavelength is $H^{-1}$ and the universe is expanding at the rate $H$, so $H$ is the only thing the magnitude of the fluctuations could depend on. Since the field $\phi$ has the same dimensions as $H$, we conclude that fluctuations have magnitude

$\delta \phi \sim H.$

### From inflaton fluctuations to density perturbations

Reheating occurs abruptly when the inflaton field reaches a particular value. Because of the quantum fluctuations, some horizon volumes have larger than average values of $\phi$ and some have smaller than average values; hence different regions reheat at slightly different times. The energy density in regions that reheat earlier starts to be reduced by expansion (“red shifted”) earlier, so these regions have a smaller than average energy density. Likewise, regions that reheat later start to red shift later, and wind up having larger than average density.

When we compare different regions of comparable size, we can find the typical (root-mean-square) fluctuations $\delta t$ in the reheating time, knowing the fluctuations in $\phi$ and the rolling speed $\dot \phi$:

$\delta t \sim \frac{\delta \phi}{\dot \phi} \sim \frac{H}{\dot\phi}.$

Small fractional fluctuations in the scale factor $a$ right after reheating produce comparable small fractional fluctuations in the energy density $\rho$. The expansion rate right after reheating roughly matches the expansion rate $H$ right before reheating, and so we find that the characteristic size of the density perturbations is

$\delta_S\equiv\left(\frac{\delta \rho}{\rho}\right)_{hor} \sim \frac{\delta a}{a} \sim \frac{\dot a}{a} \delta t\sim \frac{H^2}{\dot \phi}.$

The subscript hor serves to remind us that this is the size of density perturbations as they cross the horizon, before they get a chance to grow due to gravitational instabilities. We have found our first important conclusion: The density perturbations have a size determined by the Hubble constant $H$ and the rolling speed $\dot \phi$ of the inflaton, up to a factor of order one which we have not tried to keep track of. Insofar as the Hubble constant and rolling speed change slowly during inflation, these density perturbations have a strength which is nearly independent of the length scale of the perturbation. From here on we will denote this dimensionless scale of the fluctuations by $\delta_S$, where the subscript $S$ stands for “scalar”.

### Perturbations in terms of the potential

Putting together $\dot \phi \sim -V' / H$ and $H^2 \sim V/{m_P}^2$ with our expression for $\delta_S$, we find

$\delta_S^2 \sim \frac{H^4}{\dot\phi^2}\sim \frac{H^6}{V'^2} \sim \frac{1}{{m_P}^6}\frac{V^3}{V'^2}.$

The observed density perturbations are telling us something interesting about the scalar field potential during inflation.
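To see how this could work numerically, here is a small sketch for a toy quadratic potential $V = \frac{1}{2} m^2 \phi^2$; the mass $m$ and the field value $\phi$ below are illustrative choices (tuned by hand so that $\delta_S$ lands near the observed $10^{-5}$), not fitted values:

```python
# Numerical sketch of delta_S^2 ~ V^3 / (m_P^6 V'^2) for a toy quadratic
# inflaton potential V = m^2 phi^2 / 2.  The mass m and the field value
# phi are illustrative choices, tuned so delta_S comes out near 1e-5.

m_P = 2.4e18                       # Planck scale, GeV

def V(phi, m=3e11):                # toy quadratic potential, GeV^4
    return 0.5 * m**2 * phi**2

def dV(phi, h=1e9):                # slope V'(phi) by central difference
    return (V(phi + h) - V(phi - h)) / (2 * h)

phi = 15 * m_P                     # super-Planckian field value (see below)
delta_S2 = V(phi)**3 / (m_P**6 * dV(phi)**2)
print(f"delta_S ~ {delta_S2**0.5:.1e}")
```

Note how small the inflaton mass has to be compared to $m_P$, and how large the field value: both features come up again later in the post.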

## Gravitational waves and the meaning of r

The gravitational field as well as the inflaton field is subject to quantum fluctuations during inflation. We call these tensor fluctuations to distinguish them from the scalar fluctuations in the energy density. The tensor fluctuations have an effect on the microwave anisotropy which can be distinguished in principle from the scalar fluctuations. We’ll just take that for granted here, without worrying about the details of how it’s done.

While a scalar field fluctuation with wavelength $\lambda$ and strength $\delta \phi$ carries energy density $\sim \delta\phi^2 / \lambda^2$, a fluctuation of the dimensionless gravitation field $h$ with wavelength $\lambda$ and strength $\delta h$ carries energy density $\sim m_P^2 \delta h^2 / \lambda^2$. Applying the same dimensional analysis we used to estimate $\delta \phi$ at horizon crossing to the rescaled field $h/m_P$, we estimate the strength $\delta_T$ of the tensor fluctuations as

$\delta_T^2 \sim \frac{H^2}{m_P^2}\sim \frac{V}{m_P^4}.$

From observations of the CMB anisotropy we know that $\delta_S\sim 10^{-5}$, and now BICEP2 claims that the ratio

$r = \frac{\delta_T^2}{\delta_S^2}$

is about $r\sim 0.2$ at an angular scale on the sky of about one degree. The conclusion (being a little more careful about the O(1) factors this time) is

$V^{1/4} \sim 2 \times 10^{16}~GeV \left(\frac{r}{0.2}\right)^{1/4}.$

This is our second important conclusion: The energy density during inflation defines a mass scale, which turns out to be $2 \times 10^{16}~GeV$ for the observed value of $r$. This is a very interesting finding because this mass scale is not so far below the Planck scale, where quantum gravity kicks in, and is in fact pretty close to theoretical estimates of the unification scale in supersymmetric grand unified theories. If this mass scale were a factor of 2 smaller, then $r$ would be smaller by a factor of 16, and hence much harder to detect.
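Running the estimate backwards with the crudest possible numbers (all O(1) factors dropped, $\delta_S \sim 10^{-5}$) reproduces this scale to within a factor of a few:

```python
# Crude check of the inflation energy scale: delta_T^2 ~ V / m_P^4,
# combined with delta_T^2 = r * delta_S^2.  All O(1) factors dropped,
# so this lands within a factor of a few of the more careful 2e16 GeV
# quoted in the text.

m_P = 2.4e18                 # Planck scale, GeV
delta_S = 1e-5               # observed scalar perturbation strength
r = 0.2                      # BICEP2 tensor-to-scalar ratio

delta_T2 = r * delta_S**2            # tensor fluctuation strength squared
V_quarter = delta_T2**0.25 * m_P     # V^{1/4} in GeV
print(f"V^(1/4) ~ {V_quarter:.1e} GeV")
```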

## Rolling, rolling, rolling, …

Using $\delta_S^2 \sim H^4/\dot\phi^2$, we can express $r$ as

$r = \frac{\delta_T^2}{\delta_S^2}\sim \frac{\dot\phi^2}{m_P^2 H^2}.$

It is convenient to measure time in units of the number $N = H t$ of e-foldings of inflation, in terms of which we find

$\frac{1}{m_P^2} \left(\frac{d\phi}{dN}\right)^2\sim r.$

Now, we know that for inflation to explain the smoothness of the universe we need $N$ larger than 50, and if we assume that the inflaton rolls at a roughly constant rate during $N$ e-foldings, we conclude that, while rolling, the change in the inflaton field is

$\frac{\Delta \phi}{m_P} \sim N \sqrt{r}.$

This is our third important conclusion — the inflaton field had to roll a long, long way during inflation — it changed by much more than the Planck scale! Putting in the O(1) factors we have left out reduces the required amount of rolling by about a factor of 3, but we still conclude that the rolling was super-Planckian if $r\sim 0.2$. That’s curious, because when the scalar field strength is super-Planckian, we expect the kind of effective field theory we have been implicitly using to be a poor approximation because quantum gravity corrections are large. One possible way out is that the inflaton might have rolled round and round in a circle instead of in a straight line, so the field strength stayed sub-Planckian even though the distance traveled was super-Planckian.
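Plugging in the numbers makes the point starkly; this is just the arithmetic from the formula above:

```python
# Size of the inflaton excursion, Delta phi ~ N sqrt(r) m_P,
# for the minimal 50 e-foldings and the BICEP2 value r ~ 0.2.
N, r = 50, 0.2
excursion = N * r**0.5      # Delta phi in units of the Planck scale m_P
print(f"Delta phi ~ {excursion:.0f} m_P")
```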

## Spectral tilt

As the inflaton rolls, the potential energy, and hence also the Hubble constant $H$, change during inflation. That means that both the scalar and tensor fluctuations have a strength which is not quite independent of length scale. We can parametrize the scale dependence in terms of how the fluctuations change per e-folding of inflation, which is equivalent to the change per logarithmic length scale and is called the “spectral tilt.”

To keep things simple, let’s suppose that the rate of rolling is constant during inflation, at least over the length scales for which we have data. Using $\delta_S^2 \sim H^4/\dot\phi^2$, and assuming $\dot\phi$ is constant, we estimate the scalar spectral tilt as

$-\frac{1}{\delta_S^2}\frac{d\delta_S^2}{d N} \sim - \frac{4 \dot H}{H^2}.$

Using $\delta_T^2 \sim H^2/m_P^2$, we conclude that the tensor spectral tilt is half as big.

From $H^2 \sim V/m_P^2$, we find

$\dot H \sim \frac{1}{2} \dot \phi \frac{V'}{V} H,$

and using $\dot \phi \sim -V'/H$ we find

$-\frac{1}{\delta_S^2}\frac{d\delta_S^2}{d N} \sim \frac{V'^2}{H^2V}\sim m_P^2\left(\frac{V'}{V}\right)^2\sim \left(\frac{V}{m_P^4}\right)\left(\frac{m_P^6 V'^2}{V^3}\right)\sim \delta_T^2 \delta_S^{-2}\sim r.$

Putting in the numbers more carefully we find a scalar spectral tilt of $r/4$ and a tensor spectral tilt of $r/8$.

This is our last important conclusion: A relatively large value of $r$ means a significant spectral tilt. In fact, even before the BICEP2 results, the CMB anisotropy data already supported a scalar spectral tilt of about 0.04, which suggested something like $r \sim 0.16$. The BICEP2 detection of the tensor fluctuations (if correct) has confirmed that suspicion.

## Summing up

If you have stuck with me this far, and you haven’t seen this stuff before, I hope you’re impressed. Of course, everything I’ve described can be done much more carefully. I’ve tried to convey, though, that the emerging story seems to hold together pretty well. Compared to last week, we have stronger evidence now that inflation occurred, that the mass scale of inflation is high, and that the scalar and tensor fluctuations produced during inflation have been detected. One prediction is that the tensor fluctuations, like the scalar ones, should have a notable spectral tilt, though a lot more data will be needed to pin that down.

I apologize to the experts again, for the sloppiness of these arguments. I hope that I have at least faithfully conveyed some of the spirit of inflation theory in a way that seems somewhat accessible to the uninitiated. And I’m sorry there are no references, but I wasn’t sure which ones to include (and I was too lazy to track them down).

It should also be clear that much can be done to sharpen the confrontation between theory and experiment. A whole lot of fun lies ahead.

Okay, here’s a good reference, a useful review article by Baumann. (I found out about it on Twitter!)

From Baumann’s lectures I learned a convenient notation. The rolling of the inflaton can be characterized by two “potential slow-roll parameters” defined by

$\epsilon = \frac{m_P^2}{2}\left(\frac{V'}{V}\right)^2,\quad \eta = m_P^2\left(\frac{V''}{V}\right).$

Both parameters are small during slow rolling, but the relationship between them depends on the shape of the potential. My crude approximation ($\epsilon = \eta$) would hold for a quadratic potential.
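As a quick check of that last claim: for a quadratic potential $V = \frac{1}{2} m^2 \phi^2$, with $V' = m^2\phi$ and $V'' = m^2$, the two parameters indeed coincide:

$\epsilon = \frac{m_P^2}{2}\left(\frac{m^2\phi}{\frac{1}{2}m^2\phi^2}\right)^2 = \frac{2m_P^2}{\phi^2}, \qquad \eta = m_P^2\,\frac{m^2}{\frac{1}{2}m^2\phi^2} = \frac{2m_P^2}{\phi^2}.$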

We can express the spectral tilt (as I defined it) in terms of these parameters, finding $2\epsilon$ for the tensor tilt, and $6 \epsilon - 2\eta$ for the scalar tilt. To derive these formulas it suffices to know that $\delta_S^2$ is proportional to $V^3/V'^2$, and that $\delta_T^2$ is proportional to $H^2$; we also use

$3H\dot \phi = -V', \quad 3H^2 = V/m_P^2,$

keeping factors of 3 that I left out before. (As a homework exercise, check these formulas for the tensor and scalar tilt.)

It is also easy to see that $r$ is proportional to $\epsilon$; it turns out that $r = 16 \epsilon$. To get that factor of 16 we need more detailed information about the relative size of the tensor and scalar fluctuations than I explained in the post; I can’t think of a handwaving way to derive it.
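Putting these relations together for the toy quadratic case (where $\epsilon = \eta$), at an illustrative field value $\phi = 15\, m_P$, gives numbers in the right ballpark:

```python
# Slow-roll observables for the toy quadratic potential V = m^2 phi^2 / 2,
# for which epsilon = eta = 2 m_P^2 / phi^2, at the illustrative value
# phi = 15 m_P.  Uses r = 16 epsilon and the tilt formulas from the text.

phi_over_mP = 15.0
eps = 2.0 / phi_over_mP**2        # epsilon (= eta for the quadratic case)
r = 16 * eps                      # tensor-to-scalar ratio
tensor_tilt = 2 * eps
scalar_tilt = 6 * eps - 2 * eps   # 6*eps - 2*eta, with eta = eps
print(f"epsilon ~ {eps:.4f}, r ~ {r:.2f}, "
      f"scalar tilt ~ {scalar_tilt:.3f}, tensor tilt ~ {tensor_tilt:.3f}")
```

Note that the scalar tilt comes out twice the tensor tilt here, as it must for any potential with $\epsilon = \eta$.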

We see, though, that the conclusion that the tensor tilt is $r/8$ does not depend on the details of the potential, while the relation between the scalar tilt and $r$ does depend on the details. Nevertheless, it seems fair to claim (as I did) that, already before we knew the BICEP2 results, the measured nonzero scalar spectral tilt indicated a reasonably large value of $r$.

Once again, we’re lucky. On the one hand, it’s good to have a robust prediction (for the tensor tilt). On the other hand, it’s good to have a handle (the scalar tilt) for distinguishing among different inflationary models.

One last point is worth mentioning. We have set Planck’s constant $\hbar$ equal to one so far, but it is easy to put the powers of $\hbar$ back in using dimensional analysis (we’ll continue to assume the speed of light c is one). Since Newton’s constant $G$ has the dimensions of length/energy, and the potential $V$ has the dimensions of energy/volume, while $\hbar$ has the dimensions of energy times length, we see that

$\delta_T^2 \sim \hbar G^2V.$

Thus the production of gravitational waves during inflation is a quantum effect, which would disappear in the limit $\hbar \to 0$. Likewise, the scalar fluctuation strength $\delta_S^2$ is also $O(\hbar)$, and hence also a quantum effect.
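As a quick dimensional sanity check (still with c = 1), the combination $\hbar\, G^2 V$ really is dimensionless, as a fluctuation strength must be:

$[\hbar\, G^2\, V] = (\text{energy}\cdot\text{length})\left(\frac{\text{length}}{\text{energy}}\right)^2\frac{\text{energy}}{\text{length}^3} = 1.$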

Therefore the detection of primordial gravitational waves by BICEP2, if correct, confirms that gravity is quantized just like the other fundamental forces. That shouldn’t be a surprise, but it’s nice to know.

# My 10 biggest thrills

Wow!

*Evidence for gravitational waves produced during cosmic inflation. BICEP2 results for the ratio r of gravitational wave perturbations to density perturbations, and the density perturbation spectral tilt n.*

Like many physicists, I have been reflecting a lot the past few days about the BICEP2 results, trying to put them in context. Other bloggers have been telling you all about it (here, here, and here, for example); what can I possibly add?

The hoopla this week reminds me of other times I have been really excited about scientific advances. And I recall some wise advice I received from Sean Carroll: blog readers like lists.  So here are (in chronological order)…

## My 10 biggest thrills (in science)

This is a very personal list — your results may vary. I’m not saying these are necessarily the most important discoveries of my lifetime (there are conspicuous omissions), just that, as best I can recall, these are the developments that really started my heart pounding at the time.

### 1) The J/Psi from below (1974)

I was a senior at Princeton during the November Revolution. I was too young to appreciate fully what it was all about — having just learned about the Weinberg-Salam model, I thought at first that the Z boson had been discovered. But by stalking the third floor of Jadwin I picked up the buzz. No, it was charm! The discovery of a very narrow charmonium resonance meant we were on the right track in two ways — charm itself confirmed ideas about the electroweak gauge theory, and the narrowness of the resonance fit in with the then recent idea of asymptotic freedom. Theory triumphant!

### 2) A magnetic monopole in Palo Alto (1982)

By 1982 I had been thinking about the magnetic monopoles in grand unified theories for a few years. We thought we understood why no monopoles seem to be around. Sure, monopoles would be copiously produced in the very early universe, but then cosmic inflation would blow them away, diluting their density to a hopelessly undetectable value. Then somebody saw one …. a magnetic monopole obediently passed through Blas Cabrera’s loop of superconducting wire, producing a sudden jump in the persistent current. On Valentine’s Day!

According to then-current theory, the monopole mass was expected to be about 10^16 GeV (10 million billion times heavier than a proton). Had Nature really been so kind as to bless us with this spectacular message from a staggeringly high energy scale? It seemed too good to be true.

It was. Blas never detected another monopole. As far as I know he never understood what glitch had caused the aberrant signal in his device.

### 3) “They’re green!” High-temperature superconductivity (1987)

High-temperature superconductors were discovered in 1986 by Bednorz and Mueller, but I did not pay much attention until Paul Chu found one in early 1987 with a critical temperature of 77 K. Then for a while the critical temperature seemed to be creeping higher and higher on an almost daily basis, eventually topping 130K …. one wondered whether it might go up, up, up forever.

It didn’t. Today 138K still seems to be the record.

My most vivid memory is that David Politzer stormed into my office one day with a big grin. “They’re green!” he squealed. David did not mean that high-temperature superconductors would be good for the environment. He was passing on information he had just learned from Phil Anderson, who happened to be visiting Caltech: Chu’s samples were copper oxides.

### 4) “Now I have mine” Supernova 1987A (1987)

What was most remarkable and satisfying about the 1987 supernova in the nearby Large Magellanic Cloud was that the neutrinos released in a ten second burst during the stellar core collapse were detected here on earth, by gigantic water Cerenkov detectors that had been built to test grand unified theories by looking for proton decay! Not a truly fundamental discovery, but very cool nonetheless.

Soon after it happened some of us were loafing in the Lauritsen seminar room, relishing the good luck that had made the detection possible. Then Feynman piped up: “Tycho Brahe had his supernova, Kepler had his, … and now I have mine!” We were all silent for a few seconds, and then everyone burst out laughing, with Feynman laughing the hardest. It was funny because Feynman was making fun of his own gargantuan ego. Feynman knew a good gag, and I heard him use this line at a few other opportune times thereafter.

### 5) Science by press conference: Cold fusion (1989)

The New York Times was my source for the news that two chemists claimed to have produced nuclear fusion in heavy water using an electrochemical cell on a tabletop. I was interested enough to consult that day with our local nuclear experts Charlie Barnes, Bob McKeown, and Steve Koonin, none of whom believed it. Still, could it be true?

I decided to spend a quiet day in my office, trying to imagine ways to induce nuclear fusion by stuffing deuterium into a palladium electrode. I came up empty.

My interest dimmed when I heard that they had done a “control” experiment using ordinary water, had observed the same excess heat as with heavy water, and remained just as convinced as before that they were observing fusion. Later, Caltech chemist Nate Lewis gave a clear and convincing talk to the campus community debunking the original experiment.

### 6) “The face of God” COBE (1992)

I’m often too skeptical. When I first heard in the early 1980s about proposals to detect the anisotropy in the cosmic microwave background, I doubted it would be possible. The signal is so small! It will be blurred by reionization of the universe! What about the galaxy! What about the dust! Blah, blah, blah, …

The COBE DMR instrument showed it could be done, at least at large angular scales, and set the stage for the spectacular advances in observational cosmology we’ve witnessed over the past 20 years. George Smoot infamously declared that he had glimpsed “the face of God.” Overly dramatic, perhaps, but he was excited! And so was I.

### 7) “83 SNU” Gallex solar neutrinos (1992)

Until 1992 the only neutrinos from the sun ever detected were the relatively high energy neutrinos produced by nuclear reactions involving boron and beryllium — these account for just a tiny fraction of all neutrinos emitted. Fewer than expected were seen, a puzzle that could be resolved if neutrinos have mass and oscillate to another flavor before reaching earth. But it made me uncomfortable that the evidence for solar neutrino oscillations was based on the boron-beryllium side show, and might conceivably be explained just by tweaking the astrophysics of the sun’s core.

The Gallex experiment was the first to detect the lower energy pp neutrinos, the predominant type coming from the sun. The results seemed to confirm that we really did understand the sun and that solar neutrinos really oscillate. (More compelling evidence, from SNO, came later.) I stayed up late the night I heard about the Gallex result, and gave a talk the next day to our particle theory group explaining its significance. The talk title was “83 SNU” — that was the initially reported neutrino flux in Solar Neutrino Units, later revised downward somewhat.

### 8) Awestruck: Shor’s algorithm (1994)

I’ve written before about how Peter Shor’s discovery of an efficient quantum algorithm for factoring numbers changed my life. This came at a pivotal time for me, as the SSC had been cancelled six months earlier, and I was growing pessimistic about the future of particle physics. I realized that observational cosmology would have a bright future, but I sensed that theoretical cosmology would be dominated by data analysis, where I would have little comparative advantage. So I became a quantum informationist, and have not regretted it.

### 9) The Higgs boson at last (2012)

The discovery of the Higgs boson was exciting because we had been waiting soooo long for it to happen. Unable to stream the live feed of the announcement, I followed developments via Twitter. That was the first time I appreciated the potential value of Twitter for scientific communication, and soon after I started to tweet.

### 10) A lucky universe: BICEP2 (2014)

Many past experiences prepared me to appreciate the BICEP2 announcement this past Monday.

I first came to admire Alan Guth’s distinctive clarity of thought in the fall of 1973 when he was the instructor for my classical mechanics course at Princeton (one of the best classes I ever took). I got to know him better in the summer of 1979 when I was a graduate student, and Alan invited me to visit Cornell because we were both interested in magnetic monopole production in the very early universe. Months later Alan realized that cosmic inflation could explain the isotropy and flatness of the universe, as well as the dearth of magnetic monopoles. I recall his first seminar at Harvard explaining his discovery. Steve Weinberg had to leave before the seminar was over, and Alan called as Steve walked out, “I was hoping to hear your reaction.” Steve replied, “My reaction is applause.” We all felt that way.

I was at a wonderful workshop in Cambridge during the summer of 1982, where Alan and others made great progress in understanding the origin of primordial density perturbations produced from quantum fluctuations during inflation (Bardeen, Steinhardt, Turner, Starobinsky, and Hawking were also working on that problem, and they all reached a consensus by the end of the three-week workshop … meanwhile I was thinking about the cosmological implications of axions).

I also met Andrei Linde at that same workshop, my first encounter with his mischievous grin and deadpan wit. (There was a delegation of Russians, who split their time between Xeroxing papers and watching the World Cup on TV.) When Andrei visited Caltech in 1987, I took him to Disneyland, and he had even more fun than my two-year-old daughter.

During my first year at Caltech in 1984, Mark Wise and Larry Abbott told me about their calculations of the gravitational waves produced during inflation, which they used to derive a bound on the characteristic energy scale driving inflation, a few times 10^16 GeV. We mused about whether the signal might turn out to be detectable someday. Would Nature really be so kind as to place that mass scale below the Abbott-Wise bound, yet high enough (above 10^16 GeV) to be detectable? It seemed unlikely.

Last week I caught up with the rumors about the BICEP2 results by scanning my Twitter feed on my iPad, while still lying in bed during the early morning. I immediately leapt up and stumbled around the house in the dark, mumbling to myself over and over again, “Holy Shit! … Holy Shit! …” The dog cast a curious glance my way, then went back to sleep.

Like millions of others, I was frustrated Monday morning, trying to follow the live feed of the discovery announcement broadcast from the hopelessly overtaxed Center for Astrophysics website. I was able to join in the moment, though, by following on Twitter, and I indulged in a few breathless tweets of my own.

Many of Andrew Lange’s friends have been thinking a lot about him these past few days. Andrew had been the leader of the BICEP team (current senior team members John Kovac and Chao-Lin Kuo were Caltech postdocs under Andrew in the mid-2000s). One day in September 2007 he sent me an unexpected email, with the subject heading “the bard of cosmology.” Having discovered on the Internet a poem I had written to introduce a seminar by Craig Hogan, Andrew wrote:

“John,

just came across this – I must have been out of town for the event.

l love it.

it will be posted prominently in our lab today (with “LISA” replaced by “BICEP”), and remain our rallying cry till we detect the B-mode.

have you set it to music yet?

a”

I lifted a couplet from that poem for one of my tweets (while rumors were swirling prior to the official announcement):

We’ll finally know how the cosmos behaves
If we can detect gravitational waves.

Assuming the BICEP2 measurement r ~ 0.2 is really a detection of primordial gravitational waves, we have learned that the characteristic mass scale during inflation is an astonishingly high 2 × 10^16 GeV. Were it a factor of 2 smaller, the signal would have been far too small to detect in current experiments. This time, Nature really is on our side, eagerly revealing secrets about physics at a scale far, far beyond what we will ever explore using particle accelerators. We feel lucky.

We physicists can never quite believe that the equations we scrawl on a notepad actually have something to do with the real universe. You would think we’d be used to that by now, but we’re not — when it happens we’re amazed. In my case, never more so than this time.

The BICEP2 paper, a historic document (if the result holds up), ends just the way it should:

“We dedicate this paper to the memory of Andrew Lange, whom we sorely miss.”

# Building a Computer: Part I

During my senior year in high school, I was fortunate enough to participate in the Intel International Science and Engineering Fair. At the awards banquet I was seated with fourteen others and we each had the choice of ordering either salmon or steak for our respective entrées. I noticed that while taking our fifteen different orders, our waiter did not write anything down. How on Earth was he going to remember what each of us had requested?!

It turns out the answer is intimately related to Problems 2 and 5 in my last post. Did you realize the contestants can always guarantee that at least 999 of them answer correctly on the game show? Here’s how:

The person at the back of the line will look at all 999 hats in front of him. If the number of black hats is odd, he will shout “Black!” If the number of black hats is even, he will shout “White!” From this information, the second person in line can deduce the color of his own hat from the hats in front of him! For example, if Contestant 1 shouts “Black!” and Contestant 2 sees an even number of black hats in front of him, he can deduce that his own hat is black, because the total number of black hats Contestant 1 sees is odd. From the information in Contestant 1’s and Contestant 2’s answers, Contestant 3 can determine his hat’s color via a similar parity argument, and so on. At least \$999 will be donated to charity.

How about Problem 5? One solution requires knowledge of how to represent numbers in binary. Let’s say you owe your friend \$3,761.50, and want to pay him using pennies and dimes, as well as \$1, \$10, \$100, and hypothetical \$1,000 bills. How would you pay him using the fewest monetary tokens? The answer is easy – we all learned about the hundredths’ place, the tenths’ place, the ones’ place, the tens’ place, the hundreds’ place, and so on in elementary school. The digit in the ones’ place tells us how many \$1 bills we need to give our friend, the digit in the tens’ place tells us how many \$10 bills, and so on. Written more suggestively,

$3{,}761.50 = \mathbf{3} \times 10^3 + \mathbf{7} \times 10^2 + \mathbf{6} \times 10^1 + \mathbf{1} \times 10^0 + \mathbf{5} \times 10^{-1} + \mathbf{0} \times 10^{-2}$

Why does the number 10 appear so significant in the above equation? In the above equation, 10 is called the “base.” In base 10, we write every number as the sum of whole multiples of powers of 10. Notice that none of the bold numbers – the digits – can be greater than or equal to the base (10); they must be between 0 and 9. If one of the bold digits was greater than 9, we could just use a monetary token of higher value to reduce the total number of bills and coins we need to repay our friend. This leads to:

Rule #1: The value of each digit must be less than the base.

Could we use a number other than 10 as our base? Let’s try using 2! I can think of at least one way to write 3,761.50 as the sum of multiples of powers of two:

$3{,}761.50 = \mathbf{1} \times 2^{11} + \mathbf{0} \times 2^{10} + \mathbf{0} \times 2^{9} + \mathbf{0} \times 2^{8} + \mathbf{0} \times 2^{7} + \mathbf{0} \times 2^{6} + \mathbf{53} \times 2^{5} + \mathbf{1} \times 2^{4} + \mathbf{0} \times 2^{3} + \mathbf{0} \times 2^{2} + \mathbf{0} \times 2^{1} + \mathbf{1} \times 2^{0} + \mathbf{1} \times 2^{-1}$
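These expansions are easy to check in code. Here is a small sketch (the helper name `digits_in_base` is mine, not from the post) that computes the canonical digit expansion in any base by working in integer units of the smallest token, which sidesteps floating-point trouble:

```python
def digits_in_base(x, base, frac_places):
    """Canonical digit expansion of x in the given base, with frac_places
    digits after the radix point. Requires frac_places >= 1."""
    scaled = round(x * base ** frac_places)  # e.g. dollars -> pennies
    digits = []
    while scaled > 0:
        digits.append(scaled % base)   # peel off the lowest-value digit
        scaled //= base
    digits.reverse()
    # split into (integer-part digits, fractional-part digits)
    return digits[:-frac_places] or [0], digits[-frac_places:]

print(digits_in_base(3761.50, 10, 2))  # ([3, 7, 6, 1], [5, 0])
print(digits_in_base(3761.50, 2, 1))   # ([1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1], [1])

# The noncanonical expansion in the text also adds up,
# but its digit 53 violates Rule #1:
print(1 * 2**11 + 53 * 2**5 + 1 * 2**4 + 1 * 2**0 + 1 * 2**-1)  # 3761.5
```

Run on 3,761.50, it reproduces the decimal digits and the canonical binary expansion; the expansion with the digit 53 sums to the same value while breaking Rule #1.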

# More Brainteasers

As promised, I’m back to tell you more about myself and tickle your brain! I’m terribly sorry for giving such a short description of my background in my last post. If I had to describe myself in another $\leq 5$ words, I’d write: “Breakdancing, bodybuilding physicist… Ladies: single.”

Problem 1: A thousand balloons numbered 1, 2, … , 1000 are arranged in a circle. Starting with balloon 1, every alternating balloon is popped. So balloon 1 is popped, balloon 3 is popped, balloon 5 is popped, and so on until balloon 999 is popped. But the process does not stop here! We keep going around the circle with the remaining balloons and pop every other one. So next balloon 2 is popped, balloon 4 is skipped, balloon 6 is popped, and so on. This process continues until only one balloon remains: which is the last balloon standing?

A thousand red balloons numbered from 1 to 1000. Starting at the gold star, we pop every other balloon while traveling clockwise. Which is the last balloon remaining?
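If you'd like to check your answer, the popping process is easy to simulate directly (a brute-force sketch; `last_balloon` is just an illustrative name):

```python
def last_balloon(n):
    """Simulate the popping: balloon 1 goes first, then every other survivor."""
    balloons = list(range(1, n + 1))
    idx = 0                                # current balloon gets popped
    while len(balloons) > 1:
        balloons.pop(idx)
        idx = (idx + 1) % len(balloons)    # skip the next surviving balloon
    return balloons[0]

print(last_balloon(1000))
```

Each `pop` on a Python list is linear, so this is quadratic overall: wasteful, but instant for a thousand balloons.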

It is of course easy to solve Problem 1 via brute force, but can you think of a more elegant solution? Do you really need to go around the circle $\log(n)$ times? If you get stuck, try working on Problem 2 for a while:

Problem 2: A thousand people stand in single file on a game show. Each person is wearing a hat which is either black or white. Each person can see the hats of all the people in front of them in line but they cannot see their own hat. Starting with the person at the back of the line and progressing forward, the game show host will ask, “What color is your hat?” Each contestant is only permitted to answer “black” or “white.” For each correct answer, the host will donate \$1 to charity. If the contestants are allowed to discuss a strategy before they are lined up and given their hats, how much can they guarantee will be donated to charity?

If you managed to solve Problem 2, Problem 3 should be easy.

Problem 3: Now each person is given a hat which is one of $n$ colors, and is allowed to answer the host’s question by giving any of the $n$ colors. How much can the contestants guarantee will be donated to charity?

Problem 4: You are given ten coin-making machines which produce coins weighing 2 grams each. Each coin-maker can produce infinitely many coins. One of the ten machines, however, is defective and produces coins weighing 1 gram each. You are also given a scientific balance (with a numerical output). How many times must you use the balance to determine which machine is defective? What if the weight of the coins produced by the rogue machine is unknown?

I happen to be very good personal friends with Count von Count. One day as we were walking through Splash Castle, the Count told me he had passed his arithmetic class and was throwing a graduation party! Alas, before he could host the get-together, he needed to solve a problem on injective mappings from powersets to subsets of the natural numbers…

Problem 5: The Count has a thousand bottles of apple juice for his party, but one of the bottles contains spoiled juice! This spoiled juice induces tummy aches roughly a day after being consumed, and the Count wants to avoid lawsuits brought on by the innocent patrons of Sesame Place. Luckily, there is just enough time before the party for ten of the Count’s friends to perform exactly one round of taste-testing, during which they can drink from as many bottles as they please. How can the ten friends determine which bottle is both tummy ache- and lawsuit-inducing? You can assume the Count’s friends have promised not to sue him if they get sick.

Please let me know what you think of these problems in the comments! Too easy? Too hard? Want more? Tell me so!

# Squeezing light using mechanical motion

This post is about generating a special type of light, squeezed light, using a mechanical resonator. But perhaps more importantly, it’s about an experiment (Caltech press release can be found here) that is very close to my heart: an experiment that brings to an end my career as a graduate student at Caltech and the IQIM, while paying homage to nearly four decades of work done by those before me at this institute.

The Quantum Noise of Light

First of all, what is squeezed light? It would be silly of me to imagine that I can provide a clearer and more thorough explanation than what Jeff Kimble gave twenty years ago in Caltech’s Engineering and Science magazine. Instead, I’ll try to present what squeezing is in the context of optomechanics.

Quantization of light makes it noisy. Imagine a steady stream of water hitting a plate, and rolling off of it smoothly. The stream would indeed impart a steady force on the plate, but wouldn’t really cause it to “shake” around much. The plate would sense a steady pressure. This is what the classical theory of light, as proposed by James Clerk Maxwell, predicts. The effect is called radiation pressure. In the early 20th century, a few decades after this prediction, quantum theory came along and told us that “light is made of photons”. More or less, this means that a sufficiently sensitive measurement of the energy, power, or pressure imparted by light will detect “quanta”, as if light were composed of particles. The force felt by a mirror is exactly this sort of measurement. To make sense of this, we can replace that mental image of a stream hitting a plate with one of little raindrops hitting it, where each raindrop is a photon. Since the photons are coming in one at a time, and imparting their momentum all at once in little packets, they generate a new type of noise due to their random arrival times. This is called shot-noise (since the photons act as little “shots”). Since shot-noise is being detected here by the sound it generates due to the pressure imparted by light, we call it “Radiation Pressure Shot-Noise” (RPSN).
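The hallmark of shot noise is Poisson counting statistics: the variance of the number of photons counted in a fixed time window equals the mean. A quick simulation (illustrative only; the arrival rate is made up) shows this:

```python
import random

random.seed(42)

# Photons arriving independently at a constant average rate:
# count how many land in each fixed time bin.
rate = 200.0      # mean photons per unit time (a made-up number)
n_bins = 2000
counts = []
for _ in range(n_bins):
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)   # exponential inter-arrival times
        if t > 1.0:
            break
        n += 1
    counts.append(n)

mean = sum(counts) / n_bins
var = sum((c - mean) ** 2 for c in counts) / n_bins
# For shot noise, variance ~ mean, so the ratio var/mean is close to 1:
print(mean, var / mean)
```

The random arrival times are exactly the "little shots" of the raindrop picture; the ratio `var / mean` hovering near 1 is the statistical fingerprint that distinguishes a photon stream from a smooth classical flow.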

# On the importance of choosing a convenient basis

The benefits of Caltech’s proximity to Hollywood don’t usually trickle down to measly grad students like myself, except on the rare occasions when we befriend the industry’s technical contingent. One of my friends is a computer animator for Disney, which means that she designs algorithms enabling luxuriously flowing hair or trees with realistic lighting or feathers that have gorgeous texture, for movies like Wreck-it Ralph. Empowering computers to efficiently render scenes with these complicated details is trickier than you’d think and it requires sophisticated new mathematics. Fascinating conversations are one of the perks of having friends like this. But so are free trips to Disneyland! A couple nights ago, while standing in line for The Tower of Terror, I asked her what she’s currently working on. She’s very smart, as evidenced by her BS/MS in Computer Science/Mathematics from MIT, but she asked me if I “know about spherical harmonics.” Asking this to an aspiring quantum mechanic is like asking an auto mechanic if they know how to use a monkey wrench. She didn’t know what she was getting herself into!

IQIM, LIGO, Disney

Along with this spherical harmonics conversation, I had a few other incidents last week that hammered home the importance of choosing a convenient basis when solving a scientific problem. First, my girlfriend works on LIGO and she’s currently writing her thesis. LIGO is a huge collaboration involving hundreds of scientists, and naturally, nobody there knows the detailed inner-workings of every subsystem. However, when it comes to writing the overview section of one’s thesis, you need to at least make a good faith attempt to understand the whole behemoth. Anyways, my girlfriend recently asked if I know how the wavelet transform works. This is another example of a convenient basis, one that is particularly suited for analyzing abrupt changes, such as detecting the gravitational waves that would be emitted during the final few seconds of two black holes merging (ring-down). Finally, for the past couple weeks, I’ve been trying to understand entanglement entropy in quantum field theories. Most of the calculations that can be carried out explicitly are for the special subclass of quantum field theories called “conformal field theories,” which in two dimensions have a very convenient ‘basis’, the Virasoro algebra.

So why does a Disney animator care about spherical harmonics? It turns out that every frame that goes into one of Disney’s movies needs to be digitally rendered using a powerful computing cluster. The animated film industry has traded the painstaking process of hand-animators drawing every single frame, for the almost equally time-consuming process of computer clusters generating every frame. It doesn’t look like strong AI will be available in our immediate future, and in the meantime, humans are still much better than computers at detecting patterns and making intuitive judgements about the ‘physical correctness of an image.’ One of the primary advantages of computer animation is that an animator shouldn’t need to shade in every pixel of every frame — some of this burden should fall on computers. Let’s imagine a thought experiment. An animator wants to get the lighting correct for a nighttime indoor shot. They should be able to simply place the moon somewhere out of the shot, so that its glow can penetrate through the windows. They should also be able to choose from a drop down menu and tell the computer that a hand drawn lightbulb is a ‘light source.’ The computer should then figure out how to make all of the shadows and brightness appear physically correct. Another example of a hard problem is that an animator should be able to draw a character, then tell the computer that the hair they drew is ‘hair’, so that as the character moves through scenes, the physics of the hair makes sense. Programming computers to do these things autonomously is harder than it sounds.

In the lighting example, imagine you want to get the lighting correct in a forest shot with complicated pine trees and leaf structures. The computer would need to do the ray-tracing for all of the photons emanating from the different light sources, and then the second-order effects as these photons reflect, and then third-order effects, etc. It’s a tall order to make the scene look accurate to the human eyeball/brain. Instead of doing all of this ray-tracing, it’s helpful to choose a convenient basis in order to dramatically speed up the processing. Instead of the complicated forest example, let’s imagine you are working with a tree from Super Mario Bros. Imagine drawing a sphere somewhere in the middle of this and then defining a ‘height function’, which outputs the ‘elevation’ of the tree foliage over each point on the sphere. I tried to use suggestive language, so that you’d draw an analogy to thinking of Earth’s ‘height function’ as the elevation of mountains and the depths of trenches over the sphere, with sea-level as a baseline. An example of how you could digitize this problem for a tree or for the earth is by breaking up the sphere into a certain number of pixels, maybe one per square meter for the earth ($5 \times 10^{14}$ square meters gives approximately $2^{49}$ pixels), and then associating an integer height value between $-2^{15}$ and $2^{15}$ (16 bits) with each pixel. This would effectively digitize the height map of the earth, keeping track of the elevation to approximately the meter level. But this leaves us with a huge amount of information that we need to store, and then process. We’d have to store a 16-bit height value for each pixel, giving us approximately $2^{49} \times 2^4 = 2^{53}$ bits – about a petabyte. And this is for an easy static problem with only meter resolution! We can store this information much more efficiently using spherical harmonics.

There are many ways to think about spherical harmonics. Basically, they’re functions which map points on the sphere to real numbers $Y_l^m: (\theta,\phi) \mapsto Y_l^m(\theta,\phi)\in\mathbb{R}$, such that they satisfy a few special properties. They are orthogonal: if you multiply two different spherical harmonics together and then integrate over the sphere, you get zero, while if you square one of the functions and integrate over the sphere, you get a finite, nonzero value. They also span the space of all height functions that one could define over the sphere. This means that for a planet with an arbitrarily complicated topography, you would be able to find some weighted combination of different spherical harmonics which perfectly describes that planet’s topography. These are the key properties which make a set of functions a basis: they span and are orthogonal (this is only a heuristic). There is also a natural way to think about the light that hits the tree. We can use the same sphere and simply calculate the light rays as they would hit the ideal sphere. With these two different ‘height functions’, it’s easy to calculate the shadows and brightness inside the tree. You simply convolve the two functions, which is a fast operation on a computer. It also means that if the breeze slightly changes the shape of the tree, or if the sun moves a little bit, then it’s very easy to update the shading. Implicit in what I just said is that using spherical harmonics allows us to store this height map efficiently. I haven’t calculated this on a computer, but it doesn’t seem totally crazy to think that we’d be able to store the topography of the earth to a reasonable accuracy with 100 nonzero coefficients of the spherical harmonics at 64 bits of precision: roughly $2^7 \times 2^6 = 2^{13}$ bits, a minuscule fraction of the raw pixel map. Where do these cost savings come from?
It comes from the fact that the spherical harmonics are a convenient basis, which naturally encode the types of correlations we see in Earth’s topography — if you’re standing at an elevation of 2000m, the area within ten meters is probably at a similar elevation. Cliffs are what break this basis — but are what the wavelet basis was designed to handle.
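To make "orthogonal and normalized" concrete, here is a small numerical check, using only the standard library, for two of the simplest real, $\phi$-independent spherical harmonics, $Y_0^0$ and $Y_2^0$ (a sketch, not production numerics):

```python
import math

def Y00(u):
    """Real spherical harmonic Y_0^0 (constant over the sphere)."""
    return 0.5 / math.sqrt(math.pi)

def Y20(u):
    """Real spherical harmonic Y_2^0, written in terms of u = cos(theta)."""
    return math.sqrt(5.0 / (16.0 * math.pi)) * (3.0 * u * u - 1.0)

def sphere_integral(f, n=200000):
    """Integrate a phi-independent function over the unit sphere.

    With u = cos(theta), the area element is dOmega = 2*pi*du for u in
    [-1, 1]; we use a simple midpoint rule."""
    du = 2.0 / n
    total = 0.0
    for i in range(n):
        u = -1.0 + (i + 0.5) * du
        total += f(u) * du
    return 2.0 * math.pi * total

print(sphere_integral(lambda u: Y00(u) * Y20(u)))  # approximately 0: orthogonal
print(sphere_integral(lambda u: Y20(u) ** 2))      # approximately 1: normalized
```

The same integrals, computed for any pair $Y_l^m, Y_{l'}^{m'}$, give zero unless $(l,m) = (l',m')$; that is exactly what lets you read off expansion coefficients independently of one another.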

I’ve only described a couple bases in this post and I’ve neglected to mention some of the most famous examples! This includes the Fourier basis, which was designed to encode periodic signals, such as music and radio waves. I also have not gone into any detail about the Virasoro algebra, which I mentioned at the beginning of this post, and I’ve been using it heavily for the past few weeks. For the sake of diversity, I’ll spend a few sentences whetting your appetite. Complex analysis is primarily the study of analytic functions. In two dimensions, these analytic functions “preserve angles.” This means that if you have two curves which intersect at a point with angle $\theta$, then after an analytic function maps these curves to their images, also in the complex plane, the angle between the curves will still be $\theta.$ An especially convenient basis for the analytic functions in two dimensions ($\{f: \mathbb{C} \to \mathbb{C}\}$, where $f(z) = \sum_{n=0}^{\infty} a_nz^n$) is given by the set of functions $\{l_n = -z^{n+1}\partial_z\}$. As always, I’m not being exactly precise, but this is a ‘basis’ because we can encode all the information describing an infinitesimal two-dimensional angle-preserving map using these elements. It turns out to have incredibly special properties, including that its quantum cousin yields something called the “central charge” which has deep ramifications in physics, such as being related to the c-theorem. Conformal field theories are fascinating because they describe the physics of phase transitions. Having a convenient basis in two dimensions is a large part of why we’ve been able to make progress in our understanding of two-dimensional phase transitions (more important is that the 2d conformal symmetry group is infinite-dimensional, but that’s outside the scope of this post.)
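For the curious, the computation underlying that last claim is short. Acting on an analytic function $f(z)$, the generators $l_n = -z^{n+1}\partial_z$ close under commutation into what is called the Witt algebra (the Virasoro algebra is its quantum cousin, with an extra term proportional to the central charge). The two minus signs cancel in the product, and antisymmetrizing gives:

```latex
l_m l_n f = z^{m+1}\partial_z\!\left(z^{n+1}\partial_z f\right)
          = (n+1)\,z^{m+n+1}\partial_z f + z^{m+n+2}\partial_z^2 f
\quad\Longrightarrow\quad
[l_m, l_n] = (m-n)\,l_{m+n}.
```

The second-derivative terms cancel in the commutator, so the $l_n$ really do close among themselves, one generator for every integer $n$: an infinite-dimensional algebra.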
Convenient bases are also important for detecting gravitational waves, making incredible movies and striking up nerdy conversations in long lines at Disneyland!

# Monopoles passing through Flatland!

Like many mathematically inclined teenagers, I was charmed when I first read the book Flatland by Edwin Abbott Abbott. It’s a story about a Sphere who visits a two-dimensional world and tries to awaken its inhabitants to the existence of a third dimension. As perceived by Flatlanders, the Sphere is a circle which appears as a point, grows to maximum size, then shrinks and disappears.

My memories of Flatland were aroused as I read a delightful recent paper by Max Metlitski, Charlie Kane, and Matthew Fisher about magnetic monopoles and three-dimensional bosonic topological insulators. To explain why, I’ll need to recall a few elements of the theory of monopoles and of topological insulators, before returning to the connection between the two and why that reminds me of Flatland.

Flatlanders, confined to the two-dimensional surface of a topological insulator, are convinced by a magnetic monopole that a third dimension must exist.

Monopoles

Paul Dirac was no ordinary genius. Aside from formulating relativistic electron theory and predicting the existence of antimatter, Dirac launched the quantum theory of magnetic monopoles in a famous 1931 paper. Dirac envisioned a magnetic monopole as a semi-infinitely long, infinitesimally thin string of magnetic flux, such that the end of the string, where the flux spills out, seems to be a magnetic charge. For this picture to make sense, the string should be invisible. Dirac pointed out that an electron with electric charge e, transported around a string carrying flux $\Phi$, could detect the string (via what later came to be called the Aharonov-Bohm effect) unless the flux is an integer multiple of $2\pi\hbar /e$, where $\hbar$ is the reduced Planck constant. Conversely, in order for the string to be invisible, if a magnetic monopole exists with magnetic charge $g_D = 2\pi\hbar /e$, then all electric charges must be integer multiples of e. Thus the existence of magnetic monopoles (which have never been observed) could explain quantization of electric charge (which has been observed).
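To spell out the invisibility condition: the transported electron acquires an Aharonov-Bohm phase proportional to the enclosed flux, and the string escapes detection exactly when that phase is trivial. Schematically,

```latex
\exp\!\left(\frac{i e \Phi}{\hbar}\right) = 1
\quad\Longrightarrow\quad
\Phi = n \cdot \frac{2\pi\hbar}{e}, \qquad n \in \mathbb{Z}.
```

Read one way, this quantizes the flux of an invisible string; read the other way, a fixed flux $g_D = 2\pi\hbar/e$ forces every electric charge to be an integer multiple of e.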

Captivated by the beauty of his own proposal, Dirac concluded his paper by remarking, “One would be surprised if Nature had made no use of it.”

Our understanding of quantized magnetic monopoles advanced again in 1979 when another extraordinary physicist, Edward Witten, discussed a generalization of Dirac’s quantization condition. Witten noted that the Lagrange density of electrodynamics could contain a term of the form

$\frac{\theta e^2\hbar}{4\pi^2}~\vec{E}\cdot\vec{B},$

where $\vec{E}$ is the electric field and $\vec{B}$ is the magnetic field. This “$\theta$ term” may also be expressed as

$\frac{\theta e^2\hbar}{8\pi^2}~ \partial^\mu\left(\epsilon_{\mu\nu\lambda\sigma}A^\nu\partial^\lambda A^\sigma \right),$

where A is the vector potential, and hence is a total derivative which makes no contribution to the classical field equations of electrodynamics. But Witten realized that it can have important consequences for the quantum properties of magnetic monopoles. Specifically, the $\theta$ term modifies the field momentum conjugate to the vector potential, which becomes

$\vec{E}+\frac{\theta e^2\hbar}{4\pi^2}\vec{B}.$

Because the Gauss law condition satisfied by physical quantum states is altered, for a monopole with magnetic charge $m g_D$, where $g_D$ is Dirac’s minimal charge $2\pi\hbar /e$ and m is an integer, the allowed values of the electric charge become

$q = e\left( n - \frac{\theta m}{2\pi}\right),$

where n is an integer. This spectrum of allowed charges remains invariant if $\theta$ advances by $2\pi$, suggesting that the parameter $\theta$ is actually an angular variable with period $2\pi$. This periodicity of $\theta$ can be readily verified in a theory admitting fermions with the minimal charge e. But if the charged particles are bosons then $\theta$ turns out to be a periodic variable with period $4\pi$ instead.
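To see the $2\pi$ periodicity explicitly, shift $\theta \to \theta + 2\pi$ in the charge spectrum:

```latex
q = e\left(n - \frac{(\theta + 2\pi)\,m}{2\pi}\right)
  = e\left((n-m) - \frac{\theta m}{2\pi}\right),
```

which, after relabeling the integer as $n' = n - m$, is exactly the same set of allowed charges as before.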

That $\theta$ has a different period for a bosonic theory than a fermionic one has an interesting interpretation. As Goldhaber noticed in 1976, dyons carrying both magnetic and electric charge can exhibit statistical transmutation. That is, in a purely bosonic theory, a dyon with magnetic charge $g_D= 2\pi\hbar/e$ and electric charge ne is a fermion if n is an odd integer — when two dyons are exchanged, transport of each dyon’s electric charge in the magnetic field of the other dyon induces a sign change in the wave function. In a fermionic theory the story is different; now we can think of the dyon as a fermionic electric charge bound to a bosonic monopole. There are two canceling contributions to the exchange phase of the dyon, which is therefore a boson for any integer value of n, whether even or odd.

As $\theta$ smoothly increases from 0 to $2\pi$, the statistics (whether bosonic or fermionic) of a dyon remains fixed even as the dyon’s electric charge increases by e. For the bosonic theory with $\theta = 2\pi$, then, dyons with magnetic charge $g_D$ and electric charge ne are bosons for n odd and fermions for n even, the opposite of what happens when $\theta=0$. For the bosonic theory, unlike the fermionic theory, we need to increase $\theta$ by $4\pi$ for the physics of dyons to be fully invariant.

In 1979 Ed Witten was a postdoc at Harvard, where I was a student, though he was visiting CERN for the summer when he wrote his paper about the $\theta$-dependent monopole charge. I always read Ed’s papers carefully, but I gave special scrutiny to this one because magnetic monopoles were a pet interest of mine. At the time, I wondered whether the Witten effect might clarify how to realize the $\theta$ parameter in a lattice gauge theory. But it certainly did not occur to me that the $\theta$-dependent electric charge of a magnetic monopole could have important implications for quantum condensed matter physics. Theoretical breakthroughs often have unexpected consequences, which may take decades to emerge.

Symmetry-protected topological phases

Okay, now let’s talk about topological insulators, a very hot topic in condensed matter physics these days. Actually, a topological insulator is a particular instance of a more general concept called a symmetry-protected topological phase of matter (or SPT phase). Consider a d-dimensional hunk of material with a (d-1)-dimensional boundary. If the material is in an SPT phase, then the physics of the d-dimensional bulk is boring — it’s just an insulator with an energy gap, admitting no low-energy propagating excitations. But the physics of the (d-1)-dimensional edge is exotic and exciting — for example the edge might support “gapless” excitations of arbitrarily low energy which can conduct electricity. The exotica exhibited by the edge is a consequence of a symmetry, and is destroyed if the symmetry is broken either explicitly or spontaneously; that is why we say the phase is “symmetry protected.”

The low-energy edge excitations can be described by a (d-1)-dimensional effective field theory. But for a typical SPT phase, this effective field theory is what we call anomalous, which means that for one reason or another the theory does not really make sense. The anomaly tells us something interesting and important, namely that the (d-1)-dimensional theory cannot be really, truly (d-1) dimensional; it can arise only at the edge of a higher-dimensional system.

This phenomenon, in which the edge does not make sense by itself without the bulk, is nicely illustrated by the integer quantum Hall effect, which occurs in a two-dimensional electron system in a high magnetic field and at low temperature, if the sample is sufficiently clean so that the electrons are highly mobile and rarely scattered by impurities. In this case the relevant symmetry is electron number, or equivalently the electric charge. At the one-dimensional edge of a two-dimensional quantum Hall sample, charge carriers move in only one direction — to the right, say, but not to the left. A theory with such chiral electric charges does not really make sense. One problem is that electric charge is not conserved — an electric field along the edge causes charge to be locally created, which makes the theory inconsistent.

The way the theory resolves this conundrum is quite remarkable. A two-dimensional strip of quantum Hall fluid has two edges, one at the top, the other at the bottom. While the top edge has only right-moving excitations, the bottom edge has only left-moving excitations. When electric charge appears on the top edge, it is simultaneously removed from the bottom edge. Rather miraculously, charge can be conveyed across the bulk from one edge to the other, even though the bulk does not have any low-energy excitations at all.

I first learned about this interplay of edge and bulk physics from a beautiful 1985 paper by Curt Callan and Jeff Harvey. They explained very lucidly how an edge theory with an anomaly and a bulk theory with an anomaly can fit together, with each solving the other’s problems. Curiously, the authors did not mention any connection with the quantum Hall effect, which had been discovered five years earlier, and I didn’t appreciate the connection myself until years later.

Topological insulators

In the case of topological insulators, the symmetries which protect the gapless edge excitations are time-reversal invariance and conserved particle number, i.e. U(1) symmetry. Though the particle number might not be coupled to an electromagnetic gauge field, it is instructive for the purpose of understanding the properties of the symmetry-protected phase to imagine that the U(1) symmetry is gauged, and then to consider the potential anomalies that could afflict this gauge symmetry. The first topological insulators conceived by theorists were envisioned as systems of non-interacting electrons whose properties were relatively easy to understand using band theory. But it was not so clear at first how interactions among the electrons might alter their exotic behavior. The wonderful thing about anomalies is that they are robust with respect to interactions. In many cases we can infer the features of anomalies by studying a theory of non-interacting particles, assured that these features survive even when the particles interact.

As have many previous authors, Metlitski et al. argue that when we couple the conserved particle number to a U(1) gauge field, the effective theory describing the bulk physics of a topological insulator in three dimensions may contain a $\theta$ term. But wait … since the electric field is even under time reversal and the magnetic field is odd, the $\theta$ term is T-odd; under T, $\theta$ is mapped to $-\theta$, so T seems to be violated if $\theta$ has any nonzero value. Except … we have to remember that $\theta$ is really a periodic variable. For a fermionic topological insulator the period is $2\pi$; therefore the theory with $\theta = \pi$ is time reversal invariant; $\theta = \pi$ maps to $\theta = -\pi$ under T, which is equivalent to a rotation of $\theta$ by $2\pi$. For a bosonic topological insulator the period is $4\pi$, which means that $\theta = 2\pi$ is the nontrivial T-invariant value.

If we say that a “trivial” insulator (e.g., the vacuum) has $\theta = 0$, then we may say that a bulk material with $\theta = \pi$ (fermionic case) or $\theta = 2\pi$ (bosonic case) is a “nontrivial” (a.k.a. topological) insulator. At the edge of the sample, where bulk material meets vacuum, $\theta$ must rotate suddenly by $\pi$ (fermions) or by $2\pi$ (bosons). The exotic edge physics is a consequence of this abrupt change in $\theta$.

Monopoles in Flatland

To understand the edge physics, and in particular to grasp how fermionic and bosonic topological insulators differ, Metlitski et al. invite us to imagine a magnetic monopole with magnetic charge $g_D$ passing through the boundary between the bulk and the surrounding vacuum. To the Flatlanders confined to the surface of the bulk sample, the passing monopole induces a sudden change in the magnetic flux through the surface by a single flux quantum $g_D$, which could arise due to a quantum tunneling event. What do the Flatlanders see?

In a fermionic topological insulator, there is a monopole that carries charge e/2 when inside the sample (where $\theta=-\pi$) and charge 0 when outside (where $\theta=0$). Since electric charge is surely conserved in the full three-dimensional theory, the change in the monopole’s charge must be compensated by a corresponding change in the charge residing on the surface. Flatlanders are puzzled to witness a spontaneously arising excitation with charge e/2. This is an anomaly — electric charge conservation is violated, which can only make sense if Flatlanders are confined to a surface in a higher-dimensional world. Though unable to escape their surface world, the Flatlanders can be convinced by the Monopole that an extra dimension must exist.

In a bosonic topological insulator, the story is somewhat different: there is a monopole that carries electric charge 0 when inside the sample (where $\theta=-2\pi$) and charge -e when outside (where $\theta=0$). In this case, though, there are bosonic charge-e particles living on the surface. A monopole can pick up a charged particle as it passes through Flatland, so that its charge is 0 both inside the bulk sample and outside in the vacuum. Flatlanders are happy — electric charge is conserved!
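The charges quoted in both cases follow from the Witten effect, which the post doesn't spell out: in a background with angle $\theta$, a unit-strength monopole carries electric charge $q = e\,(n - \theta/2\pi)$ for some integer $n$. A minimal sketch checking the numbers above (the function name is mine, not from the paper):

```python
import math

E = 1.0  # electron charge, in units where e = 1

def monopole_charge(theta, n):
    """Witten effect: electric charge of a unit monopole, q = e*(n - theta/(2*pi))."""
    return E * (n - theta / (2 * math.pi))

# Fermionic topological insulator: theta = -pi inside the bulk, 0 outside.
inside_f = monopole_charge(-math.pi, 0)   # e/2 inside
outside_f = monopole_charge(0.0, 0)       # 0 in the vacuum
print(inside_f, outside_f)                # 0.5 0.0

# Bosonic topological insulator: theta = -2*pi inside, 0 outside.
inside_b = monopole_charge(-2 * math.pi, -1)  # 0 inside
outside_b = monopole_charge(0.0, -1)          # -e in the vacuum
print(inside_b, outside_b)                    # 0.0 -1.0
```

The same integer $n$ labels the monopole on both sides of the boundary; only $\theta$ jumps, which is why its electric charge must change as it crosses Flatland.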

But hold on … there’s still something wrong. Inside the bulk (where $\theta= -2\pi$) a monopole with electric charge 0 is a fermion, while outside in the vacuum (where $\theta = 0$) it is a boson. In the three-dimensional theory it is not possible for any local process to create an isolated fermion, so if the fermionic monopole becomes a bosonic monopole as it passes through Flatland, it must leave a fermion behind. Flatlanders are puzzled to witness a spontaneously arising fermion. This is an anomaly — conservation of fermion parity is violated, which can only make sense if Flatlanders are confined to a surface in a higher-dimensional world. Once again, the clever residents of Flatland learn from the Monopole about an extra spatial dimension, without ever venturing outside their two-dimensional home.

Topological order gets edgy

This post is already pretty long and I should wrap it up. Before concluding I’ll remark that the theory of symmetry-protected phases has been developing rapidly in recent months.

In particular, a new idea, introduced last fall by Vishwanath and Senthil, has been attracting increasing attention. While in most previously studied SPT phases the unbroken symmetry protects gapless excitations confined to the edge of the sample, Vishwanath and Senthil pointed out another possibility — a gapped edge exhibiting topological order. The surface can support anyons with exotic braiding statistics.

Here, too, anomalies are central to the discussion. While anyons in two-dimensional media are already a much-studied subject, the anyon models that can be realized at the edges of three-dimensional SPT phases are different from anyon models realized in really, truly two-dimensional systems. What’s new are not the braiding properties of the anyons, but rather how the anyons transform under the symmetry. Flatlanders who study the symmetry realization in their gapped two-dimensional world should be able to infer the existence of the three-dimensional bulk.

The pace of discovery picked up this month when four papers appeared simultaneously on the preprint arXiv, by Metlitski-Kane-Fisher, Chen-Fidkowski-Vishwanath, Bonderson-Nayak-Qi, and Wang-Potter-Senthil, all proposing and analyzing models of SPT phases with gapped edges. It remains to be seen, though, whether this physics will be realized in actual materials.

Are we on the edge?

In Flatland, our two-dimensional friend, finally able to perceive the third dimension thanks to the Sphere’s insistent tutelage, begs to enter a world of still higher dimensions, “where thine own intestines, and those of kindred Spheres, will lie exposed to … view.” The Sphere is baffled by the Flatlander’s request, protesting, “There is no such land. The very idea of it is utterly inconceivable.”

Let’s not be so dogmatic as the Sphere. The lessons learned from the quantum Hall effect and the topological insulator have prepared us to take the next step, envisioning our own three-dimensional world as the edge of a higher-dimensional bulk system. The existence of an unseen bulk may be inferred in the future by us edgelings, if experimental explorations of our three-dimensional effective theory reveal anomalies begging for an explanation.

Perhaps we are on the edge … of a great discovery. At least it’s conceivable.

*Disclaimer: The gender politics of Flatland, to put it mildly, is outdated and offensive. I don’t wish to endorse the idea that women are one dimensional! I included the reference to Flatland because the imagery of two-dimensional beings struggling to imagine the third dimension is a perfect fit to the scientific content of this post.

# We are all Wilsonians now

Ken Wilson

Ken Wilson passed away on June 15 at age 77. He changed how we think about physics.

Renormalization theory, first formulated systematically by Freeman Dyson in 1949, cured the flaws of quantum electrodynamics and turned it into a precise computational tool. But the subject seemed magical and mysterious. Many physicists, Dirac prominently among them, questioned whether renormalization rests on a sound foundation.

Wilson changed that.

The renormalization group concept arose in an extraordinary paper by Gell-Mann and Low in 1954. It was embraced by Soviet physicists like Bogoliubov and Landau, and invoked by Landau to challenge the consistency of quantum electrodynamics. But it was an abstruse and inaccessible topic, as is well illustrated by the baffling discussion at the very end of the two-volume textbook by Bjorken and Drell.

Wilson changed that, too.

Ken Wilson turned renormalization upside down. Dyson and others had worried about the “ultraviolet divergences” occurring in Feynman diagrams. They introduced an artificial cutoff on integrations over the momenta of virtual particles, then tried to show that all the dependence on the cutoff can be eliminated by expressing the results of computations in terms of experimentally accessible quantities. It required great combinatorial agility to show this trick works in electrodynamics. In other theories, notably including general relativity, it doesn’t work.

Wilson adopted an alternative viewpoint. Take the short-distance cutoff seriously, he said, regarding it as part of the physical formulation of the field theory. Now ask what physics looks like at distances much larger than the cutoff. Wilson imagined letting the short-distance cutoff grow, while simultaneously adjusting the theory to preserve its low-energy predictions. This procedure sounds complicated, but Wilson discovered something wonderful — for the purpose of computing low-energy processes the theory becomes remarkably simple, completely characterized by just a few (renormalized) parameters. One recovers Dyson’s results plus much more, while also acquiring a rich and visually arresting physical picture of what is going on.
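Wilson's procedure can be made concrete in a toy model not discussed here: decimating every other spin of the one-dimensional Ising chain yields an exact recursion $\tanh K' = \tanh^2 K$ for the dimensionless coupling $K = J/k_BT$, and iterating the map flows any finite coupling toward the trivial disordered fixed point at $K = 0$. A minimal sketch (function name is my own):

```python
import math

def decimate(K):
    """One RG step for the 1D Ising chain: sum over every other spin.
    The exact recursion is tanh(K') = tanh(K)**2."""
    return math.atanh(math.tanh(K) ** 2)

# Follow the flow starting from a moderately strong coupling.
K = 2.0
flow = [K]
for _ in range(8):
    K = decimate(K)
    flow.append(K)

print(flow)  # the coupling shrinks monotonically toward the K = 0 fixed point
```

The long-distance theory is characterized by a single running coupling, precisely the kind of simplification Wilson discovered; the absence of any other fixed point is the RG statement that the 1D Ising model has no finite-temperature phase transition.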

When I started graduate school in 1975, Wilson, not yet 40, was already a legend. Even Sidney Coleman, for me the paragon of razor sharp intellect, seemed to regard Wilson with awe. (They had been contemporaries at Caltech, both students of Murray Gell-Mann.) It enhanced the legend that Wilson had been notoriously slow to publish. He spent years pondering the foundations of quantum field theory before finally unleashing a torrent of revolutionary papers in the early 70s. Cornell had the wisdom to grant him tenure despite his unusually low productivity during the 60s.

As a student, I spent countless hours struggling through Wilson’s great papers, some of which were quite difficult. One introduced me to the operator product expansion, which became a workhorse of high-energy scattering theory and the foundation of conformal field theory. Another considered all the possible ways that renormalization group fixed points could control the high-energy behavior of the strong interactions. Conspicuously missing from the discussion was what turned out to be the correct idea — asymptotic freedom. Wilson had not overlooked this possibility; instead he “proved” it to be impossible. The proof contains a subtle error. Wilson analyzed charge renormalization invoking both Lorentz covariance and positivity of the Hilbert space metric, forgetting that gauge theories admit no gauge choice with both properties. Even Ken Wilson made mistakes.

Wilson also formulated the strong-coupling expansion of lattice gauge theory, and soon after pioneered the Euclidean Monte Carlo method for computing the quantitative non-perturbative predictions of quantum chromodynamics, which remains today an extremely active and successful program. But of the papers by Wilson I read while in graduate school, the most exciting by far was this one about the renormalization group. Toward the end of the paper Wilson discussed how to formulate the notion of the “continuum limit” of a field theory with a cutoff. Removing the short-distance cutoff is equivalent to taking the limit in which the correlation length (the inverse of the renormalized mass) is infinitely long compared to the cutoff — the continuum limit is a second-order phase transition. Wilson had finally found the right answer to the decades-old question, “What is quantum field theory?” And after reading his paper, I knew the answer, too! This Wilsonian viewpoint led to further deep insights mentioned in the paper, for example that an interacting self-coupled scalar field theory is unlikely to exist (i.e. have a continuum limit) in four spacetime dimensions.

Wilson’s mastery of quantum field theory led him to another crucial insight in the 1970s which has profoundly influenced physics in the decades since — he denigrated elementary scalar fields as unnatural. I learned about this powerful idea from an inspiring 1979 paper not by Wilson, but by Lenny Susskind. That paper includes a telltale acknowledgment: “I would like to thank K. Wilson for explaining the reasons why scalar fields require unnatural adjustments of bare constants.”

Susskind, channeling Wilson, clearly explains a glaring flaw in the standard model of particle physics — ensuring that the Higgs boson mass is much lighter than the Planck (i.e., cutoff) scale requires an exquisitely careful tuning of the theory’s bare parameters. Susskind proposed to banish the Higgs boson in favor of Technicolor, a new strong interaction responsible for breaking the electroweak gauge symmetry, an idea I found compelling at the time. Technicolor fell into disfavor because it turned out to be hard to build fully realistic models, but Wilson’s complaint about elementary scalars continued to drive the quest for new physics beyond the standard model, and in particular bolstered the hope that low-energy supersymmetry (which eases the fine tuning problem) will be discovered at the Large Hadron Collider. Both dark energy (another fine tuning problem) and the absence so far of new physics beyond the Higgs boson at the LHC are prompting some soul searching about whether naturalness is really a reliable criterion for evaluating success in physical theories. Could Wilson have steered us wrong?
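The severity of the tuning is easy to estimate: quantum corrections to a scalar mass-squared are of order the cutoff squared, so keeping the physical Higgs mass near 125 GeV with a Planck-scale cutoff requires the bare parameters to cancel to one part in $(m_H/M_{\rm Pl})^2 \sim 10^{-34}$. A back-of-the-envelope sketch (the numerical values are standard, not taken from the post):

```python
m_higgs = 125.0     # GeV, measured Higgs boson mass
m_planck = 1.22e19  # GeV, Planck scale, a natural ultraviolet cutoff

# The bare mass-squared and its cutoff-scale correction must cancel
# to this fractional accuracy to leave a light Higgs behind:
tuning = (m_higgs / m_planck) ** 2
print(f"required cancellation: one part in {1 / tuning:.1e}")
```

This one-line arithmetic is the "unnatural adjustment of bare constants" Wilson warned Susskind about.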

Wilson’s great legacy is that we now regard nearly every quantum field theory as an effective field theory. We don’t demand or expect that the theory will continue working at arbitrarily short distances. At some stage it will break down and be replaced by a more fundamental description. This viewpoint is now so deeply ingrained in how we do physics that today’s students may be surprised to hear it was not always so. More than anyone else, we have Ken Wilson to thank for this indispensable wisdom. Few ideas have changed physics so much.

# Quantum Matter Animated!

by Jorge Cham

What does it mean for something to be Quantum? I have to confess, I don’t know. My Ph.D. was in Robotics and Kinematics, so my neurons are deeply trained to think in terms of classical dynamics. I asked my siblings (two engineers and one architect) what comes to mind for them when they hear the word Quantum, what they remember from college physics, and here is what they said:

- “Quantum Leap!” (the late 80’s TV show)

- “Quantum of Solace!” (the James Bond movie which, incidentally, was filmed in my home country of Panama, even though the movie was set in Bolivia)

- “I don’t remember anything I learned in college”

- “Light acting as a particle instead of a wave?”

The third answer came from my sister, who went to MIT. The fourth came from my brother, who went to Stanford (+1 point for Stanford!).

I also asked my spouse what comes to mind for her. She said, “Quantum Computing: it’s the next big advance in computers. Transistors the size of atoms.” Clearly, I married someone smarter than me (she also went to Stanford). When I asked if she knew how they worked, she said, “I don’t know how it works.” She also said, “Quantum is related to how time moves more slowly as you approach the speed of light, right?” Nice try, but that’s Relativity (-1 point for Stanford!).

I think the word Quantum has a special power in our collective consciousness. It’s used to convey science-iness, technology, the weirdness of the Physical world. If you Google “Quantum”, most of the top hits are for technology companies that have nothing to do with Quantum Physics (including Quantum Fishing Tackles. I suppose that half the time, you pull up a dead fish).

It’s one of those words that a lot of people have heard of, but very few really understand what it means. Which is why I was excited when Spiros Michalakis and IQIM approached me to produce a series of animations that explore and explain Quantum Information and Matter. Like my previous videos (The Higgs Boson, Dark Matter, Exoplanets), I’d have the chance to interview experts in this field and use their expertise and their voices to learn and to help others learn what amazing things lie just around the corner, beyond our classical understanding of the Universe.

This will be a big Leap for me (I’m trying to avoid the obvious pun), and a journey of exploration. The first installment goes live today, and you can watch it below. Like Schrödinger’s box, I don’t know what we’ll discover with these videos, but I know there are exciting possibilities inside. This is also going to be a BIG challenge. Understanding and putting Quantum concepts in visual form will be hard. I mean, Hair-pulling hard. Fortunately, I’ve discovered there’s a remedy for that.

Watch the first installment of this series:

Jorge Cham is the creator of Piled Higher and Deeper (www.phdcomics.com).

CREDITS:

Featuring: Amir Safavi-Naeini and Oskar Painter http://copilot.caltech.edu/

Produced in Partnership with the Institute for Quantum Information and Matter (http://iqim.caltech.edu) at Caltech with funding provided by the National Science Foundation.

Transcription: Noel Dilworth
Thanks to: Spiros Michalakis, John Preskill and Bert Painter

# Entanglement = Wormholes

One of the most enjoyable and inspiring physics papers I have read in recent years is this one by Mark Van Raamsdonk. Building on earlier observations by Maldacena and by Ryu and Takayanagi, Van Raamsdonk proposed that quantum entanglement is the fundamental ingredient underlying spacetime geometry.* Since my first encounter with this provocative paper, I have often mused that it might be a Good Thing for someone to take Van Raamsdonk’s idea really seriously.

Now someone has.

I love wormholes. (Who doesn’t?) Picture two balls, one here on earth, the other in the Andromeda galaxy. It’s a long trip from one ball to the other on the background space, but there’s a shortcut: you can walk into the ball on earth and moments later walk out of the ball in Andromeda. That’s a wormhole.

I’ve mentioned before that John Wheeler was one of my heroes during my formative years. Back in the 1950s, Wheeler held a passionate belief that “everything is geometry,” and one particularly intriguing idea he called “charge without charge.” There are no pointlike electric charges, Wheeler proclaimed; rather, electric field lines can thread the mouth of a wormhole. What looks to you like an electron is actually a tiny wormhole mouth. If you were small enough, you could dive inside the electron and emerge from a positron far away. In my undergraduate daydreams, I wished this idea could be true.

But later I found out more about wormholes, and learned about “topological censorship.” It turns out that if energy is nonnegative, Einstein’s gravitational field equations prevent you from traversing a wormhole — the throat always pinches off (or becomes infinitely long) before you get to the other side. It has sometimes been suggested that quantum effects might help to hold the throat open (which sounds like a good idea for a movie), but today we’ll assume that wormholes are never traversable no matter what you do.

Love in a wormhole throat: Alice and Bob are in different galaxies, but each lives near a black hole, and their black holes are connected by a wormhole. If both jump into their black holes, they can enjoy each other’s company for a while before meeting a tragic end.

Are wormholes any fun if we can never traverse them? The answer might be yes if two black holes are connected by a wormhole. Then Alice on earth and Bob in Andromeda can get together quickly if each jumps into a nearby black hole. For solar-mass black holes, Alice and Bob will have only 10 microseconds to get acquainted before meeting their doom at the singularity. But if the black holes are big enough, Alice and Bob might have a fulfilling relationship before their tragic end.
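The "10 microseconds" is the order of magnitude of the longest proper time a freely falling observer can survive inside a Schwarzschild black hole, $\pi GM/c^3$ from horizon to singularity. A quick estimate with standard constants (the helper function is mine):

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2, Newton's constant
c = 2.998e8       # m/s, speed of light
M_sun = 1.989e30  # kg, one solar mass

def time_to_singularity(M):
    """Maximum proper time from horizon to singularity, pi*G*M/c**3."""
    return math.pi * G * M / c ** 3

t_sun = time_to_singularity(M_sun)
print(f"solar-mass black hole: {t_sun * 1e6:.0f} microseconds")

# A supermassive black hole is far more hospitable:
t_big = time_to_singularity(1e9 * M_sun)
print(f"10^9 solar masses: {t_big / 3600:.1f} hours")
```

The survival time scales linearly with the mass, which is why Alice and Bob should arrange to meet inside the biggest black hole they can find.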

This observation is exploited in a recent paper by Juan Maldacena and Lenny Susskind (MS) in which they reconsider the AMPS puzzle (named for Almheiri, Marolf, Polchinski, and Sully). I wrote about this puzzle before, so I won’t go through the whole story again. Here’s the short version: while classical correlations can easily be shared by many parties, quantum correlations are harder to share. If Bob is highly entangled with Alice, that limits his ability to entangle with Carrie, and if he entangles with Carrie instead he can’t entangle with Alice. Hence we say that entanglement is “monogamous.” Now, if, as most of us are inclined to believe, information is “scrambled” but not destroyed by an evaporating black hole, then the radiation emitted by an old black hole today should be highly entangled with radiation emitted a long time ago. And if, as most of us are inclined to believe, nothing unusual happens (at least not right away) to an observer who crosses the event horizon of a black hole, then the radiation emitted today should be highly entangled with stuff that is still inside the black hole. But we can’t have it both ways without violating the monogamy of entanglement!
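Monogamy is easy to see in a three-qubit example of my own (not taken from the papers): if qubits A and B form a maximally entangled Bell pair, then the joint state of B with any third qubit C necessarily factorizes, so B carries no correlations with C at all.

```python
import numpy as np

# Qubits A and B share a Bell pair; qubit C sits in |0>, uncorrelated.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # state of A, B
psi = np.kron(bell, np.array([1.0, 0.0]))           # state of A, B, C

# Density matrix with one tensor index per ket/bra qubit: (a,b,c,d,e,f).
rho = np.outer(psi, psi).reshape([2] * 6)

rho_BC = np.einsum('abcaef->bcef', rho).reshape(4, 4)  # trace out A
rho_B = np.einsum('abcaec->be', rho)                   # trace out A and C
rho_C = np.einsum('abcabf->cf', rho)                   # trace out A and B

# Because B is maximally entangled with A, its state with C factorizes exactly:
print(np.allclose(rho_BC, np.kron(rho_B, rho_C)))  # True
# and B's marginal is maximally mixed, as befits half of a Bell pair:
print(np.allclose(rho_B, np.eye(2) / 2))           # True
```

The AMPS argument runs this logic in reverse: radiation maximally entangled with the early radiation cannot also be entangled with modes behind the horizon.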

The AMPS puzzle invites audacious responses, and AMPS were suitably audacious. They proposed that an old black hole has no interior — a freely falling observer meets her doom right at the horizon rather than at a singularity deep inside.

MS are also audacious, but in a different way. They helpfully summarize their key point succinctly in a simple equation:

ER = EPR

Here, EPR means Einstein-Podolsky-Rosen, whose famous paper highlighted the weirdness of quantum correlations, while ER means Einstein-Rosen (sorry, Podolsky), who discovered wormhole solutions to the Einstein equations. (Both papers were published in 1935.) MS (taking Van Raamsdonk very seriously) propose that whenever any two quantum subsystems are entangled they are connected by a wormhole. In many cases, these wormholes are highly quantum mechanical, but in some cases (where the quantum system under consideration has a weakly coupled “gravitational dual”), the wormhole can have a smooth geometry like the one ER described. That wormholes are not traversable is important for the consistency of ER = EPR: just as Alice cannot use their shared entanglement to send a message to Bob instantaneously, so she is unable to send Bob a message through their shared wormhole.

AMPS imagined that Alice could distill qubit C from the black hole’s early radiation and carry it back to the black hole, successfully verifying its entanglement with another qubit B distilled from the recent radiation. Monogamy then ensures that qubit B cannot be entangled with qubit A behind the horizon. Hence when Alice falls through the horizon she will not observe the quiescent vacuum state in which A and B are entangled; instead she encounters a high-energy particle. MS agree with this conclusion.

AMPS go on to say that Alice’s actions before entering the black hole could not have created that energetic particle; it must have been there all along, one of many such particles constituting a seething firewall.

Here MS disagree. They argue that the excitation encountered by Alice as she crosses the horizon was actually created by Alice herself when she interacted with qubit C. How could Alice’s actions, executed far, far away from the black hole, dramatically affect the state of the black hole’s interior? Because C and A are connected by a wormhole!

The ER = EPR conjecture seems to allow us to view the early radiation with which the black hole is entangled as a complementary description of the black hole interior. It’s not clear yet whether this picture works in detail, and even if it does there could still be firewalls; maybe in some sense the early radiation is connected to the black hole via a wormhole, yet this wormhole is wildly fluctuating rather than a smooth geometry. Still, MS provide a promising new perspective on a deep problem.

As physicists we often rely on our sense of smell in judging scientific ideas, and earlier proposed resolutions of the AMPS puzzle (like firewalls) did not smell right. At first whiff, ER = EPR may smell fresh and sweet, but it will have to ripen on the shelf for a while. If this idea is on the right track, there should be much more to say about it. For now, wormhole lovers can relish the possibilities.

Eventually, Wheeler discarded “everything is geometry” in favor of an ostensibly deeper idea: “everything is information.” It would be a fitting vindication of Wheeler’s vision if everything in the universe, including wormholes, is made of quantum correlations.

*Update: Commenter JM reminded me to mention Brian Swingle’s beautiful 2009 paper, which preceded Van Raamsdonk’s and proposed a far-reaching connection between quantum entanglement and spacetime geometry.