# John Preskill and the dawn of the entanglement frontier

Editor’s Note: John Preskill’s recent election to the National Academy of Sciences generated a lot of enthusiasm among his colleagues and students. In an earlier post today, the famed Stanford theoretical physicist Leonard Susskind paid tribute to John’s early contributions to physics, ranging from magnetic monopoles to the quantum mechanics of black holes. In this post, Daniel Gottesman, a faculty member at the Perimeter Institute, takes us back to the formative years of the Institute for Quantum Information at Caltech, the precursor to IQIM and a world-renowned incubator for quantum information and quantum computation research. Though John shies away from the spotlight, we at IQIM believe that the integrity of his character and his role as a mentor and catalyst for science are worthy of attention and set a good example for current and future generations of theoretical physicists.

Preskill’s legacy may well be the incredible number of preeminent research scientists in quantum physics he has mentored throughout his extraordinary career.

When someone wins a big award, it has become traditional on this blog for John Preskill to write something about them. The system breaks down, though, when John is the one winning the award. Therefore I’ve been brought in as a pinch hitter (or should it be pinch lionizer?).

The award in this case is that John has been elected to the National Academy of Sciences, along with Charlie Kane and a number of other people who don’t work on quantum information. Lenny Susskind has already written about John’s work on other topics; I will focus on quantum information.

On the research side of quantum information, John is probably best known for his work on fault-tolerant quantum computation, particularly topological fault tolerance. John jumped into the field of quantum computation in 1994 in the wake of Shor’s algorithm, and brought me and some of his other grad students with him. It was obvious from the start that error correction was an important theoretical challenge (emphasized, for instance, by Unruh), so that was one of the things we looked at. We couldn’t figure out how to do it, but some other people did. John and I embarked on a long drawn-out project to get good bounds on the threshold error rate. If you can build a quantum computer with an error rate below the threshold value, you can do arbitrarily large quantum computations. If not, then errors will eventually overwhelm you. Early versions of my project with John suggested that the threshold should be about $10^{-4}$, and the number began floating around (somewhat embarrassingly) as the definitive word on the threshold value. Our attempts to bound the higher-order terms in the computation became rather grotesque, and the project proceeded very slowly until a new approach and the recruitment of Panos Aliferis finally let us finish a paper with a rigorous proof of a slightly lower threshold value.

Meanwhile, John had also been working on topological quantum computation. John has already written about his excitement when Kitaev visited Caltech and talked about the toric code. The two of them, plus Eric Dennis and Andrew Landahl, studied the application of this code for fault tolerance. If you look at the citations of this paper over time, it looks rather … exponential. For a while, topological things were too exotic for most quantum computer people, but over time, the virtues of surface codes have become obvious (apparently high threshold, convenient for two-dimensional architectures). It’s become one of the hot topics in recent years and there are no signs of flagging interest in the community.

John has also made some important contributions to security proofs for quantum key distribution, known to the cognoscenti just by its initials. QKD allows two people (almost invariably named Alice and Bob) to establish a secret key by sending qubits over an insecure channel. If the eavesdropper Eve tries to live up to her name, her measurements of the qubits being transmitted will cause errors revealing her presence. If Alice and Bob don’t detect the presence of Eve, they conclude that she is not listening in (or at any rate hasn’t learned much about the secret key) and therefore they can be confident of security when they later use the secret key to encrypt a secret message. With Peter Shor, John gave a security proof of the best-known QKD protocol, known as the “Shor-Preskill” proof. Sometimes we scientists lack originality in naming. It was not the first proof of security, but earlier ones were rather complicated. The Shor-Preskill proof was conceptually much clearer and made a beautiful connection between the properties of quantum error-correcting codes and QKD. The techniques introduced in their paper got adopted into much later work on quantum cryptography.
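The post keeps the protocol abstract, but the scheme covered by the Shor-Preskill proof is BB84, and Eve's detectability can be illustrated with a purely classical toy simulation. The code below is my own sketch (the function name and parameter choices are illustrative, not from the post): an intercept-resend Eve corrupts about a quarter of the sifted key, which Alice and Bob expose by publicly comparing a random sample of bits.

```python
import random

def bb84_error_rate(n_rounds, eavesdrop, seed=1):
    """Toy BB84: Alice sends random bits in random bases, Bob measures
    in random bases, and they keep only rounds where their bases match
    (sifting). A wrong-basis measurement randomizes the bit."""
    rng = random.Random(seed)
    kept = errors = 0
    for _ in range(n_rounds):
        bit = rng.randrange(2)          # Alice's raw key bit
        basis_a = rng.randrange(2)      # Alice's encoding basis
        value, basis = bit, basis_a     # the qubit in flight
        if eavesdrop:                   # intercept-resend attack
            basis_e = rng.randrange(2)
            if basis_e != basis:
                value = rng.randrange(2)  # wrong basis: outcome random
            basis = basis_e               # Eve resends in her own basis
        basis_b = rng.randrange(2)      # Bob's measurement basis
        if basis_b != basis:
            value = rng.randrange(2)
        if basis_b == basis_a:          # sifting: keep matching bases
            kept += 1
            errors += (value != bit)
    return errors / kept

print(bb84_error_rate(20000, eavesdrop=False))  # 0.0: no Eve, no errors
print(bb84_error_rate(20000, eavesdrop=True))   # ~0.25: Eve is revealed
```

The 25% error rate is the standard signature of a full intercept-resend attack; a real eavesdropper can be far subtler, which is why full security proofs like Shor-Preskill's are needed.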

Collaborating with John is always an interesting experience. Sometimes we’ll discuss some idea or topic and it will be clear that John doesn’t fully understand the idea or knows little about the topic. Then, a few days later, we discuss the same subject again and John is an expert, or at least he knows a lot more than I do. I guess this ability to master topics quickly is why he was always able to answer Steve Flammia’s random questions after lunch. And then when it comes time to write the paper … John will do it. It’s not just that he will volunteer to write the first draft — he keeps control of the whole paper and generally won’t let you edit the source, although of course he will incorporate your comments. I think this habit started because of incompatibilities between the TeX editor he was using and any other program, but he maintains it (I believe) to make sure that the paper meets his high standards of presentation quality.

This also explains why John has been so successful as an expositor. His lecture notes for the quantum computation class at Caltech are well known. Despite being incomplete and not available on Amazon, they are probably almost as widely read as the standard textbook by Nielsen and Chuang.

Before IQIM, there was IQI, and before that was QUIC.

He apparently is also good at writing grants. Under his leadership and Jeff Kimble’s, Caltech has become one of the top places for quantum computation. In my last year of graduate school, John and Jeff, along with Steve Koonin, secured the QUIC grant, and all of a sudden Caltech had money for quantum computation. I got a research assistantship and could write my thesis without having to worry about TAing. Postdocs started to come — first Chris Fuchs, then a long stream of illustrious others. The QUIC grant grew into IQI, and that eventually sprouted an M and drew in even more people. When I was a student, John’s group was located in Lauritsen with the particle theory group. We had maybe three grad student offices (and not all the students were working on quantum information), plus John’s office. As the Caltech quantum effort grew, IQI acquired territory in another building, then another, and then moved into a good chunk of the new Annenberg building. Without John’s efforts, the quantum computing program at Caltech would certainly be much smaller and maybe completely lacking a theory side. It’s also unlikely this blog would exist.

The National Academy has now elected John a member, probably more for his research than his Twitter account (@preskill), though I suppose you never know. Anyway, congratulations, John!

-D. Gottesman

# Of magnetic monopoles and fast-scrambling black holes

Editor’s Note: On April 29th, 2014, the National Academy of Sciences announced the new electees to the prestigious organization. This was an especially happy occasion for everyone here at IQIM, since the new members included our very own John Preskill, Richard P. Feynman Professor of Theoretical Physics and regular blogger on this site. A request was sent to Leonard Susskind, a close friend and collaborator of John’s, to take a trip down memory lane and give the rest of us a glimpse of some of John’s early contributions to Physics. John, congratulations from all of us here at IQIM.

John Preskill was elected to the National Academy of Sciences, an event long overdue. Perhaps it took longer than it should have because there is no way to pigeon-hole him; he is a theoretical physicist, and that’s all there is to it.

John has long been one of my heroes in theoretical physics. There is something very special about his work. It has exceptional clarity, it has vision, it has integrity—you can count on it. And sometimes it has another property: it can surprise. The first time I heard his name come up, sometime around 1979, I was not only surprised; I was dismayed. A student whose name I had never heard of had uncovered a serious clash between two things, both of which I deeply wanted to believe in. One was the Big-Bang theory and the other was the discovery of grand unified particle theories. Unification led to the extraordinary prediction that Dirac’s magnetic monopoles must exist, at least in principle. The Big-Bang theory said they must exist in fact. The extreme conditions at the beginning of the universe were exactly what was needed to create loads of monopoles; so many that they would flood the universe with too much mass. John, the unknown graduate student, did a masterful analysis. It left no doubt that something had to give. Cosmology gave. About a year later, inflationary cosmology was discovered by Guth, who was in part motivated by Preskill’s monopole puzzle.

John’s subsequent career as a particle physicist was marked by a number of important insights which often had that surprising quality. The cosmology of the invisible axion was one. Others had to do with very subtle and counterintuitive features of quantum field theory, like the existence of “Alice strings”. In the very distant past, Roger Penrose and I had a peculiar conversation about possible generalizations of the Aharonov-Bohm effect. We speculated on all sorts of things that might happen when something is transported around a string. I think it was Roger who got excited about the possibilities that might result if a topological defect could change gender. Alice strings were not quite that exotic, only the electric charge flips, but nevertheless it was very surprising.

John of course had a long-standing interest in the quantum mechanics of black holes. I will quote a passage from a visionary 1992 review paper, “Do Black Holes Destroy Information?”:

“I conclude that the information loss paradox may well presage a revolution in fundamental physics.”

At that time no one knew the answer to the paradox, although a few of us, including John, thought the answer was that information could not be lost. But almost no one saw the future as clearly as John did. Our paths crossed in 1993 in a very exciting discussion about black holes and information. We were both thinking about the same thing, now called black hole complementarity. We were concerned about quantum cloning if information is carried by Hawking radiation. We thought we knew the answer: it takes too long to retrieve the information to then be able to jump into the black hole and discover the clone. This is probably true, but at that time we had no idea how close a call this might be.

It took until 2007 to properly formulate the problem. Patrick Hayden and John Preskill utterly surprised me, and probably everyone else who had been thinking about black holes, with their now-famous paper “Black Holes as Mirrors.” In a sense, this paper started a revolution in applying the powerful methods of quantum information theory to black holes.

We live in the age of entanglement. From quantum computing to condensed matter theory, to quantum gravity, entanglement is the new watchword. Preskill was in the vanguard of this revolution, but he was also the teacher who made the new concepts available to physicists like myself. We can now speak about entanglement, error correction, fault tolerance, tensor networks and more. The Preskill lectures were the indispensable source of knowledge and insight for us.

Congratulations John. And congratulations NAS.

-L. S.

# Tsar Nikita and His Scientists

Once upon a time, a Russian tsar named Nikita had forty daughters:

Every one from top to toe
Was a captivating creature,
Perfect—but for one lost feature.

So wrote Alexander Pushkin, the 19th-century Shakespeare who revolutionized Russian literature. In a rhyme, Pushkin imagined forty princesses born without “that bit” “[b]etween their legs.” A courier scours the countryside for a witch who can help. By summoning the devil in the woods, she conjures what the princesses lack into a casket. The tsar parcels out the casket’s contents, and everyone rejoices.

“[N]onsense,” Pushkin calls the tale in its penultimate line. A “joke.”

The joke has, nearly two centuries later, become reality. Researchers have grown vaginas in a lab and implanted them into teenage girls. Thanks to a genetic defect, the girls suffered from Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome: Their vaginas and uteruses had failed to grow to maturity or at all. A team at Wake Forest and in Mexico City took samples of the girls’ cells, grew more cells, and combined their harvest with vagina-shaped scaffolds. Early in the 2000s, surgeons implanted the artificial organs into the girls. The patients, the researchers reported in the journal The Lancet last week, function normally.

I don’t usually write about reproductive machinery. But the implants’ resonance with “Tsar Nikita” floored me. Scientists have implanted much of Pushkin’s plot into labs. The sexually deficient girls, the craftsperson, the replacement organs—all appear in “Tsar Nikita” as in The Lancet. In poetry as in science fiction, we read the future.

Though threads of Pushkin’s plot survive, society’s view of the specialist has progressed. “Deep [in] the dark woods” lives Pushkin’s witch. Upon summoning the devil, she locks her cure in a casket. Today’s vagina-implanters star in headlines. The Wall Street Journal highlighted the implants in its front section. Unless the patients’ health degrades, the researchers will likely list last week’s paper high on their CVs and websites.

Much as Dr. Atlántida Raya-Rivera, the paper’s lead author, differs from Pushkin’s witch, the visage of Pushkin’s magic wears the nose and eyebrows of science. When tsars or millennials need medical help, they seek knowledge-keepers: specialists, a fringe of society. Before summoning the devil, the witch “[l]ocked her door . . . Three days passed.” I hide away to calculate and study (though days alone might render me more like the protagonist in another Russian story, Chekhov’s “The Bet”). Just as the witch “stocked up coal,” some students stockpile Red Bull before hitting the library. Some habits, like the archetype of the wise woman, refuse to die.

From a Russian rhyme, the bones of “Tsar Nikita” have evolved into cutting-edge science. Pushkin and the implants highlight how attitudes toward knowledge have changed, offering a lens onto science in culture and onto science culture. No wonder readers call Pushkin “timeless.”

But what would he have rhymed with “Mayer-Rokitansky-Küster-Hauser”?

“Tsar Nikita” has many nuances—messages about censorship, for example—that I didn’t discuss. To the intrigued, I recommend The Queen of Spades: And selected works, translated by Anthony Briggs and published by Pushkin Press.

# IQIM Presents …”my father”

Debaleena Nandi at Caltech

Following the IQIM teaser, which was made to offer a wider perspective on the scientist, to highlight the normalcy behind the perception of brilliance, and to celebrate the common human struggles involved in achieving greatness, we decided to do individual vignettes of some of the characters you saw in the video.

We start with Debaleena Nandi, a grad student in Prof. Jim Eisenstein’s lab, whose journey from Jadavpur University in West Bengal, India, to the graduate school and research facility at the Indian Institute of Science, Bangalore, to Caltech has seen many obstacles. We focus on the essentials of an environment needed to manifest the quest for “the truth,” as Debaleena says. We start with her days as a child, when her double-shift-working father sat by her through the days and nights that she pursued her homework.

She highlights what she feels is the only way to grow: working on what is lacking, developing that missing tool in your skill set, that asset that others might have by birth but that you need to acquire through hard work.

Debaleena’s motto: to realize and face your shortcomings is the only way to achievement.

As we build Debaleena up, we also build up the identity of Caltech through its breathtaking architecture, which oscillates from Spanish to Gothic to modern. Both Debaleena and Caltech are revealed slowly, bit by bit.

This series is about dissecting high achievers, seeing the day-to-day steps, the bit by bit that adds up to the often overwhelming, impressive presence of Caltech’s science. We attempt to break it down into smaller vignettes that help us appreciate the amount of discipline, intent and passion that goes into making cutting-edge researchers.

Presenting the emotional alongside the rational is something this series aspires to achieve. It honors and celebrates human limitations surrounding limitless boundaries, discoveries and possibilities.

Stay tuned for more vignettes in the IQIM Presents “My _______” Series.

But for now, here is the video. Watch, like and share!

(C) Parveen Shah Production 2014

# Inflation on the back of an envelope

Last Monday was an exciting day!

After following the BICEP2 announcement via Twitter, I had to board a transcontinental flight, so I had 5 uninterrupted hours to think about what it all meant. Without Internet access or references, and having not thought seriously about inflation for decades, I wanted to reconstruct a few scraps of knowledge needed to interpret the implications of r ~ 0.2.

I did what any physicist would have done … I derived the basic equations without worrying about niceties such as factors of 3 or $2 \pi$. None of what I derived was at all original —  the theory has been known for 30 years — but I’ve decided to turn my in-flight notes into a blog post. Experts may cringe at the crude approximations and overlooked conceptual nuances, not to mention the missing references. But some mathematically literate readers who are curious about the implications of the BICEP2 findings may find these notes helpful. I should emphasize that I am not an expert on this stuff (anymore), and if there are serious errors I hope better informed readers will point them out.

By tradition, careless estimates like these are called “back-of-the-envelope” calculations. There have been times when I have made notes on the back of an envelope, or a napkin or place mat. But in this case I had the presence of mind to bring a notepad with me.

Notes from a plane ride

According to inflation theory, a nearly homogeneous scalar field called the inflaton (denoted by $\phi$)  filled the very early universe. The value of $\phi$ varied with time, as determined by a potential function $V(\phi)$. The inflaton rolled slowly for a while, while the dark energy stored in $V(\phi)$ caused the universe to expand exponentially. This rapid cosmic inflation lasted long enough that previously existing inhomogeneities in our currently visible universe were nearly smoothed out. What inhomogeneities remained arose from quantum fluctuations in the inflaton and the spacetime geometry occurring during the inflationary period.

Gradually, the rolling inflaton picked up speed. When its kinetic energy became comparable to its potential energy, inflation ended, and the universe “reheated” — the energy previously stored in the potential $V(\phi)$ was converted to hot radiation, instigating a “hot big bang”. As the universe continued to expand, the radiation cooled. Eventually, the energy density in the universe came to be dominated by cold matter, and the relic fluctuations of the inflaton became perturbations in the matter density. Regions that were more dense than average grew even more dense due to their gravitational pull, eventually collapsing into the galaxies and clusters of galaxies that fill the universe today. Relic fluctuations in the geometry became gravitational waves, which BICEP2 seems to have detected.

Both the density perturbations and the gravitational waves have been detected via their influence on the inhomogeneities in the cosmic microwave background. The 2.726 K photons left over from the big bang have a nearly uniform temperature as we scan across the sky, but there are small deviations from perfect uniformity that have been precisely measured. We won’t worry about the details of how the size of the perturbations is inferred from the data. Our goal is to achieve a crude understanding of how the density perturbations and gravitational waves are related, which is what the BICEP2 results are telling us about. We also won’t worry about the details of the shape of the potential function $V(\phi)$, though it’s very interesting that we might learn a lot about that from the data.

Exponential expansion

Einstein’s field equations tell us how the rate at which the universe expands during inflation is related to energy density stored in the scalar field potential. If a(t) is the “scale factor” which describes how lengths grow with time, then roughly

$\left(\frac{\dot a}{a}\right)^2 \sim \frac{V}{m_P^2}$.

Here $\dot a$ means the time derivative of the scale factor, and $m_P = 1/\sqrt{8 \pi G} \approx 2.4 \times 10^{18}$ GeV is the Planck scale associated with quantum gravity. (G is Newton’s gravitational constant.) I’ve left out a factor of 3 on purpose, and I used the symbol ~ rather than = to emphasize that we are just trying to get a feel for the order of magnitude of things. I’m using units in which Planck’s constant $\hbar$ and the speed of light c are set to one, so mass, energy, and inverse length (or inverse time) all have the same dimensions. 1 GeV means one billion electron volts, about the mass of a proton.

(To persuade yourself that this is at least roughly the right equation, you should note that a similar equation applies to an expanding spherical ball of radius a(t) with uniform mass density V. But in the case of the ball, the mass density would decrease as the ball expands. The universe is different — it can expand without diluting its mass density, so the rate of expansion $\dot a / a$ does not slow down as the expansion proceeds.)

During inflation, the scalar field $\phi$ and therefore the potential energy $V(\phi)$ were changing slowly; it’s a good approximation to assume $V$ is constant. Then the solution is

$a(t) \sim a(0) e^{Ht},$

where $H$, the Hubble constant during inflation, is

$H \sim \frac{\sqrt{V}}{m_P}.$

To explain the smoothness of the observed universe, we require at least 50 “e-foldings” of inflation before the universe reheated — that is, inflation should have lasted for a time at least $50 H^{-1}$.
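To get a numerical feel for what 50 e-foldings means, here is a one-line estimate in the same back-of-the-envelope spirit (my own arithmetic, not from the notes):

```python
import math

# a(t)/a(0) = e^N after N e-foldings; 50 is the usual minimum.
N = 50
growth = math.exp(N)
print(f"{growth:.2e}")  # ~5.18e+21
```

So even the minimal amount of inflation stretches lengths by more than 21 orders of magnitude, which is what smooths out pre-existing inhomogeneities.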

Slow rolling

During inflation the inflaton $\phi$ rolls slowly, so slowly that friction dominates inertia — this friction results from the cosmic expansion. The speed of rolling $\dot \phi$ is determined by

$H \dot \phi \sim -V'(\phi).$

Here $V'(\phi)$ is the slope of the potential, so the right-hand side is the force exerted by the potential, which matches the frictional force on the left-hand side. The coefficient of $\dot \phi$ has to be $H$ on dimensional grounds. (Here I have blown another factor of 3, but let’s not worry about that.)

Density perturbations

The trickiest thing we need to understand is how inflation produced the density perturbations which later seeded the formation of galaxies. There are several steps to the argument.

Quantum fluctuations of the inflaton

As the universe inflates, the inflaton field is subject to quantum fluctuations, where the size of the fluctuation depends on its wavelength. Due to inflation, the wavelength increases rapidly, like $e^{Ht}$, and once the wavelength gets large compared to $H^{-1}$, there isn’t enough time for the fluctuation to wiggle — it gets “frozen in.” Much later, long after the reheating of the universe, the oscillation period of the wave becomes comparable to the age of the universe, and then it can wiggle again. (We say that the fluctuations “cross the horizon” at that stage.) Observations of the anisotropy of the microwave background have determined how big the fluctuations are at the time of horizon crossing. What does inflation theory say about that?

Well, first of all, how big are the fluctuations when they leave the horizon during inflation? Then the wavelength is $H^{-1}$ and the universe is expanding at the rate $H$, so $H$ is the only thing the magnitude of the fluctuations could depend on. Since the field $\phi$ has the same dimensions as $H$, we conclude that fluctuations have magnitude

$\delta \phi \sim H.$

From inflaton fluctuations to density perturbations

Reheating occurs abruptly when the inflaton field reaches a particular value. Because of the quantum fluctuations, some horizon volumes have larger than average values of $\phi$ and some have smaller than average values; hence different regions reheat at slightly different times. The energy density in regions that reheat earlier starts to be reduced by expansion (“red shifted”) earlier, so these regions have a smaller than average energy density. Likewise, regions that reheat later start to red shift later, and wind up having larger than average density.

When we compare different regions of comparable size, we can find the typical (root-mean-square) fluctuations $\delta t$ in the reheating time, knowing the fluctuations in $\phi$ and the rolling speed $\dot \phi$:

$\delta t \sim \frac{\delta \phi}{\dot \phi} \sim \frac{H}{\dot\phi}.$

Small fractional fluctuations in the scale factor $a$ right after reheating produce comparable small fractional fluctuations in the energy density $\rho$. The expansion rate right after reheating roughly matches the expansion rate $H$ right before reheating, and so we find that the characteristic size of the density perturbations is

$\delta_S\equiv\left(\frac{\delta \rho}{\rho}\right)_{hor} \sim \frac{\delta a}{a} \sim \frac{\dot a}{a} \delta t\sim \frac{H^2}{\dot \phi}.$

The subscript hor serves to remind us that this is the size of density perturbations as they cross the horizon, before they get a chance to grow due to gravitational instabilities. We have found our first important conclusion: The density perturbations have a size determined by the Hubble constant $H$ and the rolling speed $\dot \phi$ of the inflaton, up to a factor of order one which we have not tried to keep track of. Insofar as the Hubble constant and rolling speed change slowly during inflation, these density perturbations have a strength which is nearly independent of the length scale of the perturbation. From here on we will denote this dimensionless scale of the fluctuations by $\delta_S$, where the subscript $S$ stands for “scalar”.

Perturbations in terms of the potential

Putting together $\dot \phi \sim -V' / H$ and $H^2 \sim V/{m_P}^2$ with our expression for $\delta_S$, we find

$\delta_S^2 \sim \frac{H^4}{\dot\phi^2}\sim \frac{H^6}{V'^2} \sim \frac{1}{{m_P}^6}\frac{V^3}{V'^2}.$

The observed density perturbations are telling us something interesting about the scalar field potential during inflation.
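As a quick consistency check on this algebra (my own check, using arbitrary trial values rather than anything physical), chaining $H \sim \sqrt{V}/m_P$ and $\dot\phi \sim V'/H$ through $\delta_S^2 \sim H^4/\dot\phi^2$ does land on $V^3/(m_P^6 V'^2)$:

```python
# Arbitrary positive trial values; only the algebraic identity matters.
m_P, V, Vp = 2.4, 0.7, 0.3
H = V**0.5 / m_P              # H ~ sqrt(V)/m_P
phidot = Vp / H               # |phidot| ~ V'/H  (slow roll)
delta_S2 = H**4 / phidot**2   # delta_S^2 ~ H^4 / phidot^2
assert abs(delta_S2 - V**3 / (m_P**6 * Vp**2)) < 1e-9
```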

Gravitational waves and the meaning of r

The gravitational field as well as the inflaton field is subject to quantum fluctuations during inflation. We call these tensor fluctuations to distinguish them from the scalar fluctuations in the energy density. The tensor fluctuations have an effect on the microwave anisotropy which can be distinguished in principle from the scalar fluctuations. We’ll just take that for granted here, without worrying about the details of how it’s done.

While a scalar field fluctuation with wavelength $\lambda$ and strength $\delta \phi$ carries energy density $\sim \delta\phi^2 / \lambda^2$, a fluctuation of the dimensionless gravitation field $h$ with wavelength $\lambda$ and strength $\delta h$ carries energy density $\sim m_P^2 \delta h^2 / \lambda^2$. Applying the same dimensional analysis we used to estimate $\delta \phi$ at horizon crossing to the rescaled field $h/m_P$, we estimate the strength $\delta_T$ of the tensor fluctuations as

$\delta_T^2 \sim \frac{H^2}{m_P^2}\sim \frac{V}{m_P^4}.$

From observations of the CMB anisotropy we know that $\delta_S\sim 10^{-5}$, and now BICEP2 claims that the ratio

$r = \frac{\delta_T^2}{\delta_S^2}$

is about $r\sim 0.2$ at an angular scale on the sky of about one degree. The conclusion (being a little more careful about the O(1) factors this time) is

$V^{1/4} \sim 2 \times 10^{16}~GeV \left(\frac{r}{0.2}\right)^{1/4}.$

This is our second important conclusion: The energy density during inflation defines a mass scale, which turns out to be $2 \times 10^{16}~GeV$ for the observed value of $r$. This is a very interesting finding because this mass scale is not so far below the Planck scale, where quantum gravity kicks in, and is in fact pretty close to theoretical estimates of the unification scale in supersymmetric grand unified theories. If this mass scale were a factor of 2 smaller, then $r$ would be smaller by a factor of 16, and hence much harder to detect.
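Plugging in the observed numbers while dropping the O(1) factors (my arithmetic; the crude version lands within an O(1) factor of the more careful $2\times 10^{16}$ GeV):

```python
m_P = 2.4e18       # reduced Planck mass, in GeV
delta_S = 1e-5     # observed scalar fluctuation amplitude
r = 0.2            # BICEP2's claimed tensor-to-scalar ratio
# delta_T^2 = r * delta_S^2 and delta_T^2 ~ V/m_P^4, so:
V_quarter = m_P * (r * delta_S**2) ** 0.25
print(f"{V_quarter:.1e}")  # ~5.1e+15 GeV before O(1) factors
```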

Rolling, rolling, rolling, …

Using $\delta_S^2 \sim H^4/\dot\phi^2$, we can express $r$ as

$r = \frac{\delta_T^2}{\delta_S^2}\sim \frac{\dot\phi^2}{m_P^2 H^2}.$

It is convenient to measure time in units of the number $N = H t$ of e-foldings of inflation, in terms of which we find

$\frac{1}{m_P^2} \left(\frac{d\phi}{dN}\right)^2\sim r.$

Now, we know that for inflation to explain the smoothness of the universe we need $N$ larger than 50, and if we assume that the inflaton rolls at a roughly constant rate during $N$ e-foldings, we conclude that, while rolling, the change in the inflaton field is

$\frac{\Delta \phi}{m_P} \sim N \sqrt{r}.$

This is our third important conclusion — the inflaton field had to roll a long, long, way during inflation — it changed by much more than the Planck scale! Putting in the O(1) factors we have left out reduces the required amount of rolling by about a factor of 3, but we still conclude that the rolling was super-Planckian if $r\sim 0.2$. That’s curious, because when the scalar field strength is super-Planckian, we expect the kind of effective field theory we have been implicitly using to be a poor approximation because quantum gravity corrections are large. One possible way out is that the inflaton might have rolled round and round in a circle instead of in a straight line, so the field strength stayed sub-Planckian even though the distance traveled was super-Planckian.
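Numerically (my arithmetic, again without the O(1) factors):

```python
N = 50                     # minimum number of e-foldings
r = 0.2                    # claimed tensor-to-scalar ratio
delta_phi = N * r ** 0.5   # Delta(phi)/m_P ~ N * sqrt(r)
print(round(delta_phi, 1))  # ~22.4 reduced Planck masses
```

Even after the factor-of-3 reduction from restoring the O(1) factors, the excursion is still several Planck masses, which is what makes the result so curious.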

Spectral tilt

As the inflaton rolls, the potential energy, and hence also the Hubble constant $H$, change during inflation. That means that both the scalar and tensor fluctuations have a strength which is not quite independent of length scale. We can parametrize the scale dependence in terms of how the fluctuations change per e-folding of inflation, which is equivalent to the change per logarithmic length scale and is called the “spectral tilt.”

To keep things simple, let’s suppose that the rate of rolling is constant during inflation, at least over the length scales for which we have data. Using $\delta_S^2 \sim H^4/\dot\phi^2$, and assuming $\dot\phi$ is constant, we estimate the scalar spectral tilt as

$-\frac{1}{\delta_S^2}\frac{d\delta_S^2}{d N} \sim - \frac{4 \dot H}{H^2}.$

Using $\delta_T^2 \sim H^2/m_P^2$, we conclude that the tensor spectral tilt is half as big.

From $H^2 \sim V/m_P^2$, we find

$\dot H \sim \frac{1}{2} \dot \phi \frac{V'}{V} H,$

and using $\dot \phi \sim -V'/H$ we find

$-\frac{1}{\delta_S^2}\frac{d\delta_S^2}{d N} \sim \frac{V'^2}{H^2V}\sim m_P^2\left(\frac{V'}{V}\right)^2\sim \left(\frac{V}{m_P^4}\right)\left(\frac{m_P^6 V'^2}{V^3}\right)\sim \delta_T^2 \delta_S^{-2}\sim r.$

Putting in the numbers more carefully we find a scalar spectral tilt of $r/4$ and a tensor spectral tilt of $r/8$.

This is our last important conclusion: A relatively large value of $r$ means a significant spectral tilt. In fact, even before the BICEP2 results, the CMB anisotropy data already supported a scalar spectral tilt of about 0.04, which suggested something like $r \sim 0.16$. The BICEP2 detection of the tensor fluctuations (if correct) has confirmed that suspicion.

Summing up

If you have stuck with me this far, and you haven’t seen this stuff before, I hope you’re impressed. Of course, everything I’ve described can be done much more carefully. I’ve tried to convey, though, that the emerging story seems to hold together pretty well. Compared to last week, we have stronger evidence now that inflation occurred, that the mass scale of inflation is high, and that the scalar and tensor fluctuations produced during inflation have been detected. One prediction is that the tensor fluctuations, like the scalar ones, should have a notable spectral tilt, though a lot more data will be needed to pin that down.

I apologize to the experts again, for the sloppiness of these arguments. I hope that I have at least faithfully conveyed some of the spirit of inflation theory in a way that seems somewhat accessible to the uninitiated. And I’m sorry there are no references, but I wasn’t sure which ones to include (and I was too lazy to track them down).

It should also be clear that much can be done to sharpen the confrontation between theory and experiment. A whole lot of fun lies ahead.

Okay, here’s a good reference, a useful review article by Baumann. (I found out about it on Twitter!)

From Baumann’s lectures I learned a convenient notation. The rolling of the inflaton can be characterized by two “potential slow-roll parameters” defined by

$\epsilon = \frac{m_P^2}{2}\left(\frac{V'}{V}\right)^2,\quad \eta = m_P^2\left(\frac{V''}{V}\right).$

Both parameters are small during slow rolling, but the relationship between them depends on the shape of the potential. My crude approximation ($\epsilon = \eta$) would hold for a quadratic potential.
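As a worked example (the potential is my choice, not from the post): for a quadratic potential $V = \frac{1}{2}m^2\phi^2$, both slow-roll parameters evaluate to $2 m_P^2/\phi^2$, which is why the crude approximation holds there. A quick numerical check, in units where $m_P = 1$:

```python
# Slow-roll parameters for a quadratic potential V = (1/2) m^2 phi^2,
# in units m_P = 1:  epsilon = (1/2)(V'/V)^2  and  eta = V''/V.
# Both should come out to 2/phi^2.
m = 1e-5                 # inflaton mass (arbitrary here; it cancels out)
phi = 15.0               # a super-Planckian field value

V, dV, d2V = 0.5 * m**2 * phi**2, m**2 * phi, m**2
eps = 0.5 * (dV / V) ** 2
eta = d2V / V
print(eps, eta)          # both 2/phi^2 ~ 0.0089: equal, and small while phi >> 1
```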

We can express the spectral tilt (as I defined it) in terms of these parameters, finding $2\epsilon$ for the tensor tilt, and $6 \epsilon - 2\eta$ for the scalar tilt. To derive these formulas it suffices to know that $\delta_S^2$ is proportional to $V^3/V'^2$, and that $\delta_T^2$ is proportional to $H^2$; we also use

$3H\dot \phi = -V', \quad 3H^2 = V/m_P^2,$

keeping factors of 3 that I left out before. (As a homework exercise, check these formulas for the tensor and scalar tilt.)

It is also easy to see that $r$ is proportional to $\epsilon$; it turns out that $r = 16 \epsilon$. To get that factor of 16 we need more detailed information about the relative size of the tensor and scalar fluctuations than I explained in the post; I can’t think of a handwaving way to derive it.
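As a sanity check that the two parametrizations agree, a few lines of arithmetic confirm that Baumann's formulas reduce to the cruder estimates used earlier when $\epsilon = \eta$ (the quadratic-potential case):

```python
# Consistency check:  tensor tilt = 2*eps, scalar tilt = 6*eps - 2*eta, r = 16*eps.
# Setting eta = eps (quadratic potential) should recover the earlier estimates:
# tensor tilt r/8 and scalar tilt r/4.
eps = 0.01
eta = eps
r = 16 * eps
tensor_tilt = 2 * eps
scalar_tilt = 6 * eps - 2 * eta
print(tensor_tilt, r / 8)    # both 0.02
print(scalar_tilt, r / 4)    # both 0.04
```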

We see, though, that the conclusion that the tensor tilt is $r/8$ does not depend on the details of the potential, while the relation between the scalar tilt and $r$ does depend on the details. Nevertheless, it seems fair to claim (as I did) that, already before we knew the BICEP2 results, the measured nonzero scalar spectral tilt indicated a reasonably large value of $r$.

Once again, we’re lucky. On the one hand, it’s good to have a robust prediction (for the tensor tilt). On the other hand, it’s good to have a handle (the scalar tilt) for distinguishing among different inflationary models.

One last point is worth mentioning. We have set Planck’s constant $\hbar$ equal to one so far, but it is easy to put the powers of $\hbar$ back in using dimensional analysis (we’ll continue to assume the speed of light c is one). Since Newton’s constant $G$ has the dimensions of length/energy, and the potential $V$ has the dimensions of energy/volume, while $\hbar$ has the dimensions of energy times length, we see that

$\delta_T^2 \sim \hbar G^2V.$
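Spelling out the dimension count (my bookkeeping, with $E$ for energy and $L$ for length, using the dimensions just listed):

```latex
% Dimensions with c = 1:  [G] = L/E,  [V] = E/L^3,  [\hbar] = E \cdot L
[\hbar\, G^{2} V] = (E \cdot L)\left(\frac{L}{E}\right)^{2}\frac{E}{L^{3}} = 1
```

The combination is dimensionless, as a fluctuation strength must be, and it carries exactly one explicit power of $\hbar$.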

Thus the production of gravitational waves during inflation is a quantum effect, which would disappear in the limit $\hbar \to 0$. Likewise, the scalar fluctuation strength $\delta_S^2$ is also $O(\hbar)$, and hence also a quantum effect.

Therefore the detection of primordial gravitational waves by BICEP2, if correct, confirms that gravity is quantized just like the other fundamental forces. That shouldn’t be a surprise, but it’s nice to know.

# My 10 biggest thrills

Wow!

Evidence for gravitational waves produced during cosmic inflation. BICEP2 results for the ratio r of gravitational wave perturbations to density perturbations, and the density perturbation spectral tilt n.

Like many physicists, I have been reflecting a lot the past few days about the BICEP2 results, trying to put them in context. Other bloggers have been telling you all about it (here, here, and here, for example); what can I possibly add?

The hoopla this week reminds me of other times I have been really excited about scientific advances. And I recall some wise advice I received from Sean Carroll: blog readers like lists.  So here are (in chronological order)…

My 10 biggest thrills (in science)

This is a very personal list — your results may vary. I’m not saying these are necessarily the most important discoveries of my lifetime (there are conspicuous omissions), just that, as best I can recall, these are the developments that really started my heart pounding at the time.

1) The J/Psi from below (1974)

I was a senior at Princeton during the November Revolution. I was too young to appreciate fully what it was all about — having just learned about the Weinberg-Salam model, I thought at first that the Z boson had been discovered. But by stalking the third floor of Jadwin I picked up the buzz. No, it was charm! The discovery of a very narrow charmonium resonance meant we were on the right track in two ways — charm itself confirmed ideas about the electroweak gauge theory, and the narrowness of the resonance fit in with the then recent idea of asymptotic freedom. Theory triumphant!

2) A magnetic monopole in Palo Alto (1982)

By 1982 I had been thinking about the magnetic monopoles in grand unified theories for a few years. We thought we understood why no monopoles seem to be around. Sure, monopoles would be copiously produced in the very early universe, but then cosmic inflation would blow them away, diluting their density to a hopelessly undetectable value. Then somebody saw one …. a magnetic monopole obediently passed through Blas Cabrera’s loop of superconducting wire, producing a sudden jump in the persistent current. On Valentine’s Day!

According to then-current theory, the monopole mass was expected to be about 10^16 GeV (10 million billion times heavier than a proton). Had Nature really been so kind as to bless us with this spectacular message from a staggeringly high energy scale? It seemed too good to be true.

It was. Blas never detected another monopole. As far as I know he never understood what glitch had caused the aberrant signal in his device.

3) “They’re green!” High-temperature superconductivity (1987)

High-temperature superconductors were discovered in 1986 by Bednorz and Mueller, but I did not pay much attention until Paul Chu found one in early 1987 with a critical temperature above 77 K, the boiling point of liquid nitrogen. Then for a while the critical temperature seemed to be creeping higher and higher on an almost daily basis, eventually topping 130 K …. one wondered whether it might go up, up, up forever.

It didn’t. Today 138 K still seems to be the record.

My most vivid memory is that David Politzer stormed into my office one day with a big grin. “They’re green!” he squealed. David did not mean that high-temperature superconductors would be good for the environment. He was passing on information he had just learned from Phil Anderson, who happened to be visiting Caltech: Chu’s samples were copper oxides.

4) “Now I have mine” Supernova 1987A (1987)

What was most remarkable and satisfying about the 1987 supernova in the nearby Large Magellanic Cloud was that the neutrinos released in a ten second burst during the stellar core collapse were detected here on earth, by gigantic water Cerenkov detectors that had been built to test grand unified theories by looking for proton decay! Not a truly fundamental discovery, but very cool nonetheless.

Soon after it happened some of us were loafing in the Lauritsen seminar room, relishing the good luck that had made the detection possible. Then Feynman piped up: “Tycho Brahe had his supernova, Kepler had his, … and now I have mine!” We were all silent for a few seconds, and then everyone burst out laughing, with Feynman laughing the hardest. It was funny because Feynman was making fun of his own gargantuan ego. Feynman knew a good gag, and I heard him use this line at a few other opportune times thereafter.

5) Science by press conference: Cold fusion (1989)

The New York Times was my source for the news that two chemists claimed to have produced nuclear fusion in heavy water using an electrochemical cell on a tabletop. I was interested enough to consult that day with our local nuclear experts Charlie Barnes, Bob McKeown, and Steve Koonin, none of whom believed it. Still, could it be true?

I decided to spend a quiet day in my office, trying to imagine ways to induce nuclear fusion by stuffing deuterium into a palladium electrode. I came up empty.

My interest dimmed when I heard that they had done a “control” experiment using ordinary water, had observed the same excess heat as with heavy water, and remained just as convinced as before that they were observing fusion. Later, Caltech chemist Nate Lewis gave a clear and convincing talk to the campus community debunking the original experiment.

6) “The face of God” COBE (1992)

I’m often too skeptical. When I first heard in the early 1980s about proposals to detect the anisotropy in the cosmic microwave background, I doubted it would be possible. The signal is so small! It will be blurred by reionization of the universe! What about the galaxy! What about the dust! Blah, blah, blah, …

The COBE DMR instrument showed it could be done, at least at large angular scales, and set the stage for the spectacular advances in observational cosmology we’ve witnessed over the past 20 years. George Smoot infamously declared that he had glimpsed “the face of God.” Overly dramatic, perhaps, but he was excited! And so was I.

7) “83 SNU” Gallex solar neutrinos (1992)

Until 1992 the only neutrinos from the sun ever detected were the relatively high energy neutrinos produced by nuclear reactions involving boron and beryllium — these account for just a tiny fraction of all neutrinos emitted. Fewer than expected were seen, a puzzle that could be resolved if neutrinos have mass and oscillate to another flavor before reaching earth. But it made me uncomfortable that the evidence for solar neutrino oscillations was based on the boron-beryllium side show, and might conceivably be explained just by tweaking the astrophysics of the sun’s core.

The Gallex experiment was the first to detect the lower energy pp neutrinos, the predominant type coming from the sun. The results seemed to confirm that we really did understand the sun and that solar neutrinos really oscillate. (More compelling evidence, from SNO, came later.) I stayed up late the night I heard about the Gallex result, and gave a talk the next day to our particle theory group explaining its significance. The talk title was “83 SNU” — that was the initially reported neutrino flux in Solar Neutrino Units, later revised downward somewhat.

8) Awestruck: Shor’s algorithm (1994)

I’ve written before about how Peter Shor’s discovery of an efficient quantum algorithm for factoring numbers changed my life. This came at a pivotal time for me, as the SSC had been cancelled six months earlier, and I was growing pessimistic about the future of particle physics. I realized that observational cosmology would have a bright future, but I sensed that theoretical cosmology would be dominated by data analysis, where I would have little comparative advantage. So I became a quantum informationist, and have not regretted it.

9) The Higgs boson at last (2012)

The discovery of the Higgs boson was exciting because we had been waiting soooo long for it to happen. Unable to stream the live feed of the announcement, I followed developments via Twitter. That was the first time I appreciated the potential value of Twitter for scientific communication, and soon after I started to tweet.

10) A lucky universe: BICEP2 (2014)

Many past experiences prepared me to appreciate the BICEP2 announcement this past Monday.

I first came to admire Alan Guth’s distinctive clarity of thought in the fall of 1973 when he was the instructor for my classical mechanics course at Princeton (one of the best classes I ever took). I got to know him better in the summer of 1979 when I was a graduate student, and Alan invited me to visit Cornell because we were both interested in magnetic monopole production in the very early universe. Months later Alan realized that cosmic inflation could explain the isotropy and flatness of the universe, as well as the dearth of magnetic monopoles. I recall his first seminar at Harvard explaining his discovery. Steve Weinberg had to leave before the seminar was over, and Alan called as Steve walked out, “I was hoping to hear your reaction.” Steve replied, “My reaction is applause.” We all felt that way.

I was at a wonderful workshop in Cambridge during the summer of 1982, where Alan and others made great progress in understanding the origin of primordial density perturbations produced from quantum fluctuations during inflation (Bardeen, Steinhardt, Turner, Starobinsky, and Hawking were also working on that problem, and they all reached a consensus by the end of the three-week workshop … meanwhile I was thinking about the cosmological implications of axions).

I also met Andrei Linde at that same workshop, my first encounter with his mischievous grin and deadpan wit. (There was a delegation of Russians, who split their time between Xeroxing papers and watching the World Cup on TV.) When Andrei visited Caltech in 1987, I took him to Disneyland, and he had even more fun than my two-year-old daughter.

During my first year at Caltech in 1984, Mark Wise and Larry Abbott told me about their calculations of the gravitational waves produced during inflation, which they used to derive a bound on the characteristic energy scale driving inflation, a few times 10^16 GeV. We mused about whether the signal might turn out to be detectable someday. Would Nature really be so kind as to place that mass scale below the Abbott-Wise bound, yet high enough (above 10^16 GeV) to be detectable? It seemed unlikely.

Last week I caught up with the rumors about the BICEP2 results by scanning my Twitter feed on my iPad, while still lying in bed during the early morning. I immediately leapt up and stumbled around the house in the dark, mumbling to myself over and over again, “Holy Shit! … Holy Shit! …” The dog cast a curious glance my way, then went back to sleep.

Like millions of others, I was frustrated Monday morning, trying to follow the live feed of the discovery announcement broadcast from the hopelessly overtaxed Center for Astrophysics website. I was able to join in the moment, though, by following on Twitter, and I indulged in a few breathless tweets of my own.

Many of Andrew Lange’s friends have been thinking about him a lot these past few days. Andrew had been the leader of the BICEP team (current senior team members John Kovac and Chao-Lin Kuo were Caltech postdocs under him in the mid-2000s). One day in September 2007 he sent me an unexpected email, with the subject heading “the bard of cosmology.” Having discovered on the Internet a poem I had written to introduce a seminar by Craig Hogan, Andrew wrote:

“John,

just came across this – I must have been out of town for the event.

l love it.

it will be posted prominently in our lab today (with “LISA” replaced by “BICEP”, and remain our rallying cry till we detect the B-mode.

have you set it to music yet?

a”

I lifted a couplet from that poem for one of my tweets (while rumors were swirling prior to the official announcement):

We’ll finally know how the cosmos behaves
If we can detect gravitational waves.

Assuming the BICEP2 measurement r ~ 0.2 is really a detection of primordial gravitational waves, we have learned that the characteristic mass scale during inflation is an astonishingly high 2 × 10^16 GeV. Were it a factor of 2 smaller, the signal would have been far too small to detect in current experiments. This time, Nature really is on our side, eagerly revealing secrets about physics at a scale far, far beyond what we will ever explore using particle accelerators. We feel lucky.

We physicists can never quite believe that the equations we scrawl on a notepad actually have something to do with the real universe. You would think we’d be used to that by now, but we’re not — when it happens we’re amazed. In my case, never more so than this time.

The BICEP2 paper, a historic document (if the result holds up), ends just the way it should:

“We dedicate this paper to the memory of Andrew Lange, whom we sorely miss.”

# Fundamental Physics Prize Prediction: Green and Schwarz

Michael Green

John Schwarz

The big news today is the announcement of the nominees for the 2014 Fundamental Physics Prize: (1) Michael Green and John Schwarz, for pioneering contributions to string theory, (2) Joseph Polchinski, for discovering the central role of D-branes in string theory, and (3) Andrew Strominger and Cumrun Vafa, for discovering (using D-branes) the microscopic origin of black hole entropy in string theory. As in past years, all the nominees are marvelously deserving. The winner of the $3 million prize will be announced in San Francisco on December 12; the others will receive the $300,000 Physics Frontiers Prize.

I wrote about my admiration for Joe Polchinski when he was nominated last year, and I have also greatly admired the work of Strominger and Vafa for many years. But the story of Green and Schwarz is especially compelling. String theory, which was originally proposed as a theory of the strong interaction, had been an active research area from 1968 through the early 70s. But when asymptotic freedom was discovered in 1973, and quantum chromodynamics became clearly established as the right theory of the strong interaction, interest in string theory collapsed. Even the 1974 proposal by Scherk and Schwarz that string theory is really a compelling candidate for a quantum theory of gravity failed to generate much excitement.

A faithful few continued to develop string theory through the late 70s and early 80s, particularly Green and Schwarz, who began collaborating in 1979. Together they clarified the different variants of the theory, which they named Types I, IIA, and IIB, and which were later recognized as different solutions to a single underlying theory (sometimes called M-theory). In retrospect, Green and Schwarz were making remarkable progress, but were still largely ignored.

In 1983, Luis Alvarez-Gaume and Edward Witten analyzed the gravitational anomalies that afflict higher dimensional “chiral” theories (in which left-handed and right-handed particles behave differently), and discovered a beautiful cancellation of these anomalies in the Type IIB string theory. But anomalies, which render a theory inconsistent, seemed to be a nail in the coffin of Type I theory, at that time the best hope for uniting gravitation with the other fundamental (gauge) interactions.

Then, working together at the Aspen Center for Physics during the summer of 1984, Green and Schwarz discovered an even more miraculous cancellation of anomalies in Type I string theory, which worked for only one possible gauge group: SO(32). (Within days they and others found that anomalies cancel for E8 × E8 as well, which provided the impetus for the invention of the heterotic string theory.) The anomaly cancellation drove a surge of enthusiasm for string theory as a unified theory of fundamental physics. The transformation of string theory from a backwater to the hottest topic in physics occurred virtually overnight. It was an exciting time.

When John turned 60 in 2001, I contributed a poem to a book assembled in his honor, hoping to capture in the poem the transformation that Green and Schwarz fomented (and also to express irritation about the widespread misspelling of “Schwarz”). I have appended the poem below, along with the photo of myself I included at the time to express my appreciation for strings.

I’ll be delighted if Polchinski, or Strominger and Vafa win the prize — they deserve it. But it will be especially satisfying if Green and Schwarz win. They started it all, and refused to give up.

# To John Schwarz

Thirty years ago or more
John saw what physics had in store.
He had a vision of a string
And focused on that one big thing.

But then in nineteen-seven-three
Most physicists had to agree
That hadrons blasted to debris
Were well described by QCD.

The string, it seemed, by then was dead.
But John said: “It’s space-time instead!
The string can be revived again.
Give masses twenty powers of ten!”

Then Dr. Green and Dr. Black,
Writing papers by the stack,
Made One, Two-A, and Two-B glisten.
Why is it none of us would listen?

We said, “Who cares if super tricks
Bring D to ten from twenty-six?
Your theory must have fatal flaws.
Anomalies will doom your cause.”

If you weren’t there you couldn’t know
The impact of that mighty blow:
“The Green-Schwarz theory could be true —
It works for S-O-thirty-two!”

Then strings of course became the rage
And young folks of a certain age
Could not resist their siren call:
One theory that explains it all.

Because he never would give in,
Pursued his dream with discipline,
John Schwarz has been a hero to me.
So please, don’t spell it with a “t”!

Expressing my admiration for strings in 2001.

# Can a game teach kids quantum mechanics?

Five months ago, I received an email and then a phone call from Google’s Creative Lab Executive Producer, Lorraine Yurshansky. Lo, as she prefers to be called, is not your average thirty-year-old. She has produced award-winning short films like Peter at the End (starring Napoleon Dynamite, aka Jon Heder), launched the wildly popular Maker Camp on Google+ and had time to run a couple of New York marathons as a warm-up to all of that. So why was she interested in talking to a quantum physicist?

You may remember reading about Google’s recent collaboration with NASA and D-Wave, on using NASA’s supercomputing facilities along with a D-Wave Two machine to solve optimization problems relevant to both Google (Glass, for example) and NASA (analysis of massive data sets). It was natural for Google, then, to want to promote this new collaboration through a short video about quantum computers. The video appeared last week on Google’s YouTube channel:

This is a very exciting collaboration in my view. Google has opened its doors to quantum computation and this has some powerful consequences. And it is all because of D-Wave. But, let me put my perspective in context, before Scott Aaronson unleashes the hounds of BQP on me.

Two years ago, together with Science magazine’s 2010 Breakthrough of the Year winner, Aaron O’Connell, we decided to ask Google Ventures for \$10,000,000 to start a quantum computing company based on technology Aaron had developed as a graduate student in John Martinis’s group at UCSB. The idea we pitched was that a hand-picked team of top experimentalists and theorists from around the world would prototype new designs to achieve longer coherence times and greater connectivity between superconducting qubits, faster than in any academic environment. Google didn’t bite. At the time, I thought the reason behind the rejection was this: Google wants a real quantum computer now, not just a 10-year plan of how to make one based on superconducting X-mon qubits that may or may not work.

I was partially wrong. The reason for the rejection was not a lack of proof that our efforts would pay off eventually – it was the lack of any prototype on which Google could run algorithms relevant to their work. In other words, Aaron and I didn’t have something that Google could use right away. But D-Wave did, and Google had already been dating D-Wave One for at least three years before marrying D-Wave Two this May. Quantum computation has much to offer Google, so I am excited to see this relationship blossom (whether it be D-Wave or Pivit Inc that builds the first quantum computer). Which brings me back to that phone call five months ago…

Lorraine: Hi Spiro. Have you heard of Google’s collaboration with NASA on the new Quantum Artificial Intelligence Lab?

Me: Yes. It is all over the news!

Lo: Indeed. Can you help us design a mod for Minecraft to get kids excited about quantum mechanics and quantum computers?

Me: Minecraft? What is Minecraft? Is it like Warcraft or Starcraft?

Lo: (Omg, he doesn’t know Minecraft!?! How old is this guy?) Ahh, yeah, it is a game where you build cool structures by mining different kinds of blocks in this sandbox world. It is popular with kids.

Me: Oh, okay. Let me check out the game and see what I can come up with.

After looking at the game I realized three things:
1. The game has a fan base in the tens of millions.
2. There is an annual convention (Minecon) devoted to this game alone.
3. I had no idea how to incorporate quantum mechanics within Minecraft.

Lo and I decided that it would be better to bring in some outside help if we were to design a new mod for Minecraft. Enter E-Line Media and TeacherGaming, two companies dedicated to making games that balance the educational aspect with gameplay (which influences how addictive the game is). Over the next three months, producers, writers, game designers and coder extraordinaire Dan200 came together to create a mod for Minecraft. But we quickly came to a crossroads: make a quantum simulator based on Dan200’s popular ComputerCraft mod, or focus on gameplay and a high-level representation of quantum mechanics within Minecraft?

The answer was not so easy at first, especially because I kept pushing for more authenticity (I asked Dan200 to create Hadamard and CNOT gates, but thankfully he and Scot Bayless – a legend in the gaming world – ignored me.) In the end, I would like to think that we went with the best of both worlds, given the time constraints we were operating under (a group of us are attending Minecon 2013 to showcase the new mod in two weeks) and the young audience we are trying to engage. For example, we decided that to prepare a pair of entangled qubits within Minecraft, you would use the Essence of Entanglement, an object crafted using the Essence of Superposition (Hadamard gate, yay!) and Quantum Dust placed in a CNOT configuration on a crafting table (don’t ask for more details). And when it came to Quantum Teleportation within the game, two entangled quantum computers would need to be placed at different parts of the world, each one with four surrounding pylons representing an encoding/decoding mechanism. Of course, on top of each pylon made of obsidian (and its far-away partner), you would need to place a crystal, as the required classical side-channel. As an authorized quantum mechanic, I allowed myself to bend quantum mechanics, but I could not bring myself to mess with Special Relativity.
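For readers curious about the circuit the Essence of Entanglement recipe alludes to: a Hadamard gate followed by a CNOT turns |00⟩ into the Bell state (|00⟩ + |11⟩)/√2. A minimal numpy sketch (mine, not code from the mod):

```python
import numpy as np

# Hadamard on the first qubit, then CNOT (first qubit controls the second):
# this takes |00> to the Bell pair (|00> + |11>)/sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
psi = np.array([1.0, 0.0, 0.0, 0.0])          # |00>
bell = CNOT @ np.kron(H, np.eye(2)) @ psi
print(bell)     # amplitudes 1/sqrt(2) on |00> and |11>, zero elsewhere
```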

As the mod launched two days ago, I am not sure how successful it will be. All I know is that the team behind its development is full of superstars, dedicated to making sure that John Preskill wins this bet (50 years from now):

The plan for the future is to upload a variety of posts and educational resources on qcraft.org discussing the science behind the high-level concepts presented within the game, at a level that middle-schoolers can appreciate. So, if you play Minecraft (or you have kids over the age of 10), download qCraft now and start building. It’s a free addition to Minecraft.

# Frontiers of Quantum Information Science

Just a few years ago, if you wanted to look for recent research articles about quantum entanglement, you would check out the quantum physics [quant-ph] archive at arXiv.org. Since 1994, quant-ph has been the central repository for papers about quantum computing and the broader field of quantum information science. But over the past few years there has been a notable change. Increasingly, exciting papers about quantum entanglement are found at the condensed matter [cond-mat] and high energy physics – theory [hep-th] archives.

I don’t know for sure, but that trend may have had something to do with an invitation I received a few months ago from David Gross, to organize the next Jerusalem Winter School in Theoretical Physics. David has been the General Director of the School for, well, I’m not sure how long, but it must be a long time. In the past, the topic of the school has rotated between particle physics, condensed matter physics, and astrophysics. Every year, a group of world-class scientists gives lectures on cutting-edge research for an enthusiastic audience of postdoctoral scholars and advanced graduate students.

David suggested that a good topic for the next school would be “quantum information, broadly envisaged — from quantum computing to strongly correlated electrons.” After some hesitation for family reasons, I embraced this opportunity to amplify David’s message: quantum information has arrived as a major subfield of physics, and its relevance to other areas of physics is becoming broadly appreciated.

I’m not good at organizing things myself, so I recruited two friends who are very good at it to help me: Michael Ben-Or and Patrick Hayden. As the local organizer at The Hebrew University, Michael has to do a lot of the hard work that I’m glad to avoid. We decided to call the school “Frontiers of Quantum Information Science,” and put together a slate of 10 lecturers, which I’m very excited about. The lectures will cover the core areas of quantum information, as well as some of the important ways in which quantum information relates to quantum matter, quantum field theory, and quantum gravity. Each lecturer will give three or four ninety-minute lectures, on these topics:

Scott Aaronson (MIT), Quantum complexity and quantum optics
David DiVincenzo (Aachen), Quantum computing with superconducting circuits
Daniel Harlow (Princeton), Black holes and quantum information
Michal Horodecki (Gdansk), Quantum information and thermodynamics
Stephen Jordan (NIST), Quantum algorithms
Rob Myers (Perimeter), Entanglement in quantum field theory
Renato Renner (ETH), Quantum foundations
Ady Stern (Weizmann), Topological quantum computing
Barbara Terhal (Aachen), Quantum error correction
Frank Verstraete (Vienna), Quantum information and quantum matter

The school will run from 30 December 2013 to 9 January 2014 at the Israel Institute for Advanced Studies at The Hebrew University in Jerusalem. If you are interested in attending, please visit the website for more information and fill out the registration form by November 1. I hope you can come — it’s going to be a lot of fun.

Rereading the first paragraph of this post, I got slightly nervous about whether the trend I described can be documented, so I have done a little bit of research. Going back to 2005, I plotted the number of papers with the word “entanglement” in the title on quant-ph, cond-mat, hep-th, and also the general relativity and quantum cosmology [gr-qc] archive. For 2013, I rescaled the data for the year up to now, taking into account that Sep. 22 is the 265th day of the year. I didn’t make any adjustment for papers being cross-listed on multiple archives.

Here is the data for quant-ph:

[Figure: annual counts of quant-ph papers with “entanglement” in the title, 2005–2013]

It’s remarkably flat. Here is the aggregated data for the other three archives:

[Figure: aggregated counts for cond-mat, hep-th, and gr-qc, 2005–2013]

It’s pretty clear that something started to happen around 2010. I realize one could do a much more serious study of this issue, but since I was only willing to spend an hour on it, I feel vindicated.

# Free Feynman!

Last Friday the 13th was a lucky day for those who love physics: the online html version of Volume 1 of the Feynman Lectures on Physics (FLP) was released! Now anyone with Internet access and a web browser can enjoy these unique lectures for free. They look beautiful.

Mike Gottlieb at Caltech on 20 September 2013. He’s the one on the right.

On the day of release, over 86,000 visitors viewed the website, and the Amazon sales rank of the paperback version of FLP leapt over the weekend from 67,000 to 12,000. My tweet about the release was retweeted over 150 times (my most retweets ever).

Free html versions of Volumes 2 and 3 are in preparation. Soon pdf versions of all three volumes will be offered for sale, each available in both desktop and tablet versions at a price comparable to the cost of the paperback editions. All these happy developments resulted from a lot of effort by many people. You can learn about some of the history and the people involved from Kip Thorne’s 2010 preface to the print edition.

A hero of the story is Mike Gottlieb, who spends most of his time in Costa Rica, but passed through Caltech yesterday for a brief visit. Mike entered the University of Maryland to study mathematics at age 15 and at age 16 began a career as a self-employed computer software consultant. In 1999, when Mike was 39, a chance meeting with Feynman’s friend and co-author Ralph Leighton changed Mike’s life.

At Ralph’s suggestion, Mike read Feynman’s Lectures on Computation. Impressed by Feynman’s insights and engaging presentation style, Mike became eager to learn more about physics; again following Ralph’s suggestion, he decided to master the Feynman Lectures on Physics. Holed up at a rented farm in Costa Rica without a computer, he pored over the lectures for six months, painstakingly compiling a handwritten list of about 200 errata.

Kip’s preface picks up the story at that stage. I won’t repeat all that, except to note two pivotal developments. Rudi Pfeiffer was a postdoc at the University of Vienna in 2006 when, frustrated by the publisher’s resistance to correcting errata that he and others had found, he (later joined by Gottlieb) began converting FLP to LaTeX, the modern computer system for typesetting mathematics. Eventually, all the figures were redrawn in electronic form as scalable vector graphics, paving the way for a “New Millennium Edition” of FLP (published in 2011), as well as other electronically enhanced editions planned for the future. Except that, before all that could happen, Caltech’s Intellectual Property Counsel Adam Cochran had to untangle a thicket of conflicting publishing rights, which I have never been able to understand in detail and therefore will not attempt to explain.

Rudi Pfeiffer and Mike Gottlieb at Caltech in 2008.

The proposal to offer an html version for free has been enthusiastically pursued by Caltech and has received essential financial support from Carver Mead. The task of converting Volume 1 from LaTeX to html was carried out for a fee by Caltech alum Michael Hartl; Gottlieb is doing the conversion himself for the other volumes, which are already far along.

Aside from the pending html editions of Volumes 2 and 3, and the pdf editions of all three volumes, there is another very exciting longer-term project in the works — the html will provide the basis for a Multimedia Edition of FLP. Audio for every one of Feynman’s lectures was recorded, and has been digitally enhanced by Ralph Leighton. In addition, the blackboards were photographed for almost all of the lectures. The audio and photos will be embedded in the Multimedia Edition, possibly accompanied by some additional animations and “Ken Burns style” movies. The audio in particular is great fun, bringing to life Feynman the consummate performer. For the impatient, a multimedia version of six of the lectures is already available as an iBook. To see a quick preview, watch Adam’s TEDxCaltech talk.

Mike Gottlieb has now devoted 13 years of his life to enhancing FLP and bringing the lectures to a broader audience, receiving little monetary compensation. I asked him yesterday about his motivation, and his answer surprised me somewhat. Mike wants to be able to look back at his life feeling that he has made a bigger contribution to the world than merely writing code and making money. He would love to have a role in solving the great open problems in physics, in particular the problem of reconciling general relativity with quantum mechanics, but feels it is beyond his ability to solve those problems himself. Instead, Mike feels he can best facilitate progress in physics by inspiring other very talented young people to become physicists and work on the most important problems. In Mike’s view, there is no better way of inspiring students to pursue physics than broadening access to the Feynman Lectures on Physics!