# Squeezing light using mechanical motion

This post is about generating a special type of light, squeezed light, using a mechanical resonator. But perhaps more importantly, it’s about an experiment (Caltech press release can be found here) that is very close to my heart: an experiment that brings to an end my career as a graduate student at Caltech and the IQIM, while paying homage to nearly four decades of work done by those before me at this institute.

The Quantum Noise of Light

First of all, what is squeezed light? It would be silly of me to imagine that I can provide a clearer and more thorough explanation than what Jeff Kimble gave twenty years ago in Caltech’s Engineering and Science magazine. Instead, I’ll try to present what squeezing is in the context of optomechanics.

Quantization of light makes it noisy. Imagine a steady stream of water hitting a plate and rolling off of it smoothly. The stream would indeed impart a steady force on the plate, but wouldn’t really cause it to “shake” around much. The plate would sense a steady pressure. This is what the classical theory of light, as proposed by James Clerk Maxwell, predicts. The effect is called radiation pressure. In the early 20th century, a few decades after this prediction, quantum theory came along and told us that “light is made of photons”. More or less, this means that a sufficiently sensitive measurement of the energy, power, or pressure imparted by light will detect “quanta”, as if light were composed of particles. The force felt by a mirror is exactly this sort of measurement. To make sense of this, we can replace that mental image of a stream hitting a plate with one of little raindrops hitting it, where each raindrop is a photon. Since the photons arrive one at a time, each imparting its momentum all at once in a little packet, they generate a new type of noise due to their random arrival times. This is called shot-noise (since the photons act as little “shots”). Since shot-noise is detected here by the sound it generates due to the pressure imparted by light, we call it “Radiation Pressure Shot-Noise” (RPSN).
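To make the raindrop picture concrete, here is a toy numerical sketch of my own (nothing to do with the actual experiment): photon counts in successive time bins drawn from a Poisson distribution, whose fluctuations grow as the square root of the mean — the hallmark of shot-noise.

```python
import numpy as np

rng = np.random.default_rng(0)

mean_photons = 10_000   # average number of photons hitting the mirror per time bin

# Classical picture: a perfectly steady stream -> every bin carries exactly
# the mean, zero fluctuation. Quantum picture: independent photon arrivals
# -> Poisson-distributed counts.
counts = rng.poisson(mean_photons, size=100_000)

# Shot-noise scaling: fluctuations grow like sqrt(mean), so the *relative*
# noise sqrt(mean)/mean shrinks for brighter beams but never vanishes.
print(counts.mean())   # ~10_000
print(counts.std())    # ~sqrt(10_000) = 100
```

The force noise on a mirror inherits exactly these statistics, which is the RPSN described above.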

# On the importance of choosing a convenient basis

The benefits of Caltech’s proximity to Hollywood don’t usually trickle down to measly grad students like myself, except on the rare occasions when we befriend the industry’s technical contingent. One of my friends is a computer animator for Disney, which means that she designs algorithms enabling luxuriously flowing hair or trees with realistic lighting or feathers that have gorgeous texture, for movies like Wreck-it Ralph. Empowering computers to efficiently render scenes with these complicated details is trickier than you’d think, and it requires sophisticated new mathematics. Fascinating conversations are one of the perks of having friends like this. But so are free trips to Disneyland! A couple nights ago, while standing in line for The Tower of Terror, I asked her what she’s currently working on. She’s very smart, as evidenced by her BS/MS in Computer Science/Mathematics from MIT, but she asked me if I “know about spherical harmonics.” Asking this of an aspiring quantum mechanic is like asking an auto mechanic if they know how to use a monkey wrench. She didn’t know what she was getting herself into!

IQIM, LIGO, Disney

Along with this spherical harmonics conversation, I had a few other incidents last week that hammered home the importance of choosing a convenient basis when solving a scientific problem. First, my girlfriend works on LIGO and she’s currently writing her thesis. LIGO is a huge collaboration involving hundreds of scientists, and naturally, nobody there knows the detailed inner workings of every subsystem. However, when it comes to writing the overview section of one’s thesis, you need to at least make a good-faith attempt to understand the whole behemoth. Anyways, my girlfriend recently asked if I know how the wavelet transform works. This is another example of a convenient basis, one that is particularly suited for analyzing abrupt changes, such as detecting the gravitational waves that would be emitted during the final few seconds of two black holes merging (ring-down). Finally, for the past couple weeks, I’ve been trying to understand entanglement entropy in quantum field theories. Most of the calculations that can be carried out explicitly are for the special subclass of quantum field theories called “conformal field theories,” which in two dimensions have a very convenient ‘basis’, the Virasoro algebra.
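As a toy illustration of why wavelets suit abrupt signals (my own sketch, nothing to do with LIGO’s actual analysis pipelines), here is one level of a Haar wavelet transform picking a step out of a smooth signal:

```python
import numpy as np

# A smooth oscillation with one abrupt step between samples 700 and 701.
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * 3 * t)
signal[701:] += 2.0

# One level of the Haar wavelet transform: pairwise averages ("approximation")
# and pairwise differences ("detail"). A Fourier basis would smear the step
# across every coefficient; the Haar detail coefficients localize it.
pairs = signal.reshape(-1, 2)
approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)

# The largest detail coefficient belongs to the pair straddling the jump.
print(2 * np.argmax(np.abs(detail)))   # 700
```

The smooth sine barely registers in the detail coefficients, while the discontinuity stands out by two orders of magnitude — which is why a wavelet basis is convenient for transients.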

So why does a Disney animator care about spherical harmonics? It turns out that every frame that goes into one of Disney’s movies needs to be digitally rendered using a powerful computing cluster. The animated film industry has traded the painstaking process of hand-animators drawing every single frame for the almost equally time-consuming process of computer clusters generating every frame. It doesn’t look like strong AI will be available in our immediate future, and in the meantime, humans are still much better than computers at detecting patterns and making intuitive judgements about the ‘physical correctness’ of an image. One of the primary advantages of computer animation is that an animator shouldn’t need to shade in every pixel of every frame — some of this burden should fall on computers. Let’s imagine a thought experiment. An animator wants to get the lighting correct for a nighttime indoor shot. They should be able to simply place the moon somewhere out of the shot, so that its glow can penetrate through the windows. They should also be able to choose from a drop-down menu and tell the computer that a hand-drawn lightbulb is a ‘light source.’ The computer should then figure out how to make all of the shadows and brightness appear physically correct. Another example of a hard problem is that an animator should be able to draw a character, then tell the computer that the hair they drew is ‘hair’, so that as the character moves through scenes, the physics of the hair makes sense. Programming computers to do these things autonomously is harder than it sounds.

In the lighting example, imagine you want to get the lighting correct in a forest shot with complicated pine trees and leaf structures. The computer would need to do the ray-tracing for all of the photons emanating from the different light sources, then the second-order effects as these photons reflect, then third-order effects, etc. It’s a tall order to make the scene look accurate to the human eyeball/brain. Instead of doing all of this ray-tracing, it’s helpful to choose a convenient basis in order to dramatically speed up the processing. Instead of the complicated forest example, let’s imagine you are working with a tree from Super Mario Bros. Imagine drawing a sphere somewhere in the middle of this tree and then defining a ‘height function’, which outputs the ‘elevation’ of the tree foliage over each point on the sphere. I tried to use suggestive language, so that you’d draw an analogy to thinking of Earth’s ‘height function’ as the elevation of mountains and the depths of trenches over the sphere, with sea level as a baseline. An example of how you could digitize this problem for a tree or for the earth is by breaking up the sphere into a certain number of pixels, maybe one per square meter for the earth (5*10^14 square meters gives approximately 2^49 pixels), and then associating with each pixel an integer height value between [-2^15,2^15], which takes 16 bits to store. This would effectively digitize the height map of the earth, keeping track of the elevation to approximately the meter level. But this leaves us with a huge amount of information to store and then process: 16 bits for each of the 2^49 pixels gives approximately 2^49*2^4=2^53 bits, about a petabyte. And this is for an easy static problem with only meter resolution! We can store this information much more efficiently using spherical harmonics.
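The back-of-the-envelope above fits in a few lines (the one-pixel-per-square-meter resolution and the 100-coefficient figure are the assumptions from the text, not measured quantities):

```python
EARTH_AREA_M2 = 5.1e14      # one pixel per square meter -> ~2^49 pixels
BITS_PER_PIXEL = 16         # integer heights in [-2^15, 2^15)

pixel_bits = EARTH_AREA_M2 * BITS_PER_PIXEL       # ~2^53 bits
pixel_petabytes = pixel_bits / 8 / 1e15

# Spherical-harmonic alternative: ~100 coefficients at 64-bit precision.
sh_bits = 100 * 64                                # ~2^13 bits, about a kilobyte

print(pixel_petabytes)             # ~1 petabyte for the raw height map
print(pixel_bits / sh_bits)        # compression factor ~10^12
```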

There are many ways to think about spherical harmonics. Basically, they’re functions which map points on the sphere to real numbers $Y_l^m: (\theta,\phi) \mapsto Y_l^m(\theta,\phi)\in\mathbb{R}$, such that they satisfy a few special properties. They are orthogonal, meaning that if you multiply two different spherical harmonics together and then integrate over the sphere, you get zero. If you square one of the functions and then integrate over the sphere, you get a finite, nonzero value, so each function can be normalized. They also span the space of all height functions that one could define over the sphere. This means that for a planet with an arbitrarily complicated topography, you would be able to find some weighted combination of different spherical harmonics which perfectly describes that planet’s topography. These are the key properties which make a set of functions a basis: they span and are orthogonal (this is only a heuristic). There is also a natural way to think about the light that hits the tree. We can use the same sphere and simply calculate the light rays as they would hit the ideal sphere. With these two different ‘height functions’, it’s easy to calculate the shadows and brightness inside the tree. You simply convolve the two functions, which is a fast operation on a computer. It also means that if the breeze slightly changes the shape of the tree, or if the sun moves a little bit, then it’s very easy to update the shading. Implicit in what I just said is that using spherical harmonics allows us to efficiently store this height map. I haven’t calculated this on a computer, but it doesn’t seem totally crazy to think that we’d be able to store the topography of the earth to a reasonable accuracy with 100 nonzero coefficients of the spherical harmonics at 64 bits of precision: 2^7*2^6=2^13 bits << 2^53 bits. Where does this cost saving come from?
It comes from the fact that the spherical harmonics are a convenient basis, which naturally encodes the types of correlations we see in Earth’s topography — if you’re standing at an elevation of 2000m, the area within ten meters is probably at a similar elevation. Cliffs are what break this basis — and they are exactly what the wavelet basis was designed to handle.
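The defining properties are easy to check numerically. Here is a sketch with three low-order real spherical harmonics written out by hand (so no special-function library is needed): products of different harmonics integrate to zero over the sphere, while each squared harmonic integrates to one.

```python
import numpy as np

# Three low-order real spherical harmonics, hand-coded.
def Y00(theta, phi):
    return np.full_like(theta, 1.0 / np.sqrt(4 * np.pi))

def Y10(theta, phi):
    return np.sqrt(3.0 / (4 * np.pi)) * np.cos(theta)

def Y11(theta, phi):
    return np.sqrt(3.0 / (4 * np.pi)) * np.sin(theta) * np.cos(phi)

def sphere_inner(f, g, n=800):
    """Midpoint-rule integral of f*g over the sphere (area element sin(theta))."""
    theta = (np.arange(n) + 0.5) * np.pi / n          # (0, pi)
    phi = (np.arange(2 * n) + 0.5) * np.pi / n        # (0, 2*pi)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    dA = np.sin(T) * (np.pi / n) ** 2
    return float(np.sum(f(T, P) * g(T, P) * dA))

print(sphere_inner(Y00, Y10))   # ~0 (orthogonal)
print(sphere_inner(Y10, Y11))   # ~0 (orthogonal)
print(sphere_inner(Y10, Y10))   # ~1 (normalized)
```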

I’ve only described a couple bases in this post and I’ve neglected to mention some of the most famous examples! This includes the Fourier basis, which was designed to encode periodic signals, such as music and radio waves. I also have not gone into any detail about the Virasoro algebra, which I mentioned at the beginning of this post and have been using heavily for the past few weeks. For the sake of diversity, I’ll spend a few sentences whetting your appetite. Complex analysis is primarily the study of analytic functions. In two dimensions, these analytic functions “preserve angles.” This means that if you have two curves which intersect at a point with angle $\theta$, then after using an analytic function to map these curves to their image, also in the complex plane, the angle between the curves will still be $\theta.$ An especially convenient basis for the analytic functions in two dimensions ($\{f: \mathbb{C} \to \mathbb{C}\}$, where $f(z) = \sum_{n=0}^{\infty} a_nz^n$) is given by the set of functions $\{l_n = -z^{n+1}\partial_z\}$. As always, I’m not being exactly precise, but this is a ‘basis’ because we can encode all the information describing an infinitesimal two-dimensional angle-preserving map using these elements. It turns out to have incredibly special properties, including that its quantum cousin yields something called the “central charge,” which has deep ramifications in physics, such as being related to the c-theorem. Conformal field theories are fascinating because they describe the physics of phase transitions. Having a convenient basis in two dimensions is a large part of why we’ve been able to make progress in our understanding of two-dimensional phase transitions (more important is that the 2d conformal symmetry group is infinite-dimensional, but that’s outside the scope of this post.)
Convenient bases are also important for detecting gravitational waves, making incredible movies and striking up nerdy conversations in long lines at Disneyland!
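For the curious, here are the commutation relations hiding behind that Virasoro teaser (standard results, quoted without derivation): the classical generators $l_n = -z^{n+1}\partial_z$ close into the Witt algebra, and their quantum cousins $L_n$ pick up the central charge $c$,

```latex
[l_m, l_n] = (m - n)\, l_{m+n}, \qquad
[L_m, L_n] = (m - n)\, L_{m+n} + \frac{c}{12}\, m\,(m^2 - 1)\,\delta_{m+n,0}.
```

The extra term proportional to $c$ is the “central charge” mentioned above.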

# Monopoles passing through Flatland!

Like many mathematically inclined teenagers, I was charmed when I first read the book Flatland by Edwin Abbott Abbott.* It’s a story about a Sphere who visits a two-dimensional world and tries to awaken its inhabitants to the existence of a third dimension. As perceived by Flatlanders, the Sphere is a circle which appears as a point, grows to maximum size, then shrinks and disappears.

My memories of Flatland were aroused as I read a delightful recent paper by Max Metlitski, Charlie Kane, and Matthew Fisher about magnetic monopoles and three-dimensional bosonic topological insulators. To explain why, I’ll need to recall a few elements of the theory of monopoles and of topological insulators, before returning to the connection between the two and why that reminds me of Flatland.

Flatlanders, confined to the two-dimensional surface of a topological insulator, are convinced by a magnetic monopole that a third dimension must exist.

Monopoles

Paul Dirac was no ordinary genius. Aside from formulating relativistic electron theory and predicting the existence of antimatter, Dirac launched the quantum theory of magnetic monopoles in a famous 1931 paper. Dirac envisioned a magnetic monopole as a semi-infinitely long, infinitesimally thin string of magnetic flux, such that the end of the string, where the flux spills out, seems to be a magnetic charge. For this picture to make sense, the string should be invisible. Dirac pointed out that an electron with electric charge e, transported around a string carrying flux $\Phi$, could detect the string (via what later came to be called the Aharonov-Bohm effect) unless the flux is an integer multiple of $2\pi\hbar /e$, where $\hbar$ is Planck’s constant. Conversely, in order for the string to be invisible, if a magnetic monopole exists with magnetic charge $g_D = 2\pi\hbar /e$, then all electric charges must be integer multiples of e. Thus the existence of magnetic monopoles (which have never been observed) could explain quantization of electric charge (which has been observed).
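Spelling out the invisibility condition: the Aharonov–Bohm phase acquired by a charge $e$ encircling the string must be trivial,

```latex
\exp\!\left(\frac{i e \Phi}{\hbar}\right) = 1
\quad\Longleftrightarrow\quad
\Phi = n\,\frac{2\pi\hbar}{e}, \qquad n \in \mathbb{Z},
```

which is Dirac’s quantization condition.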

Captivated by the beauty of his own proposal, Dirac concluded his paper by remarking, “One would be surprised if Nature had made no use of it.”

Our understanding of quantized magnetic monopoles advanced again in 1979 when another extraordinary physicist, Edward Witten, discussed a generalization of Dirac’s quantization condition. Witten noted that the Lagrange density of electrodynamics could contain a term of the form

$\frac{\theta e^2\hbar}{4\pi^2}~\vec{E}\cdot\vec{B},$

where $\vec{E}$ is the electric field and $\vec{B}$ is the magnetic field. This “$\theta$ term” may also be expressed as

$\frac{\theta e^2\hbar}{8\pi^2}~ \partial^\mu\left(\epsilon_{\mu\nu\lambda\sigma}A^\nu\partial^\lambda A^\sigma \right),$

where A is the vector potential, and hence is a total derivative which makes no contribution to the classical field equations of electrodynamics. But Witten realized that it can have important consequences for the quantum properties of magnetic monopoles. Specifically, the $\theta$ term modifies the field momentum conjugate to the vector potential, which becomes

$\vec{E}+\frac{\theta e^2\hbar}{4\pi^2}\vec{B}.$

Because the Gauss law condition satisfied by physical quantum states is altered, for a monopole with magnetic charge $m g_D$, where $g_D$ is Dirac’s minimal charge $2\pi\hbar /e$ and m is an integer, the allowed values of the electric charge become

$q = e\left( n - \frac{\theta m}{2\pi}\right),$

where n is an integer. This spectrum of allowed charges remains invariant if $\theta$ advances by $2\pi$, suggesting that the parameter $\theta$ is actually an angular variable with period $2\pi$. This periodicity of $\theta$ can be readily verified in a theory admitting fermions with the minimal charge e. But if the charged particles are bosons then $\theta$ turns out to be a periodic variable with period $4\pi$ instead.
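This invariance is a one-line algebraic identity, easy to check numerically (a toy sketch of mine, in units where $e = 1$):

```python
from math import pi, isclose

def witten_charge(n, theta, m):
    """Allowed dyon charge q = e*(n - theta*m/(2*pi)), in units with e = 1."""
    return n - theta * m / (2 * pi)

# Advancing theta by 2*pi merely relabels the integer n -> n - m,
# so the *spectrum* of allowed charges is unchanged.
theta, m = 0.7, 3
for n in range(-5, 6):
    assert isclose(witten_charge(n, theta + 2 * pi, m),
                   witten_charge(n - m, theta, m))
print("spectrum invariant under theta -> theta + 2*pi")
```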

That $\theta$ has a different period for a bosonic theory than a fermionic one has an interesting interpretation. As Goldhaber noticed in 1976, dyons carrying both magnetic and electric charge can exhibit statistical transmutation. That is, in a purely bosonic theory, a dyon with magnetic charge $g_D= 2\pi\hbar/e$ and electric charge ne is a fermion if n is an odd integer — when two dyons are exchanged, transport of each dyon’s electric charge in the magnetic field of the other dyon induces a sign change in the wave function. In a fermionic theory the story is different; now we can think of the dyon as a fermionic electric charge bound to a bosonic monopole. There are two canceling contributions to the exchange phase of the dyon, which is therefore a boson for any integer value of n, whether even or odd.

As $\theta$ smoothly increases from 0 to $2\pi$, the statistics (whether bosonic or fermionic) of a dyon remains fixed even as the dyon’s electric charge increases by e. For the bosonic theory with $\theta = 2\pi$, then, dyons with magnetic charge $g_D$ and electric charge ne are bosons for n odd and fermions for n even, the opposite of what happens when $\theta=0$. For the bosonic theory, unlike the fermionic theory, we need to increase $\theta$ by $4\pi$ for the physics of dyons to be fully invariant.

In 1979 Ed Witten was a postdoc at Harvard, where I was a student, though he was visiting CERN for the summer when he wrote his paper about the $\theta$-dependent monopole charge. I always read Ed’s papers carefully, but I gave special scrutiny to this one because magnetic monopoles were a pet interest of mine. At the time, I wondered whether the Witten effect might clarify how to realize the $\theta$ parameter in a lattice gauge theory. But it certainly did not occur to me that the $\theta$-dependent electric charge of a magnetic monopole could have important implications for quantum condensed matter physics. Theoretical breakthroughs often have unexpected consequences, which may take decades to emerge.

Symmetry-protected topological phases

Okay, now let’s talk about topological insulators, a very hot topic in condensed matter physics these days. Actually, a topological insulator is a particular instance of a more general concept called a symmetry-protected topological phase of matter (or SPT phase). Consider a d-dimensional hunk of material with a (d-1)-dimensional boundary. If the material is in an SPT phase, then the physics of the d-dimensional bulk is boring — it’s just an insulator with an energy gap, admitting no low-energy propagating excitations. But the physics of the (d-1)-dimensional edge is exotic and exciting — for example the edge might support “gapless” excitations of arbitrarily low energy which can conduct electricity. The exotica exhibited by the edge is a consequence of a symmetry, and is destroyed if the symmetry is broken either explicitly or spontaneously; that is why we say the phase is “symmetry protected.”

The low-energy edge excitations can be described by a (d-1)-dimensional effective field theory. But for a typical SPT phase, this effective field theory is what we call anomalous, which means that for one reason or another the theory does not really make sense. The anomaly tells us something interesting and important, namely that the (d-1)-dimensional theory cannot be really, truly (d-1) dimensional; it can arise only at the edge of a higher-dimensional system.

This phenomenon, in which the edge does not make sense by itself without the bulk, is nicely illustrated by the integer quantum Hall effect, which occurs in a two-dimensional electron system in a high magnetic field and at low temperature, if the sample is sufficiently clean so that the electrons are highly mobile and rarely scattered by impurities. In this case the relevant symmetry is electron number, or equivalently the electric charge. At the one-dimensional edge of a two-dimensional quantum Hall sample, charge carriers move in only one direction — to the right, say, but not to the left. A theory with such chiral electric charges does not really make sense. One problem is that electric charge is not conserved — an electric field along the edge causes charge to be locally created, which makes the theory inconsistent.

The way the theory resolves this conundrum is quite remarkable. A two-dimensional strip of quantum Hall fluid has two edges, one at the top, the other at the bottom. While the top edge has only right-moving excitations, the bottom edge has only left-moving excitations. When electric charge appears on the top edge, it is simultaneously removed from the bottom edge. Rather miraculously, charge can be conveyed across the bulk from one edge to the other, even though the bulk does not have any low-energy excitations at all.

I first learned about this interplay of edge and bulk physics from a beautiful 1985 paper by Curt Callan and Jeff Harvey. They explained very lucidly how an edge theory with an anomaly and a bulk theory with an anomaly can fit together, with each solving the other’s problems. Curiously, the authors did not mention any connection with the quantum Hall effect, which had been discovered five years earlier, and I didn’t appreciate the connection myself until years later.

Topological insulators

In the case of topological insulators, the symmetries which protect the gapless edge excitations are time-reversal invariance and conserved particle number, i.e. U(1) symmetry. Though the particle number might not be coupled to an electromagnetic gauge field, it is instructive for the purpose of understanding the properties of the symmetry-protected phase to imagine that the U(1) symmetry is gauged, and then to consider the potential anomalies that could afflict this gauge symmetry. The first topological insulators conceived by theorists were envisioned as systems of non-interacting electrons whose properties were relatively easy to understand using band theory. But it was not so clear at first how interactions among the electrons might alter their exotic behavior. The wonderful thing about anomalies is that they are robust with respect to interactions. In many cases we can infer the features of anomalies by studying a theory of non-interacting particles, assured that these features survive even when the particles interact.

As have many previous authors, Metlitski et al. argue that when we couple the conserved particle number to a U(1) gauge field, the effective theory describing the bulk physics of a topological insulator in three dimensions may contain a $\theta$ term. But wait … since the electric field is even under time reversal and the magnetic field is odd, the $\theta$ term is T-odd; under T, $\theta$ is mapped to $-\theta$, so T seems to be violated if $\theta$ has any nonzero value. Except … we have to remember that $\theta$ is really a periodic variable. For a fermionic topological insulator the period is $2\pi$; therefore the theory with $\theta = \pi$ is time reversal invariant; $\theta = \pi$ maps to $\theta = -\pi$ under T, which is equivalent to a rotation of $\theta$ by $2\pi$. For a bosonic topological insulator the period is $4\pi$, which means that $\theta = 2\pi$ is the nontrivial T-invariant value.

If we say that a “trivial” insulator (e.g., the vacuum) has $\theta = 0$, then we may say that a bulk material with $\theta = \pi$ (fermionic case) or $\theta = 2\pi$ (bosonic case) is a “nontrivial” (a.k.a. topological) insulator. At the edge of the sample, where bulk material meets vacuum, $\theta$ must rotate suddenly by $\pi$ (fermions) or by $2\pi$ (bosons). The exotic edge physics is a consequence of this abrupt change in $\theta$.

Monopoles in Flatland

To understand the edge physics, and in particular to grasp how fermionic and bosonic topological insulators differ, Metlitski et al. invite us to imagine a magnetic monopole with magnetic charge $g_D$ passing through the boundary between the bulk and the surrounding vacuum. To the Flatlanders confined to the surface of the bulk sample, the passing monopole induces a sudden change in the magnetic flux through the surface by a single flux quantum $g_D$, which could arise due to a quantum tunneling event. What does the Flatlander see?

In a fermionic topological insulator, there is a monopole that carries charge e/2 when inside the sample (where $\theta=-\pi$) and charge 0 when outside (where $\theta=0$). Since electric charge is surely conserved in the full three-dimensional theory, the change in the monopole’s charge must be compensated by a corresponding change in the charge residing on the surface. Flatlanders are puzzled to witness a spontaneously arising excitation with charge e/2. This is an anomaly — electric charge conservation is violated, which can only make sense if Flatlanders are confined to a surface in a higher-dimensional world. Though unable to escape their surface world, the Flatlanders can be convinced by the Monopole that an extra dimension must exist.

In a bosonic topological insulator, the story is somewhat different: there is a monopole that carries electric charge 0 when inside the sample (where $\theta=-2\pi$) and charge –e when outside (where $\theta=0$). In this case, though, there are bosonic charge-e particles living on the surface. A monopole can pick up a charged particle as it passes through Flatland, so that its charge is 0 both inside the bulk sample and outside in the vacuum. Flatlanders are happy — electric charge is conserved!
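Both pieces of bookkeeping follow from the Witten-effect charge formula quoted earlier; here is a small sketch checking them (units with $e = 1$, minimal monopole $m = 1$):

```python
from math import pi

def dyon_charge(n, theta, m=1):
    # Witten effect: q = e*(n - theta*m/(2*pi)), in units with e = 1.
    return n - theta * m / (2 * pi)

# Fermionic topological insulator: track the n = 0 dyon from the bulk
# (theta = -pi) out into the vacuum (theta = 0).
assert abs(dyon_charge(0, -pi) - 0.5) < 1e-12   # charge e/2 inside
assert dyon_charge(0, 0.0) == 0.0               # charge 0 outside

# Bosonic topological insulator: track the n = -1 dyon from theta = -2*pi.
assert abs(dyon_charge(-1, -2 * pi)) < 1e-12    # charge 0 inside
assert abs(dyon_charge(-1, 0.0) + 1.0) < 1e-12  # charge -e outside
print("charge bookkeeping matches the Flatland story")
```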

But hold on … there’s still something wrong. Inside the bulk (where $\theta= -2\pi$) a monopole with electric charge 0 is a fermion, while outside in the vacuum (where $\theta = 0$) it is a boson. In the three-dimensional theory it is not possible for any local process to create an isolated fermion, so if the fermionic monopole becomes a bosonic monopole as it passes through Flatland, it must leave a fermion behind. Flatlanders are puzzled to witness a spontaneously arising fermion. This is an anomaly — conservation of fermionic parity is violated, which can only make sense if Flatlanders are confined to a surface in a higher-dimensional world. Once again, the clever residents of Flatland learn from the Monopole about an extra spatial dimension, without ever venturing outside their two-dimensional home.

Topological order gets edgy

This post is already pretty long and I should wrap it up. Before concluding I’ll remark that the theory of symmetry-protected phases has been developing rapidly in recent months.

In particular, a new idea, introduced last fall by Vishwanath and Senthil, has been attracting increasing attention. While in most previously studied SPT phases the unbroken symmetry protects gapless excitations confined to the edge of the sample, Vishwanath and Senthil pointed out another possibility — a gapped edge exhibiting topological order. The surface can support anyons with exotic braiding statistics.

Here, too, anomalies are central to the discussion. While anyons in two-dimensional media are already a much-studied subject, the anyon models that can be realized at the edges of three-dimensional SPT phases are different than anyon models realized in really, truly two-dimensional systems. What’s new are not the braiding properties of the anyons, but rather how the anyons transform under the symmetry. Flatlanders who study the symmetry realization in their gapped two-dimensional world should be able to infer the existence of the three-dimensional bulk.

The pace of discovery picked up this month when four papers appeared simultaneously on the preprint arXiv, by Metlitski-Kane-Fisher, Chen-Fidkowski-Vishwanath, Bonderson-Nayak-Qi, and Wang-Potter-Senthil, all proposing and analyzing models of SPT phases with gapped edges. It remains to be seen, though, whether this physics will be realized in actual materials.

Are we on the edge?

In Flatland, our two-dimensional friend, finally able to perceive the third dimension thanks to the Sphere’s insistent tutelage, begs to enter a world of still higher dimensions, “where thine own intestines, and those of kindred Spheres, will lie exposed to … view.” The Sphere is baffled by the Flatlander’s request, protesting, “There is no such land. The very idea of it is utterly inconceivable.”

Let’s not be so dogmatic as the Sphere. The lessons learned from the quantum Hall effect and the topological insulator have prepared us to take the next step, envisioning our own three-dimensional world as the edge of a higher-dimensional bulk system. The existence of an unseen bulk may be inferred in the future by us edgelings, if experimental explorations of our three-dimensional effective theory reveal anomalies begging for an explanation.

Perhaps we are on the edge … of a great discovery. At least it’s conceivable.

*Disclaimer: The gender politics of Flatland, to put it mildly, is outdated and offensive. I don’t wish to endorse the idea that women are one dimensional! I included the reference to Flatland because the imagery of two-dimensional beings struggling to imagine the third dimension is a perfect fit to the scientific content of this post.

# We are all Wilsonians now

Ken Wilson

Ken Wilson passed away on June 15 at age 77. He changed how we think about physics.

Renormalization theory, first formulated systematically by Freeman Dyson in 1949, cured the flaws of quantum electrodynamics and turned it into a precise computational tool. But the subject seemed magical and mysterious. Many physicists, Dirac prominently among them, questioned whether renormalization rests on a sound foundation.

Wilson changed that.

The renormalization group concept arose in an extraordinary paper by Gell-Mann and Low in 1954. It was embraced by Soviet physicists like Bogoliubov and Landau, and invoked by Landau to challenge the consistency of quantum electrodynamics. But it was an abstruse and inaccessible topic, as is well illustrated by the baffling discussion at the very end of the two-volume textbook by Bjorken and Drell.

Wilson changed that, too.

Ken Wilson turned renormalization upside down. Dyson and others had worried about the “ultraviolet divergences” occurring in Feynman diagrams. They introduced an artificial cutoff on integrations over the momenta of virtual particles, then tried to show that all the dependence on the cutoff can be eliminated by expressing the results of computations in terms of experimentally accessible quantities. It required great combinatoric agility to show this trick works in electrodynamics. In other theories, notably including general relativity, it doesn’t work.

Wilson adopted an alternative viewpoint. Take the short-distance cutoff seriously, he said, regarding it as part of the physical formulation of the field theory. Now ask what physics looks like at distances much larger than the cutoff. Wilson imagined letting the short-distance cutoff grow, while simultaneously adjusting the theory to preserve its low-energy predictions. This procedure sounds complicated, but Wilson discovered something wonderful — for the purpose of computing low-energy processes the theory becomes remarkably simple, completely characterized by just a few (renormalized) parameters. One recovers Dyson’s results plus much more, while also acquiring a rich and visually arresting physical picture of what is going on.
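Wilson’s procedure is hard to convey in words alone. The standard textbook warm-up (not Wilson’s own calculation) is decimation of the one-dimensional Ising model, where summing out every other spin renormalizes the dimensionless coupling K = J/kT in closed form:

```python
from math import cosh, log

def decimate(K):
    # Sum over every other spin of a 1D Ising chain with coupling K = J/kT.
    # The surviving spins again form an Ising chain, with a new coupling
    # K' = (1/2) * ln cosh(2K) -- one step of the renormalization group.
    return 0.5 * log(cosh(2.0 * K))

K = 1.0
for step in range(8):
    K = decimate(K)
    print(step, K)
# K flows to the trivial fixed point K = 0: at long distances the 1D chain
# looks like free spins -- the kind of drastic simplification under
# coarse-graining that Wilson had in mind.
```

(The 1D model has no phase transition; in two or more dimensions the analogous flow has a nontrivial fixed point, which is where the critical phenomena live.)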

When I started graduate school in 1975, Wilson, not yet 40, was already a legend. Even Sidney Coleman, for me the paragon of razor sharp intellect, seemed to regard Wilson with awe. (They had been contemporaries at Caltech, both students of Murray Gell-Mann.) It enhanced the legend that Wilson had been notoriously slow to publish. He spent years pondering the foundations of quantum field theory before finally unleashing a torrent of revolutionary papers in the early 70s. Cornell had the wisdom to grant tenure despite Wilson’s unusually low productivity during the 60s.

As a student, I spent countless hours struggling through Wilson’s great papers, some of which were quite difficult. One introduced me to the operator product expansion, which became a workhorse of high-energy scattering theory and the foundation of conformal field theory. Another considered all the possible ways that renormalization group fixed points could control the high-energy behavior of the strong interactions. Conspicuously missing from the discussion was what turned out to be the correct idea — asymptotic freedom. Wilson had not overlooked this possibility; instead he “proved” it to be impossible. The proof contains a subtle error. Wilson analyzed charge renormalization invoking both Lorentz covariance and positivity of the Hilbert space metric, forgetting that gauge theories admit no gauge choice with both properties. Even Ken Wilson made mistakes.

Wilson also formulated the strong-coupling expansion of lattice gauge theory, and soon after pioneered the Euclidean Monte Carlo method for computing the quantitative non-perturbative predictions of quantum chromodynamics, which remains today an extremely active and successful program. But of the papers by Wilson I read while in graduate school, the most exciting by far was this one about the renormalization group. Toward the end of the paper Wilson discussed how to formulate the notion of the “continuum limit” of a field theory with a cutoff. Removing the short-distance cutoff is equivalent to taking the limit in which the correlation length (the inverse of the renormalized mass) is infinitely long compared to the cutoff — the continuum limit is a second-order phase transition. Wilson had finally found the right answer to the decades-old question, “What is quantum field theory?” And after reading his paper, I knew the answer, too! This Wilsonian viewpoint led to further deep insights mentioned in the paper, for example that an interacting self-coupled scalar field theory is unlikely to exist (i.e. have a continuum limit) in four spacetime dimensions.

Wilson’s mastery of quantum field theory led him to another crucial insight in the 1970s which has profoundly influenced physics in the decades since — he denigrated elementary scalar fields as unnatural. I learned about this powerful idea from an inspiring 1979 paper not by Wilson, but by Lenny Susskind. That paper includes a telltale acknowledgment: “I would like to thank K. Wilson for explaining the reasons why scalar fields require unnatural adjustments of bare constants.”

Susskind, channeling Wilson, clearly explains a glaring flaw in the standard model of particle physics — ensuring that the Higgs boson mass is much lighter than the Planck (i.e., cutoff) scale requires an exquisitely careful tuning of the theory’s bare parameters. Susskind proposed to banish the Higgs boson in favor of Technicolor, a new strong interaction responsible for breaking the electroweak gauge symmetry, an idea I found compelling at the time. Technicolor fell into disfavor because it turned out to be hard to build fully realistic models, but Wilson’s complaint about elementary scalars continued to drive the quest for new physics beyond the standard model, and in particular bolstered the hope that low-energy supersymmetry (which eases the fine tuning problem) will be discovered at the Large Hadron Collider. Both dark energy (another fine tuning problem) and the absence so far of new physics beyond the Higgs boson at the LHC are prompting some soul searching about whether naturalness is really a reliable criterion for evaluating success in physical theories. Could Wilson have steered us wrong?

Wilson’s great legacy is that we now regard nearly every quantum field theory as an effective field theory. We don’t demand or expect that the theory will continue working at arbitrarily short distances. At some stage it will break down and be replaced by a more fundamental description. This viewpoint is now so deeply ingrained in how we do physics that today’s students may be surprised to hear it was not always so. More than anyone else, we have Ken Wilson to thank for this indispensable wisdom. Few ideas have changed physics so much.

# Quantum Matter Animated!

by Jorge Cham

What does it mean for something to be Quantum? I have to confess, I don’t know. My Ph.D. was in Robotics and Kinematics, so my neurons are deeply trained to think in terms of classical dynamics. I asked my siblings (two engineers and one architect) what comes to mind for them when they hear the word Quantum, what they remember from college physics, and here is what they said:

– “Quantum Leap!” (the late 80’s TV show)

– “Quantum of Solace!” (the James Bond movie which, incidentally, was filmed in my home country of Panama, even though the movie was set in Bolivia)

– “I don’t remember anything I learned in college”

– “Light acting as a particle instead of a wave?”

The third answer came from my sister, who went to MIT. The fourth came from my brother, who went to Stanford (+1 point for Stanford!).

I also asked my spouse what comes to mind for her. She said, “Quantum Computing: it’s the next big advance in computers. Transistors the size of atoms.” Clearly, I married someone smarter than me (she also went to Stanford). When I asked if she knew how they worked, she said, “I don’t know how it works.” She also said, “Quantum is related to how time moves more slowly as you approach the speed of light, right?” Nice try, but that’s Relativity (-1 point for Stanford!).

I think the word Quantum has a special power in our collective consciousness. It’s used to convey science-iness, technology, the weirdness of the physical world. If you Google “Quantum”, most of the top hits are for technology companies that have nothing to do with Quantum Physics (including Quantum Fishing Tackles; I suppose that half the time, you pull up a dead fish).

It’s one of those words that a lot of people have heard of, but very few really understand what it means. Which is why I was excited when Spiros Michalakis and IQIM approached me to produce a series of animations that explore and explain Quantum Information and Matter. Like my previous videos (The Higgs Boson, Dark Matter, Exoplanets), I’d have the chance to interview experts in this field and use their expertise and their voices to learn and to help others learn what amazing things lie just around the corner, beyond our classical understanding of the Universe.

This will be a big Leap for me (I’m trying to avoid the obvious pun), and a journey of exploration. The first installment goes live today, and you can watch it below. Like Schrödinger’s box, I don’t know what we’ll discover with these videos, but I know there are exciting possibilities inside. This is also going to be a BIG challenge. Understanding and putting Quantum concepts in visual form will be hard. I mean, Hair-pulling hard. Fortunately, I’ve discovered there’s a remedy for that.

Watch the first installment of this series:

Jorge Cham is the creator of Piled Higher and Deeper (www.phdcomics.com).

CREDITS:

Featuring: Amir Safavi-Naeini and Oskar Painter http://copilot.caltech.edu/

Produced in Partnership with the Institute for Quantum Information and Matter (http://iqim.caltech.edu) at Caltech with funding provided by the National Science Foundation.

Transcription: Noel Dilworth
Thanks to: Spiros Michalakis, John Preskill and Bert Painter

# Entanglement = Wormholes

One of the most enjoyable and inspiring physics papers I have read in recent years is this one by Mark Van Raamsdonk. Building on earlier observations by Maldacena and by Ryu and Takayanagi, Van Raamsdonk proposed that quantum entanglement is the fundamental ingredient underlying spacetime geometry.* Since my first encounter with this provocative paper, I have often mused that it might be a Good Thing for someone to take Van Raamsdonk’s idea really seriously.

Now someone has.

I love wormholes. (Who doesn’t?) Picture two balls, one here on earth, the other in the Andromeda galaxy. It’s a long trip from one ball to the other on the background space, but there’s a shortcut: you can walk into the ball on earth and moments later walk out of the ball in Andromeda. That’s a wormhole.

I’ve mentioned before that John Wheeler was one of my heroes during my formative years. Back in the 1950s, Wheeler held a passionate belief that “everything is geometry,” and one particularly intriguing idea he called “charge without charge.” There are no pointlike electric charges, Wheeler proclaimed; rather, electric field lines can thread the mouth of a wormhole. What looks to you like an electron is actually a tiny wormhole mouth. If you were small enough, you could dive inside the electron and emerge from a positron far away. In my undergraduate daydreams, I wished this idea could be true.

But later I found out more about wormholes, and learned about “topological censorship.” It turns out that if energy is nonnegative, Einstein’s gravitational field equations prevent you from traversing a wormhole — the throat always pinches off (or becomes infinitely long) before you get to the other side. It has sometimes been suggested that quantum effects might help to hold the throat open (which sounds like a good idea for a movie), but today we’ll assume that wormholes are never traversable no matter what you do.

Love in a wormhole throat: Alice and Bob are in different galaxies, but each lives near a black hole, and their black holes are connected by a wormhole. If both jump into their black holes, they can enjoy each other’s company for a while before meeting a tragic end.

Are wormholes any fun if we can never traverse them? The answer might be yes if two black holes are connected by a wormhole. Then Alice on earth and Bob in Andromeda can get together quickly if each jumps into a nearby black hole. For solar mass black holes Alice and Bob will have only 10 microseconds to get acquainted before meeting their doom at the singularity. But if the black holes are big enough, Alice and Bob might have a fulfilling relationship before their tragic end.

This observation is exploited in a recent paper by Juan Maldacena and Lenny Susskind (MS) in which they reconsider the AMPS puzzle (named for Almheiri, Marolf, Polchinski, and Sully). I wrote about this puzzle before, so I won’t go through the whole story again. Here’s the short version: while classical correlations can easily be shared by many parties, quantum correlations are harder to share. If Bob is highly entangled with Alice, that limits his ability to entangle with Carrie, and if he entangles with Carrie instead he can’t entangle with Alice. Hence we say that entanglement is “monogamous.” Now, if, as most of us are inclined to believe, information is “scrambled” but not destroyed by an evaporating black hole, then the radiation emitted by an old black hole today should be highly entangled with radiation emitted a long time ago. And if, as most of us are inclined to believe, nothing unusual happens (at least not right away) to an observer who crosses the event horizon of a black hole, then the radiation emitted today should be highly entangled with stuff that is still inside the black hole. But we can’t have it both ways without violating the monogamy of entanglement!
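As an aside (my own illustration, not from the MS paper), the monogamy trade-off can be checked numerically. For three qubits, the Coffman–Kundu–Wootters inequality bounds Bob's squared pairwise concurrences with Alice and with Carrie by his total "tangle" with the pair. A minimal sketch, assuming numpy and using the three-qubit W state:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    Y = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(Y, Y)
    # Square roots of eigenvalues of rho * rho_tilde, sorted descending
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ YY @ rho.conj() @ YY).real))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# W state shared by Bob, Alice, Carrie: (|100> + |010> + |001>)/sqrt(3)
psi = np.zeros(8)
psi[[4, 2, 1]] = 1 / np.sqrt(3)
rho = np.outer(psi, psi).reshape(2, 2, 2, 2, 2, 2)     # axes (b,a,c,b',a',c')

rho_BA = np.einsum('abkdek->abde', rho).reshape(4, 4)  # trace out Carrie
rho_BC = np.einsum('akcdke->acde', rho).reshape(4, 4)  # trace out Alice
rho_B  = np.einsum('abcdbc->ad', rho)                  # Bob alone

c_ba, c_bc = concurrence(rho_BA), concurrence(rho_BC)
tau_B = 4 * np.linalg.det(rho_B).real  # Bob's total entanglement (tangle)
```

For the W state the bound is saturated: each pairwise concurrence is $2/3$, and $(2/3)^2 + (2/3)^2 = 8/9 = \tau_B$, so Bob's entanglement budget is fully spent — any gain in his entanglement with Carrie must come at Alice's expense.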

The AMPS puzzle invites audacious responses, and AMPS were suitably audacious. They proposed that an old black hole has no interior — a freely falling observer meets her doom right at the horizon rather than at a singularity deep inside.

MS are also audacious, but in a different way. They helpfully summarize their key point succinctly in a simple equation:

ER = EPR

Here, EPR means Einstein-Podolsky-Rosen, whose famous paper highlighted the weirdness of quantum correlations, while ER means Einstein-Rosen (sorry, Podolsky), who discovered wormhole solutions to the Einstein equations. (Both papers were published in 1935.) MS (taking Van Raamsdonk very seriously) propose that whenever any two quantum subsystems are entangled they are connected by a wormhole. In many cases, these wormholes are highly quantum mechanical, but in some cases (where the quantum system under consideration has a weakly coupled “gravitational dual”), the wormhole can have a smooth geometry like the one ER described. That wormholes are not traversable is important for the consistency of ER = EPR: just as Alice cannot use their shared entanglement to send a message to Bob instantaneously, so she is unable to send Bob a message through their shared wormhole.

AMPS imagined that Alice could distill qubit C from the black hole’s early radiation and carry it back to the black hole, successfully verifying its entanglement with another qubit B distilled from the recent radiation. Monogamy then ensures that qubit B cannot be entangled with qubit A behind the horizon. Hence when Alice falls through the horizon she will not observe the quiescent vacuum state in which A and B are entangled; instead she encounters a high-energy particle. MS agree with this conclusion.

AMPS go on to say that Alice’s actions before entering the black hole could not have created that energetic particle; it must have been there all along, one of many such particles constituting a seething firewall.

Here MS disagree. They argue that the excitation encountered by Alice as she crosses the horizon was actually created by Alice herself when she interacted with qubit C. How could Alice’s actions, executed far, far away from the black hole, dramatically affect the state of the black hole’s interior? Because C and A are connected by a wormhole!

The ER = EPR conjecture seems to allow us to view the early radiation with which the black hole is entangled as a complementary description of the black hole interior. It’s not clear yet whether this picture works in detail, and even if it does there could still be firewalls; maybe in some sense the early radiation is connected to the black hole via a wormhole, yet this wormhole is wildly fluctuating rather than a smooth geometry. Still, MS provide a promising new perspective on a deep problem.

As physicists we often rely on our sense of smell in judging scientific ideas, and earlier proposed resolutions of the AMPS puzzle (like firewalls) did not smell right. At first whiff, ER = EPR may smell fresh and sweet, but it will have to ripen on the shelf for a while. If this idea is on the right track, there should be much more to say about it. For now, wormhole lovers can relish the possibilities.

Eventually, Wheeler discarded “everything is geometry” in favor of an ostensibly deeper idea: “everything is information.” It would be a fitting vindication of Wheeler’s vision if everything in the universe, including wormholes, is made of quantum correlations.

*Update: Commenter JM reminded me to mention Brian Swingle’s beautiful 2009 paper, which preceded Van Raamsdonk’s and proposed a far-reaching connection between quantum entanglement and spacetime geometry.

# A Public Lecture on Quantum Information

Sooner or later, most scientists are asked to deliver a public lecture about their research specialties. When successful, lecturing about science to the lay public can give one a feeling of deep satisfaction. But preparing the lecture is a lot of work!

Caltech sponsors the Earnest C. Watson lecture series (named after the same Earnest Watson mentioned in my post about Jane Werner Watson), which attracts very enthusiastic audiences to Beckman Auditorium nine times a year. I gave a Watson lecture on April 3 about Quantum Entanglement and Quantum Computing, which is now available from iTunes U and also on YouTube:

I did a Watson lecture once before, in 1997. That occasion precipitated some big changes in my presentation style. To prepare for the lecture, I acquired my first laptop computer and learned to use PowerPoint. This was still the era when a typical physics talk was handwritten on transparencies and displayed using an overhead projector, so I was sort of a pioneer. And I had many anxious moments in the late 1990s worrying about whether my laptop would be able to communicate with the projector — that can still be a problem even today, but was a more common problem then.

I invested an enormous amount of time in preparing that 1997 lecture, an investment still yielding dividends today. Aside from figuring out what computer to buy (an IBM ThinkPad) and how to do animation in PowerPoint, I also learned to draw using Adobe Illustrator under the tutelage of Caltech’s digital media expert Wayne Waller. And apart from all that technical preparation, I had to figure out the content of the lecture!

That was when I first decided to represent a qubit as a box with two doors, which contains a ball that can be either red or green, and I still use some of the drawings I made then.

Entanglement, illustrated with balls in boxes.

This choice of colors was unfortunate, because people with red-green color blindness cannot tell the difference. I still feel bad about that, but I don’t have editable versions of the drawings anymore, so fixing it would be a big job …

I also asked my nephew Ben Preskill (then 10 years old, now a math PhD candidate at UC Berkeley) to make a drawing for me illustrating weirdness.

The desire to put weirdness to work has driven the emergence of quantum information science.

I still use that, for sentimental reasons, even though it would be easier to update.

The turnout at the lecture was gratifying (you can’t really see the audience with the spotlight shining in your eyes, but I sensed that the main floor of the Auditorium was mostly full), and I have gotten a lot of positive feedback (including from the people who came up to ask questions afterward — we might have been there all night if the audio-visual staff had not forced us to go home).

I did make a few decisions about which I have had second thoughts. I was told I had the option of giving a 45 minute talk with a public question period following, or a 55 minute talk with only a private question period, and I opted for the longer talk. Maybe I should have pushed back and insisted on allowing some public questions even after the longer talk — I like answering questions. And I was told that I should stay in the spotlight, to ensure good video quality, so I decided to stand behind the podium the whole time to curb my tendency to pace across the stage. But maybe I would have seemed more dynamic if I had done some pacing.

I got some gentle criticism from my wife, Roberta, who suggested I could modulate my voice more. I have heard that before, particularly in teaching evaluations that complain about my “soporific” tone. I recall that Mike Freedman once commented after watching a video of a public lecture I did at the KITP in Santa Barbara — he praised its professionalism and “newscaster quality”. But that cuts two ways, doesn’t it? Paul Ginsparg listened to a podcast of that same lecture while doing yardwork, and then sent me a compliment by email, with a characteristic Ginspargian twist. Noting that my sentences were clear, precise, and grammatical, Paul asked: “is this something that just came naturally at some early age, or something that you were able to acquire at some later stage by conscious design (perhaps out of necessity, talks on quantum computing might not go over as well without the reassuring smoothness)?”

Another criticism stung more. To illustrate the monogamy of entanglement, I used a slide describing the frustration of Bob, who wants to entangle with both Alice and Carrie, but finds that he can increase his entanglement with Carrie only by sacrificing some of his entanglement with Alice.

Entanglement is monogamous. Bob is frustrated to find that he cannot be fully entangled with both Alice and Carrie.

This got a big laugh. But I used the same slide in a talk at the APS Denver meeting the following week (at a session celebrating the 100th anniversary of Niels Bohr’s atomic model), and a young woman came up to me after that talk to complain. She suggested that my monogamy metaphor was offensive and might discourage women from entering the field!

After discussing the issue with Roberta, I decided to address the problem by swapping the gender roles. The next day, during the question period following Stephen Hawking’s Public Lecture, I spoke about Betty’s frustration over her inability to entangle fully with both Adam and Charlie. But is that really an improvement, or does it reflect negatively on Betty’s morals? I would appreciate advice about this quandary in the comments.

In case you watch the video, there are a couple of things you should know. First, in his introduction, Tom Soifer quotes from a poem about me, but neglects to name the poet. It is former Caltech postdoc Patrick Hayden. And second, toward the end of the lecture I talk about some IQIM outreach activities, but neglect to name our Outreach Director Spiros Michalakis, without whose visionary leadership these things would not have happened.

The most touching feedback I received came from my Caltech colleague Oskar Painter. I joked in the lecture about how mild mannered IQIM scientists can unleash the superpower of quantum information at a moment’s notice.

Mild mannered professor unleashes the superpower of quantum information.

After watching the video, Oskar shot me an email:

“I sent a link to my son [Ewan, age 11] and daughter [Quinn, age 9], and they each watched it from beginning to end on their iPads, without interruption.  Afterwards, they had a huge number of questions for me, and were dreaming of all sorts of “quantum super powers” they imagined for the future.”

# Project X Squared

Alicia Hardesty: full-time fashion designer, part-time nerd.

Have you seen the movie Frankenweenie? It’s a black and white cartoon (an experiment in itself these days) with a very important message:

Don’t be afraid to do what you love and don’t be afraid to be good at it.

The main character is a smart, sensitive kid who is ostracized for his science experiments. Like the teacher says, people don’t understand science so they are afraid of it. Ironically, artists often deal with the same kind of misunderstandings from the public.

I’m not technically a scientist, but I do love to experiment and try stuff. I’m a fashion designer, which requires its own level of scientific conviction. I create, combine unlikely variables, hypothesize, and work within my own scientific method throughout my process.

How does this relate to you?

Project X Squared: where art, science, and technology meet fashion to create a clothing line, much like an experiment, with the underlying hypothesis that a quantum physicist, a neuroscientist, and a fashion designer can create something tangible together.

# Largest prime number found?

Over the past few months, I have been inundated with tweets about the largest prime number ever found. That number, according to Nature News, is $2^{57,885,161}-1$. This is certainly a very large prime number, and one would think that we would need a supercomputer to find a prime number larger than this one. In fact, Nature mentions that there are infinitely many prime numbers, but the powerful prime number theorem doesn’t tell us how to find them!
Well, I am here to tell you of the discovery of the new largest prime number ever found, which I will call $P_{euclid}$. Here it is:

$P_{euclid} = 2\cdot 3\cdot 5\cdot 7\cdot 11 \cdot \cdots \cdot (2^{57,885,161}-1) +1.$

This number, the product of all prime numbers known so far plus one, is so large that I can’t even write it down on this blog post. But it is certainly (proof left as an exercise…!) a prime number (see Problem 4 in The allure of elegance) and definitely larger than the one getting all the hype. Finally, I will be getting published in Nature!

In the meantime, if you are looking for a real challenge, calculate how many digits my prime number has in base 10. Whoever gets it right (within an order of magnitude) will be my co-author in the shortest Nature paper ever written.
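For what it's worth, the prime number theorem does give a ballpark: Chebyshev's function $\vartheta(x) = \sum_{p\le x} \ln p$ grows like $x$, so the product of all primes up to $x$ has roughly $x/\ln 10$ digits. A quick check in Python (my own sketch, for a modest $x$ where the exact sum is computable):

```python
from math import log

def theta(x):
    """Chebyshev's theta: sum of ln p over primes p <= x (simple sieve)."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b'\x00\x00'
    for i in range(2, int(x**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, x + 1, i)))
    return sum(log(p) for p in range(2, x + 1) if sieve[p])

x = 100_000
exact_digits = theta(x) / log(10)  # log10 of the product of primes <= x
pnt_ballpark = x / log(10)         # prime number theorem: theta(x) ~ x
# The two agree to within a fraction of a percent already at x = 10^5.
```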

Update 2: I read somewhere that in order to get attention to your blog posts, you should sprinkle them with grammatical errors and let the commenters do the rest for you. I wish I was mastermind-y enough to engineer this post in this fashion. Instead, I get the feeling that someone will run a primality test on $P_{euclid}$ just to prove me wrong. Well, what are you waiting for? In the meantime, another challenge: What is the smallest number (ballpark it using Prime Number Theorem) of primes we need to multiply together before adding one, in order to have a number with a larger prime factor than $2^{57,885,161}-1$?

Update: The number $P_{euclid}$ given above may not be prime itself, as pointed out quickly by Steve Flammia, Georg and Graeme Smith. But it does contain within it the new largest prime number ever known, which may be the number itself. Now, if only we had a quantum computer to factor numbers quickly… Wait, wasn’t there a polynomial-time primality test?
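The smallest instance already makes the commenters' point: multiply the first six primes and add one, and you get a composite number whose prime factors nevertheless beat every prime on the list, which is all Euclid's argument requires. A quick sketch (my own, plain Python):

```python
from math import isqrt, prod

def smallest_factor(n):
    """Smallest prime factor of n, found by trial division."""
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return d
    return n  # n itself is prime

primes = [2, 3, 5, 7, 11, 13]
n = prod(primes) + 1      # 30031
f = smallest_factor(n)    # 59, so n = 59 * 509 is composite...
# ...but both prime factors are larger than every prime on the list,
# which is all Euclid's proof of the infinitude of primes needs.
```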

Note: The number mentioned is the largest known Mersenne prime. Finding Mersenne primes is a crazy-hard, awesome problem in number theory.

# Post-Quantum Cryptography

As an undergraduate, I took Introduction to Algorithms from Ron Rivest. One of the topics he taught was the RSA public-key cryptosystem which he had created with Adi Shamir and Leonard Adleman. At the time, RSA was only about a decade old, yet already one of the banner creations of computer science. Today many of us rely on it routinely for the security of banking transactions. The internet would not be the same without it and its successors (such as elliptic curve cryptography, ECC). However, as you may have heard, quantum computation spells change for cryptography. Today I’ll tell a little of this story and talk about prospects for the future.

Ron Rivest
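For readers who haven't seen it, the RSA trick fits in a few lines. Here is a toy version with the standard textbook-sized primes (my own sketch; real deployments use moduli of 2048 bits or more, and RSA is utterly insecure at this scale):

```python
# Toy RSA key generation, encryption, and decryption.
# Requires Python 3.8+ for the modular inverse via pow(e, -1, phi).
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120, used to derive the private key
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e

m = 65                     # a message, encoded as an integer < n
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # decrypt with the private key (d, n)
```

The security rests on the fact that recovering $d$ from $(e, n)$ appears to require factoring $n$ — easy for 3233, believed hard classically for 2048-bit moduli, and famously easy for a large quantum computer running Shor's algorithm.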

What is public-key cryptography (PKC)? The basic notion is due to Ralph Merkle in 1974 and (in a stronger form) to Whitfield Diffie and Martin Hellman in 1976. Their remarkable proposal was that two parties, “Alice” and “Bob”, could cooperate in cryptographic protocols, even if they had never met before. All prior cryptography, from the ancients up through and after the cryptographic adventures of WWII, had relied on the cooperating parties sharing in advance some “secret key” that gave them an edge over any eavesdropper “Eve”.
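The Diffie–Hellman idea itself is strikingly simple: Alice and Bob each publish a power of a shared public generator, and each can then compute a common secret that Eve, seeing only the public values, cannot easily recover (she would have to solve a discrete logarithm). A toy run with classic textbook numbers (my own sketch, insecure at this size):

```python
# Toy Diffie-Hellman key agreement (tiny textbook numbers; real use needs
# a ~2048-bit prime modulus or an elliptic-curve group).
p, g = 23, 5         # public: a prime modulus and a generator
a, b = 6, 15         # private: Alice's and Bob's secret exponents
A = pow(g, a, p)     # Alice announces g^a mod p
B = pow(g, b, p)     # Bob announces g^b mod p
# Each raises the other's public value to their own secret exponent;
# both arrive at g^(a*b) mod p, while Eve sees only p, g, A, and B.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```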