Steampunk quantum

A dark-haired man leans over a marble balustrade. In the ballroom below, his assistants tinker with animatronic elephants that trumpet and with potions for improving black-and-white photographs. The man is an inventor near the turn of the 20th century. Cape swirling about him, he watches technology wed fantasy.

Welcome to the steampunk genre. A stew of science fiction and Victorianism, steampunk has invaded literature, film, and the Wall Street Journal. A few years after James Watt improved the steam engine, protagonists build animatronics, clone cats, and time-travel. At sci-fi conventions, top hats and blast goggles distinguish steampunkers from superheroes.


The closest the author has come to dressing steampunk.

I’ve never read steampunk other than H. G. Wells’s The Time Machine—and other than the scene recapped above. The scene features in The Wolsenberg Clock, a novel by Canadian poet Jay Ruzesky. The novel caught my eye at an Ontario library.

In Ontario, I began researching the intersection of quantum information (QI) with thermodynamics. Thermodynamics is the study of energy, efficiency, and entropy. Entropy quantifies uncertainty about a system’s small-scale properties, given large-scale properties. Consider a room of air molecules. Knowing that the room has a temperature of 75°F, you don’t know whether some molecule is skimming the floor, poking you in the eye, or elsewhere. Ambiguities in molecules’ positions and momenta endow the gas with entropy. Whereas entropy suggests lack of control, work is energy that accomplishes tasks.
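As a toy illustration of entropy as quantified uncertainty, here is a minimal Python sketch of my own; the eight "regions" are an invented coarse-graining, not anything from the research described above:

```python
import math

def shannon_entropy(probs):
    """H = -sum p*log2(p), in bits: uncertainty about a coarse-grained state."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A molecule equally likely to occupy any of 8 regions of the room:
# maximal uncertainty, given only large-scale information.
h_uniform = shannon_entropy([1 / 8] * 8)   # 3 bits

# A molecule known to sit in one particular region: no uncertainty.
h_certain = shannon_entropy([1.0])         # 0 bits
```

Knowing only the temperature leaves you in something like the first situation; pinning down the molecule removes the entropy entirely.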

Squeezing light using mechanical motion

This post is about generating a special type of light, squeezed light, using a mechanical resonator. But perhaps more importantly, it’s about an experiment (Caltech press release can be found here) that is very close to my heart: an experiment that brings to an end my career as a graduate student at Caltech and the IQIM, while paying homage to nearly four decades of work done by those before me at this institute.

The Quantum Noise of Light

First of all, what is squeezed light? It would be silly of me to imagine that I can provide a clearer and more thorough explanation than the one Jeff Kimble gave twenty years ago in Caltech’s Engineering and Science magazine. Instead, I’ll try to present what squeezing is in the context of optomechanics.


Quantization of light makes it noisy. Imagine a steady stream of water hitting a plate, and rolling off of it smoothly. The stream would indeed impart a steady force on the plate, but wouldn’t really cause it to “shake” around much. The plate would sense a steady pressure. This is what the classical theory of light, as proposed by James Clerk Maxwell, predicts. The effect is called radiation pressure. In the early 20th century, a few decades after this prediction, quantum theory came along and told us that “light is made of photons”. More or less, this means that a measurement capable of measuring the energy, power, or pressure imparted by light, if sensitive enough, will detect “quanta”, as if light were composed of particles. The force felt by a mirror is exactly this sort of measurement. To make sense of this, we can replace that mental image of a stream hitting a plate with one of the little raindrops hitting it, where each raindrop is a photon. Since the photons are coming in one at a time, and imparting their momentum all at once in little packets, they generate a new type of noise due to their random arrival times. This is called shot-noise (since the photons act as little “shots”). Since shot-noise is being detected here by the sound it generates due to the pressure imparted by light, we call it “Radiation Pressure Shot-Noise” (RPSN).
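A quick toy simulation (my own sketch, nothing to do with the experiment's actual analysis) shows the signature of shot-noise: for random, particle-like arrivals the variance of the photon count equals its mean, so relative fluctuations shrink as 1/sqrt(N):

```python
import random

random.seed(0)

def photon_counts(mean_photons, trials, n_slots=2000):
    """Photon number detected in each of `trials` time bins: each bin is
    split into many slots with a small chance of one photon per slot,
    giving random, approximately Poissonian arrival times."""
    p = mean_photons / n_slots
    return [sum(1 for _ in range(n_slots) if random.random() < p)
            for _ in range(trials)]

counts = photon_counts(mean_photons=100, trials=400)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)

# Shot noise: variance ~ mean, so relative fluctuations fall as 1/sqrt(N)
rel_noise = var ** 0.5 / mean
```

With about 100 photons per bin, the count fluctuates by roughly 10 photons: the "raindrops" are individually invisible in a bright beam, but their randomness never disappears.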

Frozen children

A few weeks ago, my friend Amanda, an elementary school teacher who runs a children’s camp during the summer break, suggested that it could be fun for me to come into the camp one day and do some science demonstrations for the kids. I jumped at the opportunity, despite (or perhaps because of) the fact that I am a purely theoretical physicist and my day-to-day work only involves whiteboards and computers at Caltech. Most of the children attending the camp are relatively young (7-9 year-old kids) so, rather than setting out to give a science lesson, I viewed it as a chance to do some fun demonstrations and get these kids excited about science! Besides, I had an ulterior motive; it was a great excuse to acquire, and play with, liquid nitrogen (LN_2) from a Caltech lab (of which most of the IQIM labs have copious supplies). LN_2 is great for demonstrations; this stuff is awesome! At a temperature of -321^{\circ}\,F (for reference, the coldest temperature ever recorded on the surface of the Earth is -128.6^{\circ}\,F), it behaves in ways unlike anything that most people have ever seen. I convinced my friend Carmen, a postdoc in astronomy at Caltech, to come along and help out. Here, I thought I would share my experience, as well as some of the things I learned about handling LN_2.

Carmen watches on as I pour the liquid nitrogen into the beaker, which boils like crazy until the beaker is chilled. The white gas is actually water vapor condensing from the air; nitrogen gas is transparent (think of your ability to see through air, which is mostly nitrogen gas). Note, also, my previous assistant in the background - science can be taxing to the body.


Liquid nitrogen volcano! All it takes is a little water added to the liquid nitrogen dewar.

Crime and punishment: As anyone who has seen Terminator 2 knows, objects that are pliable at room temperature become brittle and can shatter when reduced to cryogenic temperatures (including robotic assassins from the future). Thus I devoted a significant amount of the demonstration time to freezing and breaking everyday objects, including flowers and rubber toys. The flowers were particularly spectacular, shattering like glass into a multitude of pieces when struck against the table, providing a good deal of entertainment for the audience as well as myself. Hasta la vista, baby. I also froze several pennies, which then became brittle enough that Carmen was able to shatter them with a few taps from a hammer. Incidentally, destroying US currency is illegal (which is why I had Carmen do it instead of doing it myself). I informed the children of this fact and asked who among them thought that Carmen should go to prison for her crime. A quick vote revealed that the majority of the children thought that she should be behind bars. Sorry Carmen, maybe the next field trip for the camp can be to visit you in prison?


A flower, freshly pulled from the vat of liquid nitrogen, prepares to make the ultimate sacrifice in the name of science.

After having frozen a variety of objects, one of the children asked me whether you could freeze people with it. I told the kids that this is something that I had always wanted to try, but that I had previously lacked a volunteer, to which an enthusiastic boy jumped up and responded, “freeze me, freeze me!” I asked whether he wanted to be frozen for 5 years, 10 years, or longer. He said he would like to be frozen until the end of the world. One must admire his dedication! Before attempting to freeze him, I told him that it would be prudent for me to try it on something less likely to have litigious relatives. To this end a strawberry, a peach and a plum were submerged in LN_2, and then removed and allowed to slowly thaw. They ended up melting into gelatinous blobs; clearly some kinks in my cryogenic freezing and revival process need to be resolved before I graduate the approach to small children.

Surviving in Extreme Conditions

Sometimes in order to do one thing thoroughly you have to first master many other things, even those which may seem very unrelated to your focus. In the end, everything weaves itself together very elegantly and you find yourself wondering how you got through such an incredible sequence of coincidences to where you are now.

I am a rising first-year PhD student in Astrophysics at Caltech; I completed my Bachelor’s in Physics, also at Caltech, last June. My Caltech journey has already led me to a number of unexpected places. New to Astrophysics, I am very excited to see as many observatories, labs, and manufacturing locations as I can. I just moved out of the dorms and into the first place that is my very own home (which means I pay my own rent now). All of my windows have a very clear view of the radio tower-adorned Mt. Wilson.

This morning I woke up and looked at the Mt. Wilson horizon and decided to drive up there. I left my morning ballet class early to make time for the drive. The road to the observatory is not simple. HWY 2 is a pretty serious mountain road and accidents happen on it regularly. This is the first thing: to have access to observatories, I need to be able to drive there safely and reliably.

Fortunately I love driving, especially athletic mountain driving, so I look for almost any excuse to drive to JPL, Mt. Wilson, and so on. I’ll stop by saying that driving is a hobby for me and that I see it as a sport, a science, and an art.

The first portion of the 2 is like any normal mountain road with speeding locals, terrifying cyclists and daredevil motorcyclists. The views become more and more breathtaking as you gain elevation, but the driver really shouldn’t be getting any of these views except for the portion that fits into the car’s field of view. The road is demanding, with turns and hills, all along a steep and curving mountainside. However, this part is a piece of cake compared to the second portion.

The turnoff to the observatory itself opens onto a less-maintained road speckled with enthusiastic hikers and with nicely sharp 6-inch pebbles scattered around the road. As much as I was enjoying taking smooth turns and avoiding the brakes, I went very slow on this section to drive around the random rocks on the road. I finally got to the top where I could take in the view in peace.

The first thing visitors see is the Cosmic Cafe. It has a balcony going all around the cafe with a fascinating view when there is no smog or fog. Last April, Caltech had its undergraduate student Formal here. We dined at this cafe and had a dance platform nearby. Driving up here, I could not help thinking how risky that trip had been: 11 high-rise buses took a large portion of the Caltech undergraduate student body up to the top of this mountain in fog so dense we could barely see the bus ahead of us. The bus drivers were saints.

Hiking or running shoes are the best shoes to wear here, so I cannot imagine how we came here in suits, dress shoes, tight dresses, and merciless heels. Well, Caltech students have many talents. Second thing: being an active person in the Tech community takes you to some curious places on interesting occasions.


Some Caltech undergraduates on Mt. Wilson (I’m purple).

I parked at the first available lot, right in front of the cafe and near some large radio towers. When trying to lock my car, I had some trouble. I have an electronic key which operates as a remote outside the car. The car would not react to my key and would not lock. I tried a few more times and finally it locked. I figured the battery in the key was dying, but that didn’t seem right. If any battery were dying, it would be the battery in the spare key that I am not using.

On the importance of choosing a convenient basis

The benefits of Caltech’s proximity to Hollywood don’t usually trickle down to measly grad students like myself, except on the rare occasions when we befriend the industry’s technical contingent. One of my friends is a computer animator for Disney, which means that she designs algorithms enabling luxuriously flowing hair or trees with realistic lighting or feathers that have gorgeous texture, for movies like Wreck-it Ralph. Empowering computers to efficiently render scenes with these complicated details is trickier than you’d think and it requires sophisticated new mathematics. Fascinating conversations are one of the perks of having friends like this. But so are free trips to Disneyland! A couple nights ago, while standing in line for The Tower of Terror, I asked her what she’s currently working on. She’s very smart, as can be evidenced by her BS/MS in Computer Science/Mathematics from MIT, but she asked me if I “know about spherical harmonics.” Asking this of an aspiring quantum mechanic is like asking an auto mechanic if they know how to use a monkey wrench. She didn’t know what she was getting herself into!

IQIM, LIGO, Disney

Along with this spherical harmonics conversation, I had a few other incidents last week that hammered home the importance of choosing a convenient basis when solving a scientific problem. First, my girlfriend works on LIGO and she’s currently writing her thesis. LIGO is a huge collaboration involving hundreds of scientists, and naturally, nobody there knows the detailed inner-workings of every subsystem. However, when it comes to writing the overview section of one’s thesis, you need to at least make a good faith attempt to understand the whole behemoth. Anyway, my girlfriend recently asked if I know how the wavelet transform works. This is another example of a convenient basis, one that is particularly suited for analyzing abrupt changes, such as detecting the gravitational waves that would be emitted during the final few seconds of two black holes merging (ring-down). Finally, for the past couple weeks, I’ve been trying to understand entanglement entropy in quantum field theories. Most of the calculations that can be carried out explicitly are for the special subclass of quantum field theories called “conformal field theories,” which in two dimensions have a very convenient ‘basis’, the Virasoro algebra.
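To see why wavelets suit abrupt changes, here is a minimal sketch of one level of the Haar wavelet transform (a toy example of mine, far simpler than anything LIGO actually uses):

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages plus
    pairwise differences (the 'detail' coefficients)."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, dets

# A flat signal with one abrupt jump between samples 8 and 9
signal = [1.0] * 9 + [6.0] * 7

avgs, dets = haar_step(signal)
# The details vanish where the signal is smooth and spike at the jump:
# unlike a Fourier mode, a wavelet localizes an abrupt change in time.
jump_pair = max(range(len(dets)), key=lambda i: abs(dets[i]))
```

Every detail coefficient is zero except the one straddling the jump, which is why a wavelet basis compresses, and flags, sudden events so well.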

So why does a Disney animator care about spherical harmonics? It turns out that every frame that goes into one of Disney’s movies needs to be digitally rendered using a powerful computing cluster. The animated film industry has traded the painstaking process of hand-animators drawing every single frame for the almost equally time-consuming process of computer clusters generating every frame. It doesn’t look like strong AI will be available in our immediate future, and in the meantime, humans are still much better than computers at detecting patterns and making intuitive judgements about the ‘physical correctness of an image.’ One of the primary advantages of computer animation is that an animator shouldn’t need to shade in every pixel of every frame — some of this burden should fall on computers. Let’s imagine a thought experiment. An animator wants to get the lighting correct for a nighttime indoor shot. They should be able to simply place the moon somewhere out of the shot, so that its glow can penetrate through the windows. They should also be able to choose from a drop down menu and tell the computer that a hand drawn lightbulb is a ‘light source.’ The computer should then figure out how to make all of the shadows and brightness appear physically correct. Another example of a hard problem is that an animator should be able to draw a character, then tell the computer that the hair they drew is ‘hair’, so that as the character moves through scenes, the physics of the hair makes sense. Programming computers to do these things autonomously is harder than it sounds.

In the lighting example, imagine you want to get the lighting correct in a forest shot with complicated pine trees and leaf structures. The computer would need to do the ray-tracing for all of the photons emanating from the different light sources, and then the second-order effects as these photons reflect, and then third-order effects, etc. It’s a tall order to make the scene look accurate to the human eyeball/brain. Instead of doing all of this ray-tracing, it’s helpful to choose a convenient basis in order to dramatically speed up the processing. Instead of the complicated forest example, let’s imagine you are working with a tree from Super Mario Bros. Imagine drawing a sphere somewhere in the middle of this tree and then defining a ‘height function’, which outputs the ‘elevation’ of the tree foliage over each point on the sphere. I tried to use suggestive language, so that you’d draw an analogy to thinking of Earth’s ‘height function’ as the elevation of mountains and the depths of trenches over the sphere, with sea-level as a baseline. An example of how you could digitize this problem for a tree or for the earth is by breaking up the sphere into a certain number of pixels, maybe one per square meter for the earth (5*10^14 square meters gives approximately 2^49 pixels), and then associating an integer height value between [-2^15,2^15] to each pixel. This would effectively digitize the height map of the earth, keeping track of the elevation to approximately the meter level. But this leaves us with a huge amount of information that we need to store, and then process. We’d have to store a 16-bit height value for each pixel, giving us approximately 2^49*2^4 = 2^53 bits, or about a petabyte, to keep track of. And this is for an easy static problem with only meter resolution! We can store this information much more efficiently using spherical harmonics.
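The back-of-the-envelope storage estimate can be checked in a few lines of Python (same assumed numbers as above: one pixel per square meter, 16 bits per height value):

```python
import math

# Assumed numbers from the estimate above
earth_area_m2 = 5 * 10**14        # Earth's surface area, one pixel per m^2
bits_per_sample = 16              # heights stored as integers in [-2^15, 2^15)

n_pixels = earth_area_m2
total_bits = n_pixels * bits_per_sample

log2_pixels = math.log2(n_pixels)      # ~48.8, i.e. roughly 2^49 pixels
log2_bits = math.log2(total_bits)      # ~52.8, i.e. roughly 2^53 bits
petabytes = total_bits / 8 / 2**50     # just under a pebibyte
```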


There are many ways to think about spherical harmonics. Basically, they’re functions which map points on the sphere to real numbers Y_l^m: (\theta,\phi) \mapsto Y_l^m(\theta,\phi)\in\mathbb{R}, such that they satisfy a few special properties. They are orthogonal, meaning that if you multiply two different spherical harmonics together and then integrate over the sphere, then you get zero. If you square one of the functions and then integrate over the sphere, you get a finite, nonzero value. They also span the space of all height functions that one could define over the sphere. This means that for a planet with an arbitrarily complicated topography, you would be able to find some weighted combination of different spherical harmonics which perfectly describes that planet’s topography. These are the key properties which make a set of functions a basis: they span and are orthogonal (this is only a heuristic). There is also a natural way to think about the light that hits the tree. We can use the same sphere and simply calculate the light rays as they would hit the ideal sphere. With these two different ‘height functions’, it’s easy to calculate the shadows and brightness inside the tree. You simply convolve the two functions, which is a fast operation on a computer. It also means that if the breeze slightly changes the shape of the tree, or if the sun moves a little bit, then it’s very easy to update the shading. Implicit in what I just said, using spherical harmonics allows us to efficiently store this height map. I haven’t calculated this on a computer, but it doesn’t seem totally crazy to think that we’d be able to store the topography of the earth to a reasonable accuracy with 100 nonzero coefficients of the spherical harmonics at 64 bits of precision: 2^7*2^6 = 2^13 << 2^53. Where do these cost savings come from? They come from the fact that the spherical harmonics are a convenient basis, which naturally encodes the types of correlations we see in Earth’s topography — if you’re standing at an elevation of 2000m, the area within ten meters is probably at a similar elevation. Cliffs are what break this basis — but they are what the wavelet basis was designed to handle.
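The orthogonality just described is easy to check numerically for the m = 0 spherical harmonics, which are proportional to the Legendre polynomials P_l(cos θ). A toy sketch of mine, not production rendering code:

```python
def legendre(l, x):
    """P_l(x) via the Bonnet recurrence; the m = 0 spherical harmonic
    Y_l^0 is proportional to P_l(cos(theta))."""
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for n in range(1, l):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def inner(l1, l2, steps=4000):
    """Midpoint-rule integral of P_l1 * P_l2 over [-1, 1]."""
    dx = 2.0 / steps
    total = 0.0
    for k in range(steps):
        x = -1.0 + (k + 0.5) * dx
        total += legendre(l1, x) * legendre(l2, x) * dx
    return total

orth = inner(2, 3)    # distinct l: the integral vanishes (orthogonality)
norm = inner(3, 3)    # same l: finite, nonzero norm, equal to 2/(2l+1) = 2/7
```

Distinct-l products integrate to zero while each function has a finite norm, which is exactly what lets you read off expansion coefficients independently, one integral per basis function.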

I’ve only described a couple of bases in this post and I’ve neglected to mention some of the most famous examples! This includes the Fourier basis, which was designed to encode periodic signals, such as music and radio waves. I also have not gone into any detail about the Virasoro algebra, which I mentioned at the beginning of this post and have been using heavily for the past few weeks. For the sake of diversity, I’ll spend a few sentences whetting your appetite. Complex analysis is primarily the study of analytic functions. In two dimensions, these analytic functions “preserve angles.” This means that if you have two curves which intersect at a point with angle \theta, then after using an analytic function to map these curves to their image, also in the complex plane, the angle between the curves will still be \theta. An especially convenient basis for the analytic functions in two dimensions (\{f: \mathbb{C} \to \mathbb{C}\}, where f(z) = \sum_{n=0}^{\infty} a_nz^n) is given by the set of functions \{l_n = -z^{n+1}\partial_z\}. As always, I’m not being exactly precise, but this is a ‘basis’ because we can encode all the information describing an infinitesimal two-dimensional angle-preserving map using these elements. It turns out to have incredibly special properties, including that its quantum cousin yields something called the “central charge” which has deep ramifications in physics, such as being related to the c-theorem. Conformal field theories are fascinating because they describe the physics of phase transitions. Having a convenient basis in two dimensions is a large part of why we’ve been able to make progress in our understanding of two-dimensional phase transitions (more important is that the 2d conformal symmetry group is infinite-dimensional, but that’s outside the scope of this post). Convenient bases are also important for detecting gravitational waves, making incredible movies and striking up nerdy conversations in long lines at Disneyland!


Monopoles passing through Flatland!

Like many mathematically inclined teenagers, I was charmed when I first read the book Flatland by Edwin Abbott Abbott.* It’s a story about a Sphere who visits a two-dimensional world and tries to awaken its inhabitants to the existence of a third dimension. As perceived by Flatlanders, the Sphere is a circle which appears as a point, grows to maximum size, then shrinks and disappears.

My memories of Flatland were aroused as I read a delightful recent paper by Max Metlitski, Charlie Kane, and Matthew Fisher about magnetic monopoles and three-dimensional bosonic topological insulators. To explain why, I’ll need to recall a few elements of the theory of monopoles and of topological insulators, before returning to the connection between the two and why that reminds me of Flatland.

Flatlanders, confined to the two-dimensional surface of a topological insulator, are convinced by a magnetic monopole that a third dimension must exist.

Monopoles

Paul Dirac was no ordinary genius. Aside from formulating relativistic electron theory and predicting the existence of antimatter, Dirac launched the quantum theory of magnetic monopoles in a famous 1931 paper. Dirac envisioned a magnetic monopole as a semi-infinitely long, infinitesimally thin string of magnetic flux, such that the end of the string, where the flux spills out, seems to be a magnetic charge. For this picture to make sense, the string should be invisible. Dirac pointed out that an electron with electric charge e, transported around a string carrying flux \Phi, could detect the string (via what later came to be called the Aharonov-Bohm effect) unless the flux is an integer multiple of 2\pi\hbar /e, where \hbar is the reduced Planck constant. Conversely, in order for the string to be invisible, if a magnetic monopole exists with magnetic charge g_D = 2\pi\hbar /e, then all electric charges must be integer multiples of e. Thus the existence of magnetic monopoles (which have never been observed) could explain quantization of electric charge (which has been observed).
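Dirac's invisibility condition can be verified mechanically: the Aharonov-Bohm phase e^{ie\Phi/\hbar} equals 1 exactly when the flux is an integer multiple of 2\pi\hbar/e. A toy check, in units where \hbar = e = 1:

```python
import cmath
import math

hbar = 1.0
e = 1.0
g_D = 2 * math.pi * hbar / e      # Dirac's minimal magnetic charge

def ab_phase(charge, flux):
    """Aharonov-Bohm phase picked up by a charge encircling a flux string."""
    return cmath.exp(1j * charge * flux / hbar)

invisible = ab_phase(e, g_D)        # one Dirac quantum: phase 1, undetectable
visible = ab_phase(e, 0.5 * g_D)    # half a quantum: phase -1, detectable
```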

Captivated by the beauty of his own proposal, Dirac concluded his paper by remarking, “One would be surprised if Nature had made no use of it.”

Our understanding of quantized magnetic monopoles advanced again in 1979 when another extraordinary physicist, Edward Witten, discussed a generalization of Dirac’s quantization condition. Witten noted that the Lagrange density of electrodynamics could contain a term of the form

\frac{\theta e^2\hbar}{4\pi^2}~\vec{E}\cdot\vec{B},

where \vec{E} is the electric field and \vec{B} is the magnetic field. This “\theta term” may also be expressed as

\frac{\theta e^2\hbar}{8\pi^2}~ \partial^\mu\left(\epsilon_{\mu\nu\lambda\sigma}A^\nu\partial^\lambda A^\sigma \right),

where A is the vector potential, and hence is a total derivative which makes no contribution to the classical field equations of electrodynamics. But Witten realized that it can have important consequences for the quantum properties of magnetic monopoles. Specifically, the \theta term modifies the field momentum conjugate to the vector potential, which becomes

\vec{E}+\frac{\theta e^2\hbar}{4\pi^2}\vec{B}.

Because the Gauss law condition satisfied by physical quantum states is altered, for a monopole with magnetic charge m g_D , where g_D is Dirac’s minimal charge 2\pi\hbar /e and m is an integer, the allowed values of the electric charge become

q = e\left( n - \frac{\theta m}{2\pi}\right),

where n is an integer. This spectrum of allowed charges remains invariant if \theta advances by 2\pi, suggesting that the parameter \theta is actually an angular variable with period 2\pi. This periodicity of \theta can be readily verified in a theory admitting fermions with the minimal charge e. But if the charged particles are bosons then \theta turns out to be a periodic variable with period 4\pi instead.
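The claimed 2\pi periodicity is a quick sanity check: shifting \theta by 2\pi relabels n \to n + m in the charge formula but leaves the set of allowed charges unchanged. A toy Python check (finite window of n values, so we compare up to the truncated endpoints):

```python
import math

e = 1.0  # unit of electric charge

def charge_spectrum(theta, m, n_values=range(-50, 51)):
    """Allowed charges q = e*(n - theta*m/(2*pi)) for a monopole of
    magnetic charge m*g_D -- the Witten effect."""
    return [round(e * (n - theta * m / (2 * math.pi)), 6) for n in n_values]

theta, m = 1.234, 1
before = charge_spectrum(theta, m)
after = charge_spectrum(theta + 2 * math.pi, m)
# The same charges reappear, shifted by m slots in the label n
```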

That \theta has a different period for a bosonic theory than a fermionic one has an interesting interpretation. As Goldhaber noticed in 1976, dyons carrying both magnetic and electric charge can exhibit statistical transmutation. That is, in a purely bosonic theory, a dyon with magnetic charge g_D= 2\pi\hbar/e and electric charge ne is a fermion if n is an odd integer — when two dyons are exchanged, transport of each dyon’s electric charge in the magnetic field of the other dyon induces a sign change in the wave function. In a fermionic theory the story is different; now we can think of the dyon as a fermionic electric charge bound to a bosonic monopole. There are two canceling contributions to the exchange phase of the dyon, which is therefore a boson for any integer value of n, whether even or odd.

As \theta smoothly increases from 0 to 2\pi, the statistics (whether bosonic or fermionic) of a dyon remains fixed even as the dyon’s electric charge increases by e. For the bosonic theory with \theta = 2\pi, then, dyons with magnetic charge g_D and electric charge ne are bosons for n odd and fermions for n even, the opposite of what happens when \theta=0. For the bosonic theory, unlike the fermionic theory, we need to increase \theta by 4\pi for the physics of dyons to be fully invariant.

In 1979 Ed Witten was a postdoc at Harvard, where I was a student, though he was visiting CERN for the summer when he wrote his paper about the \theta-dependent monopole charge. I always read Ed’s papers carefully, but I gave special scrutiny to this one because magnetic monopoles were a pet interest of mine. At the time, I wondered whether the Witten effect might clarify how to realize the \theta parameter in a lattice gauge theory. But it certainly did not occur to me that the \theta-dependent electric charge of a magnetic monopole could have important implications for quantum condensed matter physics. Theoretical breakthroughs often have unexpected consequences, which may take decades to emerge.

Symmetry-protected topological phases

Okay, now let’s talk about topological insulators, a very hot topic in condensed matter physics these days. Actually, a topological insulator is a particular instance of a more general concept called a symmetry-protected topological phase of matter (or SPT phase). Consider a d-dimensional hunk of material with a (d-1)-dimensional boundary. If the material is in an SPT phase, then the physics of the d-dimensional bulk is boring — it’s just an insulator with an energy gap, admitting no low-energy propagating excitations. But the physics of the (d-1)-dimensional edge is exotic and exciting — for example the edge might support “gapless” excitations of arbitrarily low energy which can conduct electricity. The exotica exhibited by the edge is a consequence of a symmetry, and is destroyed if the symmetry is broken either explicitly or spontaneously; that is why we say the phase is “symmetry protected.”

The low-energy edge excitations can be described by a (d-1)-dimensional effective field theory. But for a typical SPT phase, this effective field theory is what we call anomalous, which means that for one reason or another the theory does not really make sense. The anomaly tells us something interesting and important, namely that the (d-1)-dimensional theory cannot be really, truly (d-1) dimensional; it can arise only at the edge of a higher-dimensional system.

This phenomenon, in which the edge does not make sense by itself without the bulk, is nicely illustrated by the integer quantum Hall effect, which occurs in a two-dimensional electron system in a high magnetic field and at low temperature, if the sample is sufficiently clean so that the electrons are highly mobile and rarely scattered by impurities. In this case the relevant symmetry is electron number, or equivalently the electric charge. At the one-dimensional edge of a two-dimensional quantum Hall sample, charge carriers move in only one direction — to the right, say, but not to the left. A theory with such chiral electric charges does not really make sense. One problem is that electric charge is not conserved — an electric field along the edge causes charge to be locally created, which makes the theory inconsistent.

The way the theory resolves this conundrum is quite remarkable. A two-dimensional strip of quantum Hall fluid has two edges, one at the top, the other at the bottom. While the top edge has only right-moving excitations, the bottom edge has only left-moving excitations. When electric charge appears on the top edge, it is simultaneously removed from the bottom edge. Rather miraculously, charge can be conveyed across the bulk from one edge to the other, even though the bulk does not have any low-energy excitations at all.

I first learned about this interplay of edge and bulk physics from a beautiful 1985 paper by Curt Callan and Jeff Harvey. They explained very lucidly how an edge theory with an anomaly and a bulk theory with an anomaly can fit together, with each solving the other’s problems. Curiously, the authors did not mention any connection with the quantum Hall effect, which had been discovered five years earlier, and I didn’t appreciate the connection myself until years later.

Topological insulators

In the case of topological insulators, the symmetries which protect the gapless edge excitations are time-reversal invariance and conserved particle number, i.e. U(1) symmetry. Though the particle number might not be coupled to an electromagnetic gauge field, it is instructive for the purpose of understanding the properties of the symmetry-protected phase to imagine that the U(1) symmetry is gauged, and then to consider the potential anomalies that could afflict this gauge symmetry. The first topological insulators conceived by theorists were envisioned as systems of non-interacting electrons whose properties were relatively easy to understand using band theory. But it was not so clear at first how interactions among the electrons might alter their exotic behavior. The wonderful thing about anomalies is that they are robust with respect to interactions. In many cases we can infer the features of anomalies by studying a theory of non-interacting particles, assured that these features survive even when the particles interact.

As many previous authors have, Metlitski et al. argue that when we couple the conserved particle number to a U(1) gauge field, the effective theory describing the bulk physics of a topological insulator in three dimensions may contain a \theta term. But wait … since the electric field is even under time reversal and the magnetic field is odd, the \theta term is T-odd; under T, \theta is mapped to -\theta, so T seems to be violated if \theta has any nonzero value. Except … we have to remember that \theta is really a periodic variable. For a fermionic topological insulator the period is 2\pi; therefore the theory with \theta = \pi is time-reversal invariant; \theta = \pi maps to \theta = -\pi under T, which is equivalent to a rotation of \theta by 2\pi. For a bosonic topological insulator the period is 4\pi, which means that \theta = 2\pi is the nontrivial T-invariant value.

If we say that a “trivial” insulator (e.g., the vacuum) has \theta = 0, then we may say that a bulk material with \theta = \pi (fermionic case) or \theta = 2\pi (bosonic case) is a “nontrivial” (a.k.a. topological) insulator. At the edge of the sample, where bulk material meets vacuum, \theta must rotate suddenly by \pi (fermions) or by 2\pi (bosons). The exotic edge physics is a consequence of this abrupt change in \theta.
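For readers who want the formulas behind this discussion, here are the standard expressions (my summary, in units \hbar = c = 1, not quoted from the post): the \theta term added to the bulk Maxwell action, and the Witten effect it induces on a magnetic monopole's electric charge.

```latex
% The theta term in the bulk effective action:
S_\theta \;=\; \frac{\theta e^2}{4\pi^2} \int d^3x\, dt\; \vec{E}\cdot\vec{B}
% Under T, E is even and B is odd, so theta -> -theta; T invariance holds
% only when theta equals 0 or half its period.

% Witten effect: in a region with angle theta, a minimal magnetic monopole
% carries electric charge
q \;=\; e\left(n - \frac{\theta}{2\pi}\right), \qquad n \in \mathbb{Z}
% For theta = \pm\pi this gives half-integer multiples of e, such as the
% charge e/2 that so puzzles the Flatlanders below.
```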

Monopoles in Flatland

To understand the edge physics, and in particular to grasp how fermionic and bosonic topological insulators differ, Metlitski et al. invite us to imagine a magnetic monopole with magnetic charge g_D passing through the boundary between the bulk and the surrounding vacuum. To the Flatlanders confined to the surface of the bulk sample, the passing monopole induces a sudden change, by a single flux quantum g_D, in the magnetic flux through the surface; such an event could arise from quantum tunneling. What do the Flatlanders see?

In a fermionic topological insulator, there is a monopole that carries charge e/2 when inside the sample (where \theta=-\pi) and charge 0 when outside (where \theta=0). Since electric charge is surely conserved in the full three-dimensional theory, the change in the monopole’s charge must be compensated by a corresponding change in the charge residing on the surface. Flatlanders are puzzled to witness a spontaneously arising excitation with charge e/2. This is an anomaly — electric charge conservation is violated, which can only make sense if Flatlanders are confined to a surface in a higher-dimensional world. Though unable to escape their surface world, the Flatlanders can be convinced by the Monopole that an extra dimension must exist.

In a bosonic topological insulator, the story is somewhat different: there is a monopole that carries electric charge 0 when inside the sample (where \theta=-2\pi) and charge -e when outside (where \theta=0). In this case, though, there are bosonic charge-e particles living on the surface. A monopole can pick up a charged particle as it passes through Flatland, so that its charge is 0 both inside the bulk sample and outside in the vacuum. Flatlanders are happy — electric charge is conserved!

But hold on … there’s still something wrong. Inside the bulk (where \theta = -2\pi) a monopole with electric charge 0 is a fermion, while outside in the vacuum (where \theta = 0) it is a boson. In the three-dimensional theory it is not possible for any local process to create an isolated fermion, so if the fermionic monopole becomes a bosonic monopole as it passes through Flatland, it must leave a fermion behind. Flatlanders are puzzled to witness a spontaneously arising fermion. This is an anomaly — conservation of fermionic parity is violated, which can only make sense if Flatlanders are confined to a surface in a higher-dimensional world. Once again, the clever residents of Flatland learn from the Monopole about an extra spatial dimension, without ever venturing outside their two-dimensional home.

Topological order gets edgy

This post is already pretty long and I should wrap it up. Before concluding I’ll remark that the theory of symmetry-protected phases has been developing rapidly in recent months.

In particular, a new idea, introduced last fall by Vishwanath and Senthil, has been attracting increasing attention. While in most previously studied SPT phases the unbroken symmetry protects gapless excitations confined to the edge of the sample, Vishwanath and Senthil pointed out another possibility — a gapped edge exhibiting topological order. The surface can support anyons with exotic braiding statistics.

Here, too, anomalies are central to the discussion. While anyons in two-dimensional media are already a much-studied subject, the anyon models that can be realized at the edges of three-dimensional SPT phases differ from anyon models realized in really, truly two-dimensional systems. What’s new are not the braiding properties of the anyons, but rather how the anyons transform under the symmetry. Flatlanders who study the symmetry realization in their gapped two-dimensional world should be able to infer the existence of the three-dimensional bulk.

The pace of discovery picked up this month when four papers appeared simultaneously on the preprint arXiv, by Metlitski-Kane-Fisher, Chen-Fidkowski-Vishwanath, Bonderson-Nayak-Qi, and Wang-Potter-Senthil, all proposing and analyzing models of SPT phases with gapped edges. It remains to be seen, though, whether this physics will be realized in actual materials.

Are we on the edge?

In Flatland, our two-dimensional friend, finally able to perceive the third dimension thanks to the Sphere’s insistent tutelage, begs to enter a world of still higher dimensions, “where thine own intestines, and those of kindred Spheres, will lie exposed to … view.” The Sphere is baffled by the Flatlander’s request, protesting, “There is no such land. The very idea of it is utterly inconceivable.”

Let’s not be so dogmatic as the Sphere. The lessons learned from the quantum Hall effect and the topological insulator have prepared us to take the next step, envisioning our own three-dimensional world as the edge of a higher-dimensional bulk system. The existence of an unseen bulk may be inferred in the future by us edgelings, if experimental explorations of our three-dimensional effective theory reveal anomalies begging for an explanation.

Perhaps we are on the edge … of a great discovery. At least it’s conceivable.

*Disclaimer: The gender politics of Flatland, to put it mildly, is outdated and offensive. I don’t wish to endorse the idea that women are one-dimensional! I included the reference to Flatland because the imagery of two-dimensional beings struggling to imagine the third dimension is a perfect fit to the scientific content of this post.

This single-shot life

The night before defending my Masters thesis, I ran out of shampoo. I ran out late enough that I wouldn’t defend from beneath a mop like Jack Sparrow’s; but, belonging to the Luxuriant Flowing-Hair Club for Scientists (technically, if not officially), I’d have to visit Shopper’s Drug Mart.

Image

The author’s unofficially Luxuriant Flowing Scientist Hair

Before visiting Shopper’s Drug Mart, I had to defend my thesis. The thesis, as explained elsewhere, concerns epsilons, the mathematical equivalents of seed pearls. The thesis also concerns single-shot information theory.

Ordinary information theory emerged in 1948, midwifed by American engineer Claude E. Shannon. Shannon calculated how efficiently we can pack information into symbols when encoding long messages. Consider encoding this article in the fewest possible symbols. Because “the” appears many times, you might represent “the” by one symbol. Longer strings of symbols suit misfits like “luxuriant” and “oobleck.” The longer the article, the fewer encoding symbols you need per encoded word. The encoding-to-encoded ratio decreases, toward a number called the Shannon entropy, as the message grows infinitely long.
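The trade-off Shannon quantified can be made concrete in a few lines of Python. The sketch below (mine, not the author's) computes the empirical Shannon entropy of a word distribution, which lower-bounds the average number of bits per encoded word for any lossless code.

```python
import math
from collections import Counter

def shannon_entropy(words):
    """Empirical Shannon entropy, in bits per word, of a word sequence."""
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Frequent words like "the" pull the entropy down: no lossless code can
# use fewer bits per word, on average, than the entropy of the distribution.
sample = "the cat saw the dog and the dog saw the cat".split()
print(f"{shannon_entropy(sample):.3f} bits per word")
```

Huffman or arithmetic coding approaches this bound as the message grows, which is exactly the long-message limit Shannon analyzed.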

Claude Shannon

We don’t send infinitely long messages, excepting teenagers during phone conversations. How efficiently can we encode just one article or sentence? The answer involves single-shot information theory, or—to those stuffing long messages into the shortest possible emails to busy colleagues—“1-shot info.” Pioneered within the past few years, single-shot theory concerns short messages and single trials, the Twitter to Shannon’s epic. Like articles, quantum states can form messages. Hence single-shot theory blended with quantum information in my thesis.


We are all Wilsonians now

Ken Wilson


Ken Wilson passed away on June 15 at age 77. He changed how we think about physics.

Renormalization theory, first formulated systematically by Freeman Dyson in 1949, cured the flaws of quantum electrodynamics and turned it into a precise computational tool. But the subject seemed magical and mysterious. Many physicists, Dirac prominently among them, questioned whether renormalization rests on a sound foundation.

Wilson changed that.

The renormalization group concept arose in an extraordinary paper by Gell-Mann and Low in 1954. It was embraced by Soviet physicists like Bogoliubov and Landau, and invoked by Landau to challenge the consistency of quantum electrodynamics. But it was an abstruse and inaccessible topic, as is well illustrated by the baffling discussion at the very end of the two-volume textbook by Bjorken and Drell.

Wilson changed that, too.

Ken Wilson turned renormalization upside down. Dyson and others had worried about the “ultraviolet divergences” occurring in Feynman diagrams. They introduced an artificial cutoff on integrations over the momenta of virtual particles, then tried to show that all the dependence on the cutoff can be eliminated by expressing the results of computations in terms of experimentally accessible quantities. It required great combinatoric agility to show this trick works in electrodynamics. In other theories, notably including general relativity, it doesn’t work.

Wilson adopted an alternative viewpoint. Take the short-distance cutoff seriously, he said, regarding it as part of the physical formulation of the field theory. Now ask what physics looks like at distances much larger than the cutoff. Wilson imagined letting the short-distance cutoff grow, while simultaneously adjusting the theory to preserve its low-energy predictions. This procedure sounds complicated, but Wilson discovered something wonderful — for the purpose of computing low-energy processes the theory becomes remarkably simple, completely characterized by just a few (renormalized) parameters. One recovers Dyson’s results plus much more, while also acquiring a rich and visually arresting physical picture of what is going on.
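Wilson's procedure of thinning out short-distance degrees of freedom while adjusting the remaining couplings can be illustrated with a textbook example (not one from this post): decimating the one-dimensional Ising chain. Summing out every other spin leaves the survivors coupled with a renormalized strength K' = ln(cosh 2K)/2.

```python
import math

def decimate(K):
    """One Wilson-style RG step for the 1D Ising chain: summing out every
    other spin renormalizes the coupling to K' = ln(cosh 2K) / 2."""
    return 0.5 * math.log(math.cosh(2.0 * K))

# Any finite coupling flows toward K = 0 under repeated decimation,
# reflecting the absence of a finite-temperature phase transition in 1D.
K = 2.0
for step in range(8):
    K = decimate(K)
    print(f"step {step + 1}: K = {K:.6f}")
```

The flow toward a fixed point, here the trivial one at K = 0, is the "remarkably simple" low-energy characterization Wilson discovered; richer theories flow toward fixed points with a few relevant couplings, the renormalized parameters.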

When I started graduate school in 1975, Wilson, not yet 40, was already a legend. Even Sidney Coleman, for me the paragon of razor sharp intellect, seemed to regard Wilson with awe. (They had been contemporaries at Caltech, both students of Murray Gell-Mann.) It enhanced the legend that Wilson had been notoriously slow to publish. He spent years pondering the foundations of quantum field theory before finally unleashing a torrent of revolutionary papers in the early 70s. Cornell had the wisdom to grant tenure despite Wilson’s unusually low productivity during the 60s.

As a student, I spent countless hours struggling through Wilson’s great papers, some of which were quite difficult. One introduced me to the operator product expansion, which became a workhorse of high-energy scattering theory and the foundation of conformal field theory. Another considered all the possible ways that renormalization group fixed points could control the high-energy behavior of the strong interactions. Conspicuously missing from the discussion was what turned out to be the correct idea — asymptotic freedom. Wilson had not overlooked this possibility; instead he “proved” it to be impossible. The proof contains a subtle error. Wilson analyzed charge renormalization invoking both Lorentz covariance and positivity of the Hilbert space metric, forgetting that gauge theories admit no gauge choice with both properties. Even Ken Wilson made mistakes.

Wilson also formulated the strong-coupling expansion of lattice gauge theory, and soon after pioneered the Euclidean Monte Carlo method for computing the quantitative non-perturbative predictions of quantum chromodynamics, which remains today an extremely active and successful program. But of the papers by Wilson I read while in graduate school, the most exciting by far was this one about the renormalization group. Toward the end of the paper Wilson discussed how to formulate the notion of the “continuum limit” of a field theory with a cutoff. Removing the short-distance cutoff is equivalent to taking the limit in which the correlation length (the inverse of the renormalized mass) is infinitely long compared to the cutoff — the continuum limit is a second-order phase transition. Wilson had finally found the right answer to the decades-old question, “What is quantum field theory?” And after reading his paper, I knew the answer, too! This Wilsonian viewpoint led to further deep insights mentioned in the paper, for example that an interacting self-coupled scalar field theory is unlikely to exist (i.e. have a continuum limit) in four spacetime dimensions.

Wilson’s mastery of quantum field theory led him to another crucial insight in the 1970s which has profoundly influenced physics in the decades since — he denigrated elementary scalar fields as unnatural. I learned about this powerful idea from an inspiring 1979 paper not by Wilson, but by Lenny Susskind. That paper includes a telltale acknowledgment: “I would like to thank K. Wilson for explaining the reasons why scalar fields require unnatural adjustments of bare constants.”

Susskind, channeling Wilson, clearly explains a glaring flaw in the standard model of particle physics — ensuring that the Higgs boson mass is much lighter than the Planck (i.e., cutoff) scale requires an exquisitely careful tuning of the theory’s bare parameters. Susskind proposed to banish the Higgs boson in favor of Technicolor, a new strong interaction responsible for breaking the electroweak gauge symmetry, an idea I found compelling at the time. Technicolor fell into disfavor because it turned out to be hard to build fully realistic models, but Wilson’s complaint about elementary scalars continued to drive the quest for new physics beyond the standard model, and in particular bolstered the hope that low-energy supersymmetry (which eases the fine tuning problem) will be discovered at the Large Hadron Collider. Both dark energy (another fine tuning problem) and the absence so far of new physics beyond the Higgs boson at the LHC are prompting some soul searching about whether naturalness is really a reliable criterion for evaluating success in physical theories. Could Wilson have steered us wrong?

Wilson’s great legacy is that we now regard nearly every quantum field theory as an effective field theory. We don’t demand or expect that the theory will continue working at arbitrarily short distances. At some stage it will break down and be replaced by a more fundamental description. This viewpoint is now so deeply ingrained in how we do physics that today’s students may be surprised to hear it was not always so. More than anyone else, we have Ken Wilson to thank for this indispensable wisdom. Few ideas have changed physics so much.

Don’t sweat the epsilons…and it’s all epsilons

I’d come to Barnes and Noble to study and to submerse in the bustle. I needed reminding that humans other than those on my history exam existed. When I ran out of tea and of names to review, I stood, stretched, and browsed the shelves. A blue-bound book caught my eye: Don’t Sweat the Small Stuff…and it’s all small stuff.

Richard Carlson wrote that book for people like me. We have packing lists, grocery lists, and laundry lists of to-do lists. We transcribe lectures. We try to rederive equations that we should just use. Call us “detail-oriented”; call us “conscientious”; we’re boring as toast, and we have earlier bedtimes. When urged to relax, we try. We might not succeed, but we try hard.

For example, I do physics instead of math. Mathematicians agonize over what-ifs: “What if this bit of the fraction reaches one while that bit goes negative and the other goes loop-the-loop? We’d be dividing by zero!” Divisions by zero atom-bomb calculations. Since dividing by a tiny number amounts to multiplying by a large number, dividing by zero amounts to multiplying by infinity. While mathematicians chew their nails over infinities, physicists often assume we needn’t. We use math to represent physical systems like pendulums and ponytails.1 Ponytails have properties, like lacking infinite masses, that don’t smack of the apocalypse. Since those properties don’t, neither does the math that represents those properties. To justify assumptions that our math “behaves nicely,” we use the jargon, “the field goes to zero at the boundary,” “the coupling’s renormalized,” and “it worked last time.”

I tried not to sweat the small stuff. I tried to shrug off the question marks at calculations’ edges. Sometimes, I succeeded. Then I began a Masters thesis about epsilons.

Self-help for calculus addicts.

In many physics problems, the Greek letter epsilon (ε) means “-ish.” The butcher sold you epsilon-close to a pound of beef? He tipped the scale a tad in your favor. Your temperature dropped from 103 to epsilon-close to normal? Stay in bed this afternoon, and you should recover by tomorrow.

For half a year, I’ve used epsilons to describe transformations between quantum states. To visualize the transformations, say you have a fistful of coins. Each coin consists of gold and aluminum. The portion of the coin that’s gold varies from coin to coin. I want a differently-sized fistful of coins, each with a certain gold content. After melting down your fistful, can you cast the fistful I want? Can you cast a fistful that’s epsilon-close to the fistful I want? I calculated answers to those questions, after substituting “quantum states” for “fistfuls” and a property called “purity” for “gold.”2

You might expect epsilon-close conversions to require less effort than exact conversions: Butchers weigh out approximately a pound of beef more quickly than they weigh a pound. But epsilon-close math requires more effort than exact math. Introducing epsilons into calculations, you introduce another number to keep track of. As that number approaches zero, approximate conversions become exact. If that number approaches zero while in a denominator, you atom-bomb calculations with infinities. The infinities remind me of geysers in a water park of quantum theory.

Have you visited a water park where geysers erupt every few minutes? Have you found a geyser head that looks dead, and crouched to check it? Epsilons resemble dead-looking geyser heads. Just as geyser heads rise only inches from the ground, epsilons have values close to zero. Say you’ve divided by epsilon, and you’re lowering its value to naught. Hitch up your swimsuit, lower your head, and squint at the faucet. Farther you crouch, and farther, till SPLAT! Water shoots up your left nostril.

Infinities have been shooting up my left nostril for months.

Rocking back on your heels, you need a towel. Dividing by an epsilon that approaches zero, I need an advisor. An advisor who knows mounds of calculus, who corrects without crushing, and who doesn’t mind my bombarding him with questions once a week. I have one, thank goodness—an advisor, not a towel.3 I wouldn’t trade him for fifty fistfuls of gold coins.

Towel in hand, I tiptoed through the water park of epsilons. I learned how quickly geysers erupt, where they appear, and how to disable some. I learned about smoothed distributions, limits superior, and Asymptotic Equipartition Properties. Though soaked after crossing the park, I survived. I submitted my thesis last week. And I have the right—should I find the chutzpah—to toss off the word “epsilonification” like a spelling-bee champ.

Had I not sweated the epsilons, I wouldn’t have finished the thesis. Should I discard Richard Carlson’s advice? I can’t say, having returned to my history review instead of reading his book. But I don’t view epsilons as troubles to sweat or not. Why not view epsilons as geysers in the water park of quantum theory? Who doesn’t work up a sweat in a park? But I wouldn’t rather leave. And maybe—if enough geysers shoot up our left nostrils—we’ll learn a smidgeon about Old Faithful.

1 I’m not kidding about ponytails.

2 Quantum whizzes: I explored a resource theory like that of pure bipartite entanglement (e.g., http://arxiv.org/abs/quant-ph/9811053). Instead of entanglement or gold, nonuniformity (distance from the maximally mixed state) is a scarce resource. The uniform (maximally mixed state) has no worth, like aluminum. This “resource theory of nonuniformity” models thermodynamic systems whose Hamiltonians are trivial (H = 0).

3 Actually, I have two advisors, and I’m grateful for both. But one helped cure my epsilons. N.B. I have not begun working at Caltech.

Quantum Matter Animated!

by Jorge Cham

What does it mean for something to be Quantum? I have to confess, I don’t know. My Ph.D. was in Robotics and Kinematics, so my neurons are deeply trained to think in terms of classical dynamics. I asked my siblings (two engineers and one architect) what comes to mind when they hear the word Quantum and what they remember from college physics. Here is what they said:

– “Quantum Leap!” (the late 80’s TV show)

– “Quantum of Solace!” (the James Bond movie which, incidentally, was filmed in my home country of Panama, even though the movie was set in Bolivia)

– “I don’t remember anything I learned in college”

– “Light acting as a particle instead of a wave?”

The third answer came from my sister, who went to MIT. The fourth came from my brother, who went to Stanford (+1 point for Stanford!).


I also asked my spouse what comes to mind for her. She said, “Quantum Computing: it’s the next big advance in computers. Transistors the size of atoms.” Clearly, I married someone smarter than me (she also went to Stanford). When I asked if she knew how they worked, she said, “I don’t know how it works.” She also said, “Quantum is related to how time moves more slowly as you approach the speed of light, right?” Nice try, but that’s Relativity (-1 point for Stanford!).

I think the word Quantum has a special power in our collective consciousness. It’s used to convey science-iness, technology, the weirdness of the Physical world. If you Google “Quantum”, most of the top hits are for technology companies that have nothing to do with Quantum Physics (including Quantum Fishing Tackles. I suppose that half the time, you pull up a dead fish).

It’s one of those words that a lot of people have heard of, but very few really understand what it means. Which is why I was excited when Spiros Michalakis and IQIM approached me to produce a series of animations that explore and explain Quantum Information and Matter. Like my previous videos (The Higgs Boson, Dark Matter, Exoplanets), I’d have the chance to interview experts in this field and use their expertise and their voices to learn and to help others learn what amazing things lie just around the corner, beyond our classical understanding of the Universe.


This will be a big Leap for me (I’m trying to avoid the obvious pun), and a journey of exploration. The first installment goes live today, and you can watch it below. Like Schrödinger’s box, I don’t know what we’ll discover with these videos, but I know there are exciting possibilities inside. This is also going to be a BIG challenge. Understanding and putting Quantum concepts in visual form will be hard. I mean, Hair-pulling hard. Fortunately, I’ve discovered there’s a remedy for that.


Watch the first installment of this series:

Jorge Cham is the creator of Piled Higher and Deeper (www.phdcomics.com).

CREDITS:

Featuring: Amir Safavi-Naeini and Oskar Painter http://copilot.caltech.edu/

Produced in Partnership with the Institute for Quantum Information and Matter (http://iqim.caltech.edu) at Caltech with funding provided by the National Science Foundation.

Transcription: Noel Dilworth
Thanks to: Spiros Michalakis, John Preskill and Bert Painter