About preskill

I am a theoretical physicist at Caltech, and the Director of the Institute for Quantum Information and Matter. Follow me on Twitter @preskill.

Wouldn’t you like to know what’s going on in my mind?

I suppose most theoretical physicists who (like me) are comfortably past the age of 60 worry about their susceptibility to “crazy-old-guy syndrome.” (Sorry for the sexism, but all the victims of this malady I know are guys.) It can be sad when a formerly great scientist falls far out of the mainstream and seems to be spouting nonsense.

Matthew Fisher is only 55, but reluctance to be seen as a crazy old guy might partially explain why he has kept pretty quiet about his passionate pursuit of neuroscience over the past three years. That changed two months ago when he posted a paper on the arXiv about Quantum Cognition.

Neuroscience has a very seductive pull, because it is at once very accessible and very inaccessible. While a theoretical physicist might think and write about a brane even without having or seeing a brane, everybody’s got a brain (some scarecrows excepted). On the other hand, while it’s not too hard to write down and study the equations that describe a brane, it is not at all easy to write down the equations for a brain, let alone solve them. The brain is fascinating because we know so little about it. And … how can anyone with a healthy appreciation for Gödel’s Theorem not be intrigued by the very idea of a brain that thinks about itself?

(Almost) everybody’s got a brain.

The idea that quantum effects could have an important role in brain function is not new, but is routinely dismissed as wildly implausible. Matthew Fisher begs to differ. And those who read his paper (as I hope many will) are bound to conclude: This old guy’s not so crazy. He may be onto something. At least he’s raising some very interesting questions.

My appreciation for Matthew and his paper was heightened further this Wednesday, when Matthew stopped by Caltech for a lunch-time seminar and one of my interminable dinner-time group meetings. I don’t know whether my brain is performing quantum information processing (and neither does Matthew), but just the thought that it might be is lighting me up like a zebrafish.

Following Matthew, let’s take a deep breath and ask ourselves: What would need to be true for quantum information processing to be important in the brain? Presumably we would need ways to (1) store quantum information for a long time, (2) transport quantum information, (3) create entanglement, and (4) have entanglement influence the firing of neurons. After a three-year quest, Matthew has interesting things to say about all of these issues. For details, you should read the paper.

Matthew argues that the only plausible repositories for quantum information in the brain are the Phosphorus-31 nuclear spins in phosphate ions. Because these nuclei are spin-1/2, they have no electric quadrupole moments and hence enjoy relatively long coherence times, of order a second. That may not be long enough, but phosphate ions can be bound with calcium ions into objects called Posner clusters, each containing six P-31 nuclei. The phosphorus nuclei in Posner clusters might have coherence times greatly enhanced by motional narrowing, perhaps as long as weeks or even longer.

Where energy is being consumed in a cell, ATP sometimes releases diphosphate ions (what biochemists call pyrophosphate), which are later broken into two separate phosphate ions, each with a single P-31 qubit. Matthew argues that the breakup of the diphosphate, catalyzed by a suitable enzyme, will occur at an enhanced rate when these two P-31 qubits are in a spin singlet rather than a spin triplet. The reason is that the enzyme has to grab hold of the diphosphate molecule and stop its rotation in order to break it apart, which is much easier when the molecule has even rather than odd orbital angular momentum; by Fermi statistics, even orbital angular momentum forces the spin state of the two P-31 nuclei to be antisymmetric, i.e., the singlet. Thus wherever ATP is consumed there is a plentiful source of entangled qubit pairs.
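The symmetry argument can be checked in a few lines. Here is a toy numpy sketch (my own illustration, not code from the paper): under exchange of the two spins, the singlet picks up a minus sign while the triplet does not, which is exactly the antisymmetry that even orbital angular momentum demands.

```python
import numpy as np

# Two-qubit basis ordering: |00>, |01>, |10>, |11>
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)
triplet = np.array([0, 1,  1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)

# The SWAP operator exchanges the two spins.
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# Singlet is antisymmetric (swap eigenvalue -1); triplet is symmetric (+1).
print(np.allclose(swap @ singlet, -singlet))  # True
print(np.allclose(swap @ triplet,  triplet))  # True
```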

If the phosphate molecules remain unbound, this entanglement will decay in about a second, but it is a different story if the phosphate ions group together quickly enough into Posner clusters, allowing the entanglement to survive for a much longer time. If the two members of an entangled qubit pair are snatched up by different Posner clusters, the clusters may then be transported into different cells, distributing the entanglement over relatively long distances.

(a) Two entangled Posner clusters. Each dot is a P-31 nuclear spin, and each dashed line represents a singlet pair. (b) Many entangled Posner clusters. [From Fisher 2015]

What causes a neuron to fire is a complicated story that I won’t attempt to wade into. Suffice it to say that part of the story may involve the chemical binding of a pair of Posner clusters which then melt if the environment is sufficiently acidic, releasing calcium ions and phosphate ions which enhance the firing. The melting rate depends on the spin state of the six P-31 nuclei within the cluster, so that entanglement between clusters in different cells may induce nonlocal correlations among different neurons, which could be quite complex if entanglement is widely distributed.

This scenario raises more questions than it answers, but these are definitely scientific questions inviting further investigation and experimental exploration. One thing that is far from clear at this stage is whether such quantum correlations among neurons (if they exist at all) would be easy to simulate with a classical computer. Even if that turns out to be so, these potential quantum effects involving many neurons could be fabulously interesting. IQIM’s mission is to reach for transformative quantum science, particularly approaches that take advantage of synergies between different fields of study. This topic certainly qualifies.* It’s going to be great fun to see where it leads.

If you are a young and ambitious scientist, you may be contemplating the dilemma: Should I pursue quantum physics or neuroscience? Maybe, just maybe, the right answer is: Both.

*Matthew is the only member of the IQIM faculty who is not a Caltech professor, though he once was.

Beware global search and replace!

I’m old enough to remember when cutting and pasting were really done with scissors and glue (or Scotch tape). When I was a graduate student in the late 1970s, few physicists typed their own papers, and if they did they left gaps in the text, to be filled in later with handwritten equations. The gold standard of technical typing was the IBM Correcting Selectric II typewriter. Among its innovations was the correction ribbon, which allowed one to remove a typo with the touch of a key. But it was especially important for scientists that the Selectric could type mathematical characters, including Greek letters.

IBM Selectric typeballs

It wasn’t easy. Many different typeballs were available, to support various fonts and special characters. Typing a displayed equation or in-line equation usually involved swapping back and forth between typeballs to access all the needed symbols. Most physics research groups had staff who knew how to use the IBM Selectric and spent much of their time typing manuscripts.

Though the IBM Selectric was used by many groups, typewriters have unique personalities, as forensic scientists know. I had a friend who claimed he had learned to recognize telltale differences among documents produced by various IBM Selectric machines. That way, whenever he received a referee report, he could identify its place of origin.

Manuscripts did not evolve through 23 typeset versions in those days, as one of my recent papers did. Editing was arduous and frustrating, particularly for a lowly graduate student like me, who needed to beg Blanche to set aside what she was doing for Steve Weinberg and devote a moment or two to working on my paper.

It was tremendously liberating when I learned to use TeX in 1990 and started typing my own papers. (Not LaTeX in those days, but Plain TeX embellished by a macro for formatting.) That was a technological advance that definitely improved my productivity. An earlier generation had felt the same way about the Xerox machine.

But as I was reminded a few days ago, while technological advances can be empowering, they can also be dangerous when used recklessly. I was editing a very long document, and decided to make a change. I had repeatedly used $x$ to denote an n-bit string, and thought it better to use $\vec x$ instead. I was walking through the paper with the replace button, changing each $x$ to $\vec x$ where the change seemed warranted. But I slipped once, and hit the “Replace All” button instead of “Replace.” My computer curtly informed me that it had made the replacement 1011 times. Oops …

This was a revocable error. There must have been a way to undo it (though it was not immediately obvious how). Or I could have closed the file without saving, losing some recent edits but limiting the damage.

But it was late at night and I was tired. I panicked, immediately saving and LaTeXing the file. It was a mess.

Okay, no problem, all I had to do was replace every $\vec x$ with $x$ and everything would be fine. Except that in the original replacement I had neglected to specify “Match Case.” In 264 places $X$ had become $\vec x$, and the new replacement did not restore the capitalization. It took hours to restore every $X$ by hand, and there are probably a few more that I haven’t noticed yet.
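For the curious, the mishap is easy to reproduce. A toy Python sketch (my reconstruction of the logic, not the actual editor) showing why a case-insensitive "Replace All" can't be undone by a blind reverse replacement:

```python
import re

text = "Let $x$ and $X$ denote strings."

# Case-insensitive "Replace All": both $x$ and $X$ become $\vec x$.
munged = re.sub(r"\$x\$", r"$\\vec x$", text, flags=re.IGNORECASE)

# Blindly reversing the replacement cannot restore the lost capitalization:
undone = munged.replace(r"$\vec x$", "$x$")
print(undone)  # Let $x$ and $x$ denote strings.  -- the $X$ is gone for good
```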

Which brings me to the cautionary tale of one of my former graduate students, Robert Navin. Rob’s thesis had two main topics, scattering off vortices and scattering off monopoles. On the night before the thesis due date, Rob made a horrifying discovery. The crux of his analysis of scattering off vortices concerned the singularity structure of a certain analytic function, and the chapter about vortices made many references to the poles of this function. What Rob realized at this late stage is that these singularities are actually branch points, not poles!

What to do? It’s late and you’re tired and your thesis is due in a few hours. Aha! Global search and replace! Rob replaced every occurrence of “pole” in his thesis by “branch point.” Problem solved.

Except … Rob had momentarily forgotten about that chapter on monopoles. Which, when I read the thesis, had been transformed into a chapter on monobranch points. His committee accepted the thesis, but requested some changes …

Rob Navin no longer does physics, but has been very successful in finance. I’m sure he’s more careful now.

Kitaev, Moore, Read share Dirac Medal!

Since its founding 30 years ago, the Dirac Medal has been one of the most prestigious honors in theoretical physics. Particle theorists and string theorists have claimed most of the medals, but occasionally other fields break through, as when Haldane, Kane, and Zhang shared the 2012 Dirac Medal for their pioneering work on topological insulators. I was excited to learn today that the 2015 Dirac Medal has been awarded to Alexei Kitaev, Greg Moore, and Nick Read “for their interdisciplinary contributions which introduced concepts of conformal field theory and non-abelian quasiparticle statistics in condensed matter systems and applications of these ideas to quantum computation.”

Left to right: Alexei Kitaev, Greg Moore, and Nick Read.

I have written before about the exciting day in April 1997 when Alesha and I met, and I heard for the first time about the thrilling concept of a topological quantum computer. I’ll take the liberty of drawing a quote from that post, which seems particularly relevant today:

Over coffee at the Red Door Cafe that afternoon, we bonded over our shared admiration for a visionary paper by Greg Moore and Nick Read about non-abelian anyons in fractional quantum Hall systems, though neither of us fully understood the paper (and I still don’t). Maybe, we mused together, non-abelian anyons are not just a theorist’s dream … It was the beginning of a beautiful friendship.

As all physics students know, fundamental particles in three spatial dimensions come in two varieties, bosons and fermions, but in two spatial dimensions more exotic possibilities abound, dubbed “anyons” by Wilczek. Anyons have an exotic spin, a fraction of an electron’s spin, and corresponding exotic statistics — when one anyon is carried around another, their quantum state picks up a nontrivial topological phase. (I had some fun discussions with Frank Wilczek in 1981 as he was developing the theory of anyons. In some of his writings Frank has kindly credited me for suggesting to him that a robust spin-statistics connection should hold in two dimensions, so that fractional spin is necessarily accompanied by fractional statistics. The truth is that my understanding of this point was murky at best back then.) Not long after Wilczek’s paper, Bert Halperin recognized the relevance of anyons to the strange fractional quantum Hall states that had recently been discovered; these support particle-like objects carrying a fraction of the electron’s electric charge, which Halperin recognized to be anyons.

Non-abelian anyons are even more exotic. In a system with many widely separated non-abelian anyons, there are a vast number of different ways for the particles to “fuse” together, giving rise to many possible quantum states, all of which are in principle distinguishable but in practice are hard to tell apart. Furthermore, by “braiding” the anyons (performing a sequence of particle exchanges, so the world lines of the anyons trace out a braid in three-dimensional spacetime), this state can be manipulated, coherently processing the quantum information encoded in the system.
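To see concretely what “non-abelian” means here, consider a small numpy sketch (my own illustration, using the standard braid matrices for four Ising anyons acting on their two-dimensional fusion space; phase conventions vary between references): the two elementary exchanges are unitary, but they do not commute, so the order in which braids are performed matters.

```python
import numpy as np

# Braid matrices for four Ising anyons on the two-dimensional fusion
# space, written in the basis of fusion channels {1, psi}.
B1 = np.diag([np.exp(-1j*np.pi/8), np.exp(3j*np.pi/8)])   # exchange anyons 1,2
F  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)             # basis change (F-matrix)
B2 = F @ B1 @ F                                           # exchange anyons 2,3

# Both exchanges are unitary ...
assert np.allclose(B1.conj().T @ B1, np.eye(2))
assert np.allclose(B2.conj().T @ B2, np.eye(2))

# ... but they do not commute: braiding order matters,
# which is what makes the statistics "non-abelian."
print(np.allclose(B1 @ B2, B2 @ B1))  # False
```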

Others (including me) had mused about non-abelian anyons before Moore and Read came along, but no one had proposed a plausible story for how such exotic objects would arise in a realistic laboratory setting. As collaborators, Moore and Read complemented one another perfectly. Greg was, and is, one of the world’s leading experts on conformal field theory. Nick was, and is, one of the world’s leading experts on the fractional quantum Hall effect. Together, they realized that one of the already known fractional quantum Hall states (at filling factor 5/2) is a good candidate for a topological phase supporting non-abelian anyons. This was an inspired guess, most likely correct, though we still don’t have smoking gun experimental evidence 25 years later. Their paper is a magical and rare combination of mathematical sophistication with brilliant intuition.

Alexei arrived at his ideas about non-abelian anyons coming from a different direction, though I suspect he drew inspiration from the earlier deep contributions of Moore and Read. He was trying to imagine a physical system that could store and process a quantum state reliably. Normally quantum systems are very fragile — just looking at the system alters its state. To prevent a quantum computer from making errors, we need to isolate the information processed by the computer from the environment. A system of non-abelian anyons has just the right properties to make this possible; it carries lots of information, but the environment can’t read (or damage) that information when it looks at the particles one at a time. That’s because the information is not encoded in the individual particles, but instead in subtle collective properties shared by many particles at once.

Alexei and I had inspiring discussions about topological quantum computing when we first met at Caltech in April 1997, which continued at a meeting in Torino, Italy that summer, where we shared a bedroom. I was usually asleep by the time he came to bed, because he was staying up late, typing his paper.

Alexei did not think it important to publish his now renowned 1997 paper in a journal — he was content for the paper to be accessible on the arXiv. But after a few years I started to get worried … in my eyes Alexei was becoming an increasingly likely Nobel Prize candidate. Would it cause a problem if his most famous paper had never been published? Just to be safe, I arranged for it to appear in Annals of Physics in 2003, where I was on the editorial board at the time. Frank Wilczek, then the editor, was delighted by this submission, which has definitely boosted the journal’s impact factor! (“Fault-tolerant quantum computation by anyons” has 2633 citations as of today, according to Google Scholar.) Nobelists are ineligible for the Dirac Medal, but some past medalists have proceeded to greater glory. It could happen again, right?

Alesha and I have now been close friends and collaborators for 18 years, but I have actually known Greg and Nick even longer. I taught at Harvard for a few years in the early 1980s, at a time when an amazingly talented crew of physics graduate students roamed the halls, of whom Andy Cohen, Jacques Distler, Ben Grinstein, David Kaplan, Aneesh Manohar, Ann Nelson, and Phil Nelson among others all made indelible impressions. But there was something special about Greg. The word that comes to mind is intensity. Few students exhibit as much drive and passion for physics as Greg did in those days. He’s calmer now, but still pretty intense. I met Nick a few years later when we tried to recruit him to the Caltech faculty. Luring him to southern California turned out to be a lost cause because he didn’t know how to drive a car. I suppose he’s learned by now?* Whenever I’ve spoken to Nick in the years since then, I’ve always been dazzled by his clarity of thought.

Non-abelian anyons are at a pivotal stage, with lots of experimental hints supporting their existence, but still no ironclad evidence. I feel confident this will change in the next few years. These are exciting times!

And guess what? This occasion gives me another opportunity to dust off one of my poems!

Anyon, Anyon

Anyon, anyon, where do you roam?
Braid for a while before you go home.

Though you’re condemned just to slide on a table,
A life in 2D also means that you’re able
To be of a type neither Fermi nor Bose
And to know left from right — that’s a kick, I suppose.

You and your buddy were made in a pair
Then wandered around, braiding here, braiding there.
You’ll fuse back together when braiding is through
We’ll bid you adieu as you vanish from view.

Alexei exhibits a knack for persuading
That someday we’ll crunch quantum data by braiding,
With quantum states hidden where no one can see,
Protected from damage through top-ology.

Anyon, anyon, where do you roam?
Braid for a while, before you go home.

*Note added: Nick confirms, “Yes, I’ve had a driving license since 1992, and a car since 1994!”

20 years of qubits: the arXiv data

Editor’s Note: The preceding post on Quantum Frontiers inspired the always curious Paul Ginsparg to do some homework on usage of the word “qubit” in papers posted on the arXiv. Rather than paraphrase Paul’s observations I will quote his email verbatim, so you can experience its Ginspargian style.

fig has total # uses of qubit in arxiv (divided by 10) per month, and
total # docs per month:
an impressive 669394 total in 29587 docs.

the graph starts at 9412 (dec '94), but that is illusory since qubit
only shows up in v2 of hep-th/9412048, posted in 2004.
the actual first was quant-ph/9503016 by bennett/divicenzo/shor et al
(posted 23 Mar '95) where they carefully attribute the term to
schumacher ("PRA, to appear '95") and jozsa/schumacher ("J. Mod Optics
'94"), followed immediately by quant-ph/9503017 by deutsch/jozsa et al
(which no longer finds it necessary to attribute term)

[neither of schumacher's first two articles is on arxiv, but otherwise
probably have on arxiv near 100% coverage of its usage and growth, so
permits a viral epidemic analysis along the lines of kaiser's "drawing
theories apart"  of use of Feynman diagrams in post wwII period].

ever late to the party, the first use by j.preskill was
quant-ph/9602016, posted 21 Feb 1996

#articles by primary subject area as follows (hep-th is surprisingly
low given the firewall connection...):

quant-ph 22096
cond-mat.mes-hall 3350
cond-mat.supr-con 880
cond-mat.str-el 376
cond-mat.mtrl-sci 250
math-ph 244
hep-th 228
physics.atom-ph 224
cond-mat.stat-mech 213
cond-mat.other 200
physics.optics 177
cond-mat.quant-gas 152
physics.gen-ph 120
gr-qc 105
cond-mat 91
cs.CC 85
cs.IT 67
cond-mat.dis-nn 55
cs.LO 49
cs.CR 43
physics.chem-ph 33
cs.ET 25
physics.ins-det 21
math.CO,nlin.CD 20
physics.hist-ph,physics.bio-ph,math.OC 19
hep-ph 18
cond-mat.soft,cs.DS,math.OA 17
cs.NE,cs.PL,math.QA 13
cs.AR,cs.OH 12
physics.comp-ph 11
math.LO 10
physics.soc-ph,physics.ed-ph,cs.AI 9
math.ST,physics.pop-ph,cs.GT 8
nlin.AO,astro-ph,cs.DC,cs.FL,q-bio.GN 7
physics.data-an 6
nlin.SI,math.CT,q-fin.GN,cs.LG,q-bio.BM,cs.DM,math.GT 5
math.DS,physics.atm-clus,q-bio.PE 4
math.RA,math.AG,astro-ph.IM,q-bio.OT 3
math.RT 2
nucl-ex 1

Who named the qubit?

Perhaps because my 40th wedding anniversary is just a few weeks away, I have been thinking about anniversaries lately, which reminded me that we are celebrating the 20th anniversary of a number of milestones in quantum information science. In 1995 Cirac and Zoller proposed, and Wineland’s group first demonstrated, the ion trap quantum computer. Quantum error-correcting codes were invented by Shor and Steane, entanglement concentration and purification were described by Bennett et al., and there were many other fast-breaking developments. It was an exciting year.

But the event that moved me to write a blog post is the 1995 appearance of the word “qubit” in an American Physical Society journal. When I was a boy, two-level quantum systems were called “two-level quantum systems.” Which is a descriptive name, but a mouthful and far from euphonious. Think of all the time I’ve saved in the past 20 years by saying “qubit” instead of “two-level quantum system.” And saying “qubit” not only saves time, it also conveys the powerful insight that a quantum state encodes a novel type of information. (Alas, the spelling was bound to stir controversy, with the estimable David Mermin a passionate holdout for “qbit”. Give it up, David, you lost.)

Ben Schumacher. Thanks for the qubits, Ben!

For the word “qubit” we know whom to thank: Ben Schumacher. He introduced the word in his paper “Quantum Coding” which appeared in the April 1995 issue of Physical Review A. (History is complicated, and in this case the paper was actually submitted in 1993, which allowed another paper by Jozsa and Schumacher to be published earlier even though it was written and submitted later. But I’m celebrating the 20th anniversary of the qubit now, because otherwise how will I justify this blog post?)

In the acknowledgments of the paper, Ben provided some helpful background on the origin of the word:

The term “qubit” was coined in jest during one of the author’s many intriguing and valuable conversations with W. K. Wootters, and became the initial impetus for this work.

I met Ben (and other luminaries of quantum information theory) for the first time at a summer school in Torino, Italy in 1996. After reading his papers my expectations were high, all the more so after Sam Braunstein warned me that I would be impressed: “Ben’s a good talker,” Sam assured me. I was not disappointed.

(I also met Asher Peres at that Torino meeting. When I introduced myself Asher asked, “Isn’t there someone with a similar name in particle theory?” I had no choice but to come clean. I particularly remember that conversation because Asher told me his secret motivation for studying quantum entanglement: it might be important in quantum gravity!)

A few years later Ben spent his sabbatical year at Caltech, which gave me an opportunity to compose a poem for the introduction to Ben’s (characteristically brilliant) talk at our physics colloquium. This poem does homage to that famous 1995 paper in which Ben not only introduced the word “qubit” but also explained how to compress a quantum state to the minimal number of qubits from which the original state can be recovered with a negligible loss of fidelity, thus formulating and proving the quantum version of Shannon’s famous source coding theorem, and laying the foundation for many subsequent developments in quantum information theory.
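The compression rate in Schumacher’s theorem is the von Neumann entropy S(ρ) of the ensemble’s density operator, in qubits per signal state. Here is a minimal numpy sketch (my toy example, not from Ben’s paper) for an ensemble of two non-orthogonal pure states, which can be compressed below one qubit per state:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], the Schumacher compression rate."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop zero eigenvalues (0 log 0 = 0)
    return float(-np.sum(evals * np.log2(evals)))

# Ensemble: |0> and |+>, each with probability 1/2 (non-orthogonal states).
ket0 = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(plus, plus)

S = von_neumann_entropy(rho)
print(round(S, 3))  # 0.601 -- fewer than 1 qubit per signal state suffices
```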

Sometimes when I recite a poem I can sense the audience’s appreciation. But in this case there were only a few nervous titters. I was going for edgy but might have crossed the line into bizarre. Since then I’ve (usually) tried to be more careful.

(For reading the poem, it helps to know that the quantum state appears to be random when it has been compressed as much as possible.)

On Quantum Compression (in honor of Ben Schumacher)

He rocks.
I remember
He showed me how to fit
A qubit
In a small box.

I wonder how it feels
To be compressed.
And then to pass
A fidelity test.

Or does it feel
At all, and if it does
Would I squeal
Or be just as I was?

If not undone
I’d become as I’d begun
And write a memorandum
On being random.
Had it felt like a belt
Of rum?

And might it be predicted
That I’d become addicted,
Longing for my session
Of compression?

I’d crawl
To Ben again.
And call,
“Put down your pen!
Don’t stall!
Make me small!”

Celebrating Theoretical Physics at Caltech’s Burke Institute

Editor’s Note: Yesterday and today, Caltech is celebrating the inauguration of the Walter Burke Institute for Theoretical Physics. John Preskill made the following remarks at a dinner last night honoring the board of the Sherman Fairchild Foundation.

This is an exciting night for me and all of us at Caltech. Tonight we celebrate physics. Especially theoretical physics. And in particular the Walter Burke Institute for Theoretical Physics.

Some of our dinner guests are theoretical physicists. Why do we do what we do?

I don’t have to convince this crowd that physics has a profound impact on society. You all know that. We’re celebrating this year the 100th anniversary of general relativity, which transformed how we think about space and time. It may be less well known that two years later Einstein laid the foundations of laser science. Einstein was a genius for sure, but I don’t think he envisioned in 1917 that we would use his discoveries to play movies in our houses, or print documents, or repair our vision. Or see an awesome light show at Disneyland.

And where did this phone in my pocket come from? Well, the story of the integrated circuit is fascinating, prominently involving Sherman Fairchild, and other good friends of Caltech like Arnold Beckman and Gordon Moore. But when you dig a little deeper, at the heart of the story are two theorists, Bill Shockley and John Bardeen, with an exceptionally clear understanding of how electrons move through semiconductors. Which led to transistors, and integrated circuits, and this phone. And we all know it doesn’t stop here. When the computers take over the world, you’ll know who to blame.

Incidentally, while Shockley was a Caltech grad (BS class of 1932), John Bardeen, one of the great theoretical physicists of the 20th century, grew up in Wisconsin and studied physics and electrical engineering at the University of Wisconsin at Madison. I suppose that in the 1920s Wisconsin had no pressing need for physicists, but think of the return on the investment the state of Wisconsin made in the education of John Bardeen.1

So, physics is a great investment, of incalculable value to society. But … that’s not why I do it. I suppose few physicists choose to do physics for that reason. So why do we do it? Yes, we like it, we’re good at it, but there is a stronger pull than just that. We honestly think there is no more engaging intellectual adventure than struggling to understand Nature at the deepest level. This requires attitude. Maybe you’ve heard that theoretical physicists have a reputation for arrogance. Okay, it’s true, we are arrogant, we have to be. But it is not that we overestimate our own prowess, our ability to understand the world. In fact, the opposite is often true. Physics works, it’s successful, and this often surprises us; we wind up being shocked again and again by the “unreasonable effectiveness of mathematics in the natural sciences.” It’s hard to believe that the equations you write down on a piece of paper can really describe the world. But they do.

And to display my own arrogance, I’ll tell you more about myself. This occasion has given me cause to reflect on my own 30+ years on the Caltech faculty, and what I’ve learned about doing theoretical physics successfully. And I’ll tell you just three principles, which have been important for me, and may be relevant to the future of the Burke Institute. I’m not saying these are universal principles – we’re all different and we all contribute in different ways, but these are principles that have been important for me.

My first principle is: We learn by teaching.

Why do physics at universities, at institutions of higher learning? Well, not all great physics is done at universities. Excellent physics is done at industrial laboratories and at our national laboratories. But the great engine of discovery in the physical sciences is still our universities, and US universities like Caltech in particular. Granted, US preeminence in science is not what it once was — it is a great national asset to be cherished and protected — but world changing discoveries are still flowing from Caltech and other great universities.

Why? Well, when I contemplate my own career, I realize I could never have accomplished what I have as a research scientist if I were not also a teacher. And it’s not just because the students and postdocs have all the great ideas. No, it’s more interesting than that. Most of what I know about physics, most of what I really understand, I learned by teaching it to others. When I first came to Caltech 30 years ago I taught advanced elementary particle physics, and I’m still reaping the return from what I learned those first few years. Later I got interested in black holes, and most of what I know about that I learned by teaching general relativity at Caltech. And when I became interested in quantum computing, a really new subject for me, I learned all about it by teaching it.2

Part of what makes teaching so valuable for the teacher is that we’re forced to simplify, to strip down a field of knowledge to what is really indispensable, a tremendously useful exercise. Feynman liked to say that if you really understand something you should be able to explain it in a lecture for the freshman. Okay, he meant the Caltech freshman. They’re smart, but they don’t know all the sophisticated tools we use in our everyday work. Whether you can explain the core idea without all the peripheral technical machinery is a great test of understanding.

And of course it’s not just the teachers, but also the students and the postdocs who benefit from the teaching. They learn things faster than we do and often we’re just providing some gentle steering; the effect is to amplify greatly what we could do on our own. All the more so when they leave Caltech and go elsewhere to change the world, as they so often do, like those who are returning tonight for this Symposium. We’re proud of you!

My second principle is: The two-trick pony has a leg up.

I’m a firm believer that advances are often made when different ideas collide and a synthesis occurs. I learned this early, when as a student I was fascinated by two topics in physics, elementary particles and cosmology. Nowadays everyone recognizes that particle physics and cosmology are closely related, because when the universe was very young it was also very hot, and particles were colliding at very high energies. But back in the 1970s, the connection was less widely appreciated. By knowing something about cosmology and about particle physics, by being a two-trick pony, I was able to think through what happens as the universe cools, which turned out to be my ticket to becoming a Caltech professor.

It takes a community to produce two-trick ponies. I learned cosmology from one set of colleagues and particle physics from another set of colleagues. I didn’t know either subject as well as the real experts. But I was a two-trick pony, so I had a leg up. I’ve tried to be a two-trick pony ever since.

Another great example of a two-trick pony is my Caltech colleague Alexei Kitaev. Alexei studied condensed matter physics, but he also became intensely interested in computer science, and learned all about that. Back in the 1990s, perhaps no one else in the world combined so deep an understanding of both condensed matter physics and computer science, and that led Alexei to many novel insights. Perhaps most remarkably, he connected ideas about error-correcting codes, which protect information from damage, with ideas about novel quantum phases of matter, leading to radical new suggestions about how to operate a quantum computer using exotic particles we call anyons. These ideas had an invigorating impact on experimental physics and may someday have a transformative effect on technology. (We don’t know that yet; it’s still way too early to tell.) Alexei could produce an idea like that because he was a two-trick pony.3

Which brings me to my third principle: Nature is subtle.

Yes, mathematics is unreasonably effective. Yes, we can succeed at formulating laws of Nature with amazing explanatory power. But it’s a struggle. Nature does not give up her secrets so readily. Things are often different than they seem on the surface, and we’re easily fooled. Nature is subtle.4

Perhaps there is no greater illustration of Nature’s subtlety than what we call the holographic principle. This principle says that, in a sense, all the information that is stored in this room, or any room, is really encoded entirely and with perfect accuracy on the boundary of the room, on its walls, ceiling and floor. Things just don’t seem that way, and if we underestimate the subtlety of Nature we’ll conclude that it can’t possibly be true. But unless our current ideas about the quantum theory of gravity are on the wrong track, it really is true. It’s just that the holographic encoding of information on the boundary of the room is extremely complex and we don’t really understand in detail how to decode it. At least not yet.

This holographic principle, arguably the deepest idea about physics to emerge in my lifetime, is still mysterious. How can we make progress toward understanding it well enough to explain it to freshmen? Well, I think we need more two-trick ponies. Except maybe in this case we’ll need ponies who can do three tricks or even more. Explaining how spacetime might emerge from some more fundamental notion is one of the hardest problems we face in physics, and it’s not going to yield easily. We’ll need to combine ideas from gravitational physics, information science, and condensed matter physics to make real progress, and maybe completely new ideas as well. Some of our former Sherman Fairchild Prize Fellows are leading the way in bringing these ideas together, people like Guifre Vidal, who is here tonight, and Patrick Hayden, who very much wanted to be here.5 We’re very proud of what they and others have accomplished.

Bringing ideas together is what the Walter Burke Institute for Theoretical Physics is all about. I’m not talking about only the holographic principle, which is just one example, but all the great challenges of theoretical physics, which will require ingenuity and synthesis of great ideas if we hope to make real progress. We need a community of people coming from different backgrounds, with enough intellectual common ground to produce a new generation of two-trick ponies.

Finally, it seems to me that an occasion as important as the inauguration of the Burke Institute should be celebrated in verse. And so …

Who studies spacetime stress and strain
And excitations on a brane,
Where particles go back in time,
And physicists engage in rhyme?

Whose speedy code blows up a star
(Though it won’t quite blow up so far),
Where anyons, which braid and roam
Annihilate when they get home?

Who makes math and physics blend
Inside black holes where time may end?
Where do they do all this work?
The Institute of Walter Burke!

We’re very grateful to the Burke family and to the Sherman Fairchild Foundation. And we’re confident that your generosity will make great things happen!


  1. I was reminded of this when I read about a recent proposal by the current governor of Wisconsin. 
  2. And by the way, I put my lecture notes online, and thousands of people still download them and read them. So even before MOOCs – massive open online courses – the Internet was greatly expanding the impact of our teaching. Handwritten versions of my old particle theory and relativity notes are also online here.
  3. Okay, I admit it’s not quite that simple. At that same time I was also very interested in both error correction and anyons, without imagining any connection between the two. It helps to be a genius. But a genius who is also a two-trick pony can be especially awesome. 
  4. We made that the tagline of IQIM. 
  5. Patrick can’t be here for a happy reason, because today he and his wife Mary Race welcomed a new baby girl, Caroline Eleanor Hayden, their first child. The Burke Institute is not the only good thing being inaugurated today. 

Bell’s inequality 50 years later

This is a jubilee year.* In November 1964, John Bell submitted a paper to the obscure (and now defunct) journal Physics. That paper, entitled “On the Einstein Podolsky Rosen Paradox,” changed how we think about quantum physics.

The paper was about quantum entanglement, the characteristic correlations among parts of a quantum system that are profoundly different than correlations in classical systems. Quantum entanglement had first been explicitly discussed in a 1935 paper by Einstein, Podolsky, and Rosen (hence Bell’s title). Later that same year, the essence of entanglement was nicely and succinctly captured by Schrödinger, who said, “the best possible knowledge of a whole does not necessarily include the best possible knowledge of its parts.” Schrödinger meant that even if we have the most complete knowledge Nature will allow about the state of a highly entangled quantum system, we are still powerless to predict what we’ll see if we look at a small part of the full system. Classical systems aren’t like that — if we know everything about the whole system then we know everything about all the parts as well. I think Schrödinger’s statement is still the best way to explain quantum entanglement in a single vigorous sentence.

To Einstein, quantum entanglement was unsettling, indicating that something is missing from our understanding of the quantum world. Bell proposed thinking about quantum entanglement in a different way, not just as something weird and counter-intuitive, but as a resource that might be employed to perform useful tasks. Bell described a game that can be played by two parties, Alice and Bob. It is a cooperative game, meaning that Alice and Bob are both on the same side, trying to help one another win. In the game, Alice and Bob receive inputs from a referee, and they send outputs to the referee, winning if their outputs are correlated in a particular way which depends on the inputs they receive.

But under the rules of the game, Alice and Bob are not allowed to communicate with one another between when they receive their inputs and when they send their outputs, though they are allowed to use correlated classical bits which might have been distributed to them before the game began. For a particular version of Bell’s game, if Alice and Bob play their best possible strategy then they can win the game with a probability of success no higher than 75%, averaged uniformly over the inputs they could receive. This upper bound on the success probability is Bell’s famous inequality.**
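To make the 75% bound concrete, here is a small sketch (not from the post; a standard illustration of the CHSH game) that exhaustively checks every deterministic classical strategy. Alice and Bob each receive a uniformly random bit and each output a bit; they win when the XOR of their outputs equals the AND of their inputs. Since shared randomness just mixes deterministic strategies, the best deterministic strategy achieves the classical optimum.

```python
from itertools import product

def chsh_win_probability(alice, bob):
    """Success probability of deterministic strategies `alice` and `bob`.

    alice[x] is Alice's output bit on input x; bob[y] is Bob's on input y.
    They win iff alice[x] XOR bob[y] == x AND y, with x, y uniform bits.
    """
    wins = sum(1 for x, y in product([0, 1], repeat=2)
               if alice[x] ^ bob[y] == (x & y))
    return wins / 4  # four equally likely input pairs

# Enumerate all 4 x 4 = 16 deterministic strategy pairs.
best = max(chsh_win_probability(a, b)
           for a in product([0, 1], repeat=2)
           for b in product([0, 1], repeat=2))
print(best)  # 0.75 -- Bell's classical bound
```

Every constant strategy (say, both always output 0) already wins on three of the four input pairs, and the enumeration confirms no deterministic strategy does better, which is exactly the 75% bound.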

Classical and quantum versions of Bell’s game. If Alice and Bob share entangled qubits rather than classical bits, then they can win the game with a higher success probability.

There is also a quantum version of the game, in which the rules are the same except that Alice and Bob are now permitted to use entangled quantum bits (“qubits”) which were distributed before the game began. By exploiting their shared entanglement, they can play a better quantum strategy and win the game with a higher success probability, better than 85%. Thus quantum entanglement is a useful resource, enabling Alice and Bob to play the game better than if they shared only classical correlations instead of quantum correlations.
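The quantum advantage can also be checked directly. The sketch below (my illustration, not from the post; the measurement angles are the standard optimal choice) gives Alice and Bob one shared Bell pair and has each measure it in a basis depending on their input. Summing the probabilities of the winning outcomes reproduces the optimal quantum success probability, cos²(π/8) ≈ 85.4%.

```python
import numpy as np

def measurement_vector(theta, outcome):
    # Eigenvectors of a measurement rotated by angle theta in the real plane:
    # outcome 0 -> (cos t, sin t), outcome 1 -> (-sin t, cos t).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c, s]) if outcome == 0 else np.array([-s, c])

# Shared entangled state (|00> + |11>) / sqrt(2).
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Standard optimal measurement angles for the CHSH game.
alice_angles = {0: 0.0, 1: np.pi / 4}
bob_angles = {0: np.pi / 8, 1: -np.pi / 8}

p_win = 0.0
for x in (0, 1):            # Alice's input
    for y in (0, 1):        # Bob's input
        for a in (0, 1):    # Alice's outcome
            for b in (0, 1):  # Bob's outcome
                if a ^ b == (x & y):  # winning condition
                    amp = np.kron(measurement_vector(alice_angles[x], a),
                                  measurement_vector(bob_angles[y], b)) @ phi_plus
                    p_win += abs(amp) ** 2 / 4  # inputs are uniform
print(p_win)  # ~0.8536, i.e. cos^2(pi/8)
```

For each pair of inputs the measurement bases differ by π/8 (or 3π/8 when both inputs are 1), so each input pair is won with probability cos²(π/8), comfortably beating the 75% classical bound.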

And experimental physicists have been playing the game for decades, winning with a success probability that violates Bell’s inequality. The experiments indicate that quantum correlations really are fundamentally different than, and stronger than, classical correlations.

Why is that such a big deal? Bell showed that a quantum system is more than just a probabilistic classical system, which eventually led to the realization (now widely believed though still not rigorously proven) that accurately predicting the behavior of highly entangled quantum systems is beyond the capacity of ordinary digital computers. Therefore physicists are now striving to scale up the weirdness of the microscopic world to larger and larger scales, eagerly seeking new phenomena and unprecedented technological capabilities.

1964 was a good year. Higgs and others described the Higgs mechanism, Gell-Mann and Zweig proposed the quark model, Penzias and Wilson discovered the cosmic microwave background, and I saw the Beatles on the Ed Sullivan show. Those developments continue to reverberate 50 years later. We’re still looking for evidence of new particle physics beyond the standard model, we’re still trying to unravel the large scale structure of the universe, and I still like listening to the Beatles.

Bell’s legacy is that quantum entanglement is becoming an increasingly pervasive theme of contemporary physics, important not just as the source of a quantum computer’s awesome power, but also as a crucial feature of exotic quantum phases of matter, and even as a vital element of the quantum structure of spacetime itself. 21st century physics will advance not only by probing the short-distance frontier of particle physics and the long-distance frontier of cosmology, but also by exploring the entanglement frontier, by elucidating and exploiting the properties of increasingly complex quantum states.

Sometimes I wonder how the history of physics might have been different if there had been no John Bell. Without Higgs, Brout and Englert and others would have elucidated the spontaneous breakdown of gauge symmetry in 1964. Without Gell-Mann, Zweig could have formulated the quark model. Without Penzias and Wilson, Dicke and collaborators would have discovered the primordial black-body radiation at around the same time.

But it’s not obvious which contemporary of Bell, if any, would have discovered his inequality in Bell’s absence. Not so many good physicists were thinking about quantum entanglement and hidden variables at the time (though David Bohm may have been one notable exception, and his work deeply influenced Bell). Without Bell, the broader significance of quantum entanglement would have unfolded quite differently and perhaps not until much later. We really owe Bell a great debt.

*I’m stealing the title and opening sentence of this post from Sidney Coleman’s great 1981 lectures on “The magnetic monopole 50 years later.” (I’ve waited a long time for the right opportunity.)

**I’m abusing history somewhat. Bell did not use the language of games, and this particular version of the inequality, which has since been extensively tested in experiments, was derived by Clauser, Horne, Shimony, and Holt in 1969.