Paul Dirac and poetry

In science one tries to tell people, in such a way as to be understood by everyone, something that no one ever knew before. But in the case of poetry, it’s the exact opposite!

      – Paul Dirac

Paul Dirac

I tacked Dirac’s quote onto the bulletin board above my desk, the summer before senior year of high school. I’d picked quotes by T.S. Eliot and Einstein, Catullus and Hatshepsut.* In a closet, I’d found amber-, peach-, and scarlet-colored paper. I’d printed the quotes and arranged them, starting senior year with inspiration that looked like a sunrise.

Not that I knew who Paul Dirac was. Nor did I evaluate his opinion. But I’d enrolled in Advanced Placement Physics C and taken the helm of my school’s literary magazine. The confluence of two passions of mine—science and literature—in Dirac’s quote tickled me.

A fiery lecturer began to alleviate my ignorance in college. Dirac, I learned, had co-invented quantum theory. The “Dee-rac Equa-shun,” my lecturer trilled in her Italian accent, describes relativistic quantum systems—tiny particles associated with high speeds. I developed a taste for spin, a quantum phenomenon encoded in Dirac’s equation. Spin serves quantum-information scientists as two-by-fours serve carpenters: Experimentalists have tried to build quantum computers from particles that have spins. Theorists keep the idea of electron spins in a mental car trunk, to tote out when illustrating abstract ideas with examples.

The next year, I learned that Dirac had predicted the existence of antimatter. Three years later, I learned to represent antimatter mathematically. I memorized the Dirac Equation, forgot it, and re-learned it.

One summer in grad school, visiting my parents, I glanced at my bulletin board.

The sun rises beyond a window across the room from the board. Had the light faded the papers’ colors? If so, I couldn’t tell.

In science one tries to tell people, in such a way as to be understood by everyone, something that no one ever knew before. But in the case of poetry, it’s the exact opposite!

Do poets try to obscure ideas everyone understands? Some poets express ideas that people intuit but feel unable, lack the attention, or don’t realize one should, articulate. Reading and hearing poetry helps me grasp the ideas. Some poets express ideas in forms that others haven’t imagined.

Did Dirac not represent physics in a form that others hadn’t imagined?

The Dirac Equation
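In one standard form (in natural units), the equation reads $(i\gamma^\mu \partial_\mu - m)\,\psi = 0$, where $\psi$ denotes the electron field and the $\gamma^\mu$ are the Dirac matrices.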

Would you have imagined that form? I didn’t imagine it until learning it. Do scientists not express ideas—about gravity, time, energy, and matter—that people feel unable, lack the attention, or don’t realize we should, articulate?

The U.S. and Canada have designated April as National Poetry Month. A hub for cousins of poets, Quantum Frontiers salutes. Carry a poem in your pocket this month. Or carry a copy of the Dirac Equation. Or tack either on a bulletin board; I doubt whether their colors will fade.

*“Now my heart turns this way and that, as I think what the people will say. Those who see my monuments in years to come, and who shall speak of what I have done.” I expect to build no such monuments. But here’s to trying.

Putting back the pieces of a broken hologram

It is Monday afternoon and the day seems to be a productive one, if not yet quite memorable. As I revise some notes on my desk, Beni Yoshida walks into my office to remind me that the high-energy physics seminar is about to start. I hesitate, somewhat apprehensive of the near-certain frustration of being lost during the first few minutes of a talk in an unfamiliar field. I normally avoid such a situation, but in my email I find John’s forecast for an accessible talk by Daniel Harlow and a title with three words I can cling onto. “Quantum error correction” has driven my curiosity for the last seven years. The remaining acronyms in the title will become much more familiar in the four months to come.

Most of you are probably familiar with holograms, these shiny flat films representing a 3D object from essentially any desired angle. I find it quite remarkable how all the information of a 3D object can be printed on an essentially 2D film. True, the colors are not represented as faithfully as in a traditional photograph, but it looks as though we have taken a photograph from every possible angle! The speaker’s main message that day seemed even more provocative than the idea of holography itself. Even if the hologram is broken into pieces, and some of these are lost, we may still use the remaining pieces to recover parts of the 3D image or even the full thing given a sufficiently large portion of the hologram. The 3D object is not only recorded in 2D, it is recorded redundantly!
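As a loose classical analogy (far simpler than the codes we will end up studying in this story), the same “recorded redundantly” idea appears in the simplest erasure codes, where data spread across several shares survives the loss of any one of them. A minimal sketch in Python, with names of my own choosing:

# Two data bits are stored in three shares; the third share is the parity bit.
def encode(a: int, b: int):
    return [a, b, a ^ b]

def recover(shares):
    """shares: the three shares, with the single lost one replaced by None."""
    lost = shares.index(None)
    known = [s for s in shares if s is not None]
    shares = list(shares)
    shares[lost] = known[0] ^ known[1]   # the parity structure fixes the missing share
    return shares[0], shares[1]          # the original data bits (a, b)

print(recover([1, None, 0]))   # -> (1, 1): the data survives losing the middle share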

Left to right: Beni Yoshida, Aleksander Kubica, Aidan Chatwin-Davies and Fernando Pastawski discussing holographic codes.

Halfway through Daniel’s exposition, Beni and I exchange a knowing glance. We recognize a familiar pattern from our latest project: a pattern that has gained the moniker “cleaning lemma” within the quantum-information community, and that can be thought of as a quantitative analog of reconstructing the 3D image from pieces of the hologram. Daniel makes connections using a language that we are familiar with. Beni and I discuss what we have understood and how to make it more concrete as we stride back through campus. We scribble diagrams on the whiteboard and string words such as tensor, encoder, MERA and negative curvature into our discussion. An image from the web gives us some intuition on the latter. We are onto something. We have a model. It is simple. It is new. It is exciting.

Poincaré projection of a regular pentagon tiling of negatively curved space.

Food has not come our way, so we head to my apartment as we enthusiastically continue our discussion. I can only provide two avocados and some leftover pasta, but that is not important: we are sharing the joy of insight. We arrange a meeting with Daniel to present our progress. By Wednesday, Beni and I introduce the holographic pentagon code at the group meeting. A core for a new project is already there, but we need some help to navigate the high-energy waters. Who better to guide us in such an endeavor than our mentor, John Preskill, who recognized the importance of quantum information in holography as early as 1999 and has repeatedly proven himself a master of both trades?

“I feel that the idea of holography has a strong whiff of entanglement—for we have seen that in a profoundly entangled state the amount of information stored locally in the microscopic degrees of freedom can be far less than we would naively expect. For example, in the case of the quantum error-correcting codes, the encoded information may occupy a small ‘global’ subspace of a much larger Hilbert space. Similarly, the distinct topological phases of a fractional quantum Hall system look alike locally in the bulk, but have distinguishable edge states at the boundary.”
-J. Preskill, 1999

As Beni puts it, the time for using modern quantum information tools in high-energy physics has come. By this he means quantum error correction and maybe tensor networks. First privately, then more openly, we continue to sharpen and shape our project. Through conferences, Skype calls and emails, we further our discussion and progressively shape ideas. Many speculations mature to conjectures and fall victim to counterexamples. Some stand the test of simulations or are even promoted to theorems by virtue of mathematical proofs.

Beni Yoshida presenting our work at a quantum entanglement conference in Puerto Rico.

I publicly present the project for the first time at a select quantum information conference in Australia. Two months later, after a particularly intense writing, revising and editing process, the article is almost complete. As we finalize the text and relabel the figures, Daniel and Beni unveil our work to quantum entanglement experts in Puerto Rico. The talks are a hit and it is time to let all our peers read about it.

You are invited to do so, and Beni will even be serving up a reader’s guide in an upcoming post.

Quantum Frontiers salutes Terry Pratchett.

I blame British novels for my love of physics. Philip Pullman introduced me to elementary particles; Jasper Fforde, to the possibility that multiple worlds exist; Diana Wynne Jones, to questions about space and time.

So began the personal statement in my application to Caltech’s PhD program. I didn’t mention Sir Terry Pratchett, but he belongs in the list. Pratchett wrote over 70 books, blending science fiction with fantasy, humor, and truths about humankind. Pratchett passed away last week, having completed several novels after doctors diagnosed him with early-onset Alzheimer’s. According to the San Francisco Chronicle, Pratchett “parodie[d] everything in sight.” Everything in sight included physics.

Terry Pratchett continues to influence my trajectory through physics: This cover has a cameo in a seminar I’m presenting in Maryland this March.

Pratchett set many novels on the Discworld, a pancake of a land perched atop four elephants, which balance on the shell of a turtle that swims through space. Discworld wizards quantify magic in units called thaums. Units impressed their importance upon me in week one of my first high-school physics class. We define one meter as “the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second.” Wizards define one thaum as “the amount of magic needed to create one small white pigeon or three normal-sized billiard balls.”

Wizards study the thaum in a High-Energy Magic Building reminiscent of Caltech’s Lauritsen-Downs Building. To split the thaum, the wizards built a Thaumatic Resonator. Particle physicists in our world have split atoms into constituent particles called mesons and baryons. Discworld wizards discovered that the thaum consists of resons. Mesons and baryons consist of quarks, seemingly elementary particles that we believe cannot be split. Quarks fall into six types, called flavors: up, down, charmed, strange, top (or truth), and bottom (or beauty). Resons, too, consist of quarks. The Discworld’s quarks have the flavors up, down, sideways, sex appeal, and peppermint.

Reading about the Discworld since high school, I’ve wanted to grasp Pratchett’s allusions. I’ve wanted to do more than laugh at them. In Pyramids, Pratchett describes “ideas that would make even a quantum mechanic give in and hand back his toolbox.” Pratchett’s ideas have given me a hankering for that toolbox. Pratchett nudged me toward training as a quantum mechanic.

Pratchett hasn’t only piqued my curiosity about his allusions. He’s piqued my desire to create as he did, to do physics as he wrote. While reading or writing, we build worlds in our imaginations. We visualize settings; we grow acquainted with characters; we sense a plot’s consistency or the consistency of a system of magic. We build worlds in our imaginations also when doing and studying physics and math. The Standard Model is a system that encapsulates the consistency of our knowledge about particles. We tell stories about electrons’ behaviors in magnetic fields. Theorems’ proofs have logical structures like plots’. Pratchett and other authors trained me to build worlds in my imagination. Little wonder I’m training to build worlds as a physicist.

Around the time I graduated from college, Diana Wynne Jones passed away. So did Brian Jacques (another British novelist) and Madeleine L’Engle. L’Engle wasn’t British, but I forgave her because her Time Quartet introduced me to dimensions beyond three. As I completed one stage of intellectual growth, creators who’d led me there left.

Terry Pratchett has joined Jones, Jacques, and L’Engle. I will probably create nothing as valuable as his Discworld, let alone a character in the Standard Model toward which the Discworld steered me.

But, because of Terry Pratchett, I have to try.

Always look on the bright side…of CPTP maps.

Once upon a time, I worked with a postdoc who shaped my views of mathematical physics, research, and life. Each week, I’d email him a PDF of the calculations and insights I’d accrued. He’d respond along the lines of, “Thanks so much for your notes. They look great! I think they’re mostly correct; there are just a few details that might need fixing.” My postdoc would point out the “details” over espresso, at a café table by a window. “Are you familiar with…?” he’d begin, and pull out of his back pocket some bit of math I’d never heard of. My calculations appeared to crumble like biscotti.

Some of the math involved CPTP maps. “CPTP” stands for a phrase little more enlightening than the acronym: “completely positive trace-preserving.” CPTP maps represent processes undergone by quantum systems. Imagine preparing some system—an electron, a photon, a superconductor, etc.—in a state I’ll call $\rho$. Imagine turning on a magnetic field, or coupling one electron to another, or letting the superconductor sit untouched. Any such evolution is represented by a CPTP map, which I’ll label $\mathcal{E}$.

“Trace-preserving” means the following: Imagine that, instead of switching on the magnetic field, you measured some property of $\rho$. If your measurement device (your photodetector, spectrometer, etc.) worked perfectly, you’d read out one of several possible numbers. Let $p_i$ denote the probability that you read out the $i^{\rm th}$ possible number. Because your device outputs some number, the probabilities sum to one: $\sum_i p_i = 1$. We say that $\rho$ “has trace one.” But you don’t measure $\rho$; you switch on the magnetic field. $\rho$ undergoes the process $\mathcal{E}$, becoming a quantum state $\mathcal{E}(\rho)$. Imagine that, after the process ended, you measured a property of $\mathcal{E}(\rho)$. If your measurement device worked perfectly, you’d read out one of several possible numbers. Let $q_a$ denote the probability that you read out the $a^{\rm th}$ possible number. The probabilities sum to one: $\sum_a q_a = 1$. $\mathcal{E}(\rho)$ “has trace one,” so the map $\mathcal{E}$ is “trace-preserving.”*

Now that we understand trace preservation, we can understand positivity. The probabilities $p_i$ are positive (actually, nonnegative) because they lie between zero and one. Since the $p_i$ characterize a crucial aspect of $\rho$, we call $\rho$ “positive” (though we should call $\rho$ “nonnegative”). $\mathcal{E}$ turns the positive $\rho$ into the positive $\mathcal{E}(\rho)$. Since $\mathcal{E}$ maps positive objects to positive objects, we call $\mathcal{E}$ “positive.” $\mathcal{E}$ also satisfies a stronger condition, so we call such maps “completely positive.”**

Back to my postdoc at the café table: “It’s almost right,” he’d repeat, nudging aside his espresso and pulling out a pencil. We’d patch the holes in my calculations. We might rewrite my conclusions, strengthen my assumptions, or prove another lemma. Always, we salvaged cargo. Always, I learned.

I no longer email weekly updates to a postdoc. But I apply what I learned at that café table, about entanglement and monotones and complete positivity. “It’s almost right,” I tell myself when a hole yawns in my calculations and a week’s work appears to fly out the window. “I have to fix a few details.”

Am I certain? No. But I remain positive.

*Experts: “Trace-preserving” means $\mathrm{Tr}(\rho) = 1 \Rightarrow \mathrm{Tr}(\mathcal{E}(\rho)) = 1$.

**Experts: Suppose that $\rho$ is defined on a Hilbert space $\mathcal{H}$ and that $\mathcal{E}(\rho)$ is defined on a Hilbert space $\mathcal{H}'$. “$\mathcal{E}$ is positive” means that $\rho \geq 0$ implies $\mathcal{E}(\rho) \geq 0$.

To understand what “completely positive” means, imagine that our quantum system interacts with an environment. For example, suppose the system consists of photons in a box. If the box leaks, the photons interact with the electromagnetic field outside the box. Suppose the system-and-environment composite begins in a state $\sigma_{AB}$ defined on a Hilbert space $\mathcal{H}_{AB}$. $\mathcal{E}$ acts on the system’s part of the state. Let $\mathcal{I}$ denote the identity operation that maps every possible environment state to itself. Suppose that $\mathcal{E}$ changes the system’s state while $\mathcal{I}$ preserves the environment’s state. The system-and-environment composite ends up in the state $(\mathcal{E} \otimes \mathcal{I})(\sigma_{AB})$. This state is positive, so we call $\mathcal{E}$ “completely positive”: $\sigma_{AB} \geq 0 \Rightarrow (\mathcal{E} \otimes \mathcal{I})(\sigma_{AB}) \geq 0$.
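As a concrete illustration (a standard textbook example, not anything from those café-table notes), here is a minimal numerical sketch of a CPTP map: the single-qubit depolarizing channel, written in terms of Kraus operators. The names and the noise strength below are arbitrary choices.

import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing_channel(rho, p):
    # Kraus operators K_i satisfying sum_i K_i^dagger K_i = identity,
    # which is what makes the map trace-preserving.
    kraus = [np.sqrt(1 - 3 * p / 4) * I2,
             np.sqrt(p / 4) * X,
             np.sqrt(p / 4) * Y,
             np.sqrt(p / 4) * Z]
    return sum(K @ rho @ K.conj().T for K in kraus)

# Prepare rho = |+><+|, a valid state: trace one and positive.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

out = depolarizing_channel(rho, p=0.3)
print(np.trace(out).real)        # 1.0: the trace is preserved
print(np.linalg.eigvalsh(out))   # nonnegative eigenvalues: the output is positive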

Celebrating Theoretical Physics at Caltech’s Burke Institute

Editor’s Note: Yesterday and today, Caltech is celebrating the inauguration of the Walter Burke Institute for Theoretical Physics. John Preskill made the following remarks at a dinner last night honoring the board of the Sherman Fairchild Foundation.

This is an exciting night for me and all of us at Caltech. Tonight we celebrate physics. Especially theoretical physics. And in particular the Walter Burke Institute for Theoretical Physics.

Some of our dinner guests are theoretical physicists. Why do we do what we do?

I don’t have to convince this crowd that physics has a profound impact on society. You all know that. We’re celebrating this year the 100th anniversary of general relativity, which transformed how we think about space and time. It may be less well known that two years later Einstein laid the foundations of laser science. Einstein was a genius for sure, but I don’t think he envisioned in 1917 that we would use his discoveries to play movies in our houses, or print documents, or repair our vision. Or see an awesome light show at Disneyland.

And where did this phone in my pocket come from? Well, the story of the integrated circuit is fascinating, prominently involving Sherman Fairchild, and other good friends of Caltech like Arnold Beckman and Gordon Moore. But when you dig a little deeper, at the heart of the story are two theorists, Bill Shockley and John Bardeen, with an exceptionally clear understanding of how electrons move through semiconductors. Which led to transistors, and integrated circuits, and this phone. And we all know it doesn’t stop here. When the computers take over the world, you’ll know who to blame.

Incidentally, while Shockley was a Caltech grad (BS class of 1932), John Bardeen, one of the great theoretical physicists of the 20th century, grew up in Wisconsin and studied physics and electrical engineering at the University of Wisconsin at Madison. I suppose that in the 1920s Wisconsin had no pressing need for physicists, but think of the return on the investment the state of Wisconsin made in the education of John Bardeen.1

So, physics is a great investment, of incalculable value to society. But … that’s not why I do it. I suppose few physicists choose to do physics for that reason. So why do we do it? Yes, we like it, we’re good at it, but there is a stronger pull than just that. We honestly think there is no more engaging intellectual adventure than struggling to understand Nature at the deepest level. This requires attitude. Maybe you’ve heard that theoretical physicists have a reputation for arrogance. Okay, it’s true, we are arrogant, we have to be. But it is not that we overestimate our own prowess, our ability to understand the world. In fact, the opposite is often true. Physics works, it’s successful, and this often surprises us; we wind up being shocked again and again by the “unreasonable effectiveness of mathematics in the natural sciences.” It’s hard to believe that the equations you write down on a piece of paper can really describe the world. But they do.

And to display my own arrogance, I’ll tell you more about myself. This occasion has given me cause to reflect on my own 30+ years on the Caltech faculty, and what I’ve learned about doing theoretical physics successfully. And I’ll tell you just three principles, which have been important for me, and may be relevant to the future of the Burke Institute. I’m not saying these are universal principles – we’re all different and we all contribute in different ways, but these are principles that have been important for me.

My first principle is: We learn by teaching.

Why do physics at universities, at institutions of higher learning? Well, not all great physics is done at universities. Excellent physics is done at industrial laboratories and at our national laboratories. But the great engine of discovery in the physical sciences is still our universities, and US universities like Caltech in particular. Granted, US preeminence in science is not what it once was — it is a great national asset to be cherished and protected — but world changing discoveries are still flowing from Caltech and other great universities.

Why? Well, when I contemplate my own career, I realize I could never have accomplished what I have as a research scientist if I were not also a teacher. And it’s not just because the students and postdocs have all the great ideas. No, it’s more interesting than that. Most of what I know about physics, most of what I really understand, I learned by teaching it to others. When I first came to Caltech 30 years ago I taught advanced elementary particle physics, and I’m still reaping the return from what I learned those first few years. Later I got interested in black holes, and most of what I know about that I learned by teaching general relativity at Caltech. And when I became interested in quantum computing, a really new subject for me, I learned all about it by teaching it.2

Part of what makes teaching so valuable for the teacher is that we’re forced to simplify, to strip down a field of knowledge to what is really indispensable, a tremendously useful exercise. Feynman liked to say that if you really understand something you should be able to explain it in a lecture for the freshman. Okay, he meant the Caltech freshman. They’re smart, but they don’t know all the sophisticated tools we use in our everyday work. Whether you can explain the core idea without all the peripheral technical machinery is a great test of understanding.

And of course it’s not just the teachers, but also the students and the postdocs who benefit from the teaching. They learn things faster than we do and often we’re just providing some gentle steering; the effect is to amplify greatly what we could do on our own. All the more so when they leave Caltech and go elsewhere to change the world, as they so often do, like those who are returning tonight for this Symposium. We’re proud of you!

My second principle is: The two-trick pony has a leg up.

I’m a firm believer that advances are often made when different ideas collide and a synthesis occurs. I learned this early, when as a student I was fascinated by two topics in physics, elementary particles and cosmology. Nowadays everyone recognizes that particle physics and cosmology are closely related, because when the universe was very young it was also very hot, and particles were colliding at very high energies. But back in the 1970s, the connection was less widely appreciated. By knowing something about cosmology and about particle physics, by being a two-trick pony, I was able to think through what happens as the universe cools, which turned out to be my ticket to becoming a Caltech professor.

It takes a community to produce two-trick ponies. I learned cosmology from one set of colleagues and particle physics from another set of colleagues. I didn’t know either subject as well as the real experts. But I was a two-trick pony, so I had a leg up. I’ve tried to be a two-trick pony ever since.

Another great example of a two-trick pony is my Caltech colleague Alexei Kitaev. Alexei studied condensed matter physics, but he also became intensely interested in computer science, and learned all about that. Back in the 1990s, perhaps no one else in the world combined so deep an understanding of both condensed matter physics and computer science, and that led Alexei to many novel insights. Perhaps most remarkably, he connected ideas about error-correcting codes, which protect information from damage, with ideas about novel quantum phases of matter, leading to radical new suggestions about how to operate a quantum computer using exotic particles we call anyons. These ideas had an invigorating impact on experimental physics and may someday have a transformative effect on technology. (We don’t know that yet; it’s still way too early to tell.) Alexei could produce an idea like that because he was a two-trick pony.3

Which brings me to my third principle: Nature is subtle.

Yes, mathematics is unreasonably effective. Yes, we can succeed at formulating laws of Nature with amazing explanatory power. But it’s a struggle. Nature does not give up her secrets so readily. Things are often different than they seem on the surface, and we’re easily fooled. Nature is subtle.4

Perhaps there is no greater illustration of Nature’s subtlety than what we call the holographic principle. This principle says that, in a sense, all the information that is stored in this room, or any room, is really encoded entirely and with perfect accuracy on the boundary of the room, on its walls, ceiling and floor. Things just don’t seem that way, and if we underestimate the subtlety of Nature we’ll conclude that it can’t possibly be true. But unless our current ideas about the quantum theory of gravity are on the wrong track, it really is true. It’s just that the holographic encoding of information on the boundary of the room is extremely complex and we don’t really understand in detail how to decode it. At least not yet.

This holographic principle, arguably the deepest idea about physics to emerge in my lifetime, is still mysterious. How can we make progress toward understanding it well enough to explain it to freshmen? Well, I think we need more two-trick ponies. Except maybe in this case we’ll need ponies who can do three tricks or even more. Explaining how spacetime might emerge from some more fundamental notion is one of the hardest problems we face in physics, and it’s not going to yield easily. We’ll need to combine ideas from gravitational physics, information science, and condensed matter physics to make real progress, and maybe completely new ideas as well. Some of our former Sherman Fairchild Prize Fellows are leading the way at bringing these ideas together, people like Guifre Vidal, who is here tonight, and Patrick Hayden, who very much wanted to be here.5 We’re very proud of what they and others have accomplished.

Bringing ideas together is what the Walter Burke Institute for Theoretical Physics is all about. I’m not talking about only the holographic principle, which is just one example, but all the great challenges of theoretical physics, which will require ingenuity and synthesis of great ideas if we hope to make real progress. We need a community of people coming from different backgrounds, with enough intellectual common ground to produce a new generation of two-trick ponies.

Finally, it seems to me that an occasion as important as the inauguration of the Burke Institute should be celebrated in verse. And so …

Who studies spacetime stress and strain
And excitations on a brane,
Where particles go back in time,
And physicists engage in rhyme?

Whose speedy code blows up a star
(Though it won’t quite blow up so far),
Where anyons, which braid and roam
Annihilate when they get home?

Who makes math and physics blend
Inside black holes where time may end?
Where do they do all this work?
The Institute of Walter Burke!

We’re very grateful to the Burke family and to the Sherman Fairchild Foundation. And we’re confident that your generosity will make great things happen!

 


  1. I was reminded of this when I read about a recent proposal by the current governor of Wisconsin. 
  2. And by the way, I put my lecture notes online, and thousands of people still download them and read them. So even before MOOCs – massive open online courses – the Internet was greatly expanding the impact of our teaching. Handwritten versions of my old particle theory and relativity notes are also online here.
  3. Okay, I admit it’s not quite that simple. At that same time I was also very interested in both error correction and in anyons, without imagining any connection between the two. It helps to be a genius. But a genius who is also a two-trick pony can be especially awesome. 
  4. We made that the tagline of IQIM. 
  5. Patrick can’t be here for a happy reason, because today he and his wife Mary Race welcomed a new baby girl, Caroline Eleanor Hayden, their first child. The Burke Institute is not the only good thing being inaugurated today. 

Democrat plus Republican over the square-root of two

I wish I could superpose votes on Election Day.

However much I agree with Candidate A about social issues, I dislike his running mate. I lean toward Candidate B’s economic plans and C’s science-funding record, but nobody’s foreign policy impresses me. Must I settle on one candidate? May I not vote

$\frac{ | \mathrm{Democrat} \rangle \, + \, | \mathrm{Republican} \rangle }{ \sqrt{2} } \, ?$

Now you can—at least in theory. Caltech postdoc Ning Bao and I concocted quantum elections in which voters can superpose, entangle, and create probabilistic mixtures of votes.

Previous quantum-voting work has focused on privacy and cryptography. Ning and I channeled quantum game theory. Quantum game theorists ask what happens if players in classical games, such as the Prisoner’s Dilemma, could superpose strategies and share entanglement. Quantization can change the landscape of possible outcomes.

The Prisoner’s Dilemma, for example, concerns two thugs whom the police have arrested and have isolated in separate cells. Each prisoner must decide whether to rat out the other. How much time each serves depends on who, if anyone, confesses. Since neither prisoner knows the other’s decision, each should rat to minimize his or her jail time. But both would serve less time if neither confessed. The prisoners can escape this dilemma using quantum resources.

Introducing superpositions and entanglement into games helps us understand the power of quantum mechanics. Elections involve gameplay; pundits have been feeding off Hillary Clinton’s for months. So superpositions and entanglement merit introduction into elections.

How can you model elections with quantum systems? Though multiple options exist, Ning and I followed two principles: (1) A general quantum process—a preparation procedure, an evolution, and a measurement—should model a quantum election. (2) Quantum elections should remain as true as possible to classical elections.
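To give a flavor of principle (1)—as a toy illustration only, not the construction from our paper—one can represent a ballot as a vector in a Hilbert space spanned by the candidates; “counting” the ballot amounts to measuring in the candidate basis. A minimal sketch in Python:

import numpy as np

# Three candidates span a 3-dimensional Hilbert space.
A = np.array([1, 0, 0], dtype=complex)
B = np.array([0, 1, 0], dtype=complex)
C = np.array([0, 0, 1], dtype=complex)

# A superposed vote: (|A> + |B>) / sqrt(2)
ballot = (A + B) / np.sqrt(2)

# "Counting" the ballot = measuring in the candidate basis.
probabilities = {name: float(abs(amp) ** 2) for name, amp in zip("ABC", ballot)}
print(probabilities)   # {'A': 0.5, 'B': 0.5, 'C': 0.0}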

Given our quantum voting system, one can violate a quantum analogue of Arrow’s Impossibility Theorem. Arrow’s Theorem, developed by the Nobel-winning economist Kenneth Arrow during the mid-20th century, is a no-go theorem about elections: if a constitution satisfies three innocuous-seeming properties (roughly, it accommodates every possible set of voter rankings, it respects unanimous preferences, and its ranking of any two candidates depends only on how voters rank those two candidates), it’s a dictatorship. Ning and I translated the theorem as faithfully as we knew how into our quantum voting scheme. The result, dubbed the Quantum Arrow Conjecture, rang false.

Superposing (and probabilistically mixing) votes entices me for a reason that science does: I feel ignorant. I read articles and interview political junkies about national defense; but I miss out on evidence and subtleties. I read quantum-physics books and work through papers; but I miss out on known mathematical tools and physical interpretations. Not to mention tools and interpretations that humans haven’t discovered.

Science involves identifying (and diminishing) what humanity doesn’t know. Science frees me to acknowledge my ignorance. I can’t throw all my weight behind Candidate A’s defense policy because I haven’t weighed all the arguments about defense, because I don’t know all the arguments. Believing that I do would violate my job description. How could I not vote for elections that accommodate superpositions?

Though Ning and I identified applications of superpositions and entanglement, more quantum strategies might await discovery. Monogamy of entanglement, discussed elsewhere on this blog, might limit the influence voters exert on each other. Also, we quantized ordinal voting systems (in which each voter ranks candidates, as in “A above C above B”). The quantization of cardinal voting (in which each voter grades the candidates, as in “5 points to A, 3 points to C, 2 points to B”) or another voting scheme might yield more insights.

If you have such insights, drop us a line. Ideally before the presidential smack-down of 2016.

To become a good teacher, ignore everything you’re told and learn from the masters (part 4 of 4)

In the previous posts in this series, I described how using lessons I learned from Richard Feynman and John Preskill led me to become a more popular TA.

What I learned from a few others*

If by some miracle I ever get to be a professor, there will be a few others I look to for teaching wisdom, some of which I’ve already made use of.

When it comes to writing problem sets, I would look to the two physicists who write the best problem sets I’ve ever seen: Kip Thorne and Andreas Ludwig. In my opinion, a problem set should not be something that just makes you apply the things you learned in class to other examples that are essentially the same as things you’ve seen before. The best problems make you work through an exciting new topic that the professor doesn’t have time to cover in lecture. I remember sitting in my office looking out at one of the 4 km long arms of the LIGO Hanford Observatory, where I was working during the summer after my sophomore year at Caltech, while working through some of Kip’s homework problems because those were some of the best resources I could find to teach me how the amazing contraption really works.1 And Ludwig probably has the best problems of all. The first problem set from one of his classes on many-body field theory, a pretty typical one, consists of two problems written over five pages along with six pages of appendices attached. Those problem sets looked daunting, but they really weren’t. Once you read through everything and thought carefully about it, the problems weren’t so bad and you ended up deriving some really cool results!2

Finally, I’ll describe some things that I learned from Ed McCaffery, who brilliantly dealt with the challenging job of teaching humanities at Caltech by accomplishing two goals. First, he “tricked” us into finding a “boring” subject interesting. Second, he had a great sense of who his audience was and taught us in a way that we would actually be receptive to once he had our attention. He accomplished the first goal by making his lectures extremely humorous and ridiculous, but in such a way that they were actually filled with content. He accomplished the second by boiling down the subject to a few key principles and essentially deriving the rest of the ideas from these while ignoring all of the technical details (it was almost as if we were in a physics or math class). The main reason I kept going to his classes was to be entertained—which is especially impressive seeing as how I would normally consider law to be a terribly boring subject—but I accidentally ended up learning a lot. Of all the hums I took at Caltech, his two classes are the ones I remember the most from to this day, and I know I’m not alone in this regard.

If I’m ever put in the challenging position of, for example, teaching an introductory physics course to students, maybe primarily premeds for illustrative purposes, who have no desire to learn it and are being forced to take it to satisfy some requirement, I would try to accomplish the same two goals that McCaffery did in what was essentially an equivalent scenario. It would be hard to be as funny as McCaffery, but maybe I could figure out some other way of being sufficiently ridiculous to “trick” the students into caring about the class. On the other hand, with a little experimentation and a willingness to ignore what I was told to do, I bet it wouldn’t be too hard to find a way to teach some physics in a way that these students could relate to and maybe actually remember something from after the class was over. In a bigger school like UCSB—where not everyone is going to be either a scientist, a mathematician, or an engineer and there are several segregated levels of introductory physics courses—I’ve often asked, “Why are we teaching premeds how to calculate the trajectory of a cannon ball, the moment of inertia of a disk, or the efficiency of a heat engine in quantitative detail?” It’s pretty clear that most of them aren’t going to care about any of this, and they’re really not going to need to know how to do any of that after they pass the class anyway. So is it really that shocking that they tend to go through the class with the attitude that they’re just going to do what it takes to get a good grade instead of with the attitude that they’d like to learn some science? I believe this is roughly equivalent to trying to teach Caltech undergrads the intricacies of the tax code, in the way you might teach USC law students, which I’m pretty sure wouldn’t be a huge success.

The first thing I would try would be to teach physics in a back-of-an-envelope kind of way, ignoring any rigor and just trying to get a feel for how powerful a tool physics, and really science in general, can be by applying it to problems that I thought the students would be interested in or find amusing. For example, I might explain how using a simple scaling argument shows that the height that most animals can jump, regardless of their size, is roughly universal (and, by observations, happens to be on the order of half a meter). Or maybe I’d explain how some basic knowledge of material properties allows you to estimate how the maximum height of a mountain depends on the properties of a planet—and when you plug the numbers in for the earth, you basically get the height of Mt. Everest.3 You could probably even do this in such a way that every example and every problem would actually be relevant to what premeds might use later in their careers. And maybe the most important part is that I know I would learn a lot from doing this—it could even be completely different if taught multiple times—and so I would actually be excited about the material and would be motivated to explain it well.
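To make the mountain example concrete, here is the kind of back-of-the-envelope estimate I have in mind, sketched with round numbers of my own choosing (not taken from the references in footnote 3):

# The rock at the base of a mountain of height h supports a pressure ~ rho * g * h.
# Demanding that this pressure not exceed the rock's strength bounds the height.
yield_stress = 2e8   # Pa: order-of-magnitude compressive strength of rock
rho = 3000.0         # kg/m^3: typical density of rock
g = 9.8              # m/s^2: Earth's surface gravity

h_max = yield_stress / (rho * g)
print(f"h_max ~ {h_max / 1000:.0f} km")   # ~7 km: roughly the scale of Everest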

Figure 4.1: Maybe the most famous back-of-an-envelope calculation of all time. Oops, we added a scale bar and a time stamp. With a little dimensional analysis and some physical intuition, anyone can now estimate how powerful our nuclear bomb is.

And if that didn’t work, I’d hope that in talking with the students and getting a sense of what was important to them, I would be able to come up with a different approach that would be successful. I simply don’t believe that it’s impossible to find a way to reach every kind of student, whether they be aspiring scientists, mathematicians, engineers, doctors, lawyers, historians, poets, artists, or politicians. Physics is just too exciting of a subject for that to be true, but you’ve got to know your audience. (Maybe McCaffery feels the same way about law and economics for all I know.)

Closing thoughts

There are two memories in particular of interactions with students that will always mean more to me than the teaching award itself because they remind me of how I felt interacting with John. As we were leaving a particularly good lunch lecture,4 I remember Steve grinning from ear to ear and telling everyone within sight how awesome the lecture was. In a similar situation after one of my quantum field theory lectures,5 I was standing in the hallway answering a student’s question, but out of the corner of my eye I could see a small group of students with grins similar to Steve’s and overheard them saying that these were some of the best recitation sections they had ever been to. The other time was when a student came to my office hours and asked if he could ask me a question about some random advanced physics topic he was reading about because he really wanted to understand it well and wanted to hear what I had to say about it. That student probably had no idea how good that made me feel because that’s exactly how I felt anytime I asked John a question.

Looking back over my notes for the field theory course, I feel like I didn’t actually do that good of a job overall, though I am happy that the students seemed to enjoy it and learned a lot from it. There are some things I am very proud of. Probably the biggest one was my last lecture on the Casimir effect which included a digression on what it means for something to be “renormalizable” or “non-renormalizable” and how there is absolutely nothing wrong with the second kind in the context of effective field theories.6 After an introduction to the philosophy of effective field theories (see footnote 6 for an excellent reference), that discussion mostly included the classic pictures, found in Figure 4.2, of the standard model superseding Fermi theory followed by string theory superseding the effective field theory of gravity,7 though I also very briefly mentioned that neutrino masses and proton decay could be understood by similar arguments.

Figure 4.2: The top three images are snapshots from my 5.5 page digression on renormalizability. The bottom image is an excerpt from notes that I frantically scribbled down after returning to my office at the IQI after John’s lecture on the same topic. The above-mentioned “regurgitation” should be clear.

But the lectures taken as a whole were too technical for the intended audience, unlike my phase transition lecture. At the time I was giving the lectures, I was working through Weinberg’s books on QFT (and GR) and was very excited about his non-standard approach, which seemed especially elegant to me.8 I think I let my excitement about Weinberg creep into some of my lectures without properly toning down the math,9 as is most clearly illustrated by my attempt to explain the spin-statistics theorem. That could be much better explained to the intended audience with pictures along the lines of Feynman’s discussion in Elementary Particles and the Laws of Physics or John’s comments in his chapter on anyons.10 If I have the opportunity to teach ever again, I’ll try to do an even better John Preskill imitation, maybe perturbing it slightly with further wisdom gained from others over the years.


*This section lies somewhat out of the blog’s main line of development, and may be omitted in a first reading.


  1. The problems used to be available here under the link to Gravitational Waves (Ph237, 2002), but the link appears to be broken now. Maybe someone at Caltech can fix it. [Update: The link has been fixed.] It was a really great resource which also included all of the video of Kip—another great lecturer with lots to learn from—teaching the course. If Kip hadn’t gone emeritus to start a career in Hollywood right after my freshman year, it’s possible that I would have been trying to imitate him instead of John. The very first thought I had when I opened my Caltech admissions letter was, “Yes! Now I’m going to get to learn GR from Kip Thorne!” It didn’t work out that way, but I still had a pretty awesome time there. 
  2. In classic Ludwig fashion, the few words telling you what you are actually expected to do for the assignment are written in bold. I actually already stole Ludwig’s style when I was given the opportunity to write an extra credit problem on magnetic monopoles for the electrodynamics class, a problem I wrote based on John’s excellent review article. (I know that at least one of my students actually read John’s paper after doing the problem because he asked me questions about it afterwards.) 
  3. I was first exposed to these things by Sterl Phinney in Caltech’s Physics 101 class. Here is a draft of a nice textbook on this material, here is another one in a similar vein, and here are some problems by Purcell. 
  4. It was originally a lecture on tachyons which somehow ended with John explaining how the CMB spontaneously breaks Lorentz invariance. While the end of that lecture was especially awesome, the main part proved particularly valuable when I took string theory in grad school two years later since it is still the best explanation of what tachyons really are—and why they’re not that scary or mysterious—that I’ve ever seen or heard. The right two diagrams in Figure 2.1 of the second post (showing dispersion relations for ordinary and tachyonic particles and tachyon condensation in a Higgs-like potential) are actually the pictures John drew to describe this. While ordinary massive particles have a group velocity that’s zero at zero momentum and approaches the speed of light from below as the momentum is increased, tachyons have a formally infinite group velocity that approaches the speed of light from above. But all this is saying is that you expanded about an unstable vacuum; there’s nothing scary like causality violation or a breakdown of special relativity. 
  5. I think it was the lecture where I explained how being stubborn about promoting the global U(1) symmetry that the Dirac equation (the equation governing the dynamics of electrons and positrons) naturally possesses to a local U(1) symmetry forces you to add a photon and write down Maxwell’s equations. I had insomnia for a week when I learned that from Sergei Gukov as a junior at Caltech. Unfortunately I don’t know of a good reference for this explained in this way, so here are my notes for that class—the real fun starts on page 3. (Note that I was careful not to go through all the details of the math on the board and just highlighted the results enough to tell a good story.)
         However, I do have some excellent references for essentially the same result told from a slightly different point of view which includes gravitons (the spin-2 generalization of the spin-1 photons) as well as the spin-1 gauge bosons: two papers by Weinberg here and here (see also Chapters 2, 5, and 13 of The Quantum Theory of Fields), The Feynman Lectures on Gravitation, and a paper by Wald. For a good introduction and overview of all of this literature, along with lots more references and details, see comments by John and Kip here. For the awesome Weinberg approach, see these awesome lectures by Freddy Cachazo and Nima Arkani-Hamed (all four of them which also include a lot of extensions and many failed attempts to avoid what seems to be the inevitable). (I have never seen the generalization to non-abelian gauge fields made explicit except by Cachazo or Arkani-Hamed. Also, if you’re having trouble getting through Chapter 2 of The Quantum Theory of Fields, a pretty necessary prerequisite for understanding a lot of Weinberg’s arguments, the first two lectures by Cachazo in that series make a huge dent, and the last two make a good dent in getting through Georgi.)
         The main point of a lot of these references is that if you include special relativity, quantum mechanics, and a massless spin-1 particle, you are essentially forced to write down Maxwell’s equations. If you do the same thing but for a spin-2 particle, you are essentially forced to write down Einstein’s equations. You have no choice; you have to do it! (Appropriate caveats about effective field theories apply.) You also find that you can’t do anything non-trivial (at least at low energies and under a few other technical assumptions) if you have a massless particle of spin greater than 2. See also here for another approach to a similar result. I had another week-long bout of insomnia when I discovered these references around the time I was actually teaching this course. 
  6. I felt like this was an especially important lesson for my students to learn since they were using Bjorken and Drell as a textbook. It might be a classic, but it’s a dated classic and certainly doesn’t explain Ken Wilson’s incredible insights. See this paper for a very nice, and at one point humorous, discussion by Joe Polchinski that should be required reading for anyone before they take a QFT class. (Here is the more technical paper on the topic that Joe is really famous for.) 
  7. See here and here for complementary introductions to thinking about gravity as an effective field theory and here for a more comprehensive review. See also this take by Weinberg which also discusses the possibility that the effective field theory of gravity is asymptotically safe and that that’s all there is. Weinberg presents a similar argument in this lecture where he says, “If I had to bet, I would bet that’s not the case….My bet would be something on string theory. I’m not against string theory. I just want to raise this as a possibility we shouldn’t forget.” 
  8. I think Weinberg’s books often get an unfair rap. The three volumes in The Quantum Theory of Fields speak for themselves. They’re certainly not for beginners, but once you’ve gotten through several other books first, it’s a really great way to look at things. (See the links to Freddy Cachazo in footnote 5 if you want some help getting through some early parts that seem to deter a lot of people from ever really trying to read Weinberg.)
         But his GR book is one of my all-time favorites (though the cosmology parts are a bit old and so you should consult his newer book or even his awesome popular science book on that subject). It starts by taking the results discussed in footnote 5 seriously and viewing the (strong) equivalence principle as a theorem, rather than a postulate, derived from quantum field theory from which the rest of general relativity can be derived. (A mathematician may not call it a theorem, but I’m a physicist.) All of the same equations you would find in the other GR books that use the more traditional geometric approach are there, it’s just that different words are used to describe them. The following quote by Weinberg nicely summarizes the approach taken in the book (as well as explaining why I encounter so much resistance to it seeing as how I’m surrounded by “many general relativists”):
         …the geometric interpretation of the theory of gravitation has dwindled to a mere analogy, which lingers in our language in terms like “metric,” “affine connection,” and “curvature,” but is not otherwise very useful. The important thing is to be able to make predictions…and it simply doesn’t matter whether we ascribe these predictions to the physical effect of gravitational fields on the motion of planets and photons or to a curvature of space and time. (The reader should be warned that these views are heterodox and would be met with objections from many general relativists.)
         But I love this point of view because it lays the groundwork for putting gravity in its place as “just” another effective field theory. Despite what you may have been led to believe, gravity and quantum mechanics actually do play nice together, for a while at least until you try to push them too close and all of the disasters that you’ve heard about kick in that signal the need for new physics (see Figure 4.2 again). To quote Joe, “Nobody ever promised you a rose garden” (see footnote 6), but this seems like at least a nice field of grass to me. See footnote 7 for more on this view. 
  9. In my opinion, “toning down the math” is one of the easiest mistakes to make, and the hardest to avoid, when teaching to any audience, even your own peers. It can be very tempting to go through an argument in all the gory details, but this is when, at least for me when I’m watching a talk, you start to lose the audience. Besides, no one is really going to believe you unless they reproduce the results for themselves with their own pencil (or piece of chalk), so you might as well tell a good story and leave the dotting of the is and the crossing of the ts “as an exercise for the listener.” Now whenever I prepare a talk, I try to take into account who the audience will be, and think to myself, “Remember, try to say it like John would.” I’m still not that great at it, but I think I’m getting better. 
  10. Everyone should read John’s comments. I’m embarrassed to admit it, but I did get the “quite misleading” impression discussed on page 10. This even caused great confusion when I asserted some wrong things in a condensed matter student talk, but since none of my colleagues were able to isolate my mistake clearly, or at least communicate it to me, I think this misleading impression is quite widespread.
         I think the point is that the standard proofs of the spin-statistics theorem (at least the ones given in Weinberg, Srednicki, and Streater and Wightman) show that special relativity and quantum mechanics require you to add antiparticles and to quantize with bosonic statistics for integer spins and fermionic statistics for half-integer spins. This is completely correct—and really cool!—but as John points out, all that is necessary for the spin-statistics connection is the existence of antiparticles. In QFT, you see that you need antiparticles and then immediately need to quantize them “properly,” but it’s easy to conflate these two steps. It also didn’t help that I was explicitly told many times that you need relativistic quantum mechanics to understand the spin-statistics connection. Unfortunately, I repeated this partial lie to my own students.