# Always look on the bright side…of CPTP maps.

Once upon a time, I worked with a postdoc who shaped my views of mathematical physics, research, and life. Each week, I’d email him a PDF of the calculations and insights I’d accrued. He’d respond along the lines of, “Thanks so much for your notes. They look great! I think they’re mostly correct; there are just a few details that might need fixing.”

My postdoc would point out the “details” over espresso, at a café table by a window. “Are you familiar with…?” he’d begin, and pull out of his back pocket some bit of math I’d never heard of. My calculations appeared to crumble like biscotti.

Some of the math involved CPTP maps. “CPTP” stands for a phrase little more enlightening than the acronym: “completely positive trace-preserving.” CPTP maps represent processes undergone by quantum systems. Imagine preparing some system—an electron, a photon, a superconductor, etc.—in a state I’ll call “ρ.” Imagine turning on a magnetic field, or coupling one electron to another, or letting the superconductor sit untouched. A CPTP map, labeled $\mathcal{E}$, represents every such evolution.

“Trace-preserving” means the following. Imagine that, instead of switching on the magnetic field, you measured some property of ρ. If your measurement device (your photodetector, spectrometer, etc.) worked perfectly, you’d read out one of several possible numbers. Let $p_i$ denote the probability that you read out the $i$th possible number. Because your device outputs some number, the probabilities sum to one: $\sum_i p_i = 1$. We say that ρ “has trace one.”

But you don’t measure ρ; you switch on the magnetic field. ρ undergoes the process $\mathcal{E}$, becoming a quantum state $\mathcal{E}(\rho)$. Imagine that, after the process ended, you measured a property of $\mathcal{E}(\rho)$. If your measurement device worked perfectly, you’d read out one of several possible numbers. Let $q_a$ denote the probability that you read out the $a$th possible number. The probabilities sum to one: $\sum_a q_a = 1$. $\mathcal{E}(\rho)$ “has trace one,” so $\mathcal{E}$ is “trace-preserving.” *

Now that we understand trace preservation, we can understand positivity. The probabilities $p_i$ are positive (actually, nonnegative) because they lie between zero and one. Since the $p_i$ characterize ρ, we call ρ “positive” (though we should call ρ “nonnegative”). $\mathcal{E}$ turns the positive ρ into the positive $\mathcal{E}(\rho)$. Since $\mathcal{E}$ maps positive objects to positive objects, we call $\mathcal{E}$ “positive.” $\mathcal{E}$ also satisfies a stronger condition, so we call $\mathcal{E}$ “completely positive.”**
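The trace-one and positivity conditions above are concrete enough to check numerically. Here is a minimal numpy sketch (my own illustration, not from the original notes), using the amplitude-damping channel, a standard example of a CPTP map written in terms of Kraus operators:

```python
import numpy as np

gamma = 0.3  # damping probability (an illustrative choice)
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

# Trace preservation: the Kraus operators satisfy sum_k K_k^dag K_k = I
completeness = K0.conj().T @ K0 + K1.conj().T @ K1
assert np.allclose(completeness, np.eye(2))

# A valid quantum state rho: trace one and nonnegative eigenvalues
rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # the pure state |+><+|

# Apply the channel: rho -> sum_k K_k rho K_k^dag
rho_out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

print(np.trace(rho_out).real)        # trace is preserved: 1.0
print(np.linalg.eigvalsh(rho_out))   # eigenvalues remain nonnegative
```

Whatever trace-one, positive ρ you feed in, the output stays trace-one and positive, which is exactly what “trace-preserving” and “positive” promise.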

So we’d meet again. “It’s almost right,” he’d repeat, nudging aside his espresso and pulling out a pencil. We’d patch the holes in my calculations. We might rewrite my conclusions, strengthen my assumptions, or prove another lemma. Always, we salvaged cargo. Always, I learned.

I no longer email weekly updates to a postdoc. But I apply what I learned at that café table, about entanglement and monotones and complete positivity. “It’s almost right,” I tell myself when a hole yawns in my calculations and a week’s work appears to fly out the window. “I have to fix a few details.”

Am I certain? No. But I remain positive.

*Experts: “Trace-preserving” means $\operatorname{Tr}(\mathcal{E}(\rho)) = \operatorname{Tr}(\rho) = 1$.

**Experts: Suppose that ρ is defined on a Hilbert space $\mathcal{H}$ and that $\mathcal{E}$ is defined on the operators on $\mathcal{H}$. “$\mathcal{E}$ is positive” means that $\mathcal{E}(\rho) \geq 0$ whenever $\rho \geq 0$.

To understand what “completely positive” means, imagine that our quantum system interacts with an environment. For example, suppose the system consists of photons in a box. If the box leaks, the photons interact with the electromagnetic field outside the box.

Suppose the system-and-environment composite begins in a state $\rho_{SE}$ defined on a Hilbert space $\mathcal{H}_S \otimes \mathcal{H}_E$. $\mathcal{E}$ acts on the system’s part of the state. Let $\mathbb{1}$ denote the identity operation that maps every possible environment state to itself. Suppose that $\mathcal{E}$ changes the system’s state while $\mathbb{1}$ preserves the environment’s state. The system-and-environment composite ends up in the state $(\mathcal{E} \otimes \mathbb{1})(\rho_{SE})$. This state is positive, so we call $\mathcal{E}$ “completely positive”: $(\mathcal{E} \otimes \mathbb{1})(\rho_{SE}) \geq 0$ for every such $\rho_{SE}$.
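The standard counterexample of a positive map that is not completely positive is the transpose. Below is a numpy sketch (an illustration I am adding, not part of the original notes): transposing only the system’s half of an entangled system-and-environment state produces a negative eigenvalue.

```python
import numpy as np

# The transpose map preserves eigenvalues of any single system's state,
# so it is positive. But (transpose ⊗ identity) is not.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
rho_se = np.outer(bell, bell)            # positive, trace-one joint state

# Partial transpose on the system: reshape to (sys, env, sys, env)
# indices and swap the two system indices.
rho_pt = rho_se.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

print(np.linalg.eigvalsh(rho_se))  # all nonnegative
print(np.linalg.eigvalsh(rho_pt))  # one negative eigenvalue: transpose is not CP
```

The negative eigenvalue (−1/2 here) is exactly the failure of complete positivity that the footnote describes.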

# Celebrating Theoretical Physics at Caltech’s Burke Institute

Editor’s Note: Yesterday and today, Caltech is celebrating the inauguration of the Walter Burke Institute for Theoretical Physics. John Preskill made the following remarks at a dinner last night honoring the board of the Sherman Fairchild Foundation.

This is an exciting night for me and all of us at Caltech. Tonight we celebrate physics. Especially theoretical physics. And in particular the Walter Burke Institute for Theoretical Physics.

Some of our dinner guests are theoretical physicists. Why do we do what we do?

I don’t have to convince this crowd that physics has a profound impact on society. You all know that. We’re celebrating this year the 100th anniversary of general relativity, which transformed how we think about space and time. It may be less well known that two years later Einstein laid the foundations of laser science. Einstein was a genius for sure, but I don’t think he envisioned in 1917 that we would use his discoveries to play movies in our houses, or print documents, or repair our vision. Or see an awesome light show at Disneyland.

And where did this phone in my pocket come from? Well, the story of the integrated circuit is fascinating, prominently involving Sherman Fairchild, and other good friends of Caltech like Arnold Beckman and Gordon Moore. But when you dig a little deeper, at the heart of the story are two theorists, Bill Shockley and John Bardeen, with an exceptionally clear understanding of how electrons move through semiconductors. Which led to transistors, and integrated circuits, and this phone. And we all know it doesn’t stop here. When the computers take over the world, you’ll know who to blame.

Incidentally, while Shockley was a Caltech grad (BS class of 1932), John Bardeen, one of the great theoretical physicists of the 20th century, grew up in Wisconsin and studied physics and electrical engineering at the University of Wisconsin at Madison. I suppose that in the 1920s Wisconsin had no pressing need for physicists, but think of the return on the investment the state of Wisconsin made in the education of John Bardeen.1

So, physics is a great investment, of incalculable value to society. But … that’s not why I do it. I suppose few physicists choose to do physics for that reason. So why do we do it? Yes, we like it, we’re good at it, but there is a stronger pull than just that. We honestly think there is no more engaging intellectual adventure than struggling to understand Nature at the deepest level. This requires attitude. Maybe you’ve heard that theoretical physicists have a reputation for arrogance. Okay, it’s true, we are arrogant, we have to be. But it is not that we overestimate our own prowess, our ability to understand the world. In fact, the opposite is often true. Physics works, it’s successful, and this often surprises us; we wind up being shocked again and again by the “unreasonable effectiveness of mathematics in the natural sciences.” It’s hard to believe that the equations you write down on a piece of paper can really describe the world. But they do.

And to display my own arrogance, I’ll tell you more about myself. This occasion has given me cause to reflect on my own 30+ years on the Caltech faculty, and what I’ve learned about doing theoretical physics successfully. And I’ll tell you just three principles, which have been important for me, and may be relevant to the future of the Burke Institute. I’m not saying these are universal principles – we’re all different and we all contribute in different ways, but these are principles that have been important for me.

My first principle is: We learn by teaching.

Why do physics at universities, at institutions of higher learning? Well, not all great physics is done at universities. Excellent physics is done at industrial laboratories and at our national laboratories. But the great engine of discovery in the physical sciences is still our universities, and US universities like Caltech in particular. Granted, US preeminence in science is not what it once was — it is a great national asset to be cherished and protected — but world changing discoveries are still flowing from Caltech and other great universities.

Why? Well, when I contemplate my own career, I realize I could never have accomplished what I have as a research scientist if I were not also a teacher. And it’s not just because the students and postdocs have all the great ideas. No, it’s more interesting than that. Most of what I know about physics, most of what I really understand, I learned by teaching it to others. When I first came to Caltech 30 years ago I taught advanced elementary particle physics, and I’m still reaping the return from what I learned those first few years. Later I got interested in black holes, and most of what I know about that I learned by teaching general relativity at Caltech. And when I became interested in quantum computing, a really new subject for me, I learned all about it by teaching it.2

Part of what makes teaching so valuable for the teacher is that we’re forced to simplify, to strip down a field of knowledge to what is really indispensable, a tremendously useful exercise. Feynman liked to say that if you really understand something you should be able to explain it in a lecture for freshmen. Okay, he meant Caltech freshmen. They’re smart, but they don’t know all the sophisticated tools we use in our everyday work. Whether you can explain the core idea without all the peripheral technical machinery is a great test of understanding.

And of course it’s not just the teachers, but also the students and the postdocs who benefit from the teaching. They learn things faster than we do and often we’re just providing some gentle steering; the effect is to amplify greatly what we could do on our own. All the more so when they leave Caltech and go elsewhere to change the world, as they so often do, like those who are returning tonight for this Symposium. We’re proud of you!

My second principle is: The two-trick pony has a leg up.

I’m a firm believer that advances are often made when different ideas collide and a synthesis occurs. I learned this early, when as a student I was fascinated by two topics in physics, elementary particles and cosmology. Nowadays everyone recognizes that particle physics and cosmology are closely related, because when the universe was very young it was also very hot, and particles were colliding at very high energies. But back in the 1970s, the connection was less widely appreciated. By knowing something about cosmology and about particle physics, by being a two-trick pony, I was able to think through what happens as the universe cools, which turned out to be my ticket to becoming a Caltech professor.

It takes a community to produce two-trick ponies. I learned cosmology from one set of colleagues and particle physics from another set of colleagues. I didn’t know either subject as well as the real experts. But I was a two-trick pony, so I had a leg up. I’ve tried to be a two-trick pony ever since.

Another great example of a two-trick pony is my Caltech colleague Alexei Kitaev. Alexei studied condensed matter physics, but he also became intensely interested in computer science, and learned all about that. Back in the 1990s, perhaps no one else in the world combined so deep an understanding of both condensed matter physics and computer science, and that led Alexei to many novel insights. Perhaps most remarkably, he connected ideas about error-correcting codes, which protect information from damage, with ideas about novel quantum phases of matter, leading to radical new suggestions about how to operate a quantum computer using exotic particles we call anyons. These ideas had an invigorating impact on experimental physics and may someday have a transformative effect on technology. (We don’t know that yet; it’s still way too early to tell.) Alexei could produce an idea like that because he was a two-trick pony.3

Which brings me to my third principle: Nature is subtle.

Yes, mathematics is unreasonably effective. Yes, we can succeed at formulating laws of Nature with amazing explanatory power. But it’s a struggle. Nature does not give up her secrets so readily. Things are often different than they seem on the surface, and we’re easily fooled. Nature is subtle.4

Perhaps there is no greater illustration of Nature’s subtlety than what we call the holographic principle. This principle says that, in a sense, all the information that is stored in this room, or any room, is really encoded entirely and with perfect accuracy on the boundary of the room, on its walls, ceiling and floor. Things just don’t seem that way, and if we underestimate the subtlety of Nature we’ll conclude that it can’t possibly be true. But unless our current ideas about the quantum theory of gravity are on the wrong track, it really is true. It’s just that the holographic encoding of information on the boundary of the room is extremely complex and we don’t really understand in detail how to decode it. At least not yet.

This holographic principle, arguably the deepest idea about physics to emerge in my lifetime, is still mysterious. How can we make progress toward understanding it well enough to explain it to freshmen? Well, I think we need more two-trick ponies. Except maybe in this case we’ll need ponies who can do three tricks or even more. Explaining how spacetime might emerge from some more fundamental notion is one of the hardest problems we face in physics, and it’s not going to yield easily. We’ll need to combine ideas from gravitational physics, information science, and condensed matter physics to make real progress, and maybe completely new ideas as well. Some of our former Sherman Fairchild Prize Fellows are leading the way at bringing these ideas together, people like Guifre Vidal, who is here tonight, and Patrick Hayden, who very much wanted to be here.5 We’re very proud of what they and others have accomplished.

Bringing ideas together is what the Walter Burke Institute for Theoretical Physics is all about. I’m not talking about only the holographic principle, which is just one example, but all the great challenges of theoretical physics, which will require ingenuity and synthesis of great ideas if we hope to make real progress. We need a community of people coming from different backgrounds, with enough intellectual common ground to produce a new generation of two-trick ponies.

Finally, it seems to me that an occasion as important as the inauguration of the Burke Institute should be celebrated in verse. And so …

Who studies spacetime stress and strain
And excitations on a brane,
Where particles go back in time,
And physicists engage in rhyme?

Whose speedy code blows up a star
(Though it won’t quite blow up so far),
Where anyons, which braid and roam
Annihilate when they get home?

Who makes math and physics blend
Inside black holes where time may end?
Where do they do all this work?
The Institute of Walter Burke!

We’re very grateful to the Burke family and to the Sherman Fairchild Foundation. And we’re confident that your generosity will make great things happen!

1. I was reminded of this when I read about a recent proposal by the current governor of Wisconsin.
2. And by the way, I put my lecture notes online, and thousands of people still download them and read them. So even before MOOCs – massive open online courses – the Internet was greatly expanding the impact of our teaching. Handwritten versions of my old particle theory and relativity notes are also online here.
3. Okay, I admit it’s not quite that simple. At that same time I was also very interested in both error correction and in anyons, without imagining any connection between the two. It helps to be a genius. But a genius who is also a two-trick pony can be especially awesome.
4. We made that the tagline of IQIM.
5. Patrick can’t be here for a happy reason, because today he and his wife Mary Race welcomed a new baby girl, Caroline Eleanor Hayden, their first child. The Burke Institute is not the only good thing being inaugurated today.

# Democrat plus Republican over the square-root of two

I wish I could superpose votes on Election Day.

However much I agree with Candidate A about social issues, I dislike his running mate. I lean toward Candidate B’s economic plans and C’s science-funding record, but nobody’s foreign policy impresses me. Must I settle on one candidate? May I not vote for $(|A\rangle + |B\rangle)/\sqrt{2}$?

Now you can—at least in theory. Caltech postdoc Ning Bao and I concocted quantum elections in which voters can superpose, entangle, and create probabilistic mixtures of votes.
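To make the title concrete, here is the Born-rule arithmetic behind a superposed two-candidate ballot, as a small numpy sketch (the basis states and tallying rule are my illustrative assumptions, not the actual scheme from our paper):

```python
import numpy as np

# Hypothetical two-candidate vote basis: |A> = (1, 0), |B> = (0, 1).
ket_A = np.array([1, 0], dtype=complex)
ket_B = np.array([0, 1], dtype=complex)

# The superposed ballot of the title: (|A> + |B>)/sqrt(2)
ballot = (ket_A + ket_B) / np.sqrt(2)

# Tallying (measuring in the candidate basis) yields each candidate
# with probability |amplitude|^2.
probs = np.abs(ballot) ** 2
print(probs)  # [0.5 0.5]
```

A voter who genuinely can’t choose casts equal amplitude on both candidates and contributes half a vote to each, in expectation.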

Previous quantum-voting work has focused on privacy and cryptography. Ning and I channeled quantum game theory. Quantum game theorists ask what happens if players in classical games, such as the Prisoner’s Dilemma, could superpose strategies and share entanglement. Quantization can change the landscape of possible outcomes.

The Prisoner’s Dilemma, for example, concerns two thugs whom the police have arrested and have isolated in separate cells. Each prisoner must decide whether to rat out the other. How much time each serves depends on who, if anyone, confesses. Since neither prisoner knows the other’s decision, each should rat to minimize his or her jail time. But both would serve less time if neither confessed. The prisoners can escape this dilemma using quantum resources.
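The classical dilemma is easy to verify with a payoff table; the jail terms below are illustrative numbers, not taken from any particular source:

```python
# Illustrative jail terms (years); index 0 = stay silent, 1 = rat.
# years[my_move][their_move] = my sentence.
years = [[1, 10],   # I stay silent: 1 year if they do too, 10 if they rat
         [0, 5]]    # I rat: I walk if they stay silent, 5 years if they rat too

# Whatever the other prisoner does, ratting shortens my sentence...
for their_move in (0, 1):
    assert years[1][their_move] < years[0][their_move]

# ...so both prisoners rat. Yet mutual silence would beat mutual ratting:
assert years[0][0] < years[1][1]
print("equilibrium: both rat (5 years each); both silent would cost 1 year each")
```

Ratting is the dominant strategy for each prisoner individually, which is precisely why both end up worse off than if neither had confessed.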

Introducing superpositions and entanglement into games helps us understand the power of quantum mechanics. Elections involve gameplay; pundits have been feeding off Hillary Clinton’s gameplay for months. So superpositions and entanglement merit introduction into elections.

How can you model elections with quantum systems? Though multiple options exist, Ning and I followed two principles: (1) A general quantum process—a preparation procedure, an evolution, and a measurement—should model a quantum election. (2) Quantum elections should remain as true as possible to their classical counterparts.

Given our quantum voting system, one can violate a quantum analogue of Arrow’s Impossibility Theorem. Arrow’s Theorem, developed by the Nobel-winning economist Kenneth Arrow during the mid-20th century, is a no-go theorem about elections: If a constitution has three innocuous-seeming properties, it’s a dictatorship. Ning and I translated the theorem as faithfully as we knew how into our quantum voting scheme. The result, dubbed the Quantum Arrow Conjecture, rang false.

Superposing (and probabilistically mixing) votes entices me for a reason that science does: I feel ignorant. I read articles and interview political junkies about national defense; but I miss out on evidence and subtleties. I read quantum-physics books and work through papers; but I miss out on known mathematical tools and physical interpretations. Not to mention tools and interpretations that humans haven’t discovered.

Science involves identifying (and diminishing) what humanity doesn’t know. Science frees me to acknowledge my ignorance. I can’t throw all my weight behind Candidate A’s defense policy because I haven’t weighed all the arguments about defense, because I don’t know all the arguments. Believing that I do would violate my job description. How could I not vote for elections that accommodate superpositions?

Though Ning and I identified applications of superpositions and entanglement, more quantum strategies might await discovery. Monogamy of entanglement, discussed elsewhere on this blog, might limit the influence voters exert on each other. Also, we quantized ordinal voting systems (in which each voter ranks candidates, as in “A above C above B”). The quantization of cardinal voting (in which each voter grades the candidates, as in “5 points to A, 3 points to C, 2 points to B”) or another voting scheme might yield more insights.
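Even before quantizing, ordinal ballots misbehave in just the way Arrow’s theorem formalizes. Here is a three-voter sketch of the classic Condorcet cycle (the ballots are hypothetical):

```python
# Three ordinal ballots over candidates A, B, C (each a ranking, best first)
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def prefers(ballot, x, y):
    """True if this ballot ranks candidate x above candidate y."""
    return ballot.index(x) < ballot.index(y)

def majority_prefers(x, y):
    """True if a strict majority of ballots rank x above y."""
    return sum(prefers(b, x, y) for b in ballots) > len(ballots) / 2

# The pairwise majorities form a cycle, A over B over C over A, so the
# "social ranking" is not transitive -- the pathology behind Arrow's theorem.
print(majority_prefers("A", "B"), majority_prefers("B", "C"), majority_prefers("C", "A"))
# True True True
```

Majority rule on ranked ballots can refuse to produce any consistent winner, which is why innocuous-seeming constitutional requirements end up clashing.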

If you have such insights, drop us a line. Ideally before the presidential smack-down of 2016.

# To become a good teacher, ignore everything you’re told and learn from the masters (part 4 of 4)

In the previous posts in this series, I described how using lessons I learned from Richard Feynman and John Preskill led me to become a more popular TA.

What I learned from a few others*

If by some miracle I ever get to be a professor, there will be a few others I look to for teaching wisdom, some of which I’ve already made use of.

When it comes to writing problem sets, I would look to the two physicists who write the best problem sets I’ve ever seen: Kip Thorne and Andreas Ludwig. In my opinion, a problem set should not be something that just makes you apply the things you learned in class to other examples that are essentially the same as things you’ve seen before. The best problems make you work through an exciting new topic that the professor doesn’t have time to cover in lecture. I remember sitting in my office looking out at one of the 4 km long arms of the LIGO Hanford Observatory, where I was working during the summer after my sophomore year at Caltech, while working through some of Kip’s homework problems because those were some of the best resources I could find to teach me how the amazing contraption really works.1 And Ludwig probably has the best problems of all. The first problem set from one of his classes on many-body field theory, a pretty typical one, consists of two problems written over five pages along with six pages of appendices attached. Those problem sets looked daunting, but they really weren’t. Once you read through everything and thought carefully about it, the problems weren’t so bad and you ended up deriving some really cool results!2

Finally, I’ll describe some things that I learned from Ed McCaffery who brilliantly dealt with the challenging job of teaching humanities at Caltech by accomplishing two goals. First, he “tricked” us into finding a “boring” subject interesting. Second, he had a great sense of who his audience was and taught us in a way that we would actually be receptive to once he had our attention. He accomplished the first goal by making his lectures extremely humorous and ridiculous, but in such a way that they were actually filled with content. He accomplished the second by boiling down the subject to a few key principles and essentially deriving the rest of the ideas from these while ignoring all of the technical details (it was almost as if we were in a physics or math class). The main reason I kept going to his classes was to be entertained—which is especially impressive seeing as how I would normally consider law to be a terribly boring subject—but I accidentally ended up learning a lot. Of all the hums I took at Caltech, his two classes are the ones I remember the most from to this day, and I know I’m not alone in this regard.

If I’m ever put in the challenging position of, for example, teaching an introductory physics course to students, maybe primarily premeds for illustrative purposes, who have no desire to learn it and are being forced to take it to satisfy some requirement, I would try to accomplish the same two goals that McCaffery did in what was essentially an equivalent scenario. It would be hard to be as funny as McCaffery, but maybe I could figure out some other way of being sufficiently ridiculous to “trick” the students into caring about the class. On the other hand, with a little experimentation and a willingness to ignore what I was told to do, I bet it wouldn’t be too hard to find a way to teach some physics in a way that these students could relate to and maybe actually remember something from after the class was over. In a bigger school like UCSB—where not everyone is going to be either a scientist, a mathematician, or an engineer and there are several segregated levels of introductory physics courses—I’ve often asked, “Why are we teaching premeds how to calculate the trajectory of a cannon ball, the moment of inertia of a disk, or the efficiency of a heat engine in quantitative detail?” It’s pretty clear that most of them aren’t going to care about any of this, and they’re really not going to need to know how to do any of that after they pass the class anyway. So is it really that shocking that they tend to go through the class with the attitude that they’re just going to do what it takes to get a good grade instead of with the attitude that they’d like to learn some science? I believe this is roughly equivalent to trying to teach Caltech undergrads the intricacies of the tax code, in the way you might teach USC law students, which I’m pretty sure wouldn’t be a huge success.

The first thing I would try would be to teach physics in a back-of-an-envelope kind of way, ignoring any rigor and just trying to get a feel for how powerful of a tool physics, and really science in general, can be by applying it to problems that I thought the students would be interested in or find amusing. For example, I might explain how using a simple scaling argument shows that the height that most animals can jump, regardless of their size, is roughly universal (and, by observations, happens to be on the order of half a meter). Or maybe I’d explain how some basic knowledge of material properties allows you to estimate how the maximum height of a mountain depends on the properties of a planet—and when you plug the numbers in for the earth, you basically get the height of Mt. Everest.3 You could probably even do this in such a way that every example and every problem would actually be relevant to what premeds might use later in their careers. And maybe the most important part is that I know I would learn a lot from doing this—it could even be completely different if taught multiple times—and so I would actually be excited about the material and would be motivated to explain it well.
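Both estimates above fit in a few lines of arithmetic. Here is a sketch; every input number is a rough assumption chosen only to land on the right order of magnitude:

```python
# Back-of-envelope estimates; all inputs are rough assumptions.
g = 9.8  # m/s^2

# 1. Jump height: usable muscle energy scales with muscle mass, which scales
# with body mass m, so E ~ (energy per kg) * m and h = E/(m*g) is independent
# of the animal's size.
energy_per_kg = 5.0  # J of usable jump energy per kg of body mass (rough)
h_jump = energy_per_kg / g
print(f"universal jump height ~ {h_jump:.1f} m")  # about half a meter

# 2. Mountain height: rock at the base yields when the pressure rho*g*H
# reaches the rock's strength, so H_max ~ strength / (rho * g).
strength = 2e8   # Pa, rough yield strength of granite
rho_rock = 3000  # kg/m^3
h_mountain = strength / (rho_rock * g)
print(f"max mountain height ~ {h_mountain / 1000:.0f} km")  # Everest scale
```

The point of the exercise is not the inputs, which are only good to a factor of a few, but that two lines of scaling logic land you within shouting distance of the real answers.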

Figure 4.1: Maybe the most famous back-of-an-envelope calculation of all time. Oops, we added a scale bar and a time stamp. With a little dimensional analysis and some physical intuition, anyone can now estimate how powerful our nuclear bomb is.
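For the curious, here is roughly how G. I. Taylor’s famous estimate goes; the radius and time below are approximate values of the sort one reads off a published Trinity photograph:

```python
# Taylor's dimensional analysis: in a strong explosion, the fireball radius R
# at time t can depend only on the released energy E and the air density rho.
# The only combination with units of length is R ~ (E t^2 / rho)^(1/5),
# which inverts to E ~ rho * R^5 / t^2.
rho_air = 1.2  # kg/m^3
R = 140.0      # m, fireball radius from the photo's scale bar (approximate)
t = 0.025      # s, the photo's time stamp (approximate)

E = rho_air * R**5 / t**2  # joules
kilotons = E / 4.184e12    # 1 kiloton of TNT = 4.184e12 J
print(f"E ~ {E:.1e} J ~ {kilotons:.0f} kilotons of TNT")
```

This comes out around 25 kilotons, remarkably close to Trinity’s actual yield of about 20, which was classified at the time Taylor published his estimate.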

And if that didn’t work, I’d hope that in talking with the students and getting a sense of what was important to them, I would be able to come up with a different approach that would be successful. I simply don’t believe that it’s impossible to find a way to reach every kind of student, whether they be aspiring scientists, mathematicians, engineers, doctors, lawyers, historians, poets, artists, or politicians. Physics is just too exciting of a subject for that to be true, but you’ve got to know your audience. (Maybe McCaffery feels the same way about law and economics for all I know.)

Closing thoughts

Looking back over my notes for the field theory course, I feel like I didn’t actually do that good of a job overall, though I am happy that the students seemed to enjoy it and learned a lot from it. There are some things I am very proud of. Probably the biggest one was my last lecture on the Casimir effect which included a digression on what it means for something to be “renormalizable” or “non-renormalizable” and how there is absolutely nothing wrong with the second kind in the context of effective field theories.6 After an introduction to the philosophy of effective field theories (see footnote 6 for an excellent reference), that discussion mostly included the classic pictures, found in Figure 4.2, of the standard model superseding Fermi theory followed by string theory superseding the effective field theory of gravity,7 though I also very briefly mentioned that neutrino masses and proton decay could be understood by similar arguments.

Figure 4.2: The top three images are snapshots from my 5.5 page digression on renormalizability. The bottom image is an excerpt from notes that I frantically scribbled down after returning to my office at the IQI after John’s lecture on the same topic. The above-mentioned “regurgitation” should be clear.

But the lectures taken as a whole were too technical for the intended audience, unlike my phase transition lecture. At the time I was giving the lectures, I was working through Weinberg’s books on QFT (and GR) and was very excited about his non-standard approach which seemed especially elegant to me.8 I think I let my excitement about Weinberg creep into some of my lectures without properly toning down the math9 as is most clearly illustrated by my attempt to explain the spin-statistics theorem. That could be much better explained to the intended audience with pictures along the lines of Feynman’s discussion in Elementary Particles and the Laws of Physics or John’s comments in his chapter on anyons.10 If I have the opportunity to teach ever again, I’ll try to do an even better John Preskill imitation, maybe perturbing it slightly with further wisdom gained from others over the years.

*This section lies somewhat out of the blog’s main line of development, and may be omitted in a first reading.

1. The problems used to be available here under the link to Gravitational Waves (Ph237, 2002), but the link appears to be broken now. Maybe someone at Caltech can fix it. [Update: The link has been fixed.] It was a really great resource which also included all of the video of Kip—another great lecturer with lots to learn from—teaching the course. If Kip hadn’t gone emeritus to start a career in Hollywood right after my freshman year, it’s possible that I would have been trying to imitate him instead of John. The very first thought I had when I opened my Caltech admissions letter was, “Yes! Now I’m going to get to learn GR from Kip Thorne!” It didn’t work out that way, but I still had a pretty awesome time there.
2. In classic Ludwig fashion, the few words telling you what you are actually expected to do for the assignment are written in bold. I actually already stole Ludwig’s style when I was given the opportunity to write an extra credit problem on magnetic monopoles for the electrodynamics class, a problem I wrote based on John’s excellent review article. (I know that at least one of my students actually read John’s paper after doing the problem because he asked me questions about it afterwards.)
3. I was first exposed to these things by Sterl Phinney in Caltech’s Physics 101 class. Here is a draft of a nice textbook on this material, here is another one in a similar vein, and here are some problems by Purcell.
4. It was originally a lecture on tachyons which somehow ended with John explaining how the CMB spontaneously breaks Lorentz invariance. While the end of that lecture was especially awesome, the main part proved particularly valuable when I took string theory in grad school two years later since it is still the best explanation of what tachyons really are—and why they’re not that scary or mysterious—that I’ve ever seen or heard. The right two diagrams in Figure 2.1 of the second post (showing dispersion relations for ordinary and tachyonic particles and tachyon condensation in a Higgs-like potential) are actually the pictures John drew to describe this. While ordinary massive particles have a group velocity that’s zero at zero momentum and approaches the speed of light from below as the momentum is increased, tachyons have a formally infinite group velocity that approaches the speed of light from above. But all this is saying is that you expanded about an unstable vacuum; there’s nothing scary like causality violation or a breakdown of special relativity.
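The two dispersion relations in that figure are easy to evaluate directly; a quick numpy sketch (units with c = 1, my own illustration):

```python
import numpy as np

m = 1.0                       # mass scale, in units where c = 1
p = np.linspace(1.5, 10, 50)  # momenta, kept above m so the tachyonic E is real

# Ordinary particle: E = sqrt(p^2 + m^2), group velocity v = dE/dp = p/E < 1
v_ordinary = p / np.sqrt(p**2 + m**2)

# Tachyon: E = sqrt(p^2 - m^2), so v = p/E > 1 -- formally superluminal,
# which really just signals an expansion about an unstable vacuum.
v_tachyon = p / np.sqrt(p**2 - m**2)

print(v_ordinary.max() < 1, v_tachyon.min() > 1)  # True True
```

Both curves approach the speed of light as the momentum grows, one from below and one from above, just as the figure shows.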
5. I think it was the lecture where I explained how being stubborn about promoting the global U(1) symmetry that the Dirac equation (the equation governing the dynamics of electrons and positrons) naturally possesses to a local U(1) symmetry forces you to add a photon and write down Maxwell’s equations. I had insomnia for a week when I learned that from Sergei Gukov as a junior at Caltech. Unfortunately I don’t know of a good reference for this explained in this way, so here are my notes for that class—the real fun starts on page 3. (Note that I was careful not to go through all the details of the math on the board and just highlighted the results enough to tell a good story.)
However, I do have some excellent references for essentially the same result told from a slightly different point of view which includes gravitons (the spin-2 generalization of the spin-1 photons) as well as the spin-1 gauge bosons: two papers by Weinberg here and here (see also Chapters 2, 5, and 13 of The Quantum Theory of Fields), The Feynman Lectures on Gravitation, and a paper by Wald. For a good introduction and overview of all of this literature, along with lots more references and details, see comments by John and Kip here. For the Weinberg approach itself, see these awesome lectures by Freddy Cachazo and Nima Arkani-Hamed (all four of them, which also include a lot of extensions and many failed attempts to avoid what seems to be the inevitable). (I have never seen the generalization to non-abelian gauge fields made explicit except by Cachazo or Arkani-Hamed. Also, if you’re having trouble getting through Chapter 2 of The Quantum Theory of Fields, a pretty necessary prerequisite for understanding a lot of Weinberg’s arguments, the first two lectures by Cachazo in that series make a huge dent, and the last two make a good dent in getting through Georgi.)
The main point of a lot of these references is that if you include special relativity, quantum mechanics, and a massless spin-1 particle, you are essentially forced to write down Maxwell’s equations. If you do the same thing but for a spin-2 particle, you are essentially forced to write down Einstein’s equations. You have no choice; you have to do it! (Appropriate caveats about effective field theories apply.) You also find that you can’t do anything non-trivial (at least at low energies and under a few other technical assumptions) if you have a massless particle of spin greater than 2. See also here for another approach to a similar result. I had another week-long bout of insomnia when I discovered these references around the time I was actually teaching this course.
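The gauging argument footnote 5 alludes to can be compressed into a few lines. This is a sketch in standard textbook notation (my own compression, not the actual class notes):

```latex
% The free Dirac Lagrangian
\mathcal{L}_0 = \bar{\psi}\,(i\gamma^\mu \partial_\mu - m)\,\psi
% is invariant under the global U(1) transformation \psi \to e^{i\alpha}\psi.
% Insisting that \alpha depend on x spoils this invariance, unless we introduce
% a field A_\mu and replace \partial_\mu by a covariant derivative:
D_\mu = \partial_\mu + i e A_\mu, \qquad
\psi \to e^{i\alpha(x)}\psi, \qquad
A_\mu \to A_\mu - \tfrac{1}{e}\,\partial_\mu \alpha(x).
% Adding the gauge-invariant kinetic term for A_\mu gives
\mathcal{L} = \bar{\psi}\,(i\gamma^\mu D_\mu - m)\,\psi
            - \tfrac{1}{4} F_{\mu\nu}F^{\mu\nu}, \qquad
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,
% whose Euler-Lagrange equations for A_\mu are Maxwell's equations
% sourced by the Dirac current j^\mu = e\,\bar{\psi}\gamma^\mu\psi.
```

One can check directly that D_μψ transforms like ψ itself under the combined transformation, which is the whole point: the photon is forced on you by stubbornness about the local symmetry.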
6. I felt like this was an especially important lesson for my students to learn since they were using Bjorken and Drell as a textbook. It might be a classic, but it’s a dated classic and certainly doesn’t explain Ken Wilson’s incredible insights. See this paper for a very nice, and at one point humorous, discussion by Joe Polchinski that should be required reading for anyone before they take a QFT class. (Here is the more technical paper on the topic that Joe is really famous for.)
7. See here and here for complementary introductions to thinking about gravity as an effective field theory and here for a more comprehensive review. See also this take by Weinberg which also discusses the possibility that the effective field theory of gravity is asymptotically safe and that that’s all there is. Weinberg presents a similar argument in this lecture where he says, “If I had to bet, I would bet that’s not the case….My bet would be something on string theory. I’m not against string theory. I just want to raise this as a possibility we shouldn’t forget.”
8. I think Weinberg’s books often get an unfair rap. The three volumes in The Quantum Theory of Fields speak for themselves. They’re certainly not for beginners, but once you’ve gotten through several other books first, it’s a really great way to look at things. (See the links to Freddy Cachazo in footnote 5 if you want some help getting through some early parts that seem to deter a lot of people from ever really trying to read Weinberg.)
But his GR book is one of my all-time favorites (though the cosmology parts are a bit old and so you should consult his newer book or even his awesome popular science book on that subject). It starts by taking the results discussed in footnote 5 seriously and viewing the (strong) equivalence principle as a theorem derived from quantum field theory, rather than a postulate, from which the rest of general relativity follows. (A mathematician may not call it a theorem, but I’m a physicist.) All of the same equations you would find in the other GR books that use the more traditional geometric approach are there, it’s just that different words are used to describe them. The following quote by Weinberg nicely summarizes the approach taken in the book (as well as explaining why I encounter so much resistance to it seeing as how I’m surrounded by “many general relativists”):
…the geometric interpretation of the theory of gravitation has dwindled to a mere analogy, which lingers in our language in terms like “metric,” “affine connection,” and “curvature,” but is not otherwise very useful. The important thing is to be able to make predictions…and it simply doesn’t matter whether we ascribe these predictions to the physical effect of gravitational fields on the motion of planets and photons or to a curvature of space and time. (The reader should be warned that these views are heterodox and would be met with objections from many general relativists.)
But I love this point of view because it lays the groundwork for putting gravity in its place as “just” another effective field theory. Despite what you may have been led to believe, gravity and quantum mechanics actually do play nice together, at least for a while, until you try to push them too close and all of the disasters you’ve heard about kick in, signaling the need for new physics (see Figure 4.2 again). To quote Joe, “Nobody ever promised you a rose garden” (see footnote 6), but this seems like at least a nice field of grass to me. See footnote 7 for more on this view.
9. In my opinion, “toning down the math” is one of the easiest mistakes to make, and the hardest to avoid, when teaching any audience, even your own peers. It can be very tempting to go through an argument in all the gory details, but this is when, at least for me when I’m watching a talk, you start to lose the audience. Besides, no one is really going to believe you unless they reproduce the results for themselves with their own pencil (or piece of chalk), so you might as well tell a good story and leave the dotting of the i’s and the crossing of the t’s “as an exercise for the listener.” Now whenever I prepare a talk, I try to take into account who the audience will be, and think to myself, “Remember, try to say it like John would.” I’m still not that great at it, but I think I’m getting better.
10. Everyone should read John’s comments. I’m embarrassed to admit it, but I did get the “quite misleading” impression discussed on page 10. This even caused great confusion when I asserted some wrong things in a condensed matter student talk, but since none of my colleagues were able to isolate my mistake clearly, or at least communicate it to me, I think this misleading impression is quite widespread.
I think the point is that the standard proofs of the spin-statistics theorem (at least the ones given in Weinberg, Srednicki, and Streater and Wightman) show that special relativity and quantum mechanics require you to add antiparticles and to quantize with bosonic statistics for integer spins and fermionic statistics for half-integer spins. This is completely correct—and really cool!—but as John points out, all that is necessary for the spin-statistics connection is the existence of antiparticles. In QFT, you see that you need antiparticles and then immediately need to quantize them “properly,” but it’s easy to conflate these two steps. It also didn’t help that I was explicitly told many times that you need relativistic quantum mechanics to understand the spin-statistics connection. Unfortunately, I repeated this partial lie to my own students.

# To become a good teacher, ignore everything you’re told and learn from the masters (part 3 of 4)

In the first post of this series, I described some frustrations I had with what we were told to do in TA training. In the previous post, I recalled some memories of interactions I had with John Preskill while I was working at the IQI.

When it came time for me to give my last electrodynamics lecture, I remember thinking that I wanted to give a lecture that would inspire my students to go read more and which would serve as a good introduction to do just that—just as John’s lectures had done for me so many times. Now I am not nearly as quick as John,1 so I didn’t prepare my lecture in my head on the short walk from my office to the classroom where I taught, but I did prepare a lecture that I hoped would satisfy the above criteria. I thought that the way magnets actually work was left as a bit of a mystery in the standard course on electrodynamics. After toying around with a few magnet themes, I eventually prepared a lecture on the ferromagnetic-paramagnetic phase transition from a Landau-Ginzburg effective field theory point of view.2 The lecture had a strong theme about the importance and role of spontaneous symmetry breaking throughout physics, using magnets and magnons mainly as an example of the more general phenomenon. Since there was a lot of excitement about the discovery of the Higgs boson just a few months earlier, I also briefly hinted at the connection between Landau-Ginzburg and the Higgs. Keeping John’s excellent example in mind, my lecture consisted almost entirely of pictures with maybe an equation or two here and there.
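For readers curious about the technical core of that lecture, the mean-field Landau-Ginzburg story can be sketched in a few lines. This is my own compression of the standard textbook result, not the lecture itself: near the critical temperature the free energy density is f(M) ≈ a(T − Tc)M² + bM⁴ with a, b > 0, and minimizing it gives zero magnetization above Tc and M ∝ √(Tc − T) below.

```python
import math

# A minimal mean-field sketch (standard Landau theory, my own compression,
# not the lecture itself).  Near Tc the free energy density is
#   f(M) = a*(T - Tc)*M**2 + b*M**4,   with a, b > 0.
# Setting df/dM = 0 gives M = 0 above Tc and M = sqrt(a*(Tc - T)/(2*b))
# below: the magnetization grows continuously from zero at Tc, the
# hallmark of a second-order (continuous) phase transition.

def equilibrium_magnetization(T, Tc=1.0, a=1.0, b=1.0):
    """Magnetization minimizing the Landau free energy f(M)."""
    if T >= Tc:
        return 0.0
    return math.sqrt(a * (Tc - T) / (2.0 * b))

for T in (1.2, 1.0, 0.8, 0.5):
    print(T, equilibrium_magnetization(T))
```

Below Tc the quadratic coefficient goes negative, the symmetric point M = 0 becomes unstable, and the system must pick one of the two degenerate minima: spontaneous symmetry breaking in miniature, the same mechanism hinted at for the Higgs.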

This turned out to be one of the most rewarding experiences of my life. I taught identical recitation sections three times a week, and the section attendance dropped rapidly, probably to around 5–10 students per section, after the first few weeks of the quarter. Things were no different for my last lectures earlier in the week. But on the final day, presumably after news of what I was doing had time to spread and the students realized I was not just reviewing for the final, the classroom was packed. There were far more students than I had ever seen before, even on the first day of class. I remember saying something like, “Now your final is next week and I’m happy to answer any questions you may have on any of the material we have covered. Or if no one has any questions, I have prepared a lecture on the ferromagnetic-paramagnetic phase transition. So, does anyone have any questions?” The response was swift and decisive. One of the students sitting in the front row eagerly and immediately blurted out, “No, do that!”

So I proceeded to give my lecture, which seemed to go very well. The highlight of the lecture—at least for me because it is an example where a single physical idea explains a variety of physical phenomena occurring over a very large range of scales and because it demonstrated that at least some of the students had understood most of what I had already said—was when I had the class help me fill in and discuss Table 3.1. Many of the students were asking good questions throughout the lecture, several stayed after class to ask me more questions, and some came to my office hours after the break to ask even more questions—clearly after doing some reading on their own!

Table 3.1: Adapted from Table 6.1 of Magnetism in Condensed Matter. I added the last row and the last column myself before lecture. The first row was discussed heavily in lecture, the class helped me fill in the next two, and I briefly said some words about the last three. Tc is the critical temperature at which the phase transition occurs, M is the magnetization, P is the polarization, ρG is the Fourier component of charge density at reciprocal lattice vector G (as could be measured by Bragg diffraction), ψ and Δ are the condensate wavefunctions in a superfluid or superconductor respectively, φ is the Higgs field, BB = “Big Bang”, EW = “Electroweak”, and GUT = “Grand Unified Theory” (the only speculative entry on the list). The last two signify the phase transitions between the grand unified, electroweak, and quark epochs. Thanks to Alex Rasmussen and Dominic Else for heated arguments about the condensed matter rows that helped to clarify my understanding of them. To pre-empt Dominic yelling at me about Elitzur’s theorem, here is a very nice explanation by Francois Englert explaining, among other things, why gauge symmetries cannot technically be spontaneously broken. (This seems to be a question of semantics intimately related to how some people would say that gauge symmetries are not really symmetries—they’re redundancies. Regardless of what words you want to use, Englert’s paper makes it very clear what is physically happening.)

After that experience, I decided to completely ignore everything I was taught in TA training and to instead always try to do my best John Preskill imitation anytime I interacted with students. (Minus the voice, of course. There’s no replicating that, and even if I tried, hardly anyone would get the joke.) As I suggested earlier, a big frustration with what I was told to do was that—since UCSB has so many students with such a wide range of abilities—I should mostly aim my instruction at the middle of the bell curve, but could try to do some extra things to challenge the students on one tail and some remedial things to help the students on the other—but only if I felt like it. I was also told there’s no reason to “waste time” explaining how an idea fits into the bigger picture and is related to other physical concepts: most students aren’t going to care, and I could spend that time going over another explicit example showing the students exactly how to do something. I thought this was stupid advice and ignored it, and in my experience, explaining the wider context was usually pretty helpful even to the students struggling the most.3

But when I started trying to imitate John by completely ignoring what I was supposed to do and instead just trying to explain exciting physics like he did, I seemed to become a fairly popular TA. I think most students appreciated being treated as naive equals who had the ability to learn awesome things—just as I was treated at the IQI. Instead of the null hypothesis being that your students are stupid and you need to coddle them otherwise they will be completely lost, make the null hypothesis that your students are actually pretty bright and are both interested in and able to learn exciting things, even if that means using a few concepts from other courses. But also be very willing to quickly and momentarily reject the null in a non-condescending manner. I concede the point that there are many arguments against my philosophy and that the anecdotal data to support my approach is potentially extremely biased,4 but I still think that it’s probably best to err on the side of allowing teachers to teach in the way that excites them the most since this will probably motivate the students to learn just by osmosis—in theory at least.

The last class I was a TA for (an advanced undergraduate elective on relativistic quantum mechanics) was probably the one that benefited the most from my attempts to imitate John, and from me regurgitating things I had learned from him and a few others (see Figure 4.2 in the last post). The course used Dirac and Bjorken and Drell as textbooks, which are not the books that I would have chosen for the intended audience: advanced undergrads who might want to take QFT in the future.5 By that time, I was fully into ignoring what I was supposed to do and was just going to try to teach my students some interesting physics. So there was no way I was going to spend recitation sections going over, in quantitative detail, the Dirac Sea and other such outdated topics that Julian Schwinger would describe as “best regarded as a historical curiosity, and forgotten.”6 Instead, I decided right from the beginning that I would give a series of lectures on an introduction to quantum field theory by giving my best guess as to the lectures John would have given if Steve or I had asked him questions such as “Why are spin and statistics related?” or “Why are neutral conducting plates attracted to each other in vacuum?” Those lectures were often pretty crowded, one time so much so that I literally tripped over the outstretched legs of one of the students who was sitting on the ground in the front of the classroom as I was trying to work at the boards; all of the seats were taken and several people were already standing in the back. (In all honesty, that classroom was terrible and tiny.7 Nevertheless, at least for that one lecture, I would be surprised to learn that everyone in attendance was enrolled in the course.)

In the last post, I’ll describe some other teaching wisdom I learned from a few other good professors as well as concluding with some things I wish I had done better.

1. Though I suspect a part of the reason that he is so quick to come up with these explanations is that he has thought deeply about them several times before when lecturing or writing papers on them. So maybe one day, after a few more decades of sustained effort, I will be able to approach his speed on certain topics that I’ve thought deeply about.
2. I mostly followed the discussion in Kardar: Fields for this part. When I went to write this post, I realized that there is a lecture of John’s on the same topic available here.
3. About a year after I last taught him, after he had graduated and moved on to other things, I bumped into one of my former students who occasionally struggled but always made an honest effort to learn, and he felt it was necessary to thank me for doing these kinds of things. Although this is only one data point, the looks of relief on the faces of several students I helped in office hours, after explaining how what they were struggling with fit into the bigger picture, lead me to believe that this is true more generally.
4. While almost all of the students who benefit from it are going to tell me, probably no one who would have preferred that I go over examples directly relevant to their assignments in great detail is going to tell me in front of those who are so excited about it. And I never got to see my teaching evaluations where the students may have been more comfortable criticizing my approach.
5. Dirac is an excellent book on non-relativistic quantum mechanics, and I remember finding it extremely useful when I read it as a supplement to Shankar and Cohen-Tannoudji while taking quantum mechanics as a sophomore at Caltech. However, the relativistic chapters are really old, though of course it’s fun to read about the Dirac equation from Dirac. Bjorken and Drell is also a classic, but it is also pretty old and, in my opinion, was way too technical for the intended audience. Quantum Field Theory for the Gifted Amateur by Lancaster and Blundell looks like a really good candidate for the class and the intended audience. In all fairness, that book hadn’t been published yet when the course was taught, but why we didn’t even use Zee or Srednicki is completely baffling to me—and both of them are even professors here, so it’s not like those books should have been unknown! (In my opinion, Zee would have been a great choice for that class, while Srednicki may be a slightly better choice for the graduate version.)
6. As quoted by Weinberg in Chapter 1, titled “Historical Introduction,” of The Quantum Theory of Fields. Reading that chapter made me feel like I would have been a TA for the history department rather than the physics department if I taught what I was expected to. It also made me cringe several times while typing up the solutions to the problem sets for the class, and, to atone for my sins, I felt like I had to do better in the recitation sections.
7. I also hated that room because it had whiteboards, instead of chalkboards, and so I couldn’t use another teaching skill, Lewin Lines, that I learned in high school from the great MIT physics professor Walter Lewin. (While I was in the editing stages of writing this post, I was very sad to learn of some recent controversy surrounding Lewin. Regardless of the awful things he has been accused of doing, if you’re going to be honest, you have to be able to separate that from his wonderful lectures which aspiring teachers should still try to learn from no matter what the truth turns out to be. I cannot rewrite history and pretend that watching all of his lectures didn’t provide me with a solid foundation to then read books such as Landau and Lifshitz or Jackson while still in high school. You also have to admit that Lewin Lines are really fun. After intense research (in collaboration with an emeritus head TA of the physics department) and much practice, I am, with low fidelity, able to make a bad approximation to the chalkboard Lewin Lines on a whiteboard—but they’re nowhere near as cool.)

# To become a good teacher, ignore everything you’re told and learn from the masters (part 2 of 4)

In the previous post in this series, I described some teaching skills that I subconsciously absorbed by reading works by Feynman. I also described some frustrations I had with what we were told to do in TA training.

What I learned from Preskill

Luckily for me, and hopefully for my students, the instructor for the first class I was a TA for (the upper division classical electrodynamics course) eventually gave me the freedom to really do things how I would want to, which, when I saw the results, led me to really ignore everything I was told. During the last week of the course, he said I could do whatever I wanted in the recitation sections since he was going to do review and talk about some advanced topics in lecture. It’s hard to know how all my experiences with good (and bad) teachers subconsciously influenced my teaching style, but the first time I remember consciously thinking about what kind of a teacher I wanted to be was the night before my first recitation section of the week, while I was planning my last lecture of the course. I distinctly remember reflecting on the following memories from my time at the IQI while trying to decide what to talk about and how to say it.

Once a week, John would run a two part group meeting. The first part consisted of everyone going around the room briefly summarizing what they had worked on for the past week, while the second consisted of a more traditional lecture about a specific topic of the speaker’s choice. At one point, one of the postdocs was in the middle of giving a series of lectures on a very technical subject. I was always struggling to follow them and stay interested and engaged in the discussion. I remember almost nothing from this particular series of lectures other than that the postdoc seemed to be trying to teach us every single detail about the subject rather than trying to give us a general feeling for what it was about. I’m sure it wasn’t so bad for the more advanced physicists in the room, and I bet I would get more out of them if I heard them again today.

But then one marvelous week, John decided he was going to give the lecture instead. He started off by saying something along the lines of, “We have heard some very technical talks recently, which is good, and I have learned a lot from them. But sometimes it’s also good just to look at pictures.” He proceeded to give an extremely interesting, coherent, and inspiring lecture on black hole complementarity. I don’t think he ever wrote down a single equation, but I remember most of what he talked about and the pictures he drew to this day. I was honestly sad1 when the hour was up and was extremely excited to learn more. As soon as I got home that night, I read many papers on the subject of black holes and information, some of them John’s of course,2 for several hours and couldn’t have managed to get more than three hours of sleep.

This was not a unique experience with John’s masterful teaching technique. The summer I spent working at the IQI was one of the happiest times of my life since I got to spend so much time with so many scientists who I wasn’t even working with, including John. The project that I was supposed to be working on at the IQI was supervised by (then postdoc) Alexey Gorshkov and dealt with two-dimensional spin systems and efforts to simulate them experimentally. Although I greatly enjoyed working on that project, which actually went pretty well and led to my first paper, by far my favorite part of working at the IQI was getting to eat lunch with John and the rest of the group, in particular (then postdoc) Steve Flammia.

Most days as we would walk back to the IQI from the cafeteria, Steve would ask John some question, usually about quantum field theory or cosmology. (For example, “Why do some classical symmetries not survive quantization?”) By the time we got back to the lounge, John would have prepared the perfect lecture to answer Steve’s question, enlightening to both the most accomplished postdoc and the most naive undergrad. These lectures were amazing. I wish we had taped them so that I could watch them again. John did not shy away from equations when they were called for in these lectures, but they were still filled with pictures. (See Steve’s recollections of these lectures here and John’s here. The closest recorded lectures to John’s lunch lectures that I am aware of are some of Lenny Susskind’s more advanced courses from The Theoretical Minimum.3 The main difference between the two styles is that, due to the large discrepancy between the two audiences, John would compress something Susskind might explain into maybe a third to a quarter of the time Susskind would take. On occasion, John might use some slightly more sophisticated technology or go into a little more detail as well, but the essential style is very similar.)

I think John knew that I loved these lectures, but what he and Alexey might not know is that they were extremely detrimental to my productivity on the project that “I was supposed to be working on,” though I don’t think John would see it that way. After each of these lectures, I was forced to spend at least an hour reading textbooks or journal articles going into more detail on the topics John had just so masterfully discussed. Since I shared an office with Alexey, I often tried to hide what I was actually doing before getting back to work; I don’t think Alexey would have cared, but I wasn’t taking any chances. When I say “I was forced,” I literally mean that listening to John put me in a state of mind where I felt I had no choice but to read more. (Maybe John’s distinctive bedtime story voice makes him a kind of “snake charmer” of physicists by hypnotizing us into reading more physics.) Since John’s lectures always cut straight to the heart of any subject, this was the perfect way to learn something new or to clarify something I had found confusing before. Armed with a Preskill lecture and all the nice pictures fresh in my mind, it always seemed so easy to just start reading a technical account of something that I would certainly have found difficult without John’s introduction to it.

Figure 2.1: John Preskill, a “snake charmer” of physicists. Artwork by my uncle Eric Wayne.

At the time, Steve claimed that his upcoming research appointment at the University of Washington required him to increase his knowledge of physics outside of quantum information and that asking John these questions was the perfect way to do that. This is partially accurate, though the truth is that Steve actually “knew” the answers, in a technical sense, to most of the questions he asked John; what was really important was to get an intuitive picture that could easily be lost when hearing or reading a very in-depth technical account of something. He wanted to “hear the story from a master” and in the process learn both some awesome physics and John’s “style,” i.e., to be able to give a deep, simple, and well organized argument on a wide range of subjects at a moment’s notice.4 In this sense, it almost didn’t matter what subject John actually lectured on, and I think this was the main reason for the somewhat vague “Why?” format in which most of these questions were phrased. Luckily for me, and my future students, I was able to get a good enough look at John’s “style” as well that I would be able to make an (imperfect) imitation when I became a TA about a year later.

In the next post, I’ll describe how, by trying to imitate John, I seemed to become a fairly popular TA.

1. It might sound strange to be sad when a lecture is over, but that’s really the emotion I felt. Two other professors from my Caltech days, Niles Pierce and Hirosi Ooguri, come to mind whose lectures consistently made me sad when they were over—but in an excited way analogous to how one might feel when the episode of a favorite TV show ends while waiting for the next week. (These lectures on advanced mathematical methods in physics were the Ooguri lectures that made me really sad and are actually in English.)
2. I particularly remember reading about how black holes are mirrors and comments on a black hole’s final state. (Though see this paper, or this talk, for updated comments on the latter in light of the recent firewall controversy.) I also remember reading Susskind et al.’s original paper on the subject that night.
3. The title for his courses is in reference to a comprehensive course of physics study that Lev Landau expected all of his students to pass. See here for an amusing, and somewhat terrifying, first hand account of what it was like to be one of the 43 students “to pass Landau’s minimum.”
4. S. Flammia, private correspondence.

# To become a good teacher, ignore everything you’re told and learn from the masters (part 1 of 4)

Editor’s Note: Kevin Kuns, a physics concentrator in the Caltech Class of 2012, received the D. S. Kothari Prize in Physics for undergraduate research conducted at Caltech’s Institute for Quantum Information.  Now a graduate student at University of California at Santa Barbara, Kevin won an Outstanding Teaching Assistant Award there. Bursting with pride upon hearing from Kevin that his IQI experiences had contributed to his teaching success, we urged Kevin to tell his story. This took some arm twisting because of Kevin’s inherent modesty, especially when it became clear that Kevin had more to say than could comfortably fit into one blog post. But we persevered, and we’re glad we did! We hope you enjoy Kevin’s series of four posts, starting with this one.

This is the story of how—by ignoring what I was told to do and instead trying to reproduce the conditions that made me so excited about physics by imitating key aspects of the people who inspired me—I won the Outstanding Teaching Assistant Award from the UCSB physics department at the end of my first year there as a grad student.1 Many of the things that inspired me to become a good teacher occurred while working at the IQI as an undergrad at Caltech, which I guess is what qualifies me to write this series of posts. Since my advice is to ignore what people tell you and to do what works for you, you should also ignore everything I say—except for the parts that work for you. While I’m certainly not a master, I hope the extensive scientific references, mainly found in the footnotes (especially in the later posts) and easily skipped by lay readers, will provide the more technically inclined readers of all levels with the resources to learn some exciting physics from the real masters.

Figure 1.1: The only baseball cap I own. There was no “and Matter” when I was there, but I remember a lot of excitement about the upcoming addition of the “M” when John announced it at the very end of my time there.

I’ll start by recounting some of the things that excited me about physics to such a degree that I was then forced to spend a considerable amount of time learning more on my own. Then I’ll move on to how I learned from them as a TA.

While there are many people who have influenced my teaching style, the two biggest influences are Richard Feynman and John Preskill. Though I never got to know Feynman, I had the extreme good fortune of actually getting to talk with John somewhat frequently. I think I used the lessons I learned from Feynman, probably mostly absorbed subconsciously, effectively in my teaching, both in lecture and one-on-one with students in office hours, to become a competent TA. But I think I became a good TA when I decided to try to imitate John.

What I learned from Feynman

I’ve been interested in science for as long as I can remember, but I became laser-focused on physics when my parents bought me some of the audio of The Feynman Lectures on Physics for Christmas when I was in ninth grade. I couldn’t understand much of it at the time, especially without the pictures or equations to go along with the audio, but after listening (I think three times) to the lecture on the double slit experiment, I was able to piece together what was going on. I thought I must have misunderstood—maybe I guessed the geometry of the slits, the screen, and the electron gun incorrectly?—and listened to it several more times to figure out where I went wrong. But I hadn’t misunderstood: that’s how the world really works!2

Figure 1.2: The best Christmas present I ever had: 12 of Feynman’s lectures on quantum mechanics. In hindsight, a more appropriate gift for a complete novice who’s never taken a physics class in his life would be the version of Six Easy Pieces that comes with the audio and includes the double slit experiment as the last chapter.

I had to know more, and I had absolutely no patience about it. Within a year, and before having ever taken a physics class, I read QED: The Strange Theory of Light and Matter,3 then The Character of Physical Law,4 then Six Easy Pieces, then Six Not-So-Easy Pieces, and then I got the full three-volume set. I picked up the necessary math along the way, started branching away from reading only things that Feynman wrote,5 and eventually read many of the classic textbooks you would use in an undergrad course on physics. But since Feynman was the initial source of my excitement about physics, I also read much of his non-technical stuff and learned a lot of lessons from that as well.

I think the most important non-technical lesson I learned was the importance of being able to explain things simply and clearly, but without lying or changing the essence of the phenomenon in any way. Probably inspired by a line from Cargo Cult Science, “The first principle is that you must not fool yourself—and you are the easiest person to fool,” I have, beginning in high school, developed an internal check system to ensure that I really understand new things and am not fooling myself. Right now it works like this (I’m sure Feynman said similar things that caused me to do this, but I don’t remember where at the moment): First, I think about how I would explain something to one of my colleagues—a fellow grad student. Then I think about how I would explain it to an undergrad, then to a high school student studying physics, then to my parents or my little sister. The further down I get, the better I understand something; but if I can’t get all the way down, then I don’t really understand it. There are not many things that I really understand, but there are a few.

Going into teaching with this attitude was probably pretty helpful. In my opinion, learning from your students is one of the funnest parts of teaching. There are usually many ways to understand a concept and apply it to certain problems, and sometimes there’s an “obviously easier” approach or way of thinking that is useful in certain circumstances. But to get to the point where you’re “qualified” to be a teacher, you only had to understand things in whatever way worked for you. To be a competent teacher, on the other hand, requires you to understand every way one of your students could possibly understand it, so that you can explain things in whatever way is best for each individual student. Maybe if a student doesn’t understand something, it’s really your fault for not explaining it well or for never really giving them the chance in the first place. If they can’t understand something, maybe that means that you don’t really understand it as well as you thought you did—and then you get to learn something too! In this vein, I think student mistakes can also be really fun ways for you to understand things better, because they force you to really understand why the students were wrong, which can sometimes be rather subtle. I would not be surprised if I learned more from my students’ mistakes and misconceptions than they learned from me.6

The most frustrating part of TA training for me was the implicit, and sometimes explicit, message that your students are idiots who need to have their hands held through everything. I’ve been guilty of ridiculing some particularly ignorant mistakes in private myself, but, at least for me, the real frustration was with the system that allowed these mistakes to propagate so far, not with the students themselves—many of whom were really trying and just not getting the help they needed in the way they needed it. All the things that we were “trained to do” basically catered to the students struggling the most and left the most talented students to fend for themselves.7 I have absolutely no problem helping the students struggling the most; I think I demonstrated that with the patience I had in dealing with some very confused questions in office hours. Sometimes this required going back through years of accumulated misunderstandings to get to the real source of the problem, but I don’t think I ever became short or condescending with them. But a lot of the things we were told to do were, quite frankly, condescending. I believe that even confused students are not stupid and can quickly recognize when they are being talked down to. There’s a difference between simplifying—what people like Feynman and Preskill do with great skill—and dumbing down; never choose the latter.8

Even though I ignored this silly advice right from the beginning, I still mostly covered the kinds of things I was supposed to, even if it wasn’t necessarily in the way I was supposed to cover them. I tried to add some more interesting material as side comments in recitation section or as tangents in the solution sets, but, ignoring my instincts, I still mostly just redid what was already done in lecture or on the homework, in painful detail, again in recitation section, as I was told to do. Unless you’re completely lost, that’s incredibly boring; and if you’re really that lost, it’s probably just better to go to office hours and get one-on-one help anyway. I wouldn’t have gone to my own early recitation sections if I were an undergrad; in fact, after going to a few as a new freshman and realizing what they’re traditionally about, I never went to another during my entire undergrad career.

In the next post, I’ll describe some memories of my time at the IQI, which, when I thought about them, caused me to give recitation sections that I would have gone to as an undergrad, and which some of the UCSB undergrads seemed to appreciate as well.

1. As I hope will become clear in the later posts, the award itself means a lot to me because of the interactions with students that suggest that I was actually successful.
2. I highly recommend finding the audio for that lecture; there’s no substitute for hearing Feynman describe it himself. There’s a sense of excitement, and a bit of humor, that is somewhat lost if you just read the text. (There must have been some reason why I listened to that one lecture so many times before I had any idea about what was going on.) I can still hear Feynman describe how electrons (don’t) behave, “They do not behave like waves. They do not behave like particles. They do not behave like clouds, nor like billiard balls, nor like weights on springs, nor like anything that you know anything about.” And man is the way they do behave exciting!
3. You can watch his original lectures here
4. You can watch his original lectures here
5. Or, more accurately, transcriptions of things that he said. One of the great things about Feynman is that, if you try hard enough, you can usually find the video or audio to something he “wrote,” as long as it’s not something like a journal article which he actually wrote.
6. My favorite example from personal experience is when almost half the class solved an electrostatics midterm problem in a completely ridiculous and wrong way. In trying to understand how so many people could have made the exact same mistake, I realized that what they were trying to do would have been essentially correct if we lived in a world with two spatial dimensions instead of three! What most of them had done, without realizing it, was equivalent to correctly using the two-dimensional Coulomb’s Law. So then I got to tell them how Coulomb’s Law in 2d falls off as $r^{-1}$, instead of $r^{-2}$, which opened the door for me to tell them about electrostatics in general dimensions. Instead of just ridiculing them for blindly plugging stuff into equations which they clearly didn’t understand, everyone got to learn something—even the students who did it correctly.
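The claim in the footnote above is the standard Gauss’s-law scaling argument; a quick sketch (the notation $\Omega_{d-1}$ for the area of the unit sphere is my own shorthand, not from the post):

```latex
% Point charge q in d spatial dimensions: the flux of \mathbf{E}
% through a (d-1)-sphere of radius r is fixed by the enclosed charge alone.
\oint_{S_r} \mathbf{E} \cdot d\mathbf{A}
  \;=\; E(r)\,\Omega_{d-1}\,r^{\,d-1}
  \;\propto\; q
\qquad\Longrightarrow\qquad
E(r) \;\propto\; \frac{q}{r^{\,d-1}} .
```

Setting $d=3$ recovers the familiar $r^{-2}$ field, while $d=2$ gives the $r^{-1}$ fall-off of the two-dimensional Coulomb’s law that the students had unwittingly used.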
7. To use an overly precise simile that nevertheless qualitatively captures what I feel we were told, it was as if we were, at best, supposed to only really try to help students up to $1\sigma$ past the mean of the distribution. But I think the result of this was that we only helped students up to $1\sigma$ before the mean.
8. I consider QED to be a prime example of simplifying. Probably any motivated layman could understand the arguments in that book without too much difficulty. I grasped them quickly: I remember drawing all of the little arrows to reproduce Feynman’s arguments several times while bored and sitting in the back of biology class in ninth grade. But there’s no way I could have come anywhere close to understanding the arguments in even the great books discussed in footnote 5 of the third post at the time: I hadn’t even learned any calculus yet, though I would within a few more months. (I still consider the books in that footnote to be good examples of simplifying; they’re just simplified for a more sophisticated audience.) It wasn’t until about six years later, when I started learning QFT for the first time, that I fully appreciated how truly brilliant Feynman’s explanation is. He really captures the essence of the physics; it’s only the technical details that are missing. For an example of dumbing down, consider some of the popular science treatments of cutting-edge physics. Many (but not all!) of these explanations use oversimplified examples that miss a lot of the physics and make, for example, string theory sound a lot more silly than it really is, without conveying the deep reasons why it’s an important thing to think about. Maybe we can’t all be exactly like the great simplifiers, but by at least trying, we can probably eventually become pretty good.