To become a good teacher, ignore everything you’re told and learn from the masters (part 3 of 4)

In the first post of this series, I described some frustrations I had with what we were told to do in TA training. In the previous post, I recalled some memories of interactions I had with John Preskill while I was working at the IQI.

When it came time for me to give my last electrodynamics lecture, I remember thinking that I wanted to give a lecture that would inspire my students to go read more and which would serve as a good introduction to do just that—just as John’s lectures had done for me so many times. Now I am not nearly as quick as John,1 so I didn’t prepare my lecture in my head on the short walk from my office to the classroom where I taught, but I did prepare a lecture that I hoped would satisfy the above criteria. I thought that the way magnets actually work was left as a bit of a mystery in the standard course on electrodynamics. After toying around with a few magnet themes, I eventually prepared a lecture on the ferromagnetic-paramagnetic phase transition from a Landau-Ginzburg effective field theory point of view.2 The lecture had a strong theme about the importance and role of spontaneous symmetry breaking throughout physics, using magnets and magnons mainly as an example of the more general phenomenon. Since there was a lot of excitement about the discovery of the Higgs boson just a few months earlier, I also briefly hinted at the connection between Landau-Ginzburg and the Higgs. Keeping John’s excellent example in mind, my lecture consisted almost entirely of pictures with maybe an equation or two here and there.
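
For the technically inclined, the heart of that lecture fits in one equation. In the Landau-Ginzburg picture (my notation here, not necessarily what I wrote on the board), the free energy density of a uniform magnetization m near the transition can be expanded as

f(m) = f_0 + a(T - T_c) m^2 + b m^4,   with a, b > 0.

For T > T_c the minimum sits at m = 0 (the paramagnet); for T < T_c the minima move out to m = \pm\sqrt{a(T_c - T)/2b}, and the system must pick one sign, spontaneously breaking the m \to -m symmetry that f itself respects.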

This turned out to be one of the most rewarding experiences of my life. I taught identical recitation sections three times a week, and the section attendance dropped rapidly, probably to around 5–10 students per section, after the first few weeks of the quarter. Things were no different for the first iterations of my final recitation earlier in the week. But on the final day, presumably after news of what I was doing had time to spread and the students realized I was not just reviewing for the final, the classroom was packed. There were far more students than I had ever seen before, even on the first day of class. I remember saying something like, “Now your final is next week and I’m happy to answer any questions you may have on any of the material we have covered. Or if no one has any questions, I have prepared a lecture on the ferromagnetic-paramagnetic phase transition. So, does anyone have any questions?” The response was swift and decisive. One of the students sitting in the front row eagerly and immediately blurted out, “No, do that!”

So I proceeded to give my lecture, which seemed to go very well. The highlight—at least for me, both because it is an example where a single physical idea explains a variety of physical phenomena occurring over a very large range of scales and because it demonstrated that at least some of the students had understood most of what I had already said—was when I had the class help me fill in and discuss Table 3.1. Many of the students asked good questions throughout the lecture, several stayed after class to ask me more questions, and some came to my office hours after the break to ask even more questions—clearly after doing some reading on their own!

[Image: Table 3.1, examples of spontaneous symmetry breaking (SSB) across physics; see the caption below.]

Table 3.1: Adapted from Table 6.1 of Magnetism in Condensed Matter. I added the last row and the last column myself before lecture. The first row was discussed heavily in lecture, the class helped me fill in the next two, and I briefly said some words about the last three. Tc is the critical temperature at which the phase transition occurs, M is the magnetization, P is the polarization, ρG is the Fourier component of charge density at reciprocal lattice vector G (as could be measured by Bragg diffraction), ψ and Δ are the condensate wavefunctions in a superfluid or superconductor respectively, φ is the Higgs field, BB = “Big Bang”, EW = “Electroweak”, and GUT = “Grand Unified Theory” (the only speculative entry on the list). The last two signify the phase transitions between the grand unified, electroweak, and quark epochs. Thanks to Alex Rasmussen and Dominic Else for heated arguments about the condensed matter rows that helped to clarify my understanding of them. To pre-empt Dominic yelling at me about Elitzur’s theorem, here is a very nice paper by François Englert explaining, among other things, why gauge symmetries cannot technically be spontaneously broken. (This seems to be a question of semantics intimately related to how some people would say that gauge symmetries are not really symmetries—they’re redundancies. Regardless of what words you want to use, Englert’s paper makes it very clear what is physically happening.)

After that experience, I decided to completely ignore everything I was taught in TA training and to instead always try to do my best John Preskill imitation anytime I interacted with students. (Minus the voice, of course. There’s no replicating that, and even if I tried, hardly anyone would get the joke.) As I suggested earlier, a big frustration with what I was told to do was that—since UCSB has so many students with such a wide range of abilities—I should mostly aim my instruction at the middle of the bell curve, but could try to do some extra things to challenge the students on one tail and some remedial things to help the students on the other—but only if I felt like it. I was told there’s no reason to “waste time” explaining how an idea fits into the bigger picture and is related to other physical concepts: most students aren’t going to care, and I could spend that time going over another explicit example showing the students exactly how to do something. I thought this was stupid advice and explained the wider context anyway; in my experience, it was usually pretty helpful even to the students struggling the most.3

But when I started trying to imitate John by completely ignoring what I was supposed to do and instead just trying to explain exciting physics like he did, I seemed to become a fairly popular TA. I think most students appreciated being treated as naive equals who had the ability to learn awesome things—just as I was treated at the IQI. Instead of the null hypothesis being that your students are stupid and need to be coddled lest they be completely lost, make the null hypothesis that your students are actually pretty bright and are both interested in and able to learn exciting things, even if that means using a few concepts from other courses. But also be very willing to quickly and momentarily reject the null in a non-condescending manner. I concede that there are many arguments against my philosophy and that the anecdotal data supporting my approach are potentially extremely biased,4 but I still think it’s probably best to err on the side of allowing teachers to teach in the way that excites them the most, since this will probably motivate the students to learn just by osmosis—in theory at least.

The last class I was a TA for (an advanced undergraduate elective on relativistic quantum mechanics) was probably the one that benefited the most from my attempts to imitate John, and from my regurgitating things that I learned from him and a few others (see Figure 4.2 in the last post). The course used Dirac and Bjorken and Drell as textbooks, which are not the books that I would have chosen for the intended audience: advanced undergrads who might want to take QFT in the future.5 By that time, I was fully into ignoring what I was supposed to do and was just going to try to teach my students some interesting physics. So there was no way I was going to spend recitation sections going over, in quantitative detail, the Dirac Sea and other such outdated topics that Julian Schwinger would describe as “best regarded as a historical curiosity, and forgotten.”6 Instead, I decided right from the beginning that I would give a series of introductory lectures on quantum field theory, consisting of my best guess at the lectures John would have given if Steve or I had asked him questions such as “Why are spin and statistics related?” or “Why are neutral conducting plates attracted to each other in vacuum?” Those lectures were often pretty crowded, one time so much so that I literally tripped over the outstretched legs of one of the students who was sitting on the ground in the front of the classroom as I was trying to work at the boards; all of the seats were taken and several people were already standing in the back. (In all honesty, that classroom was terrible and tiny.7 Nevertheless, at least for that one lecture, I would be surprised to learn that everyone in attendance was enrolled in the course.)
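
(For the curious, the punchline of that second question is the Casimir effect. The standard result, which I quote here without the derivation those lectures built up to: two neutral, parallel, perfectly conducting plates separated by a distance d in vacuum attract with a force per unit area

F/A = -\pi^2 \hbar c / (240 d^4),

which comes entirely from the d-dependence of the zero-point energy of the electromagnetic field between the plates.)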

In the last post, I’ll describe some other teaching wisdom I learned from a few other good professors and conclude with some things I wish I had done better.


  1. Though I suspect a part of the reason that he is so quick to come up with these explanations is that he has thought deeply about them several times before when lecturing or writing papers on them. So maybe one day, after a few more decades of sustained effort, I will be able to approach his speed on certain topics that I’ve thought deeply about. 
  2. I mostly followed the discussion in Kardar: Fields for this part. When I went to write this post, I realized that there is a lecture of John’s on the same topic available here. 
  3. About a year after I last taught him, I bumped into one of my former students, one who occasionally struggled but always made an honest effort to learn; he had since graduated and moved on to other things, and he felt it was necessary to thank me for doing these kinds of things. Although this is only one data point, the looks of relief on the faces of several students I helped in office hours, after explaining how what they were struggling with fit into the bigger picture, lead me to believe that this is true more generally. 
  4. While almost all of the students who benefited from my approach were going to tell me so, probably no one who would have preferred that I go over examples directly relevant to their assignments in great detail was going to tell me in front of those who were so excited about it. And I never got to see my teaching evaluations, where the students may have been more comfortable criticizing my approach. 
  5. Dirac is an excellent book on non-relativistic quantum mechanics, and I remember finding it extremely useful when I read it as a supplement to Shankar and Cohen-Tannoudji while taking quantum mechanics as a sophomore at Caltech. However, the relativistic chapters are really old, though of course it’s fun to read about the Dirac equation from Dirac. Bjorken and Drell is also a classic, but it is also pretty old and, in my opinion, was way too technical for the intended audience. Quantum Field Theory for the Gifted Amateur by Lancaster and Blundell looks like a really good candidate for the class and the intended audience. In all fairness, that book hadn’t been published yet when the course was taught, but why we didn’t even use Zee or Srednicki is completely baffling to me—and both of them are even professors here, so it’s not like those books should have been unknown! (In my opinion, Zee would have been a great choice for that class, while Srednicki may be a slightly better choice for the graduate version.) 
  6. As quoted by Weinberg in Chapter 1, titled “Historical Introduction,” of The Quantum Theory of Fields. Reading that chapter made me feel like I would have been a TA for the history department rather than the physics department if I had taught what I was expected to. I also cringed several times while typing up the solutions to the problem sets for the class, and, to atone for my sins, I felt like I had to do better in the recitation sections. 
  7. I also hated that room because it had whiteboards, instead of chalkboards, and so I couldn’t use another teaching skill, Lewin Lines, that I learned in high school from the great MIT physics professor Walter Lewin. (While I was in the editing stages of writing this post, I was very sad to learn of some recent controversy surrounding Lewin. Regardless of the awful things he has been accused of doing, if you’re going to be honest, you have to be able to separate that from his wonderful lectures which aspiring teachers should still try to learn from no matter what the truth turns out to be. I cannot rewrite history and pretend that watching all of his lectures didn’t provide me with a solid foundation to then read books such as Landau and Lifshitz or Jackson while still in high school. You also have to admit that Lewin Lines are really fun. After intense research (in collaboration with an emeritus head TA of the physics department) and much practice, I am, with low fidelity, able to make a bad approximation to the chalkboard Lewin Lines on a whiteboard—but they’re nowhere near as cool.) 

To become a good teacher, ignore everything you’re told and learn from the masters (part 2 of 4)

In the previous post in this series, I described some teaching skills that I subconsciously absorbed by reading works by Feynman. I also described some frustrations I had with what we were told to do in TA training.

What I learned from Preskill

Luckily for me, and hopefully for my students, the instructor for the first class I was a TA for (the upper division classical electrodynamics course) eventually gave me the freedom to do things the way I wanted to; once I saw the results, I was emboldened to ignore everything I was told. During the last week of the course, he said I could do whatever I wanted in the recitation sections since he was going to do review and talk about some advanced topics in lecture. It’s hard to know how all my experiences with good (and bad) teachers subconsciously influenced my teaching style, but the first time I remember consciously thinking about what kind of a teacher I wanted to be was the night before my first recitation section of the week, while I was planning my last lecture of the course. I distinctly remember reflecting on the following memories from my time at the IQI while trying to decide what to talk about and how to say it.

Once a week, John would run a two-part group meeting. The first part consisted of everyone going around the room briefly summarizing what they had worked on for the past week, while the second consisted of a more traditional lecture on a specific topic of the speaker’s choice. At one point, one of the postdocs was in the middle of giving a series of lectures on a very technical subject. I always struggled to follow them and to stay interested and engaged in the discussion. I remember almost nothing from this particular series of lectures other than that the postdoc seemed to be trying to teach us every single detail about the subject rather than trying to give us a general feeling for what it was about. I’m sure it wasn’t so bad for the more advanced physicists in the room, and I bet I would get more out of those lectures if I heard them again today.

But then one marvelous week, John decided he was going to give the lecture instead. He started off by saying something along the lines of, “We have heard some very technical talks recently, which is good, and I have learned a lot from them. But sometimes it’s also good just to look at pictures.” He proceeded to give an extremely interesting, coherent, and inspiring lecture on black hole complementarity. I don’t think he ever wrote down a single equation, but I remember most of what he talked about and the pictures he drew to this day. I was honestly sad1 when the hour was up and was extremely excited to learn more. As soon as I got home that night, I read many papers on the subject of black holes and information, some of them John’s of course,2 for several hours and couldn’t have managed to get more than three hours of sleep.

This was not a unique experience with John’s masterful teaching technique. The summer I spent working at the IQI was one of the happiest times of my life since I got to spend so much time with so many scientists who I wasn’t even working with, including John. The project that I was supposed to be working on at the IQI was supervised by (then postdoc) Alexey Gorshkov and dealt with two-dimensional spin systems and efforts to simulate them experimentally. Although I greatly enjoyed working on that project, which actually went pretty well and led to my first paper, by far my favorite part of working at the IQI was getting to eat lunch with John and the rest of the group, in particular (then postdoc) Steve Flammia.

Most days as we walked back to the IQI from the cafeteria, Steve would ask John some question, usually about quantum field theory or cosmology. (For example, “Why do some classical symmetries not survive quantization?”) By the time we got back to the lounge, John would have prepared the perfect lecture to answer Steve’s question, enlightening to both the most accomplished postdoc and the most naive undergrad. These lectures were amazing. I wish we had taped them so that I could watch them again. John did not shy away from equations when they were called for in these lectures, but they were still filled with pictures. (See Steve’s recollections of these lectures here and John’s here. The closest recorded lectures to John’s lunch lectures that I am aware of are some of Lenny Susskind’s more advanced courses from The Theoretical Minimum.3 The main difference between the two styles is that, due to the large discrepancy between the two audiences, John would compress something Susskind might explain into maybe a third to a quarter of the time Susskind would take. On occasion, John might use some slightly more sophisticated technology or go into a little more detail as well, but the essential style is very similar.)
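
(To give a flavor of that sample question, though I make no claim that this is how John answered it: the modern one-line answer is the chiral anomaly. For a single Dirac fermion of charge e coupled to electromagnetism, the axial current j^\mu_5, conserved classically, acquires upon quantization

\partial_\mu j^\mu_5 = (e^2/16\pi^2) \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu} F_{\rho\sigma},

up to sign and normalization conventions; the classical symmetry simply does not survive.)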

I think John knew that I loved these lectures, but what he and Alexey might not know is that they were extremely detrimental to my productivity on the project that “I was supposed to be working on,” though I don’t think John would see it that way. After each of these lectures, I was forced to spend at least an hour reading textbooks or journal articles going into more detail on the topics John had just so masterfully discussed. Since I shared an office with Alexey, I often tried to hide what I was actually doing before getting back to work; I don’t think Alexey would have cared, but I wasn’t taking any chances. When I say “I was forced,” I literally mean that listening to John put me in a state of mind where I felt I had no choice but to read more. (Maybe John’s distinctive bedtime story voice makes him a kind of “snake charmer” of physicists by hypnotizing us into reading more physics.) Since John’s lectures always cut straight to the heart of any subject, this was the perfect way to learn something new or to clarify something I had found confusing before. Armed with a Preskill lecture and all the nice pictures fresh in my mind, it always seemed so easy to just start reading a technical account of something that I would certainly have found difficult without John’s introduction to it.


Figure 2.1: John Preskill, a “snake charmer” of physicists. Artwork by my uncle Eric Wayne.

Eventually, I was allowed to ask John the question on occasion, and sometimes Steve and I would discuss, before we left for lunch, what we should ask John on the way back. Of course, this great responsibility only caused me to read even more about things that “I was not supposed to be working on” so that we would have a better chance of having a real gem. (Occasionally, I was devastated to learn that Steve had already asked John some question that I had thought of, like “Why is CP violated?” I really wanted to hear that lecture, but Steve had already asked it, and there were no repeats.)

At the time, Steve claimed that his upcoming research appointment at the University of Washington required him to increase his knowledge of physics outside of quantum information and that asking John these questions was the perfect way to do that. This is partially accurate, though the truth is that Steve actually “knew” the answers, in a technical sense, to most of the questions he asked John; what was really important was to get an intuitive picture that could easily be lost when hearing or reading a very in-depth technical account of something. He wanted to “hear the story from a master” and in the process learn both some awesome physics and John’s “style,” i.e., to be able to give a deep, simple, and well-organized argument on a wide range of subjects at a moment’s notice.4 In this sense, it almost didn’t matter what subject John actually lectured on, and I think this was the main reason for the somewhat vague “Why?” format in which most of these questions were phrased. Luckily for me, and my future students, I was able to get a good enough look at John’s “style” as well that I would be able to make an (imperfect) imitation when I became a TA about a year later.

In the next post, I’ll describe how, by trying to imitate John, I seemed to become a fairly popular TA.


  1. It might sound strange to be sad when a lecture is over, but that’s really the emotion I felt. Two other professors from my Caltech days, Niles Pierce and Hirosi Ooguri, come to mind whose lectures consistently made me sad when they were over—but in an excited way, analogous to how one might feel when an episode of a favorite TV show ends and the next one is a week away. (These lectures on advanced mathematical methods in physics were the Ooguri lectures that made me really sad and are actually in English.) 
  2. I particularly remember reading about how black holes are mirrors and comments on a black hole’s final state. (Though see this paper, or this talk, for updated comments on the latter in light of the recent firewall controversy.) I also remember reading Susskind et al.’s original paper on the subject that night. 
  3. The title for his courses is in reference to a comprehensive course of physics study that Lev Landau expected all of his students to pass. See here for an amusing, and somewhat terrifying, firsthand account of what it was like to be one of the 43 students “to pass Landau’s minimum.” 
  4. S. Flammia, private correspondence. 

To become a good teacher, ignore everything you’re told and learn from the masters (part 1 of 4)

Editor’s Note: Kevin Kuns, a physics concentrator in the Caltech Class of 2012, received the D. S. Kothari Prize in Physics for undergraduate research conducted at Caltech’s Institute for Quantum Information.  Now a graduate student at University of California at Santa Barbara, Kevin won an Outstanding Teaching Assistant Award there. Bursting with pride upon hearing from Kevin that his IQI experiences had contributed to his teaching success, we urged Kevin to tell his story. This took some arm twisting because of Kevin’s inherent modesty, especially when it became clear that Kevin had more to say than could comfortably fit into one blog post. But we persevered, and we’re glad we did! We hope you enjoy Kevin’s series of four posts, starting with this one.

This is the story of how—by ignoring what I was told to do and instead trying to reproduce the conditions that made me so excited about physics by trying to imitate key aspects of the people who inspired me—I won the Outstanding Teaching Assistant Award from the UCSB physics department at the end of my first year there as a grad student.1 Many of the things that inspired me to become a good teacher occurred while working at the IQI as an undergrad at Caltech, which I guess is what qualifies me to write this series of posts. Since my advice is to ignore what people tell you and to do what works for you, you should also ignore everything I say—except for the parts that work for you. While I’m certainly not a master, I hope the extensive scientific references, mainly found in the footnotes (especially in the later posts) and easily skipped by the laymen, will provide the more technically inclined readers of all levels with the resources to learn some exciting physics from the real masters.


Figure 1.1: The only baseball cap I own. There was no “and Matter” when I was there, but I remember a lot of excitement about the upcoming addition of the “M” when John announced it at the very end of my time there.

I’ll start by recounting some of the things that excited me about physics to such a degree that I was then forced to spend a considerable amount of time learning more on my own. Then I’ll move on to how I drew on those experiences as a TA.

While there are many people who have influenced my teaching style, the two biggest influences are Richard Feynman and John Preskill. Though I never got to know Feynman, I had the extreme good fortune of actually getting to talk with John somewhat frequently. I think I used the lessons I learned from Feynman, probably mostly absorbed subconsciously, effectively in my teaching, both in lecture and one-on-one with students in office hours, to become a competent TA. But I think I became a good TA when I decided to try to imitate John.

Since Feynman came first, I’ll start with him.

What I learned from Feynman

I’ve been interested in science for as long as I can remember, but I became laser-focused on physics when my parents bought me some of the audio to The Feynman Lectures on Physics for Christmas when I was in ninth grade. I couldn’t understand much of it at the time, especially without the pictures or equations to go along with the audio, but after listening to the lecture on the double slit experiment (I think three times), I was able to piece together what was going on. I thought I must have misunderstood—maybe I guessed the geometry of the slits, the screen, and the electron gun incorrectly?—and listened to it several more times to figure out where I went wrong. But I hadn’t misunderstood: that’s how the world really works!2


Figure 1.2: The best Christmas present I ever had: 12 of Feynman’s lectures on quantum mechanics. In hindsight, a more appropriate gift for a complete novice who’s never taken a physics class in his life would be the version of Six Easy Pieces that comes with the audio and which includes the double slit experiment as the last chapter.

I had to know more, and I had absolutely no patience in getting there. Within a year, and before having ever taken a physics class, I read QED: The Strange Theory of Light and Matter,3 then The Character of Physical Law,4 then Six Easy Pieces, then Six Not-So-Easy Pieces, then I got the full three volume set. I picked up the necessary math along the way and started branching away from only reading things that Feynman wrote5 and eventually read many of the classic textbooks you would use in an undergrad course on physics. But since Feynman was the initial source of my excitement about physics, I also read much of his non-technical stuff and learned a lot of lessons from that as well.

I think the most important non-technical lesson I learned was the importance of being able to explain things simply and clearly, but without lying or changing the essence of the phenomenon in any way. Probably inspired by a line from Cargo Cult Science, “The first principle is that you must not fool yourself—and you are the easiest person to fool,” I have, beginning in high school, developed an internal check system to ensure that I really understand new things and am not fooling myself. Right now it works like this (I’m sure Feynman said similar things that caused me to do this, but I don’t remember where at the moment): First, I think about how I would explain something to one of my colleagues—a fellow grad student. Then I think about how I would explain it to an undergrad, then to a high school student studying physics, then to my parents or my little sister. The further down I get, the better I understand something; but if I can’t get all the way down, then I don’t really understand it. There are not many things that I really understand, but there are a few.

Going into teaching with this attitude was probably pretty helpful. In my opinion, learning from your students is one of the funnest parts of teaching. There are usually many ways to understand a concept and apply it to certain problems, and sometimes there’s “an obviously easier” approach or way of thinking that is useful in certain circumstances. But to get to the point where you’re “qualified” to be a teacher, you only have to understand things in whatever way works for you. Being a competent teacher, on the other hand, requires you to understand every way one of your students could possibly understand it, so that you can explain things in whatever way is best for each individual student. Maybe if a student doesn’t understand something, it’s really your fault for not explaining it well or for never really giving them the chance in the first place. If they can’t understand something, maybe that means that you don’t really understand it as well as you thought you did—and then you get to learn something too! In this vein, I think student mistakes can also be really fun ways for even you to understand things better, because they force you to really understand why they were wrong, which can sometimes be rather subtle. I would not be surprised if I learned more from my students’ mistakes and misconceptions than they learned from me.6

The most frustrating part of TA training for me was the implicit, and sometimes explicit, message that your students are idiots who need to have their hands held through everything. I’ve been guilty of ridiculing some particularly ignorant mistakes in private myself, but, at least for me, the real frustration was with the system that allowed these mistakes to propagate so far, not with the students themselves—many of whom were really trying and just not getting the help they needed in the way they needed it. All the things that we were “trained to do” basically catered to the students struggling the most and left the most talented students to fend for themselves.7 I have absolutely no problem helping the students struggling the most; I think I demonstrated that with the patience I had in dealing with some very confused questions in office hours. Sometimes this required going back through years of accumulated misunderstandings to get to the real source of the problem, but I don’t think I ever became short or condescending to them. But a lot of the things we were told to do were, quite frankly, condescending. I believe that even confused students are not stupid and can quickly recognize when they are being talked down to. There’s a difference between simplifying—what people like Feynman and Preskill do with great skill—and dumbing down; never choose the latter.8

Even though I ignored this silly advice right from the beginning, I still mostly covered the kinds of things I was supposed to, even if it wasn’t necessarily in the way I was supposed to do it. I tried to add some more interesting material as side comments in recitation section or as tangents in the solution sets, but, ignoring my instincts, I still mostly just did what was already done in lecture or on the homework, in painful detail, again in recitation section, as I was told to do. Unless you’re completely lost, this would be incredibly boring; and if you’re really that lost, it’s probably just better to go to office hours and get one-on-one help anyway. I wouldn’t have gone to my own early recitation sections if I had been an undergrad; in fact, after going to a few as a new freshman and realizing what they’re traditionally about, I never went to another during my entire undergrad career.

In the next post, I’ll describe some memories of my time at the IQI, which, when I thought about them, caused me to give recitation sections that I would have gone to as an undergrad, and which some of the UCSB undergrads seemed to appreciate as well.


  1. As I hope will become clear in the later posts, the award itself means a lot to me because of the interactions with students that suggest that I was actually successful. 
  2. I highly recommend finding the audio for that lecture; there’s no substitute for hearing Feynman describe it himself. There’s a sense of excitement, and a bit of humor, that is somewhat lost if you just read the text. (There must have been some reason why I listened to that one lecture so many times before I had any idea about what was going on.) I can still hear Feynman describe how electrons (don’t) behave, “They do not behave like waves. They do not behave like particles. They do not behave like clouds, nor like billiard balls, nor like weights on springs, nor like anything that you know anything about.” And man is the way they do behave exciting! 
  3. You can watch his original lectures here. 
  4. You can watch his original lectures here. 
  5. Or, more accurately, transcriptions of things that he said. One of the great things about Feynman is that, if you try hard enough, you can usually find the video or audio to something he “wrote,” as long as it’s not something like a journal article which he actually wrote. 
  6. My favorite example from personal experience is when almost half the class solved an electrostatics midterm problem in a completely ridiculous and wrong way. In trying to understand how so many people could have made the exact same mistake, I realized that what they were trying to do would have been essentially correct if we lived in a world with two spatial dimensions instead of three! What most of them had done, without realizing it, was equivalent to correctly using the two-dimensional Coulomb’s Law. So then I got to tell them how Coulomb’s Law in 2d falls off as r^{-1}, instead of r^{-2}, which opened the door for me to tell them about electrostatics in general dimensions (see the sketch just after these footnotes). Instead of just ridiculing them for blindly plugging stuff into equations which they clearly didn’t understand, everyone got to learn something—even the students who did it correctly. 
  7. To use an overly precise simile that nevertheless qualitatively captures what I feel we were told, it was as if we were, at best, supposed to only really try to help students up to 1\sigma past the mean of the distribution. But I think the result of this was that we only helped students up to 1\sigma before the mean. 
  8. I consider QED to be a prime example of simplifying. Probably any motivated layman could understand the arguments in that book without too much difficulty. I grasped them quickly: I remember drawing all of the little arrows to reproduce Feynman’s arguments several times while bored and sitting in the back of biology in ninth grade. But there’s no way I could have come anywhere close to understanding the arguments in even the great books discussed in footnote 5 of the third post at the time: I hadn’t even learned any calculus yet, though I would within a few more months. (I still consider the books in that footnote to be good examples of simplifying; they’re just simplified for a more sophisticated audience.) It wasn’t until about six years later, when I started learning QFT for the first time, that I fully appreciated how truly brilliant Feynman’s explanation is. He really captures the essence of the physics; it’s only the technical details that are missing. For an example of dumbing down, consider some of the popular science treatments of cutting edge physics. Many (but not all!) of these explanations use oversimplified examples that miss a lot of the physics and make, for example, string theory sound a lot more silly than it really is, without conveying the deep reasons why it’s an important thing to think about. Maybe we can’t all be exactly like the great simplifiers, but by at least trying, we can probably eventually become pretty good. 
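
The sketch promised in footnote 6: by Gauss’s law in d spatial dimensions, the flux of the electric field through a sphere surrounding a point charge q is fixed, while the sphere’s area grows as r^{d-1}, so

E(r) \propto q / r^{d-1},

which gives the familiar r^{-2} force in d = 3 and an r^{-1} force in d = 2.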

Actions do change the world.

I heard it in a college lecture about Haskell.

Haskell is a programming language akin to Latin: Learning either language expands your vocabulary and technical skills. But programmers use Haskell as often as slam poets compose dactylic hexameter.*

My professor could have understudied for the archetypal wise man: He had snowy hair, a beard, and glasses that begged to be called “spectacles.” Pointing at the code he’d projected onto a screen, he was lecturing about input/output, or I/O. The user inputs a request, and the program outputs a response.

That autumn was consuming me. Computer-science and physics courses had filled my plate. Atop the plate, I had thunked the soup tureen known as “XKCD Comes to Dartmouth”: I was coordinating a visit by Randall Munroe, creator of the science webcomic xkcd, to my college. The visit was to include a cake shaped like the Internet, a robotic velociraptor, and playpen balls. The output I’d promised felt off-putting.

My professor knew. We sat in his office for hours each week, dispatching my questions about recursion and monads and complexity. His input shaped the functional-programming skills I apply today, and my input shaped his lecture notes. He promised to attend my event.

Most objects coded in Haskell, my professor reminded us in that lecture, remain static. The Haskell world never changes. Even the objects called “variables” behave like constants. Exceptions, my professor said, crop up in I/O. Users can affect a program through objects called “actions.”
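
In code, the distinction looks something like this (my own toy example, not one from the lecture):

    -- A pure value: bound once, constant forever. This is the static Haskell world.
    greeting :: String
    greeting = "Hello"

    -- An I/O action: a recipe for interacting with the world outside the program.
    -- Running it consumes input and produces output.
    main :: IO ()
    main = do
      name <- getLine                      -- input: the user affects the program
      putStrLn (greeting ++ ", " ++ name)  -- output: the program affects the world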

“Actions,” he said, “do change the world.”

I caught my breath. What a zinger, I thought. What a moral. What a lesson for the ages—and from the mouths of computer scientists.

My professor had no idea. He swanned on about I/O without changing tone.

That December, I gave my professor a plaque that read, “Actions do change the world.” Around his slogan, I’d littered illustrations of fractals, Pascal’s triangle, and other subjects we’d studied. My professor’s actions had changed my world for the better.


You can change the world by writing code that revolutionizes physics or finance. You can change the world by writing code that limps through tasks but improves your understanding. You can change the world by helping someone understand code. You can change the world by understanding someone.

A new year is beginning. Which actions will you take?

 

*Not that I recommend studying Haskell to boost your verbal SAT score.

This Video Of Scientists Splitting An Electron Will Shock You

by Jorge Cham.

Ok, this is where things get weird. If quantum computers, femtometer motions or laser alligators weren’t enough, let’s throw in fractionalized electrons, topological surfaces and strings that go to the end of time.

To be honest, the idea that an electron can’t be split hadn’t even occurred to me before my conversation with Gil and Jason. And yet, this goes back to the very essence of the word Quantum: there’s a minimum size to everything. For electrical charge, that minimum is the electron.

Or so we thought! According to my friend, Wikipedia, the discovery of the Fractional Quantum Hall Effect in the 1980s showed that you can form quasi-particles (or “bubbles” as Gil and Jason explain in the video) that carry 1/3 of an electron charge under certain 2D conditions. The 1998 Nobel Prize was awarded for this discovery, although, ironically, they had to split it in three (two parts for the experimentalists who found it and one for the theorist who explained it).
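
For the formula-inclined: the standard textbook statements behind that paragraph, not something from the video, are that the Hall conductance in these systems is quantized as

\sigma_{xy} = \nu e^2/h,

and that at filling fraction \nu = 1/3 the elementary excitations of the Laughlin state carry fractional charge e^* = e/3.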


Typically, I leave a lot out of the final video. The conversation I recorded with Jason and Gil lasted several hours, and yet the final product is only five minutes long. One aspect that we talked a lot about but that I did not include in the video above (you watched it already, right?) is the idea of “More is Different”. Here is audio of Jason explaining what it is using birds as an example:

source: we-are-star-stuff.tumblr.com. Click below to hear the audio.

This is the idea of “emergent properties”: that when you combine lots of something together, you don’t just get what’s inside, you get something new. Something different. I think this is a good analogy for IQIM itself, or any such grouping of researchers under one banner. Sure, technically, each person can do great research on their own, but mix them together in one soup and more interesting things can happen that you didn’t expect.

The IQIM Family:


Well, I hope you’ve been enjoying these videos and blog entries. I was going to title this blog post, “The Mysteries Are Just Piling Up” or “Quantum Knots”, but then I looked at the pageviews for all the other blog posts I made:

[Image: pageview counts for the previous posts in this series]

I guess the title of your blog post matters. So, if this video didn’t shock you, I hope it at least 1/3 shocked you.

Watch the fourth installment of this series:

Jorge Cham is the creator of Piled Higher and Deeper (www.phdcomics.com).

CREDITS:

Featuring: Gil Refael and Jason Alicea
Recorded and animated by Jorge Cham

Funding provided by the National Science Foundation and the Betty and Gordon Moore Foundation.

Bell’s inequality 50 years later

This is a jubilee year.* In November 1964, John Bell submitted a paper to the obscure (and now defunct) journal Physics. That paper, entitled “On the Einstein Podolsky Rosen Paradox,” changed how we think about quantum physics.

The paper was about quantum entanglement, the characteristic correlations among parts of a quantum system that are profoundly different than correlations in classical systems. Quantum entanglement had first been explicitly discussed in a 1935 paper by Einstein, Podolsky, and Rosen (hence Bell’s title). Later that same year, the essence of entanglement was nicely and succinctly captured by Schrödinger, who said, “the best possible knowledge of a whole does not necessarily include the best possible knowledge of its parts.” Schrödinger meant that even if we have the most complete knowledge Nature will allow about the state of a highly entangled quantum system, we are still powerless to predict what we’ll see if we look at a small part of the full system. Classical systems aren’t like that — if we know everything about the whole system then we know everything about all the parts as well. I think Schrödinger’s statement is still the best way to explain quantum entanglement in a single vigorous sentence.

To Einstein, quantum entanglement was unsettling, indicating that something is missing from our understanding of the quantum world. Bell proposed thinking about quantum entanglement in a different way, not just as something weird and counter-intuitive, but as a resource that might be employed to perform useful tasks. Bell described a game that can be played by two parties, Alice and Bob. It is a cooperative game, meaning that Alice and Bob are both on the same side, trying to help one another win. In the game, Alice and Bob receive inputs from a referee, and they send outputs to the referee, winning if their outputs are correlated in a particular way which depends on the inputs they receive.

But under the rules of the game, Alice and Bob are not allowed to communicate with one another between when they receive their inputs and when they send their outputs, though they are allowed to use correlated classical bits which might have been distributed to them before the game began. For a particular version of Bell’s game, if Alice and Bob play their best possible strategy then they can win the game with a probability of success no higher than 75%, averaged uniformly over the inputs they could receive. This upper bound on the success probability is Bell’s famous inequality.**

Classical and quantum versions of Bell’s game. If Alice and Bob share entangled qubits rather than classical bits, then they can win the game with a higher success probability.

There is also a quantum version of the game, in which the rules are the same except that Alice and Bob are now permitted to use entangled quantum bits (“qubits”) which were distributed before the game began. By exploiting their shared entanglement, they can play a better quantum strategy and win the game with a higher success probability, better than 85%. Thus quantum entanglement is a useful resource, enabling Alice and Bob to play the game better than if they shared only classical correlations instead of quantum correlations.
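
To make the numbers concrete, here is the version of the game described in the second footnote (the CHSH game): the referee sends Alice and Bob independent uniformly random bits x and y, they reply with bits a and b, and they win when

a \oplus b = x \wedge y.

The best classical strategy wins with probability 3/4, while sharing entangled qubits lets them win with probability cos^2(\pi/8) = (2+\sqrt{2})/4 \approx 0.85.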

And experimental physicists have been playing the game for decades, winning with a success probability that violates Bell’s inequality. The experiments indicate that quantum correlations really are fundamentally different than, and stronger than, classical correlations.

Why is that such a big deal? Bell showed that a quantum system is more than just a probabilistic classical system, which eventually led to the realization (now widely believed though still not rigorously proven) that accurately predicting the behavior of highly entangled quantum systems is beyond the capacity of ordinary digital computers. Therefore physicists are now striving to scale up the weirdness of the microscopic world to larger and larger scales, eagerly seeking new phenomena and unprecedented technological capabilities.

1964 was a good year. Higgs and others described the Higgs mechanism, Gell-Mann and Zweig proposed the quark model, Penzias and Wilson discovered the cosmic microwave background, and I saw the Beatles on the Ed Sullivan show. Those developments continue to reverberate 50 years later. We’re still looking for evidence of new particle physics beyond the standard model, we’re still trying to unravel the large scale structure of the universe, and I still like listening to the Beatles.

Bell’s legacy is that quantum entanglement is becoming an increasingly pervasive theme of contemporary physics, important not just as the source of a quantum computer’s awesome power, but also as a crucial feature of exotic quantum phases of matter, and even as a vital element of the quantum structure of spacetime itself. 21st century physics will advance not only by probing the short-distance frontier of particle physics and the long-distance frontier of cosmology, but also by exploring the entanglement frontier, by elucidating and exploiting the properties of increasingly complex quantum states.

Sometimes I wonder how the history of physics might have been different if there had been no John Bell. Without Higgs, Brout and Englert and others would have elucidated the spontaneous breakdown of gauge symmetry in 1964. Without Gell-Mann, Zweig could have formulated the quark model. Without Penzias and Wilson, Dicke and collaborators would have discovered the primordial black-body radiation at around the same time.

But it’s not obvious which contemporary of Bell, if any, would have discovered his inequality in Bell’s absence. Not so many good physicists were thinking about quantum entanglement and hidden variables at the time (though David Bohm may have been one notable exception, and his work deeply influenced Bell). Without Bell, the broader significance of quantum entanglement would have unfolded quite differently and perhaps not until much later. We really owe Bell a great debt.

*I’m stealing the title and opening sentence of this post from Sidney Coleman’s great 1981 lectures on “The magnetic monopole 50 years later.” (I’ve waited a long time for the right opportunity.)

**I’m abusing history somewhat. Bell did not use the language of games, and this particular version of the inequality, which has since been extensively tested in experiments, was derived by Clauser, Horne, Shimony, and Holt in 1969.

I spy with my little eye…something algebraic.

Look at this picture.

[Image: a term from an equation in a quantum thermodynamics paper]

Does any part of it surprise you? Look more closely.

[Image: a closer view of the same term]

Now? Try crossing your eyes.

[Image: the same term with the letters P, E, τ, and R emphasized]

Do you see a boy’s name?

I spell “Peter” with two e’s, but “Piotr” and “Pyotr” appear as authors’ names in papers’ headers. Finding “Petr” in a paper shouldn’t have startled me. But how often does “Gretchen” or “Amadeus” materialize in an equation?

When I was little, my reading list included Eye Spy, Where’s Waldo?, and Puzzle Castle. The books teach children to pay attention, notice details, and evaluate ambiguities.

That’s what physicists do. The first time I saw the picture above, I saw a variation on “Peter.” I was reading (when do I not?) about the intersection of quantum information and thermodynamics. The authors were discussing heat and algebra, not saints or boys who picked pecks of pickled peppers. So I looked more closely.

Each letter resolved into part of a story about a physical system. The P represents a projector. A projector is a mathematical object that narrows one’s focus to a particular space, as blinders on a horse do. The E tells us which space to focus on: a space associated with an amount E of energy, like a country associated with a GDP of $500 billion.

Some of the energy E belongs to a heat reservoir. We know so because “reservoir” begins with r, and R appears in the picture. A heat reservoir is a system, like a colossal bathtub, whose temperature remains constant. The Greek letter \tau, pronounced “tau,” represents the reservoir’s state. The reservoir occupies an equilibrium state: The bath’s large-scale properties—its average energy, volume, etc.—remain constant. Never mind about jacuzzis.

Piecing together the letters, we interpret the picture as follows: Imagine a vast, constant-temperature bathtub (R). Suppose we shut the tap long enough ago that the water in the tub has calmed (\tau). Suppose the tub neighbors a smaller system—say, a glass of Perrier.* Imagine measuring how much energy the bath-and-Perrier composite contains (P). Our measurement device reports the number E.
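
Gathered into one line, the story might read like this (a loose sketch in the spirit of the paper, not its exact notation):

P^E (\rho_S \otimes \tau_R),   with   \tau_R = e^{-H_R/k_B T} / \mathrm{Tr}(e^{-H_R/k_B T}).

Here H_R is the reservoir’s Hamiltonian, the exponential form is the standard equilibrium (Gibbs) state at temperature T, and P^E narrows the bath-and-Perrier composite down to the part of its state with total energy E.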

Quite a story to pack into five letters. Didn’t Peter deserve a second glance?

The equation’s right-hand side forms another story. I haven’t seen Peters on that side, nor Poseidons nor Galahads. But look closely, and you will find a story.

 

The images above appear in “Fundamental limitations for quantum and nanoscale thermodynamics,” published by Michał Horodecki and Jonathan Oppenheim in Nature Communications in 2013.

 

*Experts: The ρS that appears in the first two images represents the smaller system. The tensor product represents the reservoir-and-smaller-system composite.

The Science that made Stephen Hawking famous

In anticipation of The Theory of Everything which comes out today, and in the spirit of continuing with Quantum Frontiers’ current movie theme, I wanted to provide an overview of Stephen Hawking’s pathbreaking research. Or at least to the best of my ability—not every blogger on this site has won bets against Hawking! In particular, I want to describe Hawking’s work during the late ‘60s and through the ’70s. His work during the ’60s is the backdrop for this movie and his work during the ’70s revolutionized our understanding of black holes.


(Portrait of Stephen Hawking outside the Department of Applied Mathematics and Theoretical Physics, Cambridge. Credit: Jason Bye)

As additional context, this movie is coming out at a fascinating time, a time when Hawking’s contributions appear more prescient and important than ever before. I’m alluding to the firewall paradox, which is the modern reincarnation of the information paradox (which will be discussed below), and which this blog has discussed multiple times. Progress through paradox is an important motto in physics, and Hawking has been at the center of arguably the most challenging paradox of the past half century. I should also mention that, despite irresponsible journalism in response to Hawking’s “there are no black holes” comment back in January, there is extremely solid evidence that black holes do in fact exist. Hawking was referring to a technical distinction concerning the horizon/boundary of black holes.

Now let’s jump back and imagine that we are all young graduate students at Cambridge in the early ‘60s. Our protagonist, a young Hawking, had recently been diagnosed with ALS, he had recently met Jane Wilde and he was looking for a thesis topic. This was an exciting time for Einstein’s Theory of General Relativity (GR). The gravitational redshift had recently been confirmed by Pound and Rebka at Harvard, which put the theory on extremely solid footing. This was the third of three “classical tests of GR.” So now that everyone was truly convinced that GR is correct, it became important to get serious about investigating its most bizarre predictions. Hawking and Penrose picked up on this theme most notably. The mathematics of GR allows for singularities which lead to things like the big bang and black holes. This mathematical possibility had been known since the works of Friedmann, Lemaitre and Oppenheimer+Snyder starting all the way back in the 1920s, but these calculations involved unphysical assumptions—usually involving unrealistic symmetries. Hawking and Penrose each asked (and answered) the questions: how robust and generic are these mathematical singularities? Will they persist even if we get rid of assumptions like perfect spherical symmetry of matter? What is their interpretation in physics?

I know that I have now used the word “singularity” multiple times without defining it. However, this is for good reason—it’s very hard to assign a precise definition to the term! Some examples of singularities include regions of “infinite curvature” or with “conical deficits.”

Singularity theorems applied to cosmology: Hawking’s first major result, starting with his thesis in 1965, was proving that singularities on the cosmological scale—such as the big bang—were indeed generic phenomena and not just mathematical artifacts. This work was published immediately after, and built upon, a seminal paper by Penrose. I apologize for copping out again, but it’s outside the scope of this post to say more about the big bang; as a rough heuristic, imagine that if you run time backwards then you obtain regions of infinite density. Hawking and Penrose spent the next five or so years stripping away as many assumptions as they could until they were left with rather general singularity theorems. Essentially, they used MATH to say something exceptionally profound about THE BEGINNING OF THE UNIVERSE! Namely, that if you start with any solution to Einstein’s equations which is consistent with our observed universe, and run the solution backwards, then you will obtain singularities (regions of infinite density at the Big Bang in this case)! However, I should mention that despite being a revolutionary leap in our understanding of cosmology, this isn’t the end of the story; Hawking also pioneered an attempt to understand what happens when you add quantum effects to the mix. This is still a very active area of research.

Singularity theorems applied to black holes: the first convincing evidence for the existence of astrophysical black holes didn’t come until 1972 with the discovery of Cygnus X-1, and even this discovery was fraught with controversy. So imagine yourself as Hawking back in the late ’60s. He and Penrose had this powerful machinery which they had successfully applied to better understand THE BEGINNING OF THE UNIVERSE, but there was still a question about whether or not black holes actually existed in nature (not just in mathematical fantasy land). In the very late ‘60s and early ’70s, Hawking, Penrose, Carter and others convincingly argued that black holes should exist. Again, they used math to say something about how the most bizarre corners of the universe should behave–and then black holes were discovered observationally a few years later. Math for the win!

No hair theorem: after convincing himself that black holes exist, Hawking continued his theoretical studies of their strange properties. In the early ’70s, Hawking, Carter, Israel and Robinson proved a very deep and surprising conjecture of John Wheeler–that black holes have no hair! This name isn’t the most descriptive, but it’s certainly provocative. More specifically, they showed that only a short time after forming, a black hole is completely described by only a few pieces of data: knowledge of its position, mass, charge, angular momentum and linear momentum (X, M, Q, J and L). It only takes a few dozen numbers to describe an exceptionally complicated object. Contrast this to, for example, 1000 dust particles, where you would need tens of thousands of data points (the position and momentum of each particle, their charge, their mass, etc.) This is crazy: the number of degrees of freedom seems to decrease as objects form into black holes?

Black hole thermodynamics: around the same time, Bardeen, Carter and Hawking proved a result similar to the second law of thermodynamics (it’s debatable how realistic their assumptions are). Recall that this is the law stating that the entropy of a closed system can only increase. Hawking showed that, if only GR is taken into account, the area of a black hole’s horizon can only increase. In particular, if two black holes with horizon areas A_1 and A_2 merge, the new area A_* will be bigger than the sum of the original areas, A_1 + A_2.
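
To get a feel for what the area theorem buys you, here is a minimal worked example (my addition, assuming non-rotating, uncharged Schwarzschild black holes, whose horizon area is fixed by the mass alone):

\[
A = 16\pi\left(\frac{GM}{c^{2}}\right)^{2} \;\propto\; M^{2}.
\]

For a merger, the area theorem A_* ≥ A_1 + A_2 becomes M_*² ≥ M_1² + M_2², while energy conservation gives M_* ≤ M_1 + M_2. Together these bound the energy that can be radiated away:

\[
E_{\text{rad}} = (M_1 + M_2 - M_*)\,c^{2} \;\le\; \Big(M_1 + M_2 - \sqrt{M_1^{2} + M_2^{2}}\,\Big)c^{2},
\]

which for equal masses caps the radiated energy at a fraction 1 − 1/√2 ≈ 29% of the total mass-energy.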

Combining this with the no hair theorem led to a fascinating exploration of a connection between thermodynamics and black holes. Recall that thermodynamics was mainly worked out in the 1800s and it is very much a “classical theory”–one that didn’t involve either quantum mechanics or general relativity. The study of thermodynamics resulted in the thrilling realization that it could be summarized by four laws. Hawking and friends took the black hole connection seriously and conjectured that there would also be four laws of black hole mechanics.

In my opinion, the most interesting results came from trying to understand the entropy of black holes. The entropy is roughly the logarithm of the number of microscopic states consistent with the observed ‘large scale quantities’. Take the ocean, for example: its entropy is humungous. There is an unbelievable number of small changes that could be made (imagine the number of ways of swapping the location of a water molecule and a grain of sand) which would be consistent with its large scale properties, like its temperature. Because of the no hair theorem, however, it appears that the entropy of a black hole is very small? What happens when some matter with a large amount of entropy falls into a black hole? Does this lead to a violation of the second law of thermodynamics? No! It leads to a generalization! Bekenstein, Hawking and others showed that there are two contributions to the entropy of the universe: the standard 1800s version of entropy associated with matter configurations, but also contributions proportional to the area of black hole horizons. When you add all of these up, a new “generalized second law of thermodynamics” emerges. Continuing to take this thermodynamic argument seriously (dE = TdS specifically), it appeared that black holes have a temperature!
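
To see where the temperature comes from, here is a quick back-of-the-envelope version of the argument (my sketch, using the standard Bekenstein–Hawking entropy rather than anything derived in this post). For a Schwarzschild black hole,

\[
S_{\text{BH}} = \frac{k_B c^{3} A}{4 G \hbar}, \qquad A = 16\pi\left(\frac{GM}{c^{2}}\right)^{2}, \qquad E = Mc^{2},
\]

and applying dE = T dS gives

\[
T = \frac{dE/dM}{dS/dM} = \frac{\hbar c^{3}}{8\pi G M k_B},
\]

which is the Hawking temperature: absurdly tiny for astrophysical black holes and, counterintuitively, larger for smaller ones.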

As a quick aside, a deep and interesting question is: which degrees of freedom contribute to this black hole entropy? In the mid-’90s, Strominger and Vafa made exceptional progress towards answering this question when they showed that, in certain settings, counting microstates in string theory exactly reproduces the correct black hole entropy.

Black holes evaporate (Hawking radiation): again, continuing to take this thermodynamic connection seriously, if black holes have a temperature then they should radiate away energy. But what is the mechanism behind this? This is when Hawking fearlessly embarked on one of the most heroic calculations of the 20th century, slogging through extremely technical computations in quantum field theory on a curved spacetime, and showed that after superimposing quantum effects on top of general relativity, there is a mechanism for particles to escape from a black hole.

This is obviously a hard thing to describe, but for a hack-job analogy, imagine you have a hot plate in a cool room. Somehow the plate “radiates” away its energy until it has the same temperature as the room. How does it do this? A plate is hot because its molecules are jiggling around rapidly. At the boundary of the plate, a slow-moving air molecule (lower temperature) sometimes gets whacked by a molecule in the plate and leaves with more momentum than it arrived with, while the corresponding molecule in the plate loses energy. After this happens an enormous number of times, the temperatures equilibrate. In the context of black holes, these boundary interactions would never happen without quantum mechanics. General relativity predicts that anything inside the event horizon is causally disconnected from anything on the outside, and that’s that. However, if you take quantum effects into account, then for some very technical reasons, energy can be exchanged at the horizon (the interface between the “inside” and “outside” of the black hole).

Black hole information paradox: but wait, there’s more! These calculations weren’t done using a completely accurate theory of nature (we use the phrase “quantum gravity” as a placeholder for whatever that theory will one day be). They were done using some nightmarish amalgamation of GR and quantum mechanics. Seminal thought experiments by Hawking led to different predictions depending on which theory one trusted more: GR or quantum mechanics. Most famously, the information paradox considers what would happen if an “encyclopedia” were thrown into a black hole. GR predicts that after the black hole has fully evaporated, leaving only empty space behind, the “information” contained within the encyclopedia will have been destroyed. (Readers who know quantum mechanics should replace “encyclopedia” with “pure state”.) This prediction unacceptably violates the assumptions of quantum mechanics, which predicts that the information contained within the encyclopedia is never destroyed. (Imagine enclosing the black hole with perfect sensing technology and measuring every photon that comes out. In principle, according to quantum mechanics, you should be able to reconstruct what was initially thrown in.)
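
For readers who know quantum mechanics, here is a one-line gloss (mine, not Hawking’s) on why pure-to-mixed evolution is forbidden: for any unitary U,

\[
\mathrm{tr}\!\left[(U\rho U^{\dagger})^{2}\right] = \mathrm{tr}\,\rho^{2},
\]

so unitary evolution preserves purity, and a pure state (tr ρ² = 1) can never evolve into the mixed, thermal-looking radiation (tr ρ² < 1) that Hawking’s calculation seems to leave behind.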

Making all of this more rigorous: Hawking spent much of the rest of the ’70s doing exactly that, sharpening these results and stripping away assumptions. One particularly otherworldly and powerful tool involved redoing many of these black hole calculations using the Euclidean path integral formalism.

I’m certain that I missed some key contributions and collaborators in this short history, and I sincerely apologize for that. However, I hope that after reading this you have a deepened appreciation for how productive Hawking was during this period. He was one of humanity’s earliest pioneers into the uncharted territory that we call quantum gravity. And he has inspired at least a few generations’ worth of theoretical physicists, myself obviously included.

In addition to many of Hawking’s original papers, an extremely fun source for this post was a book published after his 60th-birthday conference.

When I met with Steven Spielberg to talk about Interstellar

Today I had the awesome and eagerly anticipated privilege of attending a screening of the new film Interstellar, directed by Christopher Nolan. One can’t help but be impressed by Nolan’s fertile visual imagination. But you should know that Caltech’s own Kip Thorne also had a vital role in this project. Indeed, were there no Kip Thorne, Interstellar would never have happened.

On June 2, 2006, I participated in an unusual one-day meeting at Caltech, organized by Kip and the movie producer Lynda Obst (Sleepless in Seattle, Contact, The Invention of Lying, …). Lynda and Kip, who have been close since being introduced by their mutual friend Carl Sagan decades ago, had conceived a movie project together, and had collaborated on a “treatment” outlining the story idea. The treatment adhered to a core principle that was very important to Kip — that the movie be scientifically accurate. Though the story indulged in some wild speculations, at Kip’s insistence it skirted away from any flagrant violation of the firmly established laws of Nature. This principle of scientifically constrained speculation intrigued Steven Spielberg, who was interested in directing.

The purpose of the meeting was to brainstorm about the story and the science behind it with Spielberg, Obst, and Thorne. A remarkable group assembled, including physicists (Andrei Linde, Lisa Randall, Savas Dimopoulos, Mark Wise, as well as Kip), astrobiologists (Frank Drake, David Grinspoon), planetary scientists (Alan Boss, John Spencer, Dave Stevenson), and psychologists (Jay Buckey, James Carter, David Musson). As we all chatted and got acquainted, I couldn’t help but feel that we were taking part in the opening scene of a movie about making a movie. Spielberg came late and left early, but spent about three hours with us; he even brought along his Dad (an engineer).

Though the official release of Interstellar is still a few days away, you may already know from numerous media reports (including the cover story in this week’s Time Magazine) the essential elements of the story, which involves traveling through a wormhole seeking a new planet for humankind, a replacement for the hopelessly ravaged earth. The narrative evolved substantially as the project progressed, but traveling through a wormhole to visit a distant planet was already central to the original story.

Inevitably, some elements of the Obst/Thorne treatment did not survive in the final film. For one, Stephen Hawking was a prominent character in the original story; he joined the mission because of his unparalleled expertise at wormhole traversal, and Stephen’s ALS symptoms eased during prolonged weightlessness, only to recur upon return to earth gravity. Also, gravitational waves played a big part in the treatment; in particular, the opening scene depicted LIGO scientists discovering the wormhole by detecting the gravitational waves emanating from it.

There was plenty to discuss to fill our one-day workshop, including: the rocket technology needed for the trip, the strong but stretchy materials that would allow the ship to pass through the wormhole without being torn apart by tidal gravity, how to select a crew psychologically fit for such a dangerous mission, what exotic life forms might be found on other worlds, how to communicate with an advanced civilization which resides in a higher dimensional bulk rather than the three-dimensional brane to which we’re confined, how to build a wormhole that stays open rather than pinching off and crushing those who attempt to pass through, and whether a wormhole could enable travel backward in time.

Spielberg was quite engaged in our discussions. Upon his arrival I immediately shot off a text to my daughter Carina: “Steven Spielberg is wearing a Brown University cap!” (Carina was a Brown student at the time, as Spielberg’s daughter had been.) Steven assured us of his keen interest in the project, noting wryly that “Aliens have been very good to me,” and he mentioned some of his favorite space movies, which included some I had also enjoyed as a kid, like Forbidden Planet and (the original) The Day the Earth Stood Still. In one notable moment, Spielberg asked the group “Who believes that intelligent life exists elsewhere in the universe?” We all raised our hands. “And who believes that the earth has been visited by extraterrestrial civilizations?” No one raised a hand. Steven seemed struck by our unanimity, on both questions.

I remember tentatively suggesting that the extraterrestrials had mastered M-theory, thus attaining computational power far beyond the comprehension of earthlings, and that they themselves were really advanced robots, constructed by an earlier generation of computers. Like many of the fun story ideas floated that day, this one had no apparent impact on the final version of the film.

Spielberg later brought in Jonah Nolan to write the screenplay. When Spielberg had to abandon the project because his DreamWorks production company broke up with Paramount Pictures (which owned the story), Jonah’s brother Chris Nolan eventually took over the project. Jonah and Chris Nolan transformed the story, but continued to consult extensively with Kip, who became an Executive Producer and says he is pleased with the final result.

Of the many recent articles about Interstellar, one of the most interesting is this one in Wired by Adam Rogers, which describes how Kip worked closely with the visual effects team at Double Negative to ensure that wormholes and rapidly rotating black holes are accurately depicted in the film (though liberties were taken to avoid confusing the audience). The images produced by sophisticated ray tracing computations were so surprising that at first Kip thought there must be a bug in the software, though eventually he accepted that the calculations are correct, and he is still working hard to more fully understand the results.

I can’t give away the ending of the movie, but I can safely say this: When it’s over you’re going to have a lot of questions. Fortunately for all of us, Kip’s book The Science of Interstellar will be available the same day the movie goes into wide release (November 7), so we’ll all know where to seek enlightenment.

In fact on that very same day we’ll be treated to the release of The Theory of Everything, a biopic about Stephen and Jane Hawking. So November 7 is going to be an unforgettable Black Hole Day. Enjoy!

Generally speaking

My high-school calculus teacher had a mustache like a walrus’s and shoulders like a rower’s. At 8:05 AM, he would demand my class’s questions about our homework. Students would yawn, and someone’s hand would drift into the air.

“I have a general question,” the hand’s owner would begin.

“Only private questions from you,” my teacher would snap. “You’ll be a general someday, but you’re not a colonel, or even a captain, yet.”

Then his eyes would twinkle; his voice would soften; and, after the student asked the question, his answer would epitomize why I’ve chosen a life in which I use calculus more often than laundry detergent.

Though I witnessed the “general” trap many times, I fell into it once. Little wonder: I relish generalization as other people relish hiking or painting or Michelin-worthy relish. When inferring general principles from examples, I abstract away details as though they’re tomato stains. My veneration of generalization led me to quantum information (QI) theory. One abstract theory can model many physical systems: electrons, superconductors, ion traps, etc.

Little wonder that generalizing a QI model swallowed my summer.

QI has shed light on statistical mechanics and thermodynamics, which describe energy, information, and efficiency. Models called resource theories describe small systems’ energies, information, and efficiencies. Resource theories help us calculate a quantum system’s value—what you can and can’t create from a quantum system—if you can manipulate systems in only certain ways.

Suppose you can perform only operations that preserve energy. According to the Second Law of Thermodynamics, systems evolve toward equilibrium. Equilibrium amounts roughly to stasis: Averages of properties like energy remain constant.

Out-of-equilibrium systems have value because you can suck energy from them to power laundry machines. How much energy can you draw, on average, from a system in a constant-temperature environment? Technically: How much “work” can you draw? We denote this average work by < W >. According to thermodynamics, < W > equals the change ∆F in the system’s Helmholtz free energy. The Helmholtz free energy is a thermodynamic property similar to the energy stored in a coiled spring.
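
If it helps to make ∆F concrete, here is a minimal numerical sketch (my own toy illustration, not anything from the resource-theory papers), assuming a two-level system in contact with a bath at temperature T. The free energy comes from the textbook formula F = −k_B T ln Z, and the average extractable work is bounded by the drop in F:

    import numpy as np

    def helmholtz_free_energy(energies, temperature, k_B=1.0):
        """Return F = -k_B * T * ln(Z) for the given energy levels."""
        Z = np.sum(np.exp(-np.asarray(energies) / (k_B * temperature)))
        return -k_B * temperature * np.log(Z)

    # Hypothetical toy model: a two-level system whose energy gap is lowered
    # from 2.0 to 0.5 (units with k_B = 1) while coupled to a bath at T = 1.
    T = 1.0
    F_initial = helmholtz_free_energy([0.0, 2.0], T)
    F_final = helmholtz_free_energy([0.0, 0.5], T)

    # Average extractable work is at most the free-energy drop, < W > <= F_i - F_f.
    print(f"average extractable work bound: {F_initial - F_final:.4f}")

Lowering the gap increases Z and decreases F, so on average you can draw positive work as the system re-equilibrates.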

One reason to study thermodynamics?

Suppose you want to calculate more than the average extractable work. How much work will you probably extract during some particular trial? Though statistical physics offers no answer, resource theories do. One answer derived from resource theories resembles ∆F mathematically but involves one-shot information theory, which I’ve discussed elsewhere.

If you average this one-shot extractable work, you recover < W > = ∆F. “Helmholtz” resource theories recapitulate statistical-physics results while offering new insights about single trials.
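
One standard way to see why the average recovers ∆F (a textbook identity, not the papers’ derivation): for a state ρ with Hamiltonian H, thermal state γ = e^{−H/k_B T}/Z, von Neumann entropy S(ρ) = −tr(ρ ln ρ), and relative entropy D(ρ‖γ) = tr[ρ(ln ρ − ln γ)],

\[
k_B T\, D(\rho\,\|\,\gamma) \;=\; F(\rho) - F(\gamma), \qquad F(\rho) = \mathrm{tr}(\rho H) - k_B T\, S(\rho),
\]

so the optimal average work extractable from ρ is exactly its free-energy gap to equilibrium. The one-shot results replace the relative entropy D with Rényi relative entropies, which is where the one-shot information theory mentioned above enters.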

Helmholtz resource theories sit atop a silver-tasseled pillow in my heart. Why not, I thought, spread the joy to the rest of statistical physics? Why not generalize thermodynamic resource theories?

The average extractable work < W > equals ∆F if heat can leak into your system. If heat and particles can leak, < W > equals the change in your system’s grand potential. The grand potential, like the Helmholtz free energy, is a free energy that resembles the energy in a coiled spring. The grand potential characterizes Bose-Einstein condensates, low-energy quantum systems that may have applications to metrology and quantum computation. If your system responds to a magnetic field, or has mass and occupies a gravitational field, or has other properties, < W > equals the change in another free energy.
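
For concreteness, the grand potential has the standard grand-canonical form (a textbook definition, included here for reference rather than taken from the papers):

\[
\Phi = E - TS - \mu N = -k_B T \ln \mathcal{Z}, \qquad \mathcal{Z} = \mathrm{tr}\, e^{-(H - \mu N)/k_B T},
\]

where μ is the chemical potential and N the particle number. When both heat and particles can leak, the average extractable work is bounded by the drop in Φ rather than in F.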

A collaborator and I designed resource theories that describe heat-and-particle exchanges. In our paper “Beyond heat baths: Generalized resource theories for small-scale thermodynamics,” we propose that different thermodynamic resource theories correspond to different interactions, environments, and free energies. I detailed the proposal in “Beyond heat baths II: Framework for generalized thermodynamic resource theories.”

“II” generalizes enough to satisfy my craving for patterns and universals. “II” generalizes enough to merit a hand-slap of a pun from my calculus teacher. We can test abstract theories only by applying them to specific systems. If thermodynamic resource theories describe situations as diverse as heat-and-particle exchanges, magnetic fields, and polymers, some specific system should shed light on resource theories’ accuracy.

If you find such a system, let me know. Much as generalization pleases aesthetically, the detergent is in the details.