The mechanics of thanksgiving

You and a friend are driving in a car. You’ve almost reached an intersection. The stoplight turns red. 

My teacher had handwritten the narrative on my twelfth-grade physics midterm. Many mechanics problems involve cars: Drivers smash into each other at this angle or that angle, interlocking their vehicles. The Principle of Conservation of Linear Momentum governs how the wreck moves. Students deduce how quickly the wreck skids, and in which direction.

Few mechanics problems involve the second person. I have almost reached an intersection?

You’re late for an event, and stopping would cost you several minutes. What do you do?

We’re probably a few meters from the light, I thought. How quickly are we driving? I could calculate the acceleration needed to—

(a) Speed through the red light.

(b) Hit the brakes. Fume about missing the light while you wait.

(c) Stop in front of the intersection. Chat with your friend, making the most of the situation. Resolve to leave your house earlier next time.

Pencils scritched, and students shifted in their chairs. I looked up from the choices.


Our classroom differed from most high-school physics classrooms. Sure, posters about Einstein and Nobel prizes decorated the walls. Circuit elements congregated in a corner. But they didn’t draw the eye the moment one stepped through the doorway.

A giant yellow smiley face did.

It sat atop the cupboards that faced the door. Next to the smiley stood a placard that read, “Say please and thank you.” Another placard hung above the chalkboard: “Are you showing your good grace and character?”

Our instructor taught mechanics and electromagnetism. He wanted to teach us more. He pronounced the topic in a southern sing-song: “an attitude of gratitude.”

Teenagers populate high-school classrooms. The cynicism in a roomful of teenagers could have rivaled the cynicism in Hemingway’s Paris. Students regarded his digressions as oddities. My high school fostered more manners than most. But a “Can you believe…?” tone accompanied recountings of the detours.

Yet our teacher’s drawl held steady as he read students’ answers to a bonus question on a test (“What are you grateful for?”). He bade us gaze at a box of Wheaties—the breakfast of champions—on whose front hung a mirror. He awarded Symbolic Lollipops for the top grades on tests and for acts of kindness. All with a straight face.

Except, once or twice over the years, I thought I saw his mouth tweak into a smile.

I’ve puzzled out momentum problems since graduating from that physics class. I haven’t puzzled out how to regard the class. As mawkish or moral? Heroic or humorous? I might never answer those questions. But the class led me toward a career in physics, and physicists value data. One datum stands out: I didn’t pack my senior-year high-school physics midterm when moving to Pasadena. But the midterm remains with me.


Happy Halloween from…the discrete Wigner function?

Do you hope to feel a breath of cold air on the back of your neck this Halloween? I’ve felt one literally: I earned my master’s in the icebox called “Ontario,” at the Perimeter Institute for Theoretical Physics. Perimeter’s colloquia1 take place in an auditorium blacker than a Quentin Tarantino film. Aephraim Steinberg presented a colloquium one air-conditioned May.

Steinberg experiments on ultracold atoms and quantum optics2 at the University of Toronto. He introduced an idea that reminds me of biting into an apple whose coating you’d thought consisted of caramel, then tasting blood: a negative (quasi)probability.

Probabilities usually range from zero upward. Consider Shirley Jackson’s short story The Lottery. Villagers in a 20th-century American town prepare slips of paper. The number of slips equals the number of families in the town. One slip bears a black spot. Each family receives a slip. Each family has a probability p > 0 of receiving the marked slip. What happens to the family that receives the black spot? Read Jackson’s story—if you can stomach more than a Tarantino film.

Jackson peeled off skin to reveal the offal of human nature. Steinberg’s experiments reveal the offal of Nature. I’d expect humaneness of Jackson’s villagers and nonnegativity of probabilities. But what looks like a probability and smells like a probability might be hiding its odor with Special-Edition Autumn-Harvest Febreze.


A quantum state resembles a set of classical3 probabilities. Consider a classical system that has too many components for us to track them all. Consider, for example, the cold breath on the back of your neck. The breath consists of air molecules at some temperature T. Suppose we measured the molecules’ positions and momenta. We’d have some probability p_1 of finding this particle here with this momentum, that particle there with that momentum, and so on. We’d have a probability p_2 of finding this particle there with that momentum, that particle here with this momentum, and so on. These probabilities form the air’s state.

We can tell a similar story about a quantum system. Consider the quantum light prepared in a Toronto lab. The light has properties analogous to position and momentum. We can represent the light’s state with a mathematical object similar to the air’s probability density.4 But this probability-like object can sink below zero. We call the object a quasiprobability, denoted by \mu.
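A minimal sketch of such a quasiprobability, using the standard closed-form Wigner function of the single-photon Fock state (a textbook formula, not one quoted in this post): the function dips below zero at the phase-space origin, yet it integrates to one, just as probabilities should.

```python
import math

def wigner_fock1(x, p):
    # Wigner function of the single-photon Fock state |1>:
    # W(x, p) = (1/pi) * (2(x^2 + p^2) - 1) * exp(-(x^2 + p^2))
    r2 = x * x + p * p
    return (2 * r2 - 1) * math.exp(-r2) / math.pi

# Negative at the phase-space origin: W(0, 0) = -1/pi
print(wigner_fock1(0.0, 0.0))

# Yet the values integrate to 1 over phase space, like probabilities do
h = 0.02
total = sum(wigner_fock1(-6 + i * h, -6 + j * h)
            for i in range(600) for j in range(600)) * h * h
print(total)
```

The grid bounds and step size are arbitrary choices for the numerical check; the negativity at the origin is exact.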

If a \mu sinks below zero, the quantum state it represents encodes entanglement. Entanglement is a correlation stronger than any achievable with nonquantum systems. Quantum information scientists use entanglement to teleport information, encrypt messages, and probe the nature of space-time. I usually avoid this cliché, but since Halloween is approaching: Einstein called entanglement “spooky action at a distance.”


Eugene Wigner and others defined quasiprobabilities shortly before Shirley Jackson wrote The Lottery. Quantum opticians use these \mu’s, because quantum optics and quasiprobabilities involve continuous variables. Examples of continuous variables include position: An air molecule can sit at this point (e.g., x = 0) or at that point (e.g., x = 1) or anywhere between the two (e.g., x = 0.001). The possible positions form a continuous set. Continuous variables model quantum optics as they model air molecules’ positions.

Information scientists use continuous variables less than we use discrete variables. A discrete variable assumes one of just a few possible values, such as 0 or 1, or trick or treat.


How a quantum-information theorist views Halloween.

Quantum-information scientists study discrete systems, such as electron spins. Can we represent discrete quantum systems with quasiprobabilities \mu as we represent continuous quantum systems? You bet your barmbrack.

Bill Wootters and others have designed quasiprobabilities for discrete systems. Wootters stipulated that his \mu have certain properties. The properties appear in this review.  Most physicists label properties “1,” “2,” etc. or “Prop. 1,” “Prop. 2,” etc. The Wootters properties in this review have labels suited to Halloween.
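As an illustration, here is a discrete Wigner function for a single qubit, built from phase-point operators in one common convention (equivalent to Wootters’s construction up to conventions; the particular Bloch vector below is my choice, picked so that negativity appears).

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def phase_point(q, p):
    # Phase-point operator A(q, p) for one qubit (one common convention)
    return 0.5 * (I2 + (-1)**q * Z + (-1)**p * X + (-1)**(q + p) * Y)

def discrete_wigner(rho):
    # W(q, p) = (1/2) Tr[rho A(q, p)]; the four entries sum to 1
    return np.array([[np.trace(rho @ phase_point(q, p)).real / 2
                      for p in (0, 1)] for q in (0, 1)])

# A qubit state with Bloch vector -(1, 1, 1)/sqrt(3)
n = -1 / np.sqrt(3)
rho = 0.5 * (I2 + n * (X + Y + Z))
W = discrete_wigner(rho)
print(W)  # W[0, 0] = (1 - sqrt(3))/4 < 0, yet the entries sum to 1
```

The four W(q, p) values behave like probabilities in every respect except one: a single entry sinks below zero.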


Seeing (quasi)probabilities sink below zero feels like biting into an apple that you think has a caramel coating, then tasting blood. Did you eat caramel apples around age six? Caramel apples dislodge baby teeth. When baby teeth fall out, so does blood. Tasting blood can mark growth—as does the squeamishness induced by a colloquium that spooks a student. Who needs haunted mansions when you have negative quasiprobabilities?


For nonexperts:

1Weekly research presentations attended by a department.


2The study of light at the quantum scale.

3Nonquantum (basically).

4Think “set of probabilities.”

Tripping over my own inner product

A scrape stood out on the back of my left hand. The scrape had turned greenish-purple, I noticed while opening the lecture-hall door. I’d jounced the hand against my dining-room table while standing up after breakfast. The table’s corners form ninety-degree angles. The backs of hands do not.

Earlier, when presenting a seminar, I’d forgotten to reference papers by colleagues. Earlier, I’d offended an old friend without knowing how. Some people put their feet in their mouths. I felt liable to swallow a clog.

The lecture was for Ph 219: Quantum Computation. I was TAing (working as a teaching assistant for) the course. John Preskill was discussing quantum error correction.

Computers suffer from errors as humans do: Imagine setting a hard drive on a table. Coffee might spill on the table (as it probably would have if I’d been holding a mug near the table that week). If the table is in my California dining room, an earthquake might judder the table. Juddering bangs the hard drive against the wood, breaking molecular bonds and deforming the hardware. The information stored in computers degrades.

How can we protect information? By encoding it—by translating the message into a longer, encrypted message. An earthquake might judder the encoded message. We can reverse some of the damage by error-correcting.
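The simplest illustration of encode-then-correct is the classical three-bit repetition code (my example, far simpler than the quantum codes discussed in the lecture): copy the bit three times, and take a majority vote after the damage.

```python
import random

def encode(bit):
    # Encode one logical bit as three identical physical bits
    return [bit, bit, bit]

def correct(codeword):
    # Majority vote: recovers the logical bit if at most one bit flipped
    return 1 if sum(codeword) >= 2 else 0

random.seed(0)
for logical in (0, 1):
    word = encode(logical)
    word[random.randrange(3)] ^= 1   # one "earthquake" error
    assert correct(word) == logical  # single errors are corrected
```

Flip two of the three bits, though, and the majority vote fails; real codes trade longer encodings for tolerance of more errors.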

Different types of math describe different codes. John introduced a type of math called symplectic vector spaces. “Symplectic vector space” sounds to me like a garden of spiny cacti (on which I’d probably have pricked fingers that week). Symplectic vector spaces help us translate between the original and encoded messages.


Symplectic vector space?

Say that an earthquake has juddered our hard drive. We want to assess how the earthquake corrupted the encoded message and to error-correct. Our encryption scheme dictates which operations we should perform. Each possible operation, we represent with a mathematical object called a vector. A vector can take the form of a list of numbers.

We construct the code’s vectors like so. Say that our quantum hard drive consists of seven phosphorus nuclei atop a strip of silicon. Each nucleus has two observables, or measurable properties. Let’s call the observables Z and X.

Suppose that we should measure the first nucleus’s Z. The first number in our symplectic vector is 1. If we shouldn’t measure the first nucleus’s Z, the first number is 0. If we should measure the second nucleus’s Z, the second number is 1; if not, 0; and so on for the other nuclei. We’ve assembled the first seven numbers in our vector. The final seven numbers dictate which nuclei’s Xs we measure. An example vector looks like this: ( 1, \, 0, \, 1, \, 0, \, 1, \, 0, \, 1 \; | \; 0, \, 0, \, 0, \, 0, \, 0, \, 0, \, 0 ).

The vector dictates that we measure four Zs and no Xs.
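A small sketch of reading off those instructions (the variable names are mine): split the fourteen entries into the Z half and the X half, and list which nuclei each half tells us to measure.

```python
# The example vector, split into its Z half and X half
v = (1, 0, 1, 0, 1, 0, 1,  0, 0, 0, 0, 0, 0, 0)
z_half, x_half = v[:7], v[7:]

# Which nuclei's observables the vector tells us to measure
z_targets = [i + 1 for i, bit in enumerate(z_half) if bit]
x_targets = [i + 1 for i, bit in enumerate(x_half) if bit]
print(z_targets, x_targets)  # four Zs (nuclei 1, 3, 5, 7) and no Xs
```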


Symplectic vectors represent the operations we should perform to correct errors.

A vector space is a collection of vectors. Many problems—not only codes—involve vector spaces. Have you used Google Maps? Google illustrates the step that you should take next with an arrow. We can represent that arrow with a vector. A vector, recall, can take the form of a list of numbers. The step’s list of two numbers indicates whether you should walk ( \text{Northward or not} \; | \; \text{Westward or not} ).1


I’d forgotten about my scrape by this point in the lecture. John’s next point wiped even cacti from my mind.

Say you want to know how similar two vectors are. You usually calculate an inner product. A vector v tends to have a large inner product with any vector w that points parallel to v.


Parallel vectors tend to have a large inner product.

The vector v tends to have an inner product of zero with any vector w that points perpendicularly. Such v and w are said to annihilate each other. By the end of a three-hour marathon of a research conversation, we might say that v and w “destroy” each other. v is orthogonal to w.


Two orthogonal vectors, having an inner product of zero, annihilate each other.

You might expect a vector v to have a huge inner product with itself, since v points parallel to v. Quantum-code vectors defy expectations. In a symplectic vector space, John said, “you can be orthogonal to yourself.”

A symplectic vector2 can annihilate itself, destroy itself, stand in its own way. A vector can oppose itself, contradict itself, trip over its own feet. I felt like I was tripping over my feet that week. But I’m human. A vector is a mathematical ideal. If a mathematical ideal could be orthogonal to itself, I could allow myself space to err.
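Self-orthogonality is easy to check by hand. With vectors written as (z | x), the symplectic inner product pairs each vector’s Z half with the other’s X half; pairing a vector with itself gives z·x + x·z = 2 z·x, which is always zero mod 2. A minimal sketch (the test vectors are mine):

```python
def symplectic_inner(u, v):
    # Symplectic inner product over Z_2 for vectors written as (z | x):
    # <u, v> = u_z . v_x + u_x . v_z  (mod 2)
    n = len(u) // 2
    uz, ux, vz, vx = u[:n], u[n:], v[:n], v[n:]
    dot = lambda a, b: sum(i * j for i, j in zip(a, b))
    return (dot(uz, vx) + dot(ux, vz)) % 2

v = (1, 0, 1, 0, 1, 0, 1,  0, 0, 0, 0, 0, 0, 0)
w = (1, 1, 0, 0, 0, 0, 0,  1, 0, 0, 0, 0, 0, 0)
print(symplectic_inner(v, v), symplectic_inner(w, w))  # 0 0: self-annihilation
print(symplectic_inner(v, w))
```

Distinct vectors can still have a nonzero symplectic inner product, as v and w do here; only the pairing of a vector with itself is guaranteed to vanish.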


Tripping over my own inner product.

Lloyd Alexander wrote one of my favorite books, the children’s novel The Book of Three. The novel features a stout old farmer called Coll. Coll admonishes an apprentice who’s burned his fingers: “See much, study much, suffer much.” We smart while growing smarter.

An ant-sized scar remains on the back of my left hand. The scar has been fading, or so I like to believe. I embed references to colleagues’ work in seminar Powerpoints, so that I don’t forget to cite anyone. I apologized to the friend, and I know about symplectic vector spaces. We all deserve space to err, provided that we correct ourselves. Here’s to standing up more carefully after breakfast.


1Not that I advocate for limiting each coordinate to one bit in a Google Maps vector. The two-bit assumption simplifies the example.

2Not only symplectic vectors are orthogonal to themselves, John pointed out. Consider a string of bits that contains an even number of ones. Examples include (0, 0, 0, 0, 1, 1). Each such string has a bit-wise inner product, over the field {\mathbb Z}_2, of zero with itself.

Greg Kuperberg’s calculus problem

“How good are you at calculus?”

This was the opening sentence of Greg Kuperberg’s Facebook status on July 4th, 2016.

“I have a joint paper (on isoperimetric inequalities in differential geometry) in which we need to know that

(\sin\theta)^3 xy + ((\cos\theta)^3 -3\cos\theta +2) (x+y) - (\sin\theta)^3-6\sin\theta -6\theta + 6\pi \\ \\- 6\arctan(x) +2x/(1+x^2) -6\arctan(y) +2y/(1+y^2)

is non-negative for x and y non-negative and \theta between 0 and \pi. Also, the minimum only occurs for x=y=1/\tan(\theta/2).”

Let’s take a moment to appreciate the complexity of the mathematical statement above. It is a non-linear inequality in three variables, mixing trigonometry with algebra and throwing in some arc-tangents for good measure. Greg continued:

“We proved it, but only with the aid of symbolic algebra to factor an algebraic variety into irreducible components. The human part of our proof is also not really a cake walk.

A simpler proof would be way cool.”

I was hooked. The cubic terms looked a little intimidating, but if I converted x and y into \tan(\theta_x) and \tan(\theta_y), respectively, as one of the comments on Facebook promptly suggested, I could at least get rid of the annoying arc-tangents, and then calculus and trigonometry would take me the rest of the way. Greg replied to my initial comment outlining a quick route to the proof: “Let me just caution that we found the problem unyielding.” Hmm… Then, Greg revealed that the paper containing the original proof was over three years old (had he been thinking about this since then? that’s what true love must be like). In the paper, titled “The Cartan-Hadamard Conjecture and The Little Prince”, the above inequality makes its appearance as Lemma 7.1 on page 45 (of 63). To quote the paper: “Although the lemma is evident from contour plots, the authors found it surprisingly tricky to prove rigorously.”

As I filled pages of calculations and memorized every trigonometric identity known to man, I realized that Greg was right: the problem was highly intractable. The quick solution that was supposed to take me two to three days turned into two weeks of hell, until I decided to drop the original approach and stick to doing calculus with the known unknowns, x and y. The next week led me to a set of three non-linear equations mixing trigonometric functions with fourth powers of x and y, at which point I thought of giving up. I knew what I needed to do to finish the proof, but it looked freaking insane. Still, like the masochist that I am, I continued calculating away until my brain was mush. And then, yesterday, during a moment of clarity, I decided to go back to one of the three equations and rewrite it in a different way. That is when I noticed the error. I had solved for \cos\theta in terms of x and y, but I had made a mistake that had cost me 10 days of intense work with no end in sight. Once I found the mistake, the whole proof came together within about an hour. At that moment, I felt a mix of happiness (duh) and sadness, as if someone I had grown fond of no longer had a reason to spend time with me and, at the same time, I had run out of made-up reasons to hang out with them. But, yeah, I mostly felt happiness.

Greg Kuperberg pondering about the universe of mathematics.

Before I present the proof below, I want to take a moment to say a few words about Greg, whom I consider to be the John Preskill of mathematics: a lodestar of sanity in a sea of hyperbole (to paraphrase Scott Aaronson). When I started grad school at UC Davis back in 2003, quantum information theory and quantum computing were becoming “a thing” among some of the top universities around the US. So, I went to several of the mathematics faculty in the department asking if there was a course on quantum information theory I could take. The answer was to “read Nielsen and Chuang and then go talk to Professor Kuperberg”. Being a foolish young man, I skipped the first part and went straight to Greg to ask him to teach me (and four other brave souls) quantum “stuff”. Greg obliged with a course on… quantum probability and quantum groups. Not what I had in mind. This guy was hardcore. Needless to say, the five brave souls taking the class (mostly fourth-year graduate students and me, the noob) quickly became three, then two gluttons for punishment (the other masochist became one of my best friends in grad school). I could not drop the class, not because I had asked Greg to do this as a favor to me, but because I knew that I was in the presence of greatness (or maybe it was Stockholm syndrome). My goal then, as an aspiring mathematician, became to one day have a conversation with Greg where, for some brief moment, I would not sound stupid. A man of incredible intelligence, Greg is that rare individual whose character matches his intellect. Much like the anti-heroes portrayed by Humphrey Bogart in Casablanca and The Maltese Falcon, Greg keeps a low profile, seems almost cynical at times, but in the end, he works harder than everyone else to help those in need. For example, on MathOverflow, a question-and-answer website for professional mathematicians around the world, Greg is listed as one of the top contributors of all time.

But, back to the problem. The past four weeks thinking about it have oscillated between phases of “this is the most fun I’ve had in years!” to “this is Greg’s way of telling me I should drop math and become a go-go dancer”. Now that the ordeal is over, I can confidently say that the problem is anything but “dull” (which is how Greg felt others on MathOverflow would perceive it, so he never posted it there). In fact, if I ever have to teach Calculus, I will subject my students to the step-by-step proof of this problem. OK, here is the proof. This one is for you Greg. Thanks for being such a great role model. Sorry I didn’t get to tell you until now. And you are right not to offer a “bounty” for the solution. The journey (more like, a trip to Mordor and back) was all the money.

The proof: The first thing to note (and if I had read Greg’s paper earlier than today, I would have known as much weeks ago) is that the following equality holds (which can be verified quickly by differentiating both sides):

4 x - 6\arctan(x) +2x/(1+x^2) = 4 \int_0^x \frac{s^4}{(1+s^2)^2} ds.

Using the above equality (and the equivalent one for y), we get:

F(\theta,x,y) = (\sin\theta)^3 xy + ((\cos\theta)^3 -3\cos\theta -2) (x+y) - (\sin\theta)^3-6\sin\theta -6\theta + 6\pi \\ \\ + 4 \int_0^x \frac{s^4}{(1+s^2)^2} ds+4 \int_0^y \frac{s^4}{(1+s^2)^2} ds.
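Before pressing on, a quick numerical sanity check in pure Python (the sample points and tolerances are my choices): the integral identity holds, F vanishes at the claimed minimizer x = y = 1/\tan(\theta/2), and F stays non-negative on a coarse grid.

```python
import math

def F(theta, x, y):
    # The original expression from the lemma
    s, c = math.sin(theta), math.cos(theta)
    return (s**3 * x * y + (c**3 - 3*c + 2) * (x + y)
            - s**3 - 6*s - 6*theta + 6*math.pi
            - 6*math.atan(x) + 2*x/(1 + x**2)
            - 6*math.atan(y) + 2*y/(1 + y**2))

def lhs(x):
    # 4x - 6 arctan(x) + 2x/(1 + x^2)
    return 4*x - 6*math.atan(x) + 2*x/(1 + x**2)

def integral(x, n=200000):
    # Midpoint rule for 4 * Int_0^x s^4/(1+s^2)^2 ds
    h = x / n
    return 4 * h * sum(((i + 0.5) * h)**4 / (1 + ((i + 0.5) * h)**2)**2
                       for i in range(n))

print(abs(lhs(1.7) - integral(1.7)))   # the identity, numerically

theta = 1.0
x0 = 1 / math.tan(theta / 2)           # the claimed minimizer
print(F(theta, x0, x0))                # essentially zero

# F stays non-negative on a coarse grid of sample points
grid_min = min(F(t, x, y)
               for t in (0.3, 1.0, 2.0, 3.0)
               for x in (0.0, 0.5, 1.0, 2.0, 5.0)
               for y in (0.0, 0.5, 1.0, 2.0, 5.0))
print(grid_min)
```

Such spot checks prove nothing, of course; as the paper notes, the lemma is evident from plots, and the work is in the rigorous argument that follows.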

Now comes the fun part. We differentiate with respect to \theta, x and y, and set to zero to find all the maxima and minima of F(\theta,x,y) (though we are only interested in the global minimum, which is supposed to be at x=y=\tan^{-1}(\theta/2)). Some high-school level calculus yields:

\partial_\theta F(\theta,x,y) = 0 \implies \sin^2(\theta) (\cos(\theta) xy + \sin(\theta)(x+y)) = \\ \\ 2 (1+\cos(\theta))+\sin^2(\theta)\cos(\theta).

At this point, the most well-known trigonometric identity of all time, \sin^2(\theta)+\cos^2(\theta)=1, can be used to show that the right-hand-side can be re-written as:

2(1+\cos(\theta))+\sin^2(\theta)\cos(\theta) = \sin^2(\theta) (\cos\theta \tan^{-2}(\theta/2) + 2\sin\theta \tan^{-1}(\theta/2)),

where I used (my now favorite) trigonometric identity: \tan^{-1}(\theta/2) = (1+\cos\theta)/\sin(\theta) (note to the reader: \tan^{-1}(\theta) = \cot(\theta)). Putting it all together, we now have the very suggestive condition:

\sin^2(\theta) (\cos(\theta) (xy-\tan^{-2}(\theta/2)) + \sin(\theta)(x+y-2\tan^{-1}(\theta/2))) = 0,

noting that, despite appearances, \theta = 0 is not a solution (as can be checked from the original form of this equality, unless x and y are infinite, in which case the expression is clearly non-negative, as we show towards the end of this post). This leaves us with \theta = \pi and

\cos(\theta) (\tan^{-2}(\theta/2)-xy) = \sin(\theta)(x+y-2\tan^{-1}(\theta/2)),

as candidates for where the minimum may be. A quick check shows that:

F(\pi,x,y) = 4 \int_0^x \frac{s^4}{(1+s^2)^2} ds+4 \int_0^y \frac{s^4}{(1+s^2)^2} ds \ge 0,

since x and y are non-negative. The following obvious substitution becomes our greatest ally for the rest of the proof:

x= \alpha \tan^{-1}(\theta/2), \, y = \beta \tan^{-1}(\theta/2).

Substituting the above in the remaining condition for \partial_\theta F(\theta,x,y) = 0, and using again that \tan^{-1}(\theta/2) = (1+\cos\theta)/\sin\theta, we get:

\cos\theta (1-\alpha\beta) = (1-\cos\theta) ((\alpha-1) + (\beta-1)),

which can be further simplified to (if you are paying attention to minus signs and don’t waste a week on a wild-goose chase like I did):

\cos\theta = \frac{1}{1-\beta}+\frac{1}{1-\alpha}.

As Greg loves to say, we are finally cooking with gas. Note that the expression is symmetric in \alpha and \beta, which should be obvious from the symmetry of F(\theta,x,y) in x and y. That observation will come in handy when we take derivatives with respect to x and y now. Factoring (\cos\theta)^3 -3\cos\theta -2 = - (1+\cos\theta)^2(2-\cos\theta), we get:

\partial_x F(\theta,x,y) = 0 \implies \sin^3(\theta) y + 4\frac{x^4}{(1+x^2)^2} = (1+\cos\theta)^2 + \sin^2\theta (1+\cos\theta).

Substituting x and y with \alpha \tan^{-1}(\theta/2), \beta \tan^{-1}(\theta/2), respectively and using the identities \tan^{-1}(\theta/2) = (1+\cos\theta)/\sin\theta and \tan^{-2}(\theta/2) = (1+\cos\theta)/(1-\cos\theta), the above expression simplifies significantly to the following expression:

4\alpha^4 =\left((\alpha^2-1)\cos\theta+\alpha^2+1\right)^2 \left(1 + (1-\beta)(1-\cos\theta)\right).

Using \cos\theta = \frac{1}{1-\beta}+\frac{1}{1-\alpha}, which we derived earlier by looking at the extrema of F(\theta,x,y) with respect to \theta, and noting that the global minimum would have to be an extremum with respect to all three variables, we get:

4\alpha^4 (1-\beta) = \alpha (\alpha-1) (1+\alpha + \alpha(1-\beta))^2,

where we used 1 + (1-\beta)(1-\cos\theta) = \alpha (1-\beta) (\alpha-1)^{-1} and

(\alpha^2-1)\cos\theta+\alpha^2+1 = (\alpha+1)((\alpha-1)\cos\theta+1)+\alpha(\alpha-1) = \\ (\alpha-1)(1-\beta)^{-1} (2\alpha + 1-\alpha\beta).

We may assume, without loss of generality, that x \ge y. If \alpha = 0, then \alpha = \beta = 0, which leads to the contradiction \cos\theta = 2, unless the other condition, \theta = \pi, holds, which leads to F(\pi,0,0) = 0. Dividing through by \alpha and re-writing 4\alpha^3(1-\beta) = 4\alpha(1+\alpha)(\alpha-1)(1-\beta) + 4\alpha(1-\beta), yields:

4\alpha (1-\beta) = (\alpha-1) (1+\alpha - \alpha(1-\beta))^2 = (\alpha-1)(1+\alpha\beta)^2,

which can be further modified to:

4\alpha +(1-\alpha\beta)^2 = \alpha (1+\alpha\beta)^2,

and, similarly for \beta (due to symmetry):

4\beta +(1-\alpha\beta)^2 = \beta (1+\alpha\beta)^2.

Subtracting the two equations from each other, we get:

4(\alpha-\beta) = (\alpha-\beta)(1+\alpha\beta)^2,

which implies that \alpha = \beta and/or \alpha\beta =1. The first leads to 4\alpha (1-\alpha) = (\alpha-1)(1+\alpha^2)^2, which immediately implies \alpha = 1 = \beta (since the left and right side of the equality have opposite signs otherwise). The second one implies that either \alpha+\beta =2, or \cos\theta =1, which follows from the earlier equation \cos\theta (1-\alpha\beta) = (1-\cos\theta) ((\alpha-1) + (\beta-1)). If \alpha+\beta =2 and 1 = \alpha\beta, it is easy to see that \alpha=\beta=1 is the only solution by expanding (\sqrt{\alpha}-\sqrt{\beta})^2=0. If, on the other hand, \cos\theta = 1, then looking at the original form of F(\theta,x,y), we see that F(0,x,y) = 6\pi - 6\arctan(x) +2x/(1+x^2) -6\arctan(y) +2y/(1+y^2) \ge 0, since x,y \ge 0 \implies \arctan(x)+\arctan(y) \le \pi.

And that concludes the proof, since the only cases for which all three conditions are met lead to \alpha = \beta = 1 and, hence, x=y=\tan^{-1}(\theta/2). The minimum of F(\theta, x,y) at these values is always zero. That’s right, all this work to end up with “nothing”. But, at least, the last four weeks have been anything but dull.

Update: Greg offered Lemma 7.4 from the same paper as another challenge (the sines, cosines and tangents are now transformed into hyperbolic trigonometric functions, with a few other changes, mostly in signs, thrown in there). This is a more hardcore-looking inequality, but the proof turns out to follow the steps of Lemma 7.1 almost identically. In particular, all the conditions for extrema are exactly the same, with the only difference being that cosine becomes hyperbolic cosine. It is an awesome exercise in calculus to check this for yourself. Do it. Unless you have something better to do.

Bringing the heat to Cal State LA

John Baez is a tough act to follow.

The mathematical physicist presented a colloquium at Cal State LA this May.1 The talk’s title: “My Favorite Number.” The advertisement image: A purple “24” superimposed atop two egg cartons.


The colloquium concerned string theory. String theorists attempt to reconcile Einstein’s general relativity with quantum mechanics. Relativity concerns the large and the fast, like the sun and light. Quantum mechanics concerns the small, like atoms. Relativity and quantum mechanics individually suggest that space-time consists of four dimensions: up-down, left-right, forward-backward, and time. String theory suggests that space-time has more than four dimensions. Counting dimensions leads theorists to John Baez’s favorite number.

His topic struck me as bold, simple, and deep. As an otherworldly window onto the pedestrian. John Baez became, when I saw the colloquium ad, a hero of mine.

And a tough act to follow.

I presented Cal State LA’s physics colloquium the week after John Baez. My title: “Quantum steampunk: Quantum information applied to thermodynamics.” Steampunk is a literary, artistic, and film genre. Stories take place during the 1800s—the Victorian era; the Industrial era; an age of soot, grime, innovation, and adventure. Into the 1800s, steampunkers transplant modern and beyond-modern technologies: automata, airships, time machines, etc. Example steampunk works include Will Smith’s 1999 film Wild Wild West. Steampunk weds the new with the old.

So does quantum information applied to thermodynamics. Thermodynamics budded off from the Industrial Revolution: The steam engine crowned industrial technology. Thinkers wondered how efficiently engines could run. Thinkers continue to wonder. But the steam engine no longer crowns technology; quantum physics (with other discoveries) does. Quantum information scientists study the roles of information, measurement, and correlations in heat, energy, entropy, and time. We wed the new with the old.


What image could encapsulate my talk? I couldn’t lean on egg cartons. I proposed a steampunk warrior—cravatted, begoggled, and spouting electricity. The proposal met with a polite cough of an email. Not all department members, Milan Mijic pointed out, had heard of steampunk.

Steampunk warrior

Milan is a Cal State LA professor and my erstwhile host. We toured the palm-speckled campus around colloquium time. What, he asked, can quantum information contribute to thermodynamics?

Heat offers an example. Imagine a classical (nonquantum) system of particles. The particles carry kinetic energy, or energy of motion: They jiggle. Particles that bump into each other can exchange energy. We call that energy heat. Heat vexes engineers, breaking transistors and lowering engines’ efficiencies.

Like heat, work consists of energy. Work has more “orderliness” than the heat transferred by random jiggles. Examples of work exertion include the compression of a gas: A piston forces the particles to move in one direction, in concert. Consider, as another example, driving electrons around a circuit with an electric field. The field forces the electrons to move in the same direction. Work and heat account for all the changes in a system’s energy. So states the First Law of Thermodynamics.

Suppose that the system is quantum. It doesn’t necessarily have a well-defined energy. But we can stick the system in an electric field, and the system can exchange motional-type energy with other systems. How should we define “work” and “heat”?

Quantum information offers insights, such as via entropies. Entropies quantify how “mixed” or “disordered” states are. Disorder grows as heat suffuses a system. Entropies help us extend the First Law to quantum theory.
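A minimal sketch of one such entropy, the von Neumann entropy of a density matrix (the example states are mine): a pure qubit state carries zero entropy, while a maximally mixed qubit carries one full bit of disorder.

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]           # 0 log 0 = 0 by convention
    return float(-np.sum(evals * np.log2(evals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])  # a pure qubit state: no disorder
mixed = np.eye(2) / 2                      # maximally mixed qubit: one bit of disorder
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))
```

As heat suffuses a quantum system, its state drifts from the first case toward the second, and the entropy climbs.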

First slide

So I explained during the colloquium. Rarely have I relished engaging with an audience as much as I relished engaging with Cal State LA’s. Attendees made eye contact, posed questions, commented after the talk, and wrote notes. A student in a corner appeared to be writing homework solutions. But a presenter couldn’t have asked for more from the rest. One exclamation arrested me like a coin in the cogs of a grandfather clock.

I’d peppered my slides with steampunk art: paintings, drawings, stills from movies. The peppering had staved off boredom as I’d created the talk. I hoped that the peppering would stave off my audience’s boredom. I apologized about the trimmings.

“No!” cried a woman near the front. “It’s lovely!”

I was about to discuss experiments by Jukka Pekola’s group. Pekola’s group probes quantum thermodynamics using electronic circuits. The group measures heat by counting the electrons that hop from one part of the circuit to another. Single-electron transistors track tunneling (quantum movements) of single particles.

Heat complicates engineering, calculations, and California living. Heat scrambles signals, breaks devices, and lowers efficiencies. Quantum heat can evade definition. Thermodynamicists grind their teeth over heat.

“No!” the woman near the front had cried. “It’s lovely!”

She was referring to steampunk art. But her exclamation applied to my subject. Heat has not only practical importance but also fundamental importance: Heat influences every law of thermodynamics. Thermodynamic law underpins much of physics as 24 underpins much of string theory. Lovely, I thought, indeed.

Cal State LA offered a new view of my subfield, an otherworldly window onto the pedestrian. The more pedestrian an idea—the more often the idea surfaces, the more of our world the idea accounts for—the deeper the physics. Heat seems as pedestrian as a Pokémon Go player. But maybe, someday, I’ll present an idea as simple, bold, and deep as the number 24.


A window onto Cal State LA.

With gratitude to Milan Mijic, and to Cal State LA’s Department of Physics and Astronomy, for their hospitality.

1For nonacademics: A typical physics department hosts several presentations per week. A seminar relates research that the speaker has undertaken. The audience consists of department members who specialize in the speaker’s subfield. A department’s astrophysicists might host a Monday seminar; its quantum theorists, a Wednesday seminar; etc. One colloquium happens per week. Listeners gather from across the department. The speaker introduces a subfield, like the correction of errors made by quantum computers. Course lectures target students. Endowed lectures, often named after donors, target researchers.

The physics of Trump?? Election renormalization.


Two things were on my mind this last quarter: my course on advanced statistical mechanics and phase transitions, and the bizarre general elections that raged all around. It is no wonder, then, that I started to conflate the Ising model, Landau mean field theory, and the renormalization group with the election process, and to think of each and every one of us as a tiny magnet that needs to point up or down – Trump or Cruz, Clinton or Sanders (a more appetizing choice, somehow), and… you get the drift.

Elections and magnetic phase transitions are very much alike. The latter, I will argue, teaches us something very important about the former.

The physics of magnetic phase transitions is amazing. If I hadn’t thought this way, I wouldn’t be a condensed matter physicist. Models of magnets consider a bunch of spins – each one a small magnet – that talk only to their nearest neighbors, as happens in typical magnets. At the onset of magnetic order (the Curie temperature), when the symmetry of the spins becomes broken, it turns out that the spin correlation length diverges. Even though interaction length = lattice constant, we get correlation length = infinity.

To understand how ridiculous this is, you should understand what a correlation length is. The correlation length tells you a simple thing. If you are a spin, trying to make it in life and figure out where to point, your pals around you are certainly going to influence you. Their pals will influence them, and therefore you. The correlation length tells you how distant a spin can be and still manage to nudge you to point up or down. (In physics-speak, it is the reduced correlation length.) It makes sense that somebody in your neighborhood, or your office, or even your town, will do something that affects you – after all, you interact with people that distant all the time. But the analogy to the spins says that, under the right circumstances, some random person in Incheon, South Korea, could influence your vote. A diverging correlation length is the butterfly effect for real.

And yet, spins do this. At the critical temperature, just as the spins decide whether they want to point along the north pole or towards Venus, every nonsense of a fluctuation that one of them makes leagues away may galvanize things one way or another. Without ever talking, even remotely, to so much as their father’s brother’s nephew’s cousin’s former roommate! Every fluctuation, no matter where, factors into the symmetry-breaking process.

A bit of physics, before I’m blamed for being crude in my interpretation. The correlation length at the Curie point, and at almost all symmetry-breaking continuous transitions, diverges as some inverse power of the distance in temperature from the critical point: \frac{1}{|T-T_c|^{\nu}}. The faster it diverges (the higher the power \nu), the more feeble the symmetry breaking actually is. Why is that, after I argued that this is an amazing phenomenon? Well, if 10^2 voices can shift you one way or another, each voice is worth something. If 10^{20} voices are needed to push you around, I’m not really buying influence over you by bribing ten of them. Each voice is worth less. Why? The correlation length is also a measure of the uncertainty before the moment of truth – when the battle starts and we don’t know who wins. Big correlation length: any little element of the battlefield can change something, and many souls are involved and active. Small correlation length: the battle was already decided, since one of the sides has a single bomb that will evaporate the world. Who knew that Dr. Strangelove could be a condensed matter physicist?

This lore of correlations led to one of the most breathtaking developments of 20th-century physics. I’m a condensed matter guy, so it is natural that Ken Wilson, as well as Ben Widom, Michael Fisher, and Leo Kadanoff, are my superheroes. They came up with an idea as simple as it is profound: scaling. If you have a system (say, of spins) that you can’t figure out – maybe because it is fluctuating, maybe because it is interacting – all you need to do is move away from it. Let averaging (aka the central limit theorem) do the job and suppress fluctuations. Let us just zoom out. If we change the scale by a factor of 2, so that all spins look more crowded, then the correlation length also looks half as big. The system looks less critical. It is as if we managed to move away from the critical temperature – either cooling towards T=0, or heating up towards T=\infty. Both limits are easy to solve. How do we make this into a framework? If the pre-zoom-out volume had 8 spins, we can average them into a single representative spin. This way you’ll end up with a system that looks pretty much like the one you had before – same spin density, same interaction, same physics – but at a different temperature, further from the phase transition. It turns out you can do this, and you can figure out how much the temperature changed in the process. Together, this tells you how the correlation length depends on T-T_c. This is the renormalization group, aka RG.
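The zoom-out-and-average step can be sketched numerically. A minimal sketch, assuming a square lattice of ±1 spins coarse-grained by majority rule over 2×2 blocks (ties broken randomly); the `block_spin` helper is illustrative, not taken from any of the papers mentioned:

```python
import numpy as np

rng = np.random.default_rng(0)

def block_spin(spins):
    """Coarse-grain a 2D spin lattice by majority rule over 2x2 blocks."""
    n = spins.shape[0] // 2
    blocks = spins.reshape(n, 2, n, 2).sum(axis=(1, 3))  # sum each 2x2 block
    coarse = np.sign(blocks)
    # break ties (block sum = 0) randomly, as a real majority rule must
    ties = coarse == 0
    coarse[ties] = rng.choice([-1, 1], size=ties.sum())
    return coarse

spins = rng.choice([-1, 1], size=(8, 8))  # a disordered 8x8 configuration
once = block_spin(spins)   # 4x4 lattice: same physics, half the correlation length
twice = block_spin(once)   # 2x2: zoomed out again, even further from criticality
```

Each application halves the lattice (and the apparent correlation length), which is exactly the "system looks less critical" step of the RG story.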

Interestingly, this RG procedure informs us that criticality and symmetry breaking are more feeble the lower the dimension. There are no 1d permanent magnets, and magnetism in 2d is very frail. Why? Well, the more dimensions there are, the more nearest neighbors each spin has, and the more neighbors your neighbors have. Think about the six-degrees-of-separation game. 3d is okay for magnets, as we know. It turns out, however, that in physical systems above 4 dimensions, critical phenomena are the same as those of a fully connected (infinite-dimensional) network. The uncertainty stage is very small; the correlation length diverges slowly. Even at distance 1 there are enough people, or spins, to bend your will one way or another. Magnetization is just a question of the time elapsed since the beginning of the experiment.

Spins, votes, what’s the difference? You won’t be surprised to find that the term renormalization has permeated every aspect of economics and social science as well. What is voting Republican vs Democrat if not a symmetry breaking? Well, it is not that bad yet – the parties are different. No real symmetry there, you would think. Unless you ask the ‘undecided voter’.

And if elections are affected by such correlated dynamics, what about revolutions? Here the analogy with phase transitions is even more prevalent in our language – resistance to a regime solidifies, crystallizes, and aligns, just like solids and magnets. When people are fed up with a regime, the crucial question is: if I go to the streets, will I be joined by enough people to effect a change?

Revolutions, therefore, seem to rise out of strong fluctuations in the populace. If you wish, think of revolutions as domains where the frustration is so high that they give a political movement the inertia it needs.

Domains: that’s exactly what the correlation length is about. The correlation length is the size of correlated magnetic domains, i.e., groups of spins that point in the same direction. And now we remember that close to a phase transition, the correlation length diverges as some power of the distance to the transition: \frac{1}{|T-T_c|^{\nu}}. Take a magnet just above its Curie temperature. The closer we are to the phase transition, the larger the correlation length is, and the bigger the fluctuating magnetized domains are. The parameter \nu is the correlation-length critical exponent and something of a holy grail for practitioners of statistical mechanics. Everyone wants to calculate it for various phase transitions. It is not that easy. That’s partially why I have a job.

The correlation length aside, how many spins are involved in a domain? The domain volume: \left[1/|T-T_c|^{\nu}\right]^{d}. Actually, we know roughly what \nu is. For systems with dimension d>4, it is 1/2. For systems with lower dimensionality, it is roughly 2/d. (Comment for the experts: I’m really not kidding – this fits the Ising model in 2 and 3 dimensions, and it fits the xy model in 3d.)

So the number of spins in a domain in systems below 4d is 1/|T-T_c|^2, independent of dimension. In four dimensions and up, on the other hand, it is 1/|T-T_c|^{d/2}, which increases rapidly with dimension when we are close to the critical point.

Back to voters. In a climate of undecided elections, analogous to a magnet near its Curie point, the spins are the voters, and the domains are the crowds supporting this candidate or that policy; domains are what become large demonstrations in the Washington Mall. And you would think that the world we live in is clearly 2d – the surface of a 3d sphere (and yes, that includes Manhattan!). So a political domain size diverges as a simple, moderate 1/|T-T_c|^2 during times of contested elections.

Something happened, however, in the past two decades: the internet. The connectivity of the world has changed dramatically.

No more 2d. Now, our effective dimension is determined by our web-based social network. Facebook, perhaps? Roughly speaking, the dimensionality of the Facebook network is the number of friends we have, divided by the number of mutual friends. I venture to say this averages out at about 10: about 150 friends in tow, of which 15 are mutual. So our world, for election purposes, is 10-dimensional!

Let’s simulate what this means for our political system. Any event – a terrorist attack, a recession, etc. – will cause a fluctuation that involves a large group of people: a domain. Take a time when T-T_c is a healthy 0.1, for instance. In the good old 2d world, this would involve 100 friends times 1/0.1^2 \sim 10,000 people. Now it would be more like 100\cdot 1/0.1^{10/2} \sim ten million. So any small perturbation of conditions could make entire states turn one way or another.
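The back-of-envelope numbers above can be packaged in a few lines of Python (the `domain_size` helper and its `friends=100` default are just the post’s illustrative assumptions, not a real model):

```python
def domain_size(t, d, friends=100):
    """People swept up in one correlated domain, per the post's estimate:
    friends * (1/t)^(nu*d), with nu = 2/d below 4 dimensions and 1/2 above,
    where t = T - T_c is the distance from the critical point."""
    nu = 2 / d if d < 4 else 0.5
    return friends * (1 / t) ** (nu * d)

print(domain_size(0.1, d=2))   # good old 2d world: ~10,000 people
print(domain_size(0.1, d=10))  # 10d Facebook-era world: ~10,000,000 people
```

Same distance from criticality, three orders of magnitude more people per fluctuation: that is the whole argument in one function call.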

When the response to slight shifts in prevailing conditions encompasses entire states, rather than entire neighborhoods, polarization follows. Overall, a state where each neighborhood has a slightly different opinion will be rather moderate – extreme opinions will only resonate locally, and single voices can only sway so many people. But nowadays – well, we’ve all seen Trump and the like on the march. Millions. It’s not even their fault – it’s physics!

Can we do anything about it? It’s up for debate. Maybe cancel the electoral college, to make the selecting unit larger than the typical size of a fluctuating domain. Maybe carry out a time averaged election: make an election year where each month there is a contest for the grand prize. Or maybe just move to Canada.

What matters to me, and why?

Students at my college asked every Tuesday. They gathered in a white, windowed room near the center of campus. “We serve,” read advertisements, “soup, bread, and food for thought.” One professor or visitor would discuss human rights, family, religion, or another pepper in the chili of life.

I joined occasionally. I listened by the window, in the circle of chairs that ringed the speaker. Then I ventured from college into physics.

The questions “What matters to you, and why?” have chased me through physics. I ask experimentalists and theorists, professors and students: Why do you do science? Which papers catch your eye? Why have you devoted to quantum information more years than many spouses devote to marriages?

One physicist answered with another question. Chris Jarzynski works as a professor at the University of Maryland. He studies statistical mechanics—how particles typically act and how often particles act atypically; how materials shine, how gases push back when we compress them, and more.

“How,” Chris asked, “should we quantify precision?”

Chris had in mind nonequilibrium fluctuation theorems. Out-of-equilibrium systems have large-scale properties, like temperature, that change significantly.1 Examples include white-bean soup cooling at a “What matters” lunch. The soup’s temperature drops to room temperature as the system approaches equilibrium.

Steaming soup

Nonequilibrium. Tasty, tasty nonequilibrium.

Some out-of-equilibrium systems obey fluctuation theorems. Fluctuation theorems are equations derived in statistical mechanics. Imagine a DNA molecule floating in a watery solution. Water molecules buffet the strand, which twitches. But the strand’s shape doesn’t change much. The DNA is in equilibrium.

You can grab the strand’s ends and stretch them apart. The strand will leave equilibrium as its length changes. Imagine pulling the strand to some predetermined length. You’ll have exerted energy.

How much? The amount will vary if you repeat the experiment. Why? This trial began with the DNA curled this way; that trial began with the DNA curled that way. During this trial, the water batters the molecule more; during that trial, less. These discrepancies block us from predicting how much energy you’ll exert. But suppose you pick a number W. We can form predictions about the probability that you’ll have to exert an amount W of energy.

How do we predict? Using nonequilibrium fluctuation theorems.

Fluctuation theorems matter to me, as Quantum Frontiers regulars know. Why? Because I’ve written enough fluctuation-theorem articles to test even a statistical mechanic’s patience. More seriously, why do fluctuation theorems matter to me?

Fluctuation theorems fill a gap in the theory of statistical mechanics. Fluctuation theorems relate nonequilibrium processes (like the cooling of soup) to equilibrium systems (like room-temperature soup). Physicists can model equilibrium. But we know little about nonequilibrium. Fluctuation theorems bridge from the known (equilibrium) to the unknown (nonequilibrium).

Bridge - theory

Experiments take place out of equilibrium. (Stretching a DNA molecule changes the molecule’s length.) So we can measure properties of nonequilibrium processes. We can’t directly measure properties of equilibrium processes, which we can’t perform experimentally. But we can measure an equilibrium property indirectly: We perform nonequilibrium experiments, then plug our data into fluctuation theorems.

Bridge - exprmt

Which equilibrium property can we infer about? A free-energy difference, denoted by ΔF. Every equilibrated system (every room-temperature soup) has a free energy F. F represents the energy that the system can exert, such as the energy available to stretch a DNA molecule. Imagine subtracting one system’s free energy, F1, from another system’s free energy, F2. The subtraction yields a free-energy difference, ΔF = F2 – F1. We can infer the value of a ΔF from experiments.

How should we evaluate those experiments? Which experiments can we trust, and which need repeating?

Those questions mattered little to me, before I met Chris Jarzynski. Bridging equilibrium with nonequilibrium mattered to me, and bridging theory with experiment. Not experimental nitty-gritty.

I deserved a dunking in white-bean soup.

Dunk 2

Suppose you performed infinitely many trials—stretched a DNA molecule infinitely many times. In each trial, you measured the energy exerted. You processed your data, then substituted into a fluctuation theorem. You could infer the exact value of ΔF.

But we can’t perform infinitely many trials. Imprecision mars our inference about ΔF. How does the imprecision relate to the number of trials performed?2
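One concrete way to watch that imprecision shrink: Chris’s own fluctuation theorem, the Jarzynski equality \langle e^{-W/k_B T}\rangle = e^{-\Delta F/k_B T}, turns a finite sample of measured work values into an estimate of ΔF. A toy sketch, assuming Gaussian work values in units of k_B T (for which ΔF = ⟨W⟩ − σ²/2 exactly); the numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: work values W are Gaussian in units of k_B T, with mean 5 and
# standard deviation 1. For Gaussian work, the Jarzynski equality
# <exp(-W)> = exp(-dF) gives dF = mean - variance/2 = 4.5 exactly.
mu, sigma = 5.0, 1.0
exact_dF = mu - sigma**2 / 2

for n_trials in (10, 1_000, 100_000):
    work = rng.normal(mu, sigma, size=n_trials)
    dF_estimate = -np.log(np.mean(np.exp(-work)))
    print(n_trials, dF_estimate)  # more trials, (typically) less error
```

The exponential average is dominated by rare small-W trials, which is precisely why finitely many trials leave the estimate imprecise and why quantifying that imprecision is a real question.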

Chris and I adopted an information-theoretic approach. We quantified precision with a parameter \delta. Suppose you want to estimate ΔF with some precision: how many trials should you expect to perform? We bounded the number N_\delta of trials, using an entropy. The bound tightens an earlier estimate of Chris’s. If you perform N_\delta trials, you can estimate ΔF with a percent error that we estimated. We illustrated our results by modeling a gas.

I’d never appreciated the texture and richness of precision. But richness precision has: A few decimal places distinguish Albert Einstein’s general theory of relativity from Isaac Newton’s 17th-century mechanics. Particle physicists calculate constants of nature to many decimal places. Such a calculation earned a nod on physicist Julian Schwinger’s headstone. Precision serves as the bread and soup of much physics. I’d sniffed the importance of precision, but not tasted it, until questioned by Chris Jarzynski.

Schwinger headstone

The questioning continues. My college has discontinued its “What matters” series. But I ask scientist after scientist—thoughtful human being after thoughtful human being—“What matters to you, and why?” Asking, listening, reading, calculating, and self-regulating sharpen my answers to those questions. My answers often squish beneath the bread knife in my cutlery drawer of criticism. Thank goodness that repeating trials can reduce our errors.

Bread knife

1Or large-scale properties that will change. Imagine connecting the ends of a charged battery with a wire. Charge will flow from terminal to terminal, producing a current. You can measure, every minute, how quickly charge is flowing: You can measure how much current is flowing. The current won’t change much, for a while. But the current will die off as the battery nears depletion. A large-scale property (the current) appears constant but will change. Such a capacity to change characterizes nonequilibrium steady states (NESSes). NESSes form our second example of nonequilibrium states. Many-body localization forms a third, quantum example.

2Readers might object that scientists have tools for quantifying imprecision. Why not apply those tools? Because ΔF equals a logarithm, which is nonlinear. Other authors’ proposals appear in references 1-13 of our paper. Charlie Bennett addressed a related problem with his “acceptance ratio.” (Bennett also blogged about evil on Quantum Frontiers last month.)