Good news everyone! Flatland is non-contextual!

Quantum mechanics is weird! Imagine for a second that you want to run an experiment, and that the result of your experiment depends on what your colleague is doing in the next room. It would be crazy to live in such a world! Yet this is the world we live in, at least at the quantum scale. The result of an experiment cannot be described in a way that is independent of the context. The neighbor is sticking his nose in our experiment!

Before telling you why quantum mechanics is contextual, let me give you an experiment that admits a simple non-contextual explanation. This story takes place in Flatland, a two-dimensional world inhabited by polygons. Our protagonist is a square who became famous after claiming that he met a sphere.


This square, call him Mr Square for convenience, met a sphere, Miss Sphere. When you live in a planar world like Flatland, this kind of event is not only rare, but also quite weird! For the people of Flatland, only the intersection of Miss Sphere’s body with the plane is visible. Depending on the position of the sphere, its cross-section in Flatland is either a point, a circle, or even empty.


During their trip to Flatland, Professor Farnsworth explains to Bender: “If we were in the third dimension looking down, we would be able to see an unhatched chick in it. Just as a chick in a 3-dimensional egg could be seen by an observer in the fourth dimension.”

Not convinced by Miss Sphere’s arguments, Mr Square tried to prove that she cannot exist – Square was a mathematician – and failed miserably. Let’s imagine a more realistic story, a story where spheres cannot speak. In this story, Mr Square will be a physicist, familiar with hidden variable models. Mr Square met a sphere, but a tongue-tied sphere! Confronted with this mysterious event, he did what any other citizen of Flatland would have done. He took a selfie with Miss Sphere. Mr Square was kind enough to let us use some of his photos to illustrate our story.

Picture taken by Mr Square, with his Flatland-camera. (a) The sphere. (b) Selfie of Square (left) with the sphere (right).

As you can see in these photos, when you are stuck in Flatland and you take a picture of a sphere, only a segment is visible. What aroused Mr Square’s curiosity was the fact that the length of this segment changes constantly. Each picture shows a segment of a different length, due to the movement of the sphere along the z-axis, which is invisible to him. However, although these lengths look random, Square discovered that they can be explained without randomness by introducing a hidden variable living in a hypothetical third dimension. The apparent randomness is simply a consequence of his incomplete knowledge of the system: the position along the hidden-variable axis z is inaccessible! Of course, this is only a model; the third dimension is purely theoretical, and no one from Flatland will ever visit it.
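Mr Square’s model fits in a few lines of Python (a minimal sketch; the sphere’s radius and the uniform distribution over the hidden height z are my own choices):

```python
import numpy as np

# Mr Square's hidden-variable model of the photos (a sketch).
# The sphere has radius R; its center floats at a hidden height z above Flatland.
# A photo shows a segment whose length is the diameter of the cross-section.
rng = np.random.default_rng(0)
R = 1.0

def segment_length(z):
    """Deterministic length of the photographed segment, given the hidden z."""
    return 2 * np.sqrt(max(R**2 - z**2, 0.0))

hidden_zs = rng.uniform(-R, R, size=5)        # the inaccessible third dimension
print([round(segment_length(z), 3) for z in hidden_zs])
# The lengths look random to Mr Square, yet each one is a deterministic
# function of the hidden variable z.
```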

What about quantum mechanics?

Measurement outcomes are also random in the quantum realm. Can we explain the randomness in quantum measurements by a hidden variable? Surprisingly, the answer is no! Von Neumann, one of the greatest scientists of the 20th century, was the first to make this claim, in 1932. His attempt to prove this result is known today as “Von Neumann’s silly mistake”. It was not until 1966 that Bell convinced the community that Von Neumann’s argument relies on a silly assumption.

Consider first a system of a single quantum bit, or qubit. A qubit is a 2-level system. It can be either in a ground state or in an excited state, but also in a quantum superposition |\psi\rangle = \alpha |g\rangle + \beta|e\rangle of these two states, where \alpha and \beta are complex numbers such that |\alpha|^2 + |\beta|^2 = 1. We can see this quantum state as a 2-dimensional vector (\alpha, \beta), where the ground state is |g\rangle=(1,0) and the excited state is |e\rangle=(0,1).


The probability of an outcome depends on the projection of the quantum state onto the ground state and the excited state.

What can we measure about this qubit? First, imagine that we want to know if our quantum state is in the ground state or in the excited state. There is a quantum measurement that returns a random outcome, which is g with probability P(g) = |\alpha|^2 and e with probability P(e) = |\beta|^2.

Let us try to reinterpret this measurement in a different way. Inspired by Mr Square’s idea, we extend our description of the state |\psi\rangle of the system to include the outcome as an extra parameter. In this model, a state is a pair of the form (|\psi\rangle, \lambda) where \lambda is either e or g. Our quantum state can be seen as being in position (|\psi\rangle, g) with probability P(g) or in position (|\psi\rangle, e) with probability P(e). Measuring only reveals the value of the hidden variable \lambda. By introducing a hidden variable, we made this measurement deterministic. This proves that the randomness can be moved to the level of the description of the state, just as in Flatland. The weirdness of quantum mechanics goes away.
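In code, the two descriptions produce identical statistics; in the second one, the randomness lives entirely in the preparation of the hidden variable \lambda. A minimal sketch (the state and the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3)   # |psi> = alpha|g> + beta|e>

# 1) Standard description: the outcome is sampled at measurement time.
def measure():
    return 'g' if rng.random() < abs(alpha)**2 else 'e'

# 2) Hidden-variable description: lambda is fixed at preparation time;
#    measurement merely reveals it.
def prepare():
    lam = 'g' if rng.random() < abs(alpha)**2 else 'e'
    return ((alpha, beta), lam)

def measure_deterministically(state):
    _, lam = state
    return lam

print(sum(measure() == 'g' for _ in range(10**5)) / 10**5)        # ~|alpha|^2 = 1/3
print(sum(measure_deterministically(prepare()) == 'g'
          for _ in range(10**5)) / 10**5)                         # ~1/3 as well
```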

Contextuality of quantum mechanics

Let us try to extend our hidden variable model to all quantum measurements. We can associate a measurement with a particular kind of matrix A, called an observable. Measuring an observable randomly returns one of its eigenvalues. For instance, the Pauli matrices

Z =  \begin{pmatrix}  1 & 0\\  0 & -1\\  \end{pmatrix}  \quad \text{ and } \quad  X =  \begin{pmatrix}  0 & 1\\  1 & 0\\  \end{pmatrix},

as well as Y = iZX and the identity matrix I, are 1-qubit observables with eigenvalues (i.e. measurement outcomes) \pm 1. Now, take a system of 2 qubits. Since each of the 2 qubits can be either excited or not, our quantum state is a 4-dimensional vector

|\psi\rangle = \alpha |g_1\rangle \otimes |g_2\rangle  + \beta |g_1\rangle \otimes |e_2\rangle  + \gamma |e_1\rangle \otimes |g_2\rangle  + \delta |e_1\rangle \otimes |e_2\rangle.

Therein, the 4 vectors |x\rangle \otimes |y\rangle can be identified with the vectors of the canonical basis (1000), (0100), (0010) and (0001). We will consider the measurement of 2-qubit observables of the form A \otimes B defined by A \otimes B |x\rangle \otimes |y\rangle = A |x\rangle \otimes B |y\rangle. In other words, A acts on the first qubit and B acts on the second one. Later, we will look into the observables X \otimes I, Z \otimes I, I \otimes X, I \otimes Z and their products.
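Numerically, the tensor product A \otimes B is just the Kronecker product. A minimal sketch (numpy, with my own variable names):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
Y = 1j * Z @ X                                  # Y = iZX, as above

XI = np.kron(X, I)                              # X on qubit 1, nothing on qubit 2
IZ = np.kron(I, Z)                              # Z on qubit 2, nothing on qubit 1

# (A tensor B)(|x> tensor |y>) = A|x> tensor B|y>:
x, y = np.array([1, 0]), np.array([0, 1])       # |g> and |e>
assert np.allclose(np.kron(X, Z) @ np.kron(x, y), np.kron(X @ x, Z @ y))
```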

What happens when two observables are measured simultaneously? In quantum mechanics, we can measure several observables simultaneously if these observables commute with each other. In that case, measuring O and then O', or measuring O' first and then O, makes no difference. Therefore, we say that these observables are measured simultaneously, the outcome being a pair (\lambda,\lambda'), composed of an eigenvalue of O and an eigenvalue of O'. Their product O'' = OO', which commutes with both O and O', can also be measured at the same time. Measuring this triple returns a triple of eigenvalues (\lambda,\lambda',\lambda'') corresponding respectively to O, O' and O''. The relation O'' = OO' imposes the constraint

(1)               \qquad \lambda'' = \lambda \lambda'

on the outcomes.

Assume that one can describe the result of all quantum measurements with a model such that, for all observables O and for all states \nu of the model, a deterministic outcome \lambda_\nu(O) exists. Here, \nu is our ‘extended’, not necessarily physical, description of the state of the system. When O and O' are commuting, it is reasonable to assume that the relation (1) holds also at the level of the hidden variable model, namely

(2)                \lambda_\nu(OO') = \lambda_\nu(O) \cdot \lambda_\nu(O').

Such a model is called a non-contextual hidden variable model. Von Neumann proved that no such value \lambda_\nu exists by considering these relations for all pairs O, O' of observables. This shows that quantum mechanics is contextual! Hum… Wait a minute. It seems silly to impose such a constraint for all pairs of observables, including those that cannot be measured simultaneously. This is “Von Neumann’s silly assumption”. Only pairs of commuting observables should be considered.


Peres-Mermin proof of contextuality

One can resurrect Von Neumann’s argument by assuming Eq.(2) only for commuting observables. The Peres-Mermin square provides an elegant proof of this result. Form a 3 \times 3 array with these observables, arranged as

\begin{pmatrix}  X \otimes I & I \otimes X & X \otimes X\\  I \otimes Z & Z \otimes I & Z \otimes Z\\  X \otimes Z & Z \otimes X & Y \otimes Y\\  \end{pmatrix}.

It is constructed in such a way that

(i) The eigenvalues of all the observables in Peres-Mermin’s square are ±1,

(ii) Each row and each column is a triple of commuting observables,

(iii) The last element of each row and each column is the product of the 2 first observables, except in the last column where Y \otimes Y = -(Z \otimes Z)(X \otimes X).

If a non-contextual hidden variable exists, it associates fixed eigenvalues a, b, c, d (which are either 1 or -1) with the 4 observables X \otimes I, Z \otimes I, I \otimes X, I \otimes Z. Applying Eq.(2) to the first 2 rows and to the first 2 columns, one deduces the values of all the observables of the square, except Y \otimes Y . Finally, what value should be attributed to Y \otimes Y? By (iii), applying Eq.(2) to the last row, one gets \lambda_\nu(Y \otimes Y) = abcd. However, using the last column, (iii) and Eq.(2) yield the opposite value \lambda_\nu (Y \otimes Y ) = -abcd. This is the expected contradiction, proving that there is no non-contextual value \lambda_\nu. Quantum mechanics is contextual!
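For skeptical readers, properties (i), (ii) and (iii) are easy to check numerically. A minimal sketch (numpy; the arrangement is the square displayed above):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
Y = 1j * Z @ X
T = np.kron                                      # tensor product

S = [[T(X, I), T(I, X), T(X, X)],
     [T(I, Z), T(Z, I), T(Z, Z)],
     [T(X, Z), T(Z, X), T(Y, Y)]]

for i in range(3):
    row = S[i]
    col = [S[j][i] for j in range(3)]
    for A, B, C in (row, col):                   # (ii) each triple commutes
        assert np.allclose(A @ B, B @ A)
        assert np.allclose(A @ C, C @ A)
        assert np.allclose(B @ C, C @ B)
    assert np.allclose(row[0] @ row[1], row[2])  # (iii) for the rows
    sign = -1 if i == 2 else 1                   # the minus sign of the last column
    assert np.allclose(col[0] @ col[1], sign * col[2])

# (i) every entry squares to the identity, so its eigenvalues are +/-1
assert all(np.allclose(M @ M, np.eye(4)) for r in S for M in r)
```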

We saw that the randomness in quantum measurements cannot be explained in a ‘classical’ way. Besides its fundamental importance, this result also influences quantum technologies. What I really care about is how to construct a quantum computer, or more generally, I would like to understand what kind of quantum device could be superior to its classical counterpart for certain tasks. Such a quantum advantage can only be reached by exploiting the weirdness of quantum mechanics, such as contextuality 1,2,3,4,5. Understanding these weird phenomena is one of the first tasks to accomplish.

Happy Halloween from…the discrete Wigner function?

Do you hope to feel a breath of cold air on the back of your neck this Halloween? I’ve felt one literally: I earned my master’s in the icebox called “Ontario,” at the Perimeter Institute for Theoretical Physics. Perimeter’s colloquia1 take place in an auditorium blacker than a Quentin Tarantino film. Aephraim Steinberg presented a colloquium one air-conditioned May.

Steinberg experiments on ultracold atoms and quantum optics2 at the University of Toronto. He introduced an idea that reminds me of biting into an apple whose coating you’d thought consisted of caramel, then tasting blood: a negative (quasi)probability.

Probabilities usually range from zero upward. Consider Shirley Jackson’s short story The Lottery. Villagers in a 20th-century American village prepare slips of paper. The number of slips equals the number of families in the village. One slip bears a black spot. Each family receives a slip. Each family has a probability p > 0  of receiving the marked slip. What happens to the family that receives the black spot? Read Jackson’s story—if you can stomach more than a Tarantino film.

Jackson peeled off skin to reveal the offal of human nature. Steinberg’s experiments reveal the offal of Nature. I’d expect humaneness of Jackson’s villagers and nonnegativity of probabilities. But what looks like a probability and smells like a probability might be hiding its odor with Special-Edition Autumn-Harvest Febreeze.


A quantum state resembles a set of classical3 probabilities. Consider a classical system that has too many components for us to track them all. Consider, for example, the cold breath on the back of your neck. The breath consists of air molecules at some temperature T. Suppose we measured the molecules’ positions and momenta. We’d have some probability p_1 of finding this particle here with this momentum, that particle there with that momentum, and so on. We’d have a probability p_2 of finding this particle there with that momentum, that particle here with this momentum, and so on. These probabilities form the air’s state.

We can tell a similar story about a quantum system. Consider the quantum light prepared in a Toronto lab. The light has properties analogous to position and momentum. We can represent the light’s state with a mathematical object similar to the air’s probability density.4 But this probability-like object can sink below zero. We call the object a quasiprobability, denoted by \mu.

If a \mu sinks below zero, the quantum state it represents encodes entanglement. Entanglement is a correlation stronger than any achievable with nonquantum systems. Quantum information scientists use entanglement to teleport information, encrypt messages, and probe the nature of space-time. I usually avoid this cliché, but since Halloween is approaching: Einstein called entanglement “spooky action at a distance.”


Eugene Wigner and others defined quasiprobabilities shortly before Shirley Jackson wrote The Lottery. Quantum opticians use these \mu’s, because quantum optics and quasiprobabilities involve continuous variables. Examples of continuous variables include position: An air molecule can sit at this point (e.g., x = 0) or at that point (e.g., x = 1) or anywhere between the two (e.g., x = 0.001). The possible positions form a continuous set. Continuous variables model quantum optics as they model air molecules’ positions.

Information scientists use continuous variables less than we use discrete variables. A discrete variable assumes one of just a few possible values, such as 0 or 1, or trick or treat.


How a quantum-information theorist views Halloween.

Quantum-information scientists study discrete systems, such as electron spins. Can we represent discrete quantum systems with quasiprobabilities \mu as we represent continuous quantum systems? You bet your barmbrack.

Bill Wootters and others have designed quasiprobabilities for discrete systems. Wootters stipulated that his \mu have certain properties. The properties appear in this review.  Most physicists label properties “1,” “2,” etc. or “Prop. 1,” “Prop. 2,” etc. The Wootters properties in this review have labels suited to Halloween.
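For a single qubit, the construction is compact enough to fit in a few lines. A minimal sketch (the phase-point operators below are the standard qubit ones; the two example states are my own choices):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def phase_point(q, p):
    """Phase-point operator A(q,p) on the 2x2 discrete phase space."""
    return (I + (-1)**q * Z + (-1)**p * X + (-1)**(q + p) * Y) / 2

def wigner(rho):
    """W(q,p) = (1/2) Tr[rho A(q,p)]; the four values sum to 1."""
    return np.array([[np.trace(rho @ phase_point(q, p)).real / 2
                      for p in (0, 1)] for q in (0, 1)])

ground = np.array([[1, 0], [0, 0]], dtype=complex)   # |g><g|
n = -np.ones(3) / np.sqrt(3)                         # a Bloch vector of my choosing
magic = (I + n[0] * X + n[1] * Y + n[2] * Z) / 2

print(wigner(ground))   # all entries nonnegative
print(wigner(magic))    # one entry sinks below zero: a negative quasiprobability
```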


Seeing (quasi)probabilities sink below zero feels like biting into an apple that you think has a caramel coating, then tasting blood. Did you eat caramel apples around age six? Caramel apples dislodge baby teeth. When baby teeth fall out, so does blood. Tasting blood can mark growth—as does the squeamishness induced by a colloquium that spooks a student. Who needs haunted mansions when you have negative quasiprobabilities?

 

For nonexperts:

1Weekly research presentations attended by a department.

2Light.

3Nonquantum (basically).

4Think “set of probabilities.”

Tripping over my own inner product

A scrape stood out on the back of my left hand. The scrape had turned greenish-purple, I noticed while opening the lecture-hall door. I’d jounced the hand against my dining-room table while standing up after breakfast. The table’s corners form ninety-degree angles. The backs of hands do not.

Earlier, when presenting a seminar, I’d forgotten to reference papers by colleagues. Earlier, I’d offended an old friend without knowing how. Some people put their feet in their mouths. I felt liable to swallow a clog.

The lecture was for Ph 219: Quantum Computation. I was TAing (working as a teaching assistant for) the course. John Preskill was discussing quantum error correction.

Computers suffer from errors as humans do: Imagine setting a hard drive on a table. Coffee might spill on the table (as it probably would have if I’d been holding a mug near the table that week). If the table is in my California dining room, an earthquake might judder the table. Juddering bangs the hard drive against the wood, breaking molecular bonds and deforming the hardware. The information stored in computers degrades.

How can we protect information? By encoding it—by translating the message into a longer, redundant message. An earthquake might judder the encoded message. We can reverse some of the damage by error-correcting.

Different types of math describe different codes. John introduced a type of math called symplectic vector spaces. “Symplectic vector space” sounds to me like a garden of spiny cacti (on which I’d probably have pricked fingers that week). Symplectic vector spaces help us translate between the original and encoded messages.


Symplectic vector space?

Say that an earthquake has juddered our hard drive. We want to assess how the earthquake corrupted the encoded message and to error-correct. Our encoding scheme dictates which operations we should perform. Each possible operation, we represent with a mathematical object called a vector. A vector can take the form of a list of numbers.

We construct the code’s vectors like so. Say that our quantum hard drive consists of seven phosphorus nuclei atop a strip of silicon. Each nucleus has two observables, or measurable properties. Let’s call the observables Z and X.

Suppose that we should measure the first nucleus’s Z. The first number in our symplectic vector is 1. If we shouldn’t measure the first nucleus’s Z, the first number is 0. If we should measure the second nucleus’s Z, the second number is 1; if not, 0; and so on for the other nuclei. We’ve assembled the first seven numbers in our vector. The final seven numbers dictate which nuclei’s Xs we measure. An example vector looks like this: ( 1, \, 0, \, 1, \, 0, \, 1, \, 0, \, 1 \; | \; 0, \, 0, \, 0, \, 0, \, 0, \, 0, \, 0 ).

The vector dictates that we measure four Zs and no Xs.


Symplectic vectors represent the operations we should perform to correct errors.

A vector space is a collection of vectors. Many problems—not only codes—involve vector spaces. Have you used Google Maps? Google illustrates the step that you should take next with an arrow. We can represent that arrow with a vector. A vector, recall, can take the form of a list of numbers. The step’s list of two numbers1 indicates whether you should walk ( \text{Northward or not} \; | \; \text{Westward or not} ).


I’d forgotten about my scrape by this point in the lecture. John’s next point wiped even cacti from my mind.

Say you want to know how similar two vectors are. You usually calculate an inner product. A vector v tends to have a large inner product with any vector w that points parallel to v.


Parallel vectors tend to have a large inner product.

The vector v tends to have an inner product of zero with any vector w that points perpendicularly. Such v and w are said to annihilate each other. By the end of a three-hour marathon of a research conversation, we might say that v and w “destroy” each other. v is orthogonal to w.


Two orthogonal vectors, having an inner product of zero, annihilate each other.

You might expect a vector v to have a huge inner product with itself, since v points parallel to v. Quantum-code vectors defy expectations. In a symplectic vector space, John said, “you can be orthogonal to yourself.”

A symplectic vector2 can annihilate itself, destroy itself, stand in its own way. A vector can oppose itself, contradict itself, trip over its own feet. I felt like I was tripping over my feet that week. But I’m human. A vector is a mathematical ideal. If a mathematical ideal could be orthogonal to itself, I could allow myself space to err.
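In code, the self-annihilation is a one-liner. A minimal sketch (the helper names are mine) of binary symplectic vectors (z-part | x-part) over {\mathbb Z}_2:

```python
import numpy as np

n = 7                                            # seven phosphorus nuclei
v = np.array([1,0,1,0,1,0,1,  0,0,0,0,0,0,0])    # the example vector above

def symplectic_inner(u, w):
    """Symplectic inner product: u_z . w_x + u_x . w_z (mod 2)."""
    uz, ux = u[:n], u[n:]
    wz, wx = w[:n], w[n:]
    return int(uz @ wx + ux @ wz) % 2

# Every vector annihilates itself: the product is 2 (v_z . v_x) = 0 mod 2.
w = np.array([1,1,0,0,0,0,0,  0,1,1,0,0,0,0])    # a made-up second example
print(symplectic_inner(v, v), symplectic_inner(w, w))   # 0 0
```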


Tripping over my own inner product.

Lloyd Alexander wrote one of my favorite books, the children’s novel The Book of Three. The novel features a stout old farmer called Coll. Coll admonishes an apprentice who’s burned his fingers: “See much, study much, suffer much.” We smart while growing smarter.

An ant-sized scar remains on the back of my left hand. The scar has been fading, or so I like to believe. I embed references to colleagues’ work in seminar Powerpoints, so that I don’t forget to cite anyone. I apologized to the friend, and I know about symplectic vector spaces. We all deserve space to err, provided that we correct ourselves. Here’s to standing up more carefully after breakfast.


1Not that I advocate for limiting each coordinate to one bit in a Google Maps vector. The two-bit assumption simplifies the example.

2Not only symplectic vectors are orthogonal to themselves, John pointed out. Consider a string of bits that contains an even number of ones. Examples include (0, 0, 0, 0, 1, 1). Each such string has a bit-wise inner product, over the field {\mathbb Z}_2, of zero with itself.

Greg Kuperberg’s calculus problem

“How good are you at calculus?”

This was the opening sentence of Greg Kuperberg’s Facebook status on July 4th, 2016.

“I have a joint paper (on isoperimetric inequalities in differential geometry) in which we need to know that

(\sin\theta)^3 xy + ((\cos\theta)^3 -3\cos\theta +2) (x+y) - (\sin\theta)^3-6\sin\theta -6\theta + 6\pi \\ \\- 6\arctan(x) +2x/(1+x^2) -6\arctan(y) +2y/(1+y^2)

is non-negative for x and y non-negative and \theta between 0 and \pi. Also, the minimum only occurs for x=y=1/\tan(\theta/2).”

Let’s take a moment to appreciate the complexity of the mathematical statement above. It is a non-linear inequality in three variables, mixing trigonometry with algebra and throwing in some arc-tangents for good measure. Greg continued:

“We proved it, but only with the aid of symbolic algebra to factor an algebraic variety into irreducible components. The human part of our proof is also not really a cake walk.

A simpler proof would be way cool.”

I was hooked. The cubic terms looked a little intimidating, but if I converted x and y into \tan(\theta_x) and \tan(\theta_y), respectively, as one of the comments on Facebook promptly suggested, I could at least get rid of the annoying arc-tangents and then calculus and trigonometry would take me the rest of the way. Greg replied to my initial comment outlining a quick route to the proof: “Let me just caution that we found the problem unyielding.” Hmm… Then, Greg revealed that the paper containing the original proof was over three years old (had he been thinking about this since then? that’s what true love must be like.) Titled “The Cartan-Hadamard Conjecture and The Little Prince”, the above inequality makes its appearance as Lemma 7.1 on page 45 (of 63). To quote the paper: “Although the lemma is evident from contour plots, the authors found it surprisingly tricky to prove rigorously.”

As I filled pages of calculations and memorized every trigonometric identity known to man, I realized that Greg was right: the problem was highly intractable. The quick solution that was supposed to take me two to three days turned into two weeks of hell, until I decided to drop the original approach and stick to doing calculus with the known unknowns, x and y. The next week led me to a set of three non-linear equations mixing trigonometric functions with fourth powers of x and y, at which point I thought of giving up. I knew what I needed to do to finish the proof, but it looked freaking insane. Still, like the masochist that I am, I continued calculating away until my brain was mush. And then, yesterday, during a moment of clarity, I decided to go back to one of the three equations and rewrite it in a different way. That is when I noticed the error. I had solved for \cos\theta in terms of x and y, but I had made a mistake that had cost me 10 days of intense work with no end in sight. Once I found the mistake, the whole proof came together within about an hour. At that moment, I felt a mix of happiness (duh), but also sadness, as if someone I had grown fond of no longer had a reason to spend time with me and, at the same time, I had run out of made-up reasons to hang out with them. But, yeah, I mostly felt happiness.


Greg Kuperberg pondering about the universe of mathematics.

Before I present the proof below, I want to take a moment to say a few words about Greg, whom I consider to be the John Preskill of mathematics: a lodestar of sanity in a sea of hyperbole (to paraphrase Scott Aaronson). When I started grad school at UC Davis back in 2003, quantum information theory and quantum computing were becoming “a thing” among some of the top universities around the US. So, I went to several of the mathematics faculty in the department asking if there was a course on quantum information theory I could take. The answer was to “read Nielsen and Chuang and then go talk to Professor Kuperberg”. Being a foolish young man, I skipped the first part and went straight to Greg to ask him to teach me (and four other brave souls) quantum “stuff”. Greg obliged with a course on… quantum probability and quantum groups. Not what I had in mind. This guy was hardcore. Needless to say, the five brave souls taking the class (mostly fourth year graduate students and me, the noob) quickly became three, then two gluttons for punishment (the other masochist became one of my best friends in grad school). I could not drop the class, not because I had asked Greg to do this as a favor to me, but because I knew that I was in the presence of greatness (or maybe it was Stockholm syndrome). My goal then, as an aspiring mathematician, became to one day have a conversation with Greg where, for some brief moment, I would not sound stupid. A man of incredible intelligence, Greg is that rare individual whose character matches his intellect. Much like the anti-heroes portrayed by Humphrey Bogart in Casablanca and the Maltese Falcon, Greg keeps a low-profile, seems almost cynical at times, but in the end, he works harder than everyone else to help those in need. For example, on MathOverflow, a question and answer website for professional mathematicians around the world, Greg is listed as one of the top contributors of all time.

But, back to the problem. The past four weeks thinking about it have oscillated between phases of “this is the most fun I’ve had in years!” to “this is Greg’s way of telling me I should drop math and become a go-go dancer”. Now that the ordeal is over, I can confidently say that the problem is anything but “dull” (which is how Greg felt others on MathOverflow would perceive it, so he never posted it there). In fact, if I ever have to teach Calculus, I will subject my students to the step-by-step proof of this problem. OK, here is the proof. This one is for you Greg. Thanks for being such a great role model. Sorry I didn’t get to tell you until now. And you are right not to offer a “bounty” for the solution. The journey (more like, a trip to Mordor and back) was all the money.

The proof: The first thing to note (and if I had read Greg’s paper earlier than today, I would have known as much weeks ago) is that the following equality holds (which can be verified quickly by differentiating both sides):

4 x - 6\arctan(x) +2x/(1+x^2) = 4 \int_0^x \frac{s^4}{(1+s^2)^2} ds.

Using the above equality (and the equivalent one for y), we get:

F(\theta,x,y) = (\sin\theta)^3 xy + ((\cos\theta)^3 -3\cos\theta -2) (x+y) - (\sin\theta)^3-6\sin\theta -6\theta + 6\pi \\ \\ + 4 \int_0^x \frac{s^4}{(1+s^2)^2} ds+4 \int_0^y \frac{s^4}{(1+s^2)^2} ds.

Now comes the fun part. We differentiate with respect to \theta, x and y, and set to zero to find all the maxima and minima of F(\theta,x,y) (though we are only interested in the global minimum, which is supposed to be at x=y=\tan^{-1}(\theta/2)). Some high-school level calculus yields:

\partial_\theta F(\theta,x,y) = 0 \implies \sin^2(\theta) (\cos(\theta) xy + \sin(\theta)(x+y)) = \\ \\ 2 (1+\cos(\theta))+\sin^2(\theta)\cos(\theta).

At this point, the most well-known trigonometric identity of all time, \sin^2(\theta)+\cos^2(\theta)=1, can be used to show that the right-hand-side can be re-written as:

2(1+\cos(\theta))+\sin^2(\theta)\cos(\theta) = \sin^2(\theta) (\cos\theta \tan^{-2}(\theta/2) + 2\sin\theta \tan^{-1}(\theta/2)),

where I used (my now favorite) trigonometric identity: \tan^{-1}(\theta/2) = (1+\cos\theta)/\sin(\theta) (note to the reader: \tan^{-1}(\theta) = \cot(\theta)). Putting it all together, we now have the very suggestive condition:

\sin^2(\theta) (\cos(\theta) (xy-\tan^{-2}(\theta/2)) + \sin(\theta)(x+y-2\tan^{-1}(\theta/2))) = 0,

noting that, despite appearances, \theta = 0 is not a solution (as can be checked from the original form of this equality, unless x and y are infinite, in which case the expression is clearly non-negative, as we show towards the end of this post). This leaves us with \theta = \pi and

\cos(\theta) (\tan^{-2}(\theta/2)-xy) = \sin(\theta)(x+y-2\tan^{-1}(\theta/2)),

as candidates for where the minimum may be. A quick check shows that:

F(\pi,x,y) = 4 \int_0^x \frac{s^4}{(1+s^2)^2} ds+4 \int_0^y \frac{s^4}{(1+s^2)^2} ds \ge 0,

since x and y are non-negative. The following obvious substitution becomes our greatest ally for the rest of the proof:

x= \alpha \tan^{-1}(\theta/2), \, y = \beta \tan^{-1}(\theta/2).

Substituting the above in the remaining condition for \partial_\theta F(\theta,x,y) = 0, and using again that \tan^{-1}(\theta/2) = (1+\cos\theta)/\sin\theta, we get:

\cos\theta (1-\alpha\beta) = (1-\cos\theta) ((\alpha-1) + (\beta-1)),

which can be further simplified to (if you are paying attention to minus signs and don’t waste a week on a wild-goose chase like I did):

\cos\theta = \frac{1}{1-\beta}+\frac{1}{1-\alpha}.

As Greg loves to say, we are finally cooking with gas. Note that the expression is symmetric in \alpha and \beta, which should be obvious from the symmetry of F(\theta,x,y) in x and y. That observation will come in handy when we take derivatives with respect to x and y now. Factoring (\cos\theta)^3 -3\cos\theta -2 = - (1+\cos\theta)^2(2-\cos\theta), we get:

\partial_x F(\theta,x,y) = 0 \implies \sin^3(\theta) y + 4\frac{x^4}{(1+x^2)^2} = (1+\cos\theta)^2 + \sin^2\theta (1+\cos\theta).

Substituting x and y with \alpha \tan^{-1}(\theta/2), \beta \tan^{-1}(\theta/2), respectively and using the identities \tan^{-1}(\theta/2) = (1+\cos\theta)/\sin\theta and \tan^{-2}(\theta/2) = (1+\cos\theta)/(1-\cos\theta), the above expression simplifies significantly to the following expression:

4\alpha^4 =\left((\alpha^2-1)\cos\theta+\alpha^2+1\right)^2 \left(1 + (1-\beta)(1-\cos\theta)\right).

Using \cos\theta = \frac{1}{1-\beta}+\frac{1}{1-\alpha}, which we derived earlier by looking at the extrema of F(\theta,x,y) with respect to \theta, and noting that the global minimum would have to be an extremum with respect to all three variables, we get:

4\alpha^4 (1-\beta) = \alpha (\alpha-1) (1+\alpha + \alpha(1-\beta))^2,

where we used 1 + (1-\beta)(1-\cos\theta) = \alpha (1-\beta) (\alpha-1)^{-1} and

(\alpha^2-1)\cos\theta+\alpha^2+1 = (\alpha+1)((\alpha-1)\cos\theta+1)+\alpha(\alpha-1) = \\ (\alpha-1)(1-\beta)^{-1} (2\alpha + 1-\alpha\beta).

We may assume, without loss of generality, that x \ge y. If \alpha = 0, then \alpha = \beta = 0, which leads to the contradiction \cos\theta = 2, unless the other condition, \theta = \pi, holds, which leads to F(\pi,0,0) = 0. Dividing through by \alpha and re-writing 4\alpha^3(1-\beta) = 4\alpha(1+\alpha)(\alpha-1)(1-\beta) + 4\alpha(1-\beta), yields:

4\alpha (1-\beta) = (\alpha-1) (1+\alpha - \alpha(1-\beta))^2 = (\alpha-1)(1+\alpha\beta)^2,

which can be further modified to:

4\alpha +(1-\alpha\beta)^2 = \alpha (1+\alpha\beta)^2,

and, similarly for \beta (due to symmetry):

4\beta +(1-\alpha\beta)^2 = \beta (1+\alpha\beta)^2.

Subtracting the two equations from each other, we get:

4(\alpha-\beta) = (\alpha-\beta)(1+\alpha\beta)^2,

which implies that \alpha = \beta and/or \alpha\beta =1. The first leads to 4\alpha (1-\alpha) = (\alpha-1)(1+\alpha^2)^2, which immediately implies \alpha = 1 = \beta (since the left and right side of the equality have opposite signs otherwise). The second one implies that either \alpha+\beta =2, or \cos\theta =1, which follows from the earlier equation \cos\theta (1-\alpha\beta) = (1-\cos\theta) ((\alpha-1) + (\beta-1)). If \alpha+\beta =2 and 1 = \alpha\beta, it is easy to see that \alpha=\beta=1 is the only solution by expanding (\sqrt{\alpha}-\sqrt{\beta})^2=0. If, on the other hand, \cos\theta = 1, then looking at the original form of F(\theta,x,y), we see that F(0,x,y) = 6\pi - 6\arctan(x) +2x/(1+x^2) -6\arctan(y) +2y/(1+y^2) \ge 0, since x,y \ge 0 \implies \arctan(x)+\arctan(y) \le \pi.

And that concludes the proof, since the only cases for which all three conditions are met lead to \alpha = \beta = 1 and, hence, x=y=\tan^{-1}(\theta/2). The minimum of F(\theta, x,y) at these values is always zero. That’s right, all this work to end up with “nothing”. But, at least, the last four weeks have been anything but dull.
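If you would rather trust silicon before wading through the algebra, here is a numerical sanity check (a sketch, not a substitute for the proof; the grid sizes are arbitrary):

```python
import numpy as np

def F(theta, x, y):
    """The quantity from Lemma 7.1, in its original (arctan) form."""
    s, c = np.sin(theta), np.cos(theta)
    return (s**3 * x * y + (c**3 - 3*c + 2) * (x + y)
            - s**3 - 6*s - 6*theta + 6*np.pi
            - 6*np.arctan(x) + 2*x / (1 + x**2)
            - 6*np.arctan(y) + 2*y / (1 + y**2))

theta, x, y = np.meshgrid(np.linspace(0.05, np.pi, 60),
                          np.linspace(0.0, 20.0, 60),
                          np.linspace(0.0, 20.0, 60))
print(F(theta, x, y).min())     # nonnegative (up to rounding) on the whole grid
print(F(np.pi / 2, 1.0, 1.0))   # ~0: the minimum at x = y = cot(pi/4) = 1
```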

Update: Greg offered Lemma 7.4 from the same paper as another challenge (the sines, cosines and tangents are now transformed into hyperbolic trigonometric functions, with a few other changes, mostly in signs, thrown in there). This is a more hardcore-looking inequality, but the proof turns out to follow the steps of Lemma 7.1 almost identically. In particular, all the conditions for extrema are exactly the same, with the only difference being that cosine becomes hyperbolic cosine. It is an awesome exercise in calculus to check this for yourself. Do it. Unless you have something better to do.

Bringing the heat to Cal State LA

John Baez is a tough act to follow.

The mathematical physicist presented a colloquium at Cal State LA this May.1 The talk’s title: “My Favorite Number.” The advertisement image: A purple “24” superimposed atop two egg cartons.


The colloquium concerned string theory. String theorists attempt to reconcile Einstein’s general relativity with quantum mechanics. Relativity concerns the large and the fast, like the sun and light. Quantum mechanics concerns the small, like atoms. Relativity and quantum mechanics individually suggest that space-time consists of four dimensions: up-down, left-right, forward-backward, and time. String theory suggests that space-time has more than four dimensions. Counting dimensions leads theorists to John Baez’s favorite number.

His topic struck me as bold, simple, and deep. As an otherworldly window onto the pedestrian. John Baez became, when I saw the colloquium ad, a hero of mine.

And a tough act to follow.

I presented Cal State LA’s physics colloquium the week after John Baez. My title: “Quantum steampunk: Quantum information applied to thermodynamics.” Steampunk is a literary, artistic, and film genre. Stories take place during the 1800s—the Victorian era; the Industrial era; an age of soot, grime, innovation, and adventure. Into the 1800s, steampunkers transplant modern and beyond-modern technologies: automata, airships, time machines, etc. Example steampunk works include Will Smith’s 1999 film Wild Wild West. Steampunk weds the new with the old.

So does quantum information applied to thermodynamics. Thermodynamics budded off from the Industrial Revolution: The steam engine crowned industrial technology. Thinkers wondered how efficiently engines could run. Thinkers continue to wonder. But the steam engine no longer crowns technology; quantum physics (with other discoveries) does. Quantum information scientists study the roles of information, measurement, and correlations in heat, energy, entropy, and time. We wed the new with the old.


What image could encapsulate my talk? I couldn’t lean on egg cartons. I proposed a steampunk warrior—cravatted, begoggled, and spouting electricity. The proposal met with a polite cough of an email. Not all department members, Milan Mijic pointed out, had heard of steampunk.


Milan is a Cal State LA professor and my erstwhile host. We toured the palm-speckled campus around colloquium time. What, he asked, can quantum information contribute to thermodynamics?

Heat offers an example. Imagine a classical (nonquantum) system of particles. The particles carry kinetic energy, or energy of motion: They jiggle. Particles that bump into each other can exchange energy. We call that energy heat. Heat vexes engineers, breaking transistors and lowering engines’ efficiencies.

Like heat, work consists of energy. Work has more “orderliness” than the heat transferred by random jiggles. Examples of work exertion include the compression of a gas: A piston forces the particles to move in one direction, in concert. Consider, as another example, driving electrons around a circuit with an electric field. The field forces the electrons to move in the same direction. Work and heat account for all the changes in a system’s energy. So states the First Law of Thermodynamics.

Suppose that the system is quantum. It doesn’t necessarily have a well-defined energy. But we can stick the system in an electric field, and the system can exchange motional-type energy with other systems. How should we define “work” and “heat”?

Quantum information offers insights, such as via entropies. Entropies quantify how “mixed” or “disordered” states are. Disorder grows as heat suffuses a system. Entropies help us extend the First Law to quantum theory.
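For example, the von Neumann entropy vanishes for a pure (perfectly ordered) state and is maximal for the maximally mixed state. A minimal sketch (the function name is mine):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) = -Tr[rho log rho], in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # by convention, 0 log 0 = 0
    return max(0.0, float(-np.sum(evals * np.log2(evals))))

pure = np.array([[1, 0], [0, 0]])         # a pure qubit state: no disorder
mixed = np.eye(2) / 2                     # maximally mixed: maximal disorder
print(entropy(pure), entropy(mixed))      # 0.0 1.0
```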


So I explained during the colloquium. Rarely have I relished engaging with an audience as much as I relished engaging with Cal State LA’s. Attendees made eye contact, posed questions, commented after the talk, and wrote notes. A student in a corner appeared to be writing homework solutions. But a presenter couldn’t have asked for more from the rest. One exclamation arrested me like a coin in the cogs of a grandfather clock.

I’d peppered my slides with steampunk art: paintings, drawings, stills from movies. The peppering had staved off boredom as I’d created the talk. I hoped that the peppering would stave off my audience’s boredom. I apologized about the trimmings.

“No!” cried a woman near the front. “It’s lovely!”

I was about to discuss experiments by Jukka Pekola’s group. Pekola’s group probes quantum thermodynamics using electronic circuits. The group measures heat by counting the electrons that hop from one part of the circuit to another. Single-electron transistors track tunneling (quantum movements) of single particles.

Heat complicates engineering, calculations, and California living. Heat scrambles signals, breaks devices, and lowers efficiencies. Quantum heat can evade definition. Thermodynamicists grind their teeth over heat.

“No!” the woman near the front had cried. “It’s lovely!”

She was referring to steampunk art. But her exclamation applied to my subject. Heat has not only practical importance, but also fundamental: Heat influences every law of thermodynamics. Thermodynamic law underpins much of physics as 24 underpins much of string theory. Lovely, I thought, indeed.

Cal State LA offered a new view of my subfield, an otherworldly window onto the pedestrian. The more pedestrian an idea—the more often the idea surfaces, the more of our world the idea accounts for—the deeper the physics. Heat seems as pedestrian as a Pokémon Go player. But maybe, someday, I’ll present an idea as simple, bold, and deep as the number 24.


A window onto Cal State LA.

With gratitude to Milan Mijic, and to Cal State LA’s Department of Physics and Astronomy, for their hospitality.

1For nonacademics: A typical physics department hosts several presentations per week. A seminar relates research that the speaker has undertaken. The audience consists of department members who specialize in the speaker’s subfield. A department’s astrophysicists might host a Monday seminar; its quantum theorists, a Wednesday seminar; etc. One colloquium happens per week. Listeners gather from across the department. The speaker introduces a subfield, like the correction of errors made by quantum computers. Course lectures target students. Endowed lectures, often named after donors, target researchers.

What matters to me, and why?

Students at my college asked every Tuesday. They gathered in a white, windowed room near the center of campus. “We serve,” read advertisements, “soup, bread, and food for thought.” One professor or visitor would discuss human rights, family,  religion, or another pepper in the chili of life.

I joined occasionally. I listened by the window, in the circle of chairs that ringed the speaker. Then I ventured from college into physics.

The questions “What matters to you, and why?” have chased me through physics. I ask experimentalists and theorists, professors and students: Why do you do science? Which papers catch your eye? Why have you devoted to quantum information more years than many spouses devote to marriages?

One physicist answered with another question. Chris Jarzynski works as a professor at the University of Maryland. He studies statistical mechanics—how particles typically act and how often particles act atypically; how materials shine, how gases push back when we compress them, and more.

“How,” Chris asked, “should we quantify precision?”

Chris had in mind nonequilibrium fluctuation theorems. Out-of-equilibrium systems have large-scale properties, like temperature, that change significantly.1 Examples include white-bean soup cooling at a “What matters” lunch. The soup’s temperature drops to room temperature as the system approaches equilibrium.


Nonequilibrium. Tasty, tasty nonequilibrium.

Some out-of-equilibrium systems obey fluctuation theorems. Fluctuation theorems are equations derived in statistical mechanics. Imagine a DNA molecule floating in a watery solution. Water molecules buffet the strand, which twitches. But the strand’s shape doesn’t change much. The DNA is in equilibrium.

You can grab the strand’s ends and stretch them apart. The strand will leave equilibrium as its length changes. Imagine pulling the strand to some predetermined length. You’ll have exerted energy.

How much? The amount will vary if you repeat the experiment. Why? This trial began with the DNA curled this way; that trial began with the DNA curled that way. During this trial, the water batters the molecule more; during that trial, less. These discrepancies block us from predicting how much energy you’ll exert. But suppose you pick a number W. We can form predictions about the probability that you’ll have to exert an amount W of energy.

How do we predict? Using nonequilibrium fluctuation theorems.

Fluctuation theorems matter to me, as Quantum Frontiers regulars know. Why? Because I’ve written enough fluctuation-theorem articles to test even a statistical mechanic’s patience. More seriously, why do fluctuation theorems matter to me?

Fluctuation theorems fill a gap in the theory of statistical mechanics. Fluctuation theorems relate nonequilibrium processes (like the cooling of soup) to equilibrium systems (like room-temperature soup). Physicists can model equilibrium. But we know little about nonequilibrium. Fluctuation theorems bridge from the known (equilibrium) to the unknown (nonequilibrium).


Experiments take place out of equilibrium. (Stretching a DNA molecule changes the molecule’s length.) So we can measure properties of nonequilibrium processes. We can’t directly measure properties of equilibrium processes, because we can’t perform such processes experimentally. But we can measure an equilibrium property indirectly: We perform nonequilibrium experiments, then plug our data into fluctuation theorems.


Which equilibrium property can we infer about? A free-energy difference, denoted by ΔF. Every equilibrated system (every room-temperature soup) has a free energy F. F represents the energy that the system can exert, such as the energy available to stretch a DNA molecule. Imagine subtracting one system’s free energy, F1, from another system’s free energy, F2. The subtraction yields a free-energy difference, ΔF = F2 – F1. We can infer the value of a ΔF from experiments.

How should we evaluate those experiments? Which experiments can we trust, and which need repeating?

Those questions mattered little to me, before I met Chris Jarzynski. Bridging equilibrium with nonequilibrium mattered to me, and bridging theory with experiment. Not experimental nitty-gritty.

I deserved a dunking in white-bean soup.


Suppose you performed infinitely many trials—stretched a DNA molecule infinitely many times. In each trial, you measured the energy exerted. You processed your data, then substituted into a fluctuation theorem. You could infer the exact value of ΔF.

But we can’t perform infinitely many trials. Imprecision mars our inference about ΔF. How does the imprecision relate to the number of trials performed?2

Chris and I adopted an information-theoretic approach. We quantified precision with a parameter \delta. Suppose you want to estimate ΔF with some precision. How many trials should you expect to need to perform? We bounded the number N_\delta of trials, using an entropy. The bound tightens an earlier estimate of Chris’s. If you perform N_\delta trials, you can estimate ΔF with a percent error that we estimated. We illustrated our results by modeling a gas.
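To get a feel for why finitely many trials hurt, here is a toy version of the estimation (a sketch with a Gaussian work distribution of my own choosing, not the model from our paper). It uses Jarzynski’s equality, \langle e^{-\beta W} \rangle = e^{-\beta \Delta F}:

```python
import numpy as np

rng = np.random.default_rng(42)
beta = 1.0
mu, sigma = 2.0, 1.0                       # Gaussian work distribution (toy model)
delta_F = mu - beta * sigma**2 / 2         # exact DeltaF for this distribution

for N in (10, 1000, 100000):
    W = rng.normal(mu, sigma, size=N)      # N simulated stretching trials
    estimate = -np.log(np.mean(np.exp(-beta * W))) / beta
    print(N, round(estimate, 4), round(abs(estimate - delta_F), 4))
# The error shrinks as N grows, but slowly: the average is dominated by
# rare trials in which little work is exerted.
```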

I’d never appreciated the texture and richness of precision. But richness precision has: A few decimal places distinguish Albert Einstein’s general theory of relativity from Isaac Newton’s 17th-century mechanics. Particle physicists calculate constants of nature to many decimal places. Such a calculation earned a nod on physicist Julian Schwinger’s headstone. Precision serves as the bread and soup of much physics. I’d sniffed the importance of precision, but not tasted it, until questioned by Chris Jarzynski.


The questioning continues. My college has discontinued its “What matters” series. But I ask scientist after scientist—thoughtful human being after thoughtful human being—“What matters to you, and why?” Asking, listening, reading, calculating, and self-regulating sharpen my answers to those questions. My answers often squish beneath the bread knife in my cutlery drawer of criticism. Thank goodness that repeating trials can reduce our errors.


1Or large-scale properties that will change. Imagine connecting the ends of a charged battery with a wire. Charge will flow from terminal to terminal, producing a current. You can measure, every minute, how quickly charge is flowing: You can measure how much current is flowing. The current won’t change much, for a while. But the current will die off as the battery nears depletion. A large-scale property (the current) appears constant but will change. Such a capacity to change characterizes nonequilibrium steady states (NESSes). NESSes form our second example of nonequilibrium states. Many-body localization forms a third, quantum example.

2Readers might object that scientists have tools for quantifying imprecision. Why not apply those tools? Because ΔF equals a logarithm, which is nonlinear. Other authors’ proposals appear in references 1-13 of our paper. Charlie Bennett addressed a related problem with his “acceptance ratio.” (Bennett also blogged about evil on Quantum Frontiers last month.)

Quantum braiding: It’s all in (and on) your head.

Morning sunlight illuminated John Preskill’s lecture notes. The notes concern Caltech’s quantum-computation course, Ph 219. I’m TAing (the teaching assistant for) Ph 219. I previewed lecture material one sun-kissed Sunday.

Pasadena sunlight spilled through my window. So did the howling of a dog that’s deepened my appreciation for Billy Collins’s poem “Another reason why I don’t keep a gun in the house.” My desk space warmed up, and I unbuttoned my jacket. I underlined a phrase, braided my hair so my neck could cool, and flipped a page.

I flipped back. The phrase concerned a mathematical statement called “the Yang-Baxter relation.” A sunbeam had winked on in my mind: The Yang-Baxter relation described my hair.

The Yang-Baxter relation belongs to a branch of math called “topology.” Topology resembles geometry in its focus on shapes. Topologists study spheres, doughnuts, knots, and braids.

Topology describes some quantum physics. Scientists are harnessing this physics to build quantum computers. Alexei Kitaev largely dreamed up the harness. Alexei, a Caltech professor, is teaching Ph 219 this spring.1 His computational scheme works like this.

We can encode information in radio signals, in letters printed on a page, in the pursing of one’s lips as one passes a howling dog’s owner, and in quantum particles. Imagine three particles on a tabletop.


Consider pushing the particles around like peas on a dinner plate. You could push peas 1 and 2 until they swapped places. The swap represents a computation, in Alexei’s scheme.2

The diagram below shows how the peas move. Imagine slicing the figure into horizontal strips. Each strip would show one instant in time. Letting time run amounts to following the diagram from bottom to top.

[Diagram: peas 1 and 2 swap places]

Arrows copied from John Preskill’s lecture notes. Peas added by the author.

Imagine swapping peas 1 and 3.

[Diagram: swapping peas 1 and 3]

Humor me with one more swap, an interchange of 2 and 3.

[Diagram: swapping peas 2 and 3]

Congratulations! You’ve modeled a significant quantum computation. You’ve also braided particles.


The author models a quantum computation.

Let’s recap: You began with peas 1, 2, and 3. You swapped 1 with 2, then 1 with 3, and then 2 with 3. The peas end up ordered oppositely to the way they began: as 3, 2, 1.

You could, instead, morph 1-2-3 into 3-2-1 via a different sequence of swaps. That sequence, or braid, appears below.

[Diagram: the alternative braid]

Congratulations! You’ve begun proving the Yang-Baxter relation. You’ve shown that each braid turns 1-2-3 into 3-2-1.

The relation states also that 1-2-3 is topologically equivalent to 3-2-1: Imagine standing atop pea 2 during the 1-2-3 braiding. You’d see peas 1 and 3 circle around you counterclockwise. You’d see the same circling if you stood atop pea 2 during the 3-2-1 braiding.
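At the coarsest level, forgetting which strand crosses over which, you can check the relation in code. A minimal sketch (permutations only; genuine braids carry extra crossing data):

```python
def swap(order, i):
    """Swap the strands in positions i and i+1 (0-indexed)."""
    order = list(order)
    order[i], order[i + 1] = order[i + 1], order[i]
    return tuple(order)

start = (1, 2, 3)

# One braid: swap positions (1,2), then (2,3), then (1,2)...
a = swap(swap(swap(start, 0), 1), 0)
# ...the other braid: swap positions (2,3), then (1,2), then (2,3).
b = swap(swap(swap(start, 1), 0), 1)

print(a, b, a == b)   # (3, 2, 1) (3, 2, 1) True: both braids reverse the order
```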

That Sunday morning, I looked at John’s swap diagrams. I looked at the hair draped over my left shoulder. I looked at John’s swap diagrams.

“Yang-Baxter relation” might sound, to nonspecialists, like a mouthful of tweed. It might sound like a sneeze in a musty library. But an eight-year-old could grasp half the relation. When I braid my hair, I pass my left hand over the back of my neck. Then, I pass my right hand over. But I could have passed the right hand first, then the left. The braid would have ended the same way. The braidings would look identical to a beetle hiding atop what had begun as the middle hunk of hair.


The Yang-Baxter relation.

I tried to keep reading John’s lecture notes, but the analogy mushroomed. Imagine spinning one pea atop the table.


A 360° rotation returns the pea to its initial orientation. You can’t distinguish the pea’s final state from its first. But a quantum particle’s state can change during a 360° rotation. Physicists illustrate such rotations with corkscrews.

 


A quantum corkscrew (“twisted worldribbon,” in technical jargon)

Like the corkscrews formed as I twirled my hair around a finger. I hadn’t realized that I was fidgeting till I found John’s analysis.


I gave up on his lecture notes as the analogy sprouted legs.

I’ve never mastered the fishtail braid. What computation might it represent? What about the French braid? You begin French-braiding by selecting a clump of hair. You add strands to the clump while braiding. The addition brings to mind particles created (and annihilated) during a topological quantum computation.

Ancient Greek statues wear elaborate hairstyles, replete with braids and twists.  Could you decode a Greek hairdo? Might it represent the first 18 digits in pi? How long an algorithm could you run on Rapunzel’s hair?

Call me one bobby pin short of a bun. But shouldn’t a scientist find inspiration in every fiber of nature? The sunlight spilling through a window illuminates no less than the hair spilling over a shoulder. What grows on a quantum physicist’s head informs what grows in it.

 

1Alexei and John trade off on teaching Ph 219. Alexei recommends the notes that John wrote while teaching in previous years.

2When your mother ordered you to quit playing with your food, you could have objected, “I’m modeling computations!”