My teacher had handwritten the narrative on my twelfth-grade physics midterm. Many mechanics problems involve cars: Drivers smash into each other at this angle or that angle, interlocking their vehicles. The Principle of Conservation of Linear Momentum governs how the wreck moves. Students deduce how quickly the wreck skids, and in which direction.
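
Those collision problems boil down to one conservation law. A minimal sketch in Python (masses and speeds invented for illustration), assuming a perfectly inelastic collision in which the vehicles interlock:

```python
import math

def wreck_velocity(m1, v1, m2, v2):
    """Perfectly inelastic collision: the interlocked wreck moves with
    the total momentum divided by the total mass (conservation of
    linear momentum)."""
    px = m1 * v1[0] + m2 * v2[0]   # total x-momentum
    py = m1 * v1[1] + m2 * v2[1]   # total y-momentum
    vx, vy = px / (m1 + m2), py / (m1 + m2)
    speed = math.hypot(vx, vy)
    angle = math.degrees(math.atan2(vy, vx))  # direction of the skid
    return speed, angle

# A 1000-kg car going 20 m/s east hits a 1500-kg car going 10 m/s north.
speed, angle = wreck_velocity(1000.0, (20.0, 0.0), 1500.0, (0.0, 10.0))
```

From the speed and direction of the skid, students work backward (or forward) just as the exam asks.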

Few mechanics problems involve the second person. *I* have almost reached an intersection?

*You’re late for an event, and stopping would cost you several minutes. What do you do?*

We’re probably a few meters from the light, I thought. How quickly are we driving? I could calculate the acceleration needed to—

*(a) Speed through the red light.*

*(b) Hit the brakes. Fume about missing the light while you wait.*

*(c) Stop in front of the intersection. Chat with your friend, making the most of the situation. Resolve to leave your house earlier next time.*

Pencils scritched, and students shifted in their chairs. I looked up from the choices.

Our classroom differed from most high-school physics classrooms. Sure, posters about Einstein and Nobel prizes decorated the walls. Circuit elements congregated in a corner. But they didn’t draw the eye the moment one stepped through the doorway.

A giant yellow smiley face did.

It sat atop the cupboards that faced the door. Next to the smiley stood a placard that read, “Say please and thank you.” Another placard hung above the chalkboard: “Are you showing your good grace and character?”

Our instructor taught mechanics and electromagnetism. He wanted to teach us more. He pronounced the topic in a southern sing-song: “an *at*titude of *gra*titude.”

Teenagers populate high-school classrooms. The cynicism in a roomful of teenagers could have rivaled the cynicism in Hemingway’s Paris. Students regarded his digressions as oddities. My high school fostered more manners than most. But a “Can you believe…?” tone accompanied recountings of the detours.

Yet our teacher’s drawl held steady as he read students’ answers to a bonus question on a test (“What are you grateful for?”). He bade us gaze at a box of Wheaties—the breakfast of champions—on whose front hung a mirror. He awarded Symbolic Lollipops for the top grades on tests and for acts of kindness. All with a straight face.

Except, once or twice over the years, I thought I saw his mouth tweak into a smile.

I’ve puzzled out momentum problems since graduating from that physics class. I haven’t puzzled out how to regard the class. As mawkish or moral? Heroic or humorous? I might never answer those questions. But the class led me toward a career in physics, and physicists value data. One datum stands out: I didn’t pack my senior-year high-school physics midterm when moving to Pasadena. But the midterm remains with me.


Quantum mechanics is weird! Imagine for a second that you want to run an experiment and that its result depends on what your colleague is doing in the next room. It would be crazy to live in such a world! Yet this is the world we live in, at least at the quantum scale. The result of an experiment cannot be described in a way that is independent of its context. The neighbor is sticking his nose into our experiment!

Before telling you why quantum mechanics is contextual, let me give you an experiment that admits a simple non-contextual explanation. This story takes place in Flatland, a two-dimensional world inhabited by polygons. Our protagonist is a square who became famous after claiming that he met a sphere.

This square, call him Mr Square for convenience, met a sphere, Miss Sphere. When you live in a planar world like Flatland, this kind of encounter is not only rare, it is also quite weird! For the people of Flatland, only the intersection of Miss Sphere’s body with the plane is visible. Depending on the sphere’s position, her shape in Flatland will be a point, a circle, or even nothing at all.

Not convinced by Miss Sphere’s arguments, Mr Square tried to prove that she cannot exist – Square was a mathematician – and failed miserably. Let’s imagine a more realistic story, a story where spheres cannot speak. In this story, Mr Square will be a physicist, familiar with hidden variable models. Mr Square met a sphere, but a tongue-tied sphere! Confronted with this mysterious event, he did what any other citizen of Flatland would have done. He took a selfie with Miss Sphere. Mr Square was kind enough to let us use some of his photos to illustrate our story.

As you can see in these photos, when you are stuck in Flatland and you take a picture of a sphere, only a segment is visible. What aroused Mr Square’s curiosity is the fact that the length of this segment changes constantly. Each picture shows a segment of a different length, due to the movement of the sphere along the z-axis, which is invisible to him. However, although the lengths look random, Mr Square discovered that they can be explained without randomness by introducing a hidden variable living in a hypothetical third dimension. The apparent randomness is simply a consequence of his incomplete knowledge of the system: The position along the hidden-variable axis z is inaccessible! Of course, this is only a model; this third dimension is purely theoretical, and no one from Flatland will ever visit it.
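
Mr Square's model is easy to make concrete. A sketch in Python (radius and heights invented for illustration): the photographed segment is the chord where the sphere meets the plane, a deterministic function of the hidden coordinate $z$.

```python
import math

def visible_length(radius, z):
    """Length of the segment a Flatlander photographs: the chord where a
    sphere of the given radius, centered at hidden height z, meets the
    plane z = 0. Empty when the sphere misses the plane entirely."""
    if abs(z) >= radius:
        return 0.0
    return 2.0 * math.sqrt(radius**2 - z**2)

# The "random" segment lengths are a deterministic function of the hidden z.
lengths = [visible_length(1.0, z) for z in (-1.0, -0.5, 0.0, 0.5)]
```

Fix the hidden variable and the randomness disappears, exactly as in Mr Square's explanation.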

**What about quantum mechanics?**

In the quantum realm, measurement outcomes are random as well. Can we explain the randomness in quantum measurements by a hidden variable? Surprisingly, the answer is no! Von Neumann, one of the greatest scientists of the 20th century, was the first to make this claim, in 1932. His attempt to prove this result is known today as “Von Neumann’s silly mistake”: it was not until 1966 that Bell convinced the community that Von Neumann’s argument relies on a silly assumption.

Consider first a system of a single quantum bit, or qubit. A qubit is a 2-level system. It can be in a ground state $|0\rangle$, in an excited state $|1\rangle$, but also in a quantum superposition $\alpha|0\rangle + \beta|1\rangle$ of these two states, where $\alpha$ and $\beta$ are complex numbers such that $|\alpha|^2 + |\beta|^2 = 1$. We can see this quantum state as a 2-dimensional vector $\binom{\alpha}{\beta}$, where the ground state is $\binom{1}{0}$ and the excited state is $\binom{0}{1}$.

What can we measure about this qubit? First, imagine that we want to know whether our quantum state is in the ground state or in the excited state. There is a quantum measurement that returns a random outcome: $0$ with probability $|\alpha|^2$ and $1$ with probability $|\beta|^2$.

Let us try to reinterpret this measurement in a different way. Inspired by Mr Square’s idea, we extend our description of the state of the system to include the outcome as an extra parameter. In this model, a state is a pair of the form $(|\psi\rangle, \lambda)$, where the hidden variable $\lambda$ is either $0$ or $1$. Our quantum state can be seen as being at position $(|\psi\rangle, 0)$ with probability $|\alpha|^2$ or at position $(|\psi\rangle, 1)$ with probability $|\beta|^2$. Measuring only reveals the value of the hidden variable $\lambda$. By introducing a hidden variable, we have made this measurement deterministic. This proves that the randomness can be moved to the level of the description of the state, just as in Flatland. The weirdness of quantum mechanics goes away.
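
This one-qubit hidden-variable model is simple enough to simulate. A sketch in Python (the Born-rule weight $|\alpha|^2 = 1/4$ is invented for illustration):

```python
import random

def prepare(alpha_sq):
    """All randomness lives in the preparation: the hidden bit lambda is
    set to 0 with probability |alpha|^2 and to 1 otherwise."""
    lam = 0 if random.random() < alpha_sq else 1
    return ((alpha_sq, 1.0 - alpha_sq), lam)

def measure(state):
    """Measurement merely reveals the hidden variable; no randomness
    enters at this step."""
    (_, _), lam = state
    return lam

# Outcome statistics reproduce the Born rule, yet each individual
# measurement is deterministic once the hidden variable is fixed.
freq_one = sum(measure(prepare(0.25)) for _ in range(100_000)) / 100_000
```

Each run of `measure` is deterministic; only `prepare` flips a coin, just as only Miss Sphere's hidden height varied in Flatland.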

**Contextuality of quantum mechanics**

Let us try to extend our hidden variable model to all quantum measurements. We can associate a measurement with a particular kind of matrix $A$, called an observable. Measuring an observable returns randomly one of its eigenvalues. For instance, the Pauli matrices

$$X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$

as well as $Y = iXZ$ and the identity matrix $I$, are 1-qubit observables with eigenvalues (i.e. measurement outcomes) $\pm 1$. Now, take a system of 2 qubits. Since each of the 2 qubits can be either excited or not, our quantum state is a 4-dimensional vector

$$|\psi\rangle = \alpha_{00}|00\rangle + \alpha_{01}|01\rangle + \alpha_{10}|10\rangle + \alpha_{11}|11\rangle.$$

Therein, the 4 vectors $|00\rangle$, $|01\rangle$, $|10\rangle$, $|11\rangle$ can be identified with the vectors of the canonical basis, $(1,0,0,0)^T$ through $(0,0,0,1)^T$. We will consider the measurement of 2-qubit observables of the form $A \otimes B$, defined by $(A \otimes B)(|u\rangle \otimes |v\rangle) = A|u\rangle \otimes B|v\rangle$. In other words, $A$ acts on the first qubit and $B$ acts on the second one. Later, we will look into the observables $X \otimes I$, $I \otimes X$, $Z \otimes I$, $I \otimes Z$, and their products.

What happens when two observables are measured simultaneously? In quantum mechanics, we can measure several observables simultaneously if these observables commute with each other. In that case, measuring $A$ then $B$, or measuring $B$ first and then $A$, makes no difference. Therefore, we say that these observables are measured simultaneously, the outcome being a pair $(a, b)$ composed of an eigenvalue of $A$ and an eigenvalue of $B$. Their product $AB$, which commutes with both $A$ and $B$, can also be measured at the same time. Measuring this triple returns a triple of eigenvalues $(a, b, c)$ corresponding respectively to $A$, $B$ and $AB$. The relation between these three observables imposes the constraint

$$c = ab \qquad (1)$$

on the outcomes.

Assume that one can describe the result of all quantum measurements with a model such that, for all observables $A$ and for all states $\lambda$ of the model, a deterministic outcome $v_\lambda(A)$ exists. Here, $\lambda$ is our ‘extended’, not necessarily physical, description of the state of the system. When $A$ and $B$ commute, it is reasonable to assume that the relation (1) also holds at the level of the hidden variable model, namely

$$v_\lambda(AB) = v_\lambda(A)\, v_\lambda(B). \qquad (2)$$

Such a model is called a non-contextual hidden variable model. Von Neumann proved that no such value $v_\lambda$ exists by considering these relations for all pairs $A$, $B$ of observables. This shows that quantum mechanics is contextual! Hum… Wait a minute. It seems silly to impose such a constraint for all pairs of observables, including those that cannot be measured simultaneously. This is “Von Neumann’s silly assumption”. Only pairs of commuting observables should be considered.

One can resurrect Von Neumann’s argument by assuming Eq.(2) only for commuting observables. The Peres-Mermin square provides an elegant proof of this result. Form a $3 \times 3$ array with these observables:

$$\begin{array}{ccc} X \otimes I & I \otimes X & X \otimes X \\ I \otimes Z & Z \otimes I & Z \otimes Z \\ X \otimes Z & Z \otimes X & Y \otimes Y \end{array}$$

It is constructed in such a way that

(i) The eigenvalues of all the observables in Peres-Mermin’s square are ±1,

(ii) Each row and each column is a triple of commuting observables,

(iii) The last element of each row and each column is the product of the first 2 observables, except in the last column, where $(X \otimes X)(Z \otimes Z) = -\,Y \otimes Y$.

If a non-contextual hidden variable $v_\lambda$ exists, it associates fixed eigenvalues $x_1 = v_\lambda(X \otimes I)$, $x_2 = v_\lambda(I \otimes X)$, $z_1 = v_\lambda(Z \otimes I)$, $z_2 = v_\lambda(I \otimes Z)$ (each either $1$ or $-1$) with the 4 observables $X \otimes I$, $I \otimes X$, $Z \otimes I$, $I \otimes Z$. Applying Eq.(2) to the first 2 rows and to the first 2 columns, one deduces the values of all the observables of the square, except $Y \otimes Y$. Finally, what value should be attributed to $Y \otimes Y$? By (iii), applying Eq.(2) to the last row, one gets $v_\lambda(Y \otimes Y) = x_1 z_2 \cdot z_1 x_2$. However, using the last column, (iii) and Eq.(2) yield the opposite value, $v_\lambda(Y \otimes Y) = -\,x_1 x_2 z_1 z_2$. This is the expected contradiction, proving that no non-contextual value $v_\lambda$ exists. Quantum mechanics is contextual!
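
The contradiction can be checked numerically. A sketch in Python with NumPy, using one standard layout of the square (the layout is an assumption of this sketch): every row multiplies to $+I$, while the columns multiply to $+I$, $+I$, $-I$.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = 1j * X @ Z  # the third Pauli matrix

# One standard layout of the Peres-Mermin square:
square = [[np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
          [np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
          [np.kron(X, Z),  np.kron(Z, X),  np.kron(Y, Y)]]
cols = [[square[r][c] for r in range(3)] for c in range(3)]

def triple_product(a, b, c):
    return a @ b @ c

# (ii) Every row and every column is a triple of mutually commuting observables.
commuting = all(np.allclose(a @ b, b @ a)
                for line in square + cols for a in line for b in line)

# Each row multiplies to +I; each column multiplies to +I except the last,
# which multiplies to -I. That sign mismatch is the contradiction.
row_signs = [np.allclose(triple_product(*row), np.eye(4)) for row in square]
col_signs = [np.allclose(triple_product(*col), np.eye(4)) for col in cols]
```

Multiplying the three row constraints forces the product of all nine assigned values to be $+1$; multiplying the three column constraints forces it to be $-1$. No assignment of $\pm 1$ values can do both.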

^{1,2,3,4,5} Understanding these weird phenomena is one of the first tasks to accomplish.


Steinberg experiments on ultracold atoms and quantum optics^{2} at the University of Toronto. He introduced an idea that reminds me of biting into an apple whose coating you’d thought consisted of caramel, then tasting blood: a negative (quasi)probability.

Probabilities usually range from zero upward. Consider Shirley Jackson’s short story *The Lottery*. Villagers in a 20th-century American village prepare slips of paper. The number of slips equals the number of families in the village. One slip bears a black spot. Each family receives a slip. Each family has a probability $1/N$, where $N$ denotes the number of families, of receiving the marked slip. What happens to the family that receives the black spot? Read Jackson’s story—if you can stomach more than a Tarantino film.

Jackson peeled off skin to reveal the offal of human nature. Steinberg’s experiments reveal the offal of Nature. I’d expect humaneness of Jackson’s villagers and nonnegativity of probabilities. But what looks like a probability and smells like a probability might be hiding its odor with Special-Edition Autumn-Harvest Febreeze.

A quantum state resembles a set of classical^{3} probabilities. Consider a classical system that has too many components for us to track them all. Consider, for example, the cold breath on the back of your neck. The breath consists of air molecules at some temperature $T$. Suppose we measured the molecules’ positions and momenta. We’d have some probability $p_1$ of finding *this* particle *here* with *this* momentum, *that* particle *there* with *that* momentum, and so on. We’d have another probability, $p_2$, of finding *this* particle *there* with *that* momentum, *that* particle *here* with *this* momentum, and so on. These probabilities form the air’s state.

We can tell a similar story about a quantum system. Consider the quantum light prepared in a Toronto lab. The light has properties analogous to position and momentum. We can represent the light’s state with a mathematical object similar to the air’s probability density.^{4} But this probability-like object can sink below zero. We call the object a *quasiprobability*, denoted by $W$.

If a $W$ sinks below zero, the quantum state it represents encodes entanglement. Entanglement is a correlation stronger than any achievable with nonquantum systems. Quantum information scientists use entanglement to teleport information, encrypt messages, and probe the nature of space-time. I usually avoid this cliché, but since Halloween is approaching: Einstein called entanglement “spooky action at a distance.”

Eugene Wigner and others defined quasiprobabilities shortly before Shirley Jackson wrote *The Lottery*. Quantum opticians use these $W$’s because quantum optics and quasiprobabilities involve continuous variables. Examples of continuous variables include position: An air molecule can sit at *this* point (e.g., $x = 0$) or at *that* point (e.g., $x = 1$ m) or anywhere between the two (e.g., $x = 0.5$ m). The possible positions form a continuous set. Continuous variables model quantum optics as they model air molecules’ positions.
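
The post gives no formula, but a standard continuous-variable example shows a $W$ dipping below zero: the Wigner function of an $n$-photon Fock state is $W_n(x,p) = \frac{(-1)^n}{\pi}\, e^{-(x^2+p^2)}\, L_n\!\big(2(x^2+p^2)\big)$ with $\hbar = 1$, where $L_n$ is a Laguerre polynomial. It is negative at the phase-space origin for every odd $n$. A sketch in Python:

```python
import math

def laguerre(n, y):
    """Laguerre polynomial L_n(y) via the standard three-term recurrence."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 - y
    for k in range(1, n):
        prev, cur = cur, ((2 * k + 1 - y) * cur - k * prev) / (k + 1)
    return cur

def wigner_fock(n, x, p):
    """Wigner quasiprobability of the n-photon Fock state (hbar = 1):
    W_n(x, p) = ((-1)^n / pi) * exp(-(x^2 + p^2)) * L_n(2 (x^2 + p^2))."""
    r2 = x * x + p * p
    return ((-1) ** n / math.pi) * math.exp(-r2) * laguerre(n, 2.0 * r2)

w_vacuum = wigner_fock(0, 0.0, 0.0)  # vacuum: an ordinary-looking positive peak
w_photon = wigner_fock(1, 0.0, 0.0)  # one photon: a 'probability' below zero
```

The vacuum's $W$ is an innocent Gaussian; the single photon's $W$ bites back at the origin.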

Information scientists use continuous variables less than we use discrete variables. A discrete variable assumes one of just a few possible values, such as $0$ or $1$, or trick or treat.

Quantum-information scientists study discrete systems, such as electron spins. Can we represent discrete quantum systems with quasiprobabilities as we represent continuous quantum systems? You bet your barmbrack.

Bill Wootters and others have designed quasiprobabilities for discrete systems. Wootters stipulated that his $W$’s have certain properties. The properties appear in this review. Most physicists label properties “1,” “2,” etc. or “Prop. 1,” “Prop. 2,” etc. The Wootters properties in this review have labels suited to Halloween.

Seeing (quasi)probabilities sink below zero feels like biting into an apple that you think has a caramel coating, then tasting blood. Did you eat caramel apples around age six? Caramel apples dislodge baby teeth. When baby teeth fall out, so does blood. Tasting blood can mark growth—as does the squeamishness induced by a colloquium that spooks a student. Who needs haunted mansions when you have negative quasiprobabilities?

*For nonexperts:*

^{1}Weekly research presentations attended by a department.

^{2}Light.

^{3}Nonquantum (basically).

^{4}Think “set of probabilities.”


Earlier, when presenting a seminar, I’d forgotten to reference papers by colleagues. Earlier, I’d offended an old friend without knowing how. Some people put their feet in their mouths. I felt liable to swallow a clog.

The lecture was for Ph 219: Quantum Computation. I was TAing (working as a teaching assistant for) the course. John Preskill was discussing quantum error correction.

Computers suffer from errors as humans do: Imagine setting a hard drive on a table. Coffee might spill on the table (as it probably would have if I’d been holding a mug near the table that week). If the table is in my California dining room, an earthquake might judder the table. Juddering bangs the hard drive against the wood, breaking molecular bonds and deforming the hardware. The information stored in computers degrades.

How can we protect information? By encoding it—by translating the message into a longer message with built-in redundancy. An earthquake might judder the encoded message. We can reverse some of the damage by *error-correcting*.

Different types of math describe different codes. John introduced a type of math called *symplectic vector spaces*. “Symplectic vector space” sounds to me like a garden of spiny cacti (on which I’d probably have pricked fingers that week). Symplectic vector spaces help us translate between the original and encoded messages.

Say that an earthquake has juddered our hard drive. We want to assess how the earthquake corrupted the encoded message and to error-correct. Our encryption scheme dictates which operations we should perform. Each possible operation, we represent with a mathematical object called a *vector*. A vector can take the form of a list of numbers.

We construct the code’s vectors like so. Say that our quantum hard drive consists of seven phosphorus nuclei atop a strip of silicon. Each nucleus has two *observables*, or measurable properties. Let’s call the observables *Z* and *X*.

Suppose that we should measure the first nucleus’s *Z*. The first number in our symplectic vector is 1. If we shouldn’t measure the first nucleus’s *Z*, the first number is 0. If we should measure the second nucleus’s *Z*, the second number is 1; if not, 0; and so on for the other nuclei. We’ve assembled the first seven numbers in our vector. The final seven numbers dictate which nuclei’s *X*s we measure. An example vector looks like this: $(1, 0, 1, 1, 0, 1, 0 \,|\, 0, 0, 0, 0, 0, 0, 0)$.

The vector dictates that we measure four *Z*s and no *X*s.

A *vector space* is a collection of vectors. Many problems—not only codes—involve vector spaces. Have you used Google Maps? Google illustrates the step that you should take next with an arrow. We can represent that arrow with a vector. A vector, recall, can take the form of a list of numbers. The step’s list of two^{1} numbers indicates how far you should walk north and how far you should walk east.

I’d forgotten about my scrape by this point in the lecture. John’s next point wiped even cacti from my mind.

Say you want to know how similar two vectors are. You usually calculate an *inner product*. A vector *v* tends to have a large inner product with any vector *w* that points parallel to *v*.

The vector *v* tends to have an inner product of zero with any vector *w* that points perpendicularly. Such *v* and *w* are said to *annihilate* each other. By the end of a three-hour marathon of a research conversation, we might say that *v* and *w* “destroy” each other. *v* is *orthogonal* to *w*.

You might expect a vector *v* to have a huge inner product with itself, since *v* points parallel to *v*. Quantum-code vectors defy expectations. In a symplectic vector space, John said, “you can be orthogonal to yourself.”

A symplectic vector^{2}^{ }can annihilate itself, destroy itself, stand in its own way. A vector can oppose itself, contradict itself, trip over its own feet. I felt like I was tripping over my feet that week. But I’m human. A vector is a mathematical ideal. If a mathematical ideal could be orthogonal to itself, I could allow myself space to err.
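
A sketch of why every such vector annihilates itself, using the Z-then-X layout described above (the pairing convention below is one common choice for qubit stabilizer codes): the symplectic inner product pairs one vector's *Z*-half with the other's *X*-half, summed modulo 2.

```python
def symplectic_inner(v, w):
    """Symplectic inner product over the two-element field: pair the
    Z-half of one vector with the X-half of the other, and sum mod 2."""
    n = len(v) // 2
    vz, vx = v[:n], v[n:]
    wz, wx = w[:n], w[n:]
    return (sum(a * b for a, b in zip(vz, wx))
            + sum(a * b for a, b in zip(vx, wz))) % 2

# A vector of the kind described: four Z-measurements, no X-measurements.
v = (1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0)
self_product = symplectic_inner(v, v)  # 0: v is orthogonal to itself

# Two vectors that fail to annihilate: Z and X on a single qubit.
z1, x1 = (1, 0), (0, 1)
cross = symplectic_inner(z1, x1)  # 1: Z and X clash
```

Self-orthogonality is automatic: pairing a vector with itself gives $z \cdot x + x \cdot z = 2\,(z \cdot x)$, which is always even.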

Lloyd Alexander wrote one of my favorite books, the children’s novel *The Book of Three*. The novel features a stout old farmer called Coll. Coll admonishes an apprentice who’s burned his fingers: “See much, study much, suffer much.” We smart while growing smarter.

An ant-sized scar remains on the back of my left hand. The scar has been fading, or so I like to believe. I embed references to colleagues’ work in seminar Powerpoints, so that I don’t forget to cite anyone. I apologized to the friend, and I know about symplectic vector spaces. We all deserve space to err, provided that we correct ourselves. Here’s to standing up more carefully after breakfast.

^{1}Not that I advocate for limiting each coordinate to one bit in a Google Maps vector. The two-bit assumption simplifies the example.

^{2}Not only symplectic vectors are orthogonal to themselves, John pointed out. Consider a string of bits that contains an even number of ones. Examples include (0, 0, 0, 0, 1, 1). Each such string has a bit-wise inner product, over the field $\mathbb{Z}_2$, of zero with itself.


There is no set manual for real scientific research, for uncharted territory. Exciting new research has no “right” answer against which to compare your data. And building your own unique experimental set-up inherently requires much time to minimize new issues. It is interesting to me that when there is less guidance from previous research, there is a larger possibility for great new discoveries.

This summer I again retreated from the summer heat, plunging into the Caltech sub-basements to further my understanding of the freshest research, efficient laboratory techniques, and the culture of physics research. The quiet hum of the air conditioner and lights marked an eerie contrast to the non-stop, bustling life of the classroom. It was an even starker contrast to my 16-month-old daughter’s incessant, joyful, curious exploration of the world.

My first project this summer focused on helping to get the SEM (Scanning Electron Microscope) up and running. Once the SEM is functional, the first samples it will scan are solar cells made with graphene nanotubes. If the process were scaled up and mass-produced, methane could be one source of the carbon needed for graphene. What if we captured methane gases that are already, problematically, being released into our greenhouse-gas-ridden atmosphere and used them to make graphene solar cells? What a win-win solution that would be for the daunting problem of global climate change!

Helping to set up the SEM involved a variety of interesting tasks: I found the working distance from the SEM gun to the sample holder that would soon be loaded into the chamber. I researched Pirani gauge parts and, later, rubber pads to help with damping. I helped to install copper ConFlat flanges for making low-pressure seals. We used sonication to clean parts for the SEM lab. We found and installed a nitrogen (N2) line to flush moisture out of the SEM chamber. There were numerous rounds of baking out moisture that may have collected in the chamber in the years since this SEM was last in use.

During “down time”, such as when the SEM chamber was being pumped down to less than one-part-per-billion pressure with multiple vacuum pumps, we directed our attention to two other projects. The first was making parts for the tube scanner head. Due to the possibility of burning out scanner heads in the alignment process when we first turn on the SEM gun, we needed to be prepared with alternative STM parts. This involved drilling, epoxying, baking, sanding, and soldering tiny pieces. A diminutive coaxial cable with multiple insulating layers was stripped apart so that we could properly connect the gold conducting wire from within.

During the last week, I focused my efforts by returning to an interferometer setup in the sub-basement of Sloan. Last summer, part of my time was spent learning about and setting up an interferometer system in order to measure the shift of a piezoelectric stack when particular voltages were applied. Once calibrated, these piezos will be used to control the motion of the tips in our lab’s STM (Scanning Tunneling Microscope). This summer differed because we had additional equipment from Thorlabs that let us move further along with the project.

On the day the much-needed parts arrived, I felt like a child at Christmas. Ready, set, go. Racing against the impending end of the internship and the start of the upcoming academic year, I worked to assemble our equipment.

This same procedure was completed roughly a decade ago by graduate students in our lab. Since then, though, the calibrated piezos have been used up. In order to continue doing important STM measurements, new piezo stacks need to be calibrated.

A ray of red, coherent light from our laser is directed to a beamsplitter. One arm of light travels to a mirror and is reflected back to the beamsplitter. The other arm travels to a mirror fixed upon the piezoelectric stack. Depending on the applied voltage and the particular piezo stack, the orientation and magnitude of the shear vary. A signal generator and amplifier are connected to the opposite end of the piezoelectric stacks to carefully control the voltage signal applied to the piezos. Once the beams recombine at the beamsplitter, they should interfere, and an interference pattern should be detected on the oscilloscope.
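
The expected detector signal can be sketched with the two-beam interference formula (a simplification assuming equal polarizations and perfect alignment; the 632.8-nm HeNe wavelength is an assumption, not a detail from the lab):

```python
import math

def detector_intensity(i1, i2, mirror_shift, wavelength=632.8e-9):
    """Two-beam interference at the photodetector. Moving the piezo-mounted
    mirror by d lengthens that arm's round trip by 2d, shifting the relative
    phase by 2*pi*(2d)/wavelength."""
    phase = 2.0 * math.pi * (2.0 * mirror_shift) / wavelength
    return i1 + i2 + 2.0 * math.sqrt(i1 * i2) * math.cos(phase)

# A quarter-wavelength mirror shift takes the pattern from a bright
# fringe to a dark one, which is what makes the piezo shift measurable.
bright = detector_intensity(1.0, 1.0, 0.0)
dark = detector_intensity(1.0, 1.0, 632.8e-9 / 4.0)
```

Counting fringes as the voltage ramps is, in essence, how the piezo displacement gets calibrated.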

At first it was plain fun setting up the various parts, like fitting together puzzle pieces made of optics devices. The difficulty came later, in troubleshooting. I had little trouble adjusting the set-up so that both beams from the laser landed directly on the photodetector. Getting a beautiful interference pattern was another matter. Making sense of the photodetector’s output signal on the oscilloscope was also a process. Finding joy and benefit in the learning process, as opposed to frustration in a trying time, is an important lesson in life. Of course it is inevitable that there will be difficulties in life. Can we grow from the learning opportunity as opposed to complaining about the struggle?

The irony is that just like my students, I wanted an easy, beautiful interference pattern that could be interpreted on our oscilloscope. I had the opportunity to learn through trial and error and from additional research on interferometers. I look forward to hearing from the lab group about the progress that is made on this project during the academic year while I am in the classroom. I am grateful to IQIM and the Yeh Lab Group for allowing me to continue participating in this exciting program.


One day into my visit, equilibrium shattered my equanimity.

I was participating in the conference “Beyond i.i.d. in information theory.” What “beyond i.i.d.” means is explained in these articles. I was to present about resource theories for thermodynamics. Resource theories are simple models developed in quantum information theory. The original thermodynamic resource theory modeled systems that exchange energy and information.

Imagine a quantum computer built from tiny, cold, quantum circuits. An air particle might bounce off the circuit. The bounce can slow the particle down, transferring energy from particle to circuit. The bounce can *entangle* the particle with the circuit, transferring quantum information from computer to particle.

Suppose that particles bounced off the computer for ages. The computer would thermalize, or reach equilibrium: The computer’s energy would flatline. The computer would reach a state called the *canonical ensemble*. The canonical ensemble looks like this: $e^{-\beta H}/Z$, wherein $H$ denotes the computer’s Hamiltonian, $\beta$ denotes the environment’s inverse temperature, and the partition function $Z$ normalizes the state.
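
For concreteness, here is a minimal sketch of the state just named, in Python with NumPy (the single-qubit Hamiltonian is invented for illustration):

```python
import numpy as np

def canonical_ensemble(hamiltonian, beta):
    """Canonical ensemble exp(-beta * H) / Z, computed in H's eigenbasis.
    Z, the partition function, is the trace of the numerator."""
    energies, states = np.linalg.eigh(hamiltonian)
    boltzmann = np.exp(-beta * energies)
    rho_diag = boltzmann / boltzmann.sum()  # dividing by Z
    return states @ np.diag(rho_diag) @ states.conj().T

# A qubit with energy gap 1, at inverse temperature beta = 1:
H = np.diag([0.0, 1.0])
rho = canonical_ensemble(H, beta=1.0)
```

Lower-energy levels carry more weight, and the weights decay exponentially with energy.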

Joe Renes and I had extended these resource theories. Thermodynamic systems can exchange quantities other than energy and information. Imagine white-bean soup cooling on a stovetop. Gas condenses on the pot’s walls, and liquid evaporates. The soup exchanges not only heat, but also particles, with its environment. Imagine letting the soup cool for ages. It would thermalize to the *grand canonical ensemble*, $e^{-\beta(H - \mu N)}/Z$, wherein $N$ denotes the soup’s particle number and $\mu$ denotes the chemical potential. Joe and I had modeled systems that exchange diverse thermodynamic observables.^{*}

What if, fellow beyond-i.i.d.-er Jonathan Oppenheim asked, those observables didn’t commute with each other?

Mathematical objects called *operators* represent observables. Let $H$ represent a system’s energy, and let $N$ represent the number of particles in the system. The operators fail to commute if multiplying them in one order differs from multiplying them in the opposite order: $HN \neq NH$.
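
Noncommutation is easy to exhibit numerically. A sketch in Python with NumPy, using the spin-1/2 angular-momentum components as a stand-in pair (an illustration, not the operators from the conference):

```python
import numpy as np

# Spin-1/2 angular-momentum components (hbar = 1):
Jx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Jy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Jz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# Multiplying in one order differs from multiplying in the other:
commutator = Jx @ Jy - Jy @ Jx  # nonzero, so Jx and Jy fail to commute

# For angular momentum, the commutator is i times the third component:
matches = np.allclose(commutator, 1j * Jz)
```

The nonzero commutator is exactly the obstruction: no state assigns sharp values to both components at once.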

Suppose that our quantum circuit has observables represented by noncommuting operators $H$ and $N$. The circuit cannot have a well-defined energy and a well-defined particle number simultaneously. Physicists call this inability the *Uncertainty Principle*. Uncertainty and noncommutation infuse quantum mechanics as a Cashmere Glow^{TM} infuses a Downy fabric softener.

I glowed at Jonathan: All the coolness in Canada couldn’t have pleased me more than finding someone interested in that question.^{**} Suppose that a quantum system exchanges observables $Q_1$ and $Q_2$ with the environment. Suppose that $Q_1$ and $Q_2$ don’t commute, like components $J_x$ and $J_y$ of quantum angular momentum. Would the system thermalize? Would the thermal state have the form $e^{-\beta(H - \mu_1 Q_1 - \mu_2 Q_2)}/Z$? Could we model the system with a resource theory?

Jonathan proposed that we chat.

The chat sucked in beyond-i.i.d.-ers Philippe Faist and Andreas Winter. We debated strategies while walking to dinner. We exchanged results on the conference building’s veranda. We huddled over a breakfast table after colleagues had pushed their chairs back. Information flowed from chalkboard to notebook; energy flowed in the form of coffee; food particles flowed onto the floor as we brushed crumbs from our notebooks.

The idea morphed and split. It crystallized months later. We characterized, in three ways, the thermal state of a quantum system that exchanges noncommuting observables with its environment.

First, we generalized the microcanonical ensemble. The *microcanonical ensemble* is the thermal state of an isolated system. An isolated system exchanges no observables with any other system. The quantum computer and the air molecules can form an isolated system. So can the white-bean soup and its kitchen. Our quantum system and its environment form an isolated system. But they cannot necessarily occupy a microcanonical ensemble, thanks to noncommutation.

We generalized the microcanonical ensemble. The generalization involves approximation, unlikely measurement outcomes, and error tolerances. The microcanonical ensemble has a simple definition—sharp and clean as Banff air. We relaxed the definition to accommodate noncommutation. If the microcanonical ensemble resembles laundry detergent, our generalization resembles fabric softener.

Suppose that our system and its environment occupy this approximate microcanonical ensemble. Tracing out (mathematically ignoring) the environment yields the system’s thermal state. The thermal state basically has the form we expected, $e^{-\beta(H - \mu_1 Q_1 - \mu_2 Q_2)}/Z$.

This exponential state, we argued, follows also from time evolution. The white-bean soup equilibrates upon exchanging heat and particles with the kitchen air for ages. Our quantum system can exchange observables with its environment for ages. The system equilibrates, we argued, to the state $e^{-\beta(H - \mu_1 Q_1 - \mu_2 Q_2)}/Z$. The argument relies on a quantum-information tool called *canonical typicality*.

Third, we defined a resource theory for thermodynamic exchanges of noncommuting observables. In a thermodynamic resource theory, the thermal states are the worthless states: From a thermal state, one can’t extract energy usable to lift a weight or to power a laptop. The worthless states, we showed, have the form $e^{-\beta(H - \mu_1 Q_1 - \mu_2 Q_2)}/Z$.

Three paths lead to the form of the thermal state of a quantum system that exchanges noncommuting observables with its environment. We published the results this summer.

Not only was Team Banff spilling coffee over $e^{-\beta(H - \mu_1 Q_1 - \mu_2 Q_2)}/Z$. So were teams at Imperial College London and the University of Bristol. Our conclusions overlap, suggesting that everyone calculated correctly. Our methodologies differ, generating openings for exploration. The mountain passes between our peaks call out for mapping.

So does the path to physical reality. Do these thermal states form in labs? Could they? Cold atoms offer promise for realizations. In addition to experiments and simulations, master equations merit study. Dynamical typicality, Team Banff argued, suggests that $e^{-\beta(H - \mu_1 Q_1 - \mu_2 Q_2)}/Z$ results from equilibration. Master equations model equilibration. Does some Davies-type master equation have $e^{-\beta(H - \mu_1 Q_1 - \mu_2 Q_2)}/Z$ as its fixed point? Email me if you have leads!

A photo of Banff could illustrate Merriam-Webster’s entry for “equanimity.” Banff equanimity deepened our understanding of quantum equilibrium. But we wouldn’t have understood quantum equilibrium if questions hadn’t shattered our tranquility. Give me the disequilibrium of recognizing problems, I pray, and the equilibrium to solve them.

^{*}By “observable,” I mean “property that you can measure.”

^{**}Teams at Imperial College London and Bristol asked that question, too. More pleasing than three times the coolness in Canada!

]]>

The Interagency Working Group on Quantum Information Science (IWG on QIS), which began its work in late 2014, was charged “to assess Federal programs in QIS, monitor the state of the field, provide a forum for interagency coordination and collaboration, and engage in strategic planning of Federal QIS activities and investments.” The IWG recently released a well-crafted report, *Advancing Quantum Information Science: National Challenges and Opportunities*. The report recommends that “quantum information science be considered a priority for Federal coordination and investment.”

All the major US government agencies supporting QIS were represented on the IWG, which was co-chaired by officials from DOE, NSF, and NIST:

- Steve Binkley, who heads the Advanced Scientific Computing Research (ASCR) program in the Department of Energy Office of Science,

- Denise Caldwell, who directs the Physics Division of the National Science Foundation,

- Carl Williams, Deputy Director of the Physical Measurement Laboratory at the National Institute for Standards and Technology.

Denise and Carl have been effective supporters of QIS over many years of government service. Steve has recently emerged as another eloquent advocate for the field’s promise and importance.

At our request, the three co-chairs fielded questions about the report, with the understanding that their responses would be broadly disseminated. Their comments reinforced the message of the report — that all cognizant agencies favor a “coherent, all-of-government approach to QIS.”

Science funding in the US differs from elsewhere in the world. QIS is a prime example — for over 20 years, various US government agencies, each with its own mission, goals, and culture, have had a stake in QIS research. By providing more options for supporting innovative ideas, the existence of diverse autonomous funding agencies can be a blessing. But it can also be bewildering for scientists seeking support, and it poses challenges for formulating and executing effective national science policy. It’s significant that many different agencies worked together in the IWG, and were able to align with a shared vision.

“I think that everybody in the group has the same goals,” Denise told us. “The nation has a tremendous opportunity here. This is a terrifically important field for all of us involved, and we all want to see it succeed.” Carl added, “All of us believe that this is an area in which the US must be competitive, it is very important for both scientific and technological reasons … The differences [among agencies] are minor.”

Asked about the timing of the IWG and its report, Carl noted the recent trend toward “emerging niche applications” of QIS such as quantum sensors, and Denise remarked that government agencies are responding to a plea from industry for a cross-disciplinary work force broadly trained in QIS. At the same time, Denise emphasized, the IWG recognizes that “there are still many open basic science questions that are important for this field, and we need to focus investment onto these basic science questions, as well as look at investments or opportunities that lead into the first applications.”

DOE’s FY2017 budget request includes $10M to fund a new QIS research program, coordinated with NIST and NSF. Steve explained the thinking behind that request: “There are problems in the physical science space, spanned by DOE Office of Science programs, where quantum computation would be a useful tool. This is the time to start making investments in that area.” Asked about the longer term commitment of DOE to QIS research, Steve was cautious. “What it will grow into over time is hard to tell — we’re right at the beginning.”

What can the rest of us in the QIS community do to amplify the impact of the report? Carl advised: “All of us should continue getting the excitement of the field out there, [and point to] the potential long term payoffs, whether they be in searches for dark matter or building better clocks or better GPS systems or better sensors. Making everybody aware of all the potential is good for our economy, for our country, and for all of us.”

Taking an even longer view, Denise reminded us that effective advocacy for QIS can get young people “excited about a field they can work in, where they can get jobs, where they can pursue science — that can be critically important. If we all think back to our own beginning careers, at some point in time we got excited about science. And so whatever one can do to excite the next generation about science and technology, with the hope of bringing them into studying and developing careers in this field, to me this is tremendously valuable.”

All of us in the quantum information science community owe a debt to the IWG for their hard work and eloquent report, and to the agencies they represent for their vision and support. And we are all fortunate to be participating in the early stages of a new quantum revolution. As the IWG report makes clear, the best is yet to come.

]]>

This was the opening sentence of Greg Kuperberg’s Facebook status on July 4th, 2016.

“I have a joint paper (on isoperimetric inequalities in differential geometry) in which we need to know that

is non-negative for x and y non-negative and between and . Also, the minimum only occurs for .”

Let’s take a moment to appreciate the complexity of the mathematical statement above. It is a non-linear inequality in three variables, mixing trigonometry with algebra and throwing in some arc-tangents for good measure. Greg continued:

“We proved it, but only with the aid of symbolic algebra to factor an algebraic variety into irreducible components. The human part of our proof is also not really a cake walk.

A simpler proof would be way cool.”

I was hooked. The cubic terms looked a little intimidating, but if I converted x and y into and , respectively, as one of the comments on Facebook promptly suggested, I could at least get rid of the annoying arc-tangents and then calculus and trigonometry would take me the rest of the way. Greg replied to my initial comment outlining a quick route to the proof: “Let me just caution that we found the problem unyielding.” Hmm… Then, Greg revealed that the paper containing the original proof was over three years old (had he been thinking about this since then? that’s what true love must be like.) Titled “The Cartan-Hadamard Conjecture and The Little Prince”, the above inequality makes its appearance as Lemma 7.1 on page 45 (of 63). To quote the paper: “Although the lemma is evident from contour plots, the authors found it surprisingly tricky to prove rigorously.”

As I filled pages of calculations and memorized every trigonometric identity known to man, I realized that Greg was right: the problem was highly intractable. The quick solution that was supposed to take me two to three days turned into two weeks of hell, until I decided to drop the original approach and stick to doing calculus with the known unknowns, x and y. The next week led me to a set of three non-linear equations mixing trigonometric functions with fourth powers of x and y, at which point I thought of giving up. I knew what I needed to do to finish the proof, but it looked freaking insane. Still, like the masochist that I am, I continued calculating away until my brain was mush. And then, yesterday, during a moment of clarity, I decided to go back to one of the three equations and rewrite it in a different way. That is when I noticed the error. I had solved for in terms of x and y, but I had made a mistake that had cost me 10 days of intense work with no end in sight. Once I found the mistake, the whole proof came together within about an hour. At that moment, I felt a mix of happiness (duh) and sadness, as if someone I had grown fond of no longer had a reason to spend time with me and, at the same time, I had run out of made-up reasons to hang out with them. But, yeah, I mostly felt happiness.

Before I present the proof below, I want to take a moment to say a few words about Greg, whom I consider to be the John Preskill of mathematics: a *lodestar of sanity in a sea of hyperbole* (to paraphrase Scott Aaronson). When I started grad school at UC Davis back in 2003, quantum information theory and quantum computing were becoming “a thing” among some of the top universities around the US. So, I went to several of the mathematics faculty in the department asking if there was a course on quantum information theory I could take. The answer was to “read Nielsen and Chuang and *then* go talk to Professor Kuperberg”. Being a foolish young man, I skipped the first part and went straight to Greg to ask him to teach me (and four other brave souls) quantum “stuff”. Greg obliged with a course on… *quantum probability* and *quantum groups*. Not what I had in mind. This guy was hardcore. Needless to say, the five brave souls taking the class (mostly fourth year graduate students and me, the noob) quickly became three, then two gluttons for punishment (the other masochist became one of my best friends in grad school). I could not drop the class, not because I had asked Greg to do this as a favor to me, but because I knew that I was in the presence of greatness (or maybe it was Stockholm syndrome). My goal then, as an aspiring mathematician, became to one day have a conversation with Greg where, for some brief moment, I would not sound stupid. A man of incredible intelligence, Greg is that rare individual whose character matches his intellect. Much like the anti-heroes portrayed by Humphrey Bogart in *Casablanca* and *The Maltese Falcon*, Greg keeps a low profile, seems almost cynical at times, but in the end, he works harder than everyone else to help those in need. For example, on MathOverflow, a question and answer website for professional mathematicians around the world, Greg is listed as one of the top contributors of all time.

But, back to the problem. The past four weeks thinking about it have oscillated between phases of “this is the most fun I’ve had in years!” to “this is Greg’s way of telling me I should drop math and become a go-go dancer”. Now that the ordeal is over, I can confidently say that the problem is anything but “dull” (which is how Greg felt others on MathOverflow would perceive it, so he never posted it there). In fact, if I ever have to teach Calculus, I will subject my students to the step-by-step proof of this problem. OK, here is the proof. This one is for you Greg. Thanks for being such a great role model. Sorry I didn’t get to tell you until now. And you are right not to offer a “bounty” for the solution. The journey (more like, a trip to Mordor and back) was all the money.

**The proof**: The first thing to note (and if I had read Greg’s paper earlier than today, I would have known as much weeks ago) is that the following equality holds (which can be verified quickly by differentiating both sides):

.

Using the above equality (and the equivalent one for y), we get:

Now comes the fun part. We differentiate with respect to , x and y, and set to zero to find all the maxima and minima of (though we are only interested in the global minimum, which is supposed to be at ). Some high-school level calculus yields:

At this point, the most well-known trigonometric identity of all time, , can be used to show that the right-hand-side can be re-written as:

where I used (my now favorite) trigonometric identity: (note to the reader: ). Putting it all together, we now have the very suggestive condition:

noting that, despite appearances, is not a solution (as can be checked from the original form of this equality, unless and are infinite, in which case the expression is clearly non-negative, as we show towards the end of this post). This leaves us with and

as candidates for where the minimum may be. A quick check shows that:

since x and y are non-negative. The following obvious substitution becomes our greatest ally for the rest of the proof:

Substituting the above in the remaining condition for , and using again that , we get:

which can be further simplified to (if you are paying attention to minus signs and don’t waste a week on a wild-goose chase like I did):

.

As Greg loves to say, *we are finally cooking with gas*. Note that the expression is symmetric in and , which should be obvious from the symmetry of in x and y. That observation will come in handy when we take derivatives with respect to x and y now. Factoring , we get:

Substituting x and y with , respectively and using the identities and the above expression simplifies significantly to the following expression:

Using , which we derived earlier by looking at the extrema of with respect to , and noting that the global minimum would have to be an extremum with respect to all three variables, we get:

where we used and

We may assume, without loss of generality, that . If , then , which leads to the contradiction , unless the other condition, , holds, which leads to . Dividing through by and re-writing , yields:

which can be further modified to:

and, similarly for (due to symmetry):

Subtracting the two equations from each other, we get:

,

which implies that and/or . The first leads to which immediately implies (since the left and right side of the equality have opposite signs otherwise). The second one implies that either , or , which follows from the earlier equation . If and , it is easy to see that is the only solution by expanding . If, on the other hand, , then looking at the original form of , we see that , since .

And that concludes the proof, since the only cases for which all three conditions are met lead to and, hence, . The minimum of at these values is always zero. That’s right, all this work to end up with “nothing”. But, at least, the last four weeks have been anything but dull.

**Update: **Greg offered Lemma 7.4 from the same paper as another challenge (the sines, cosines and tangents are now transformed into hyperbolic trigonometric functions, with a few other changes, mostly in signs, thrown in there). This is a more hardcore-looking inequality, but the proof turns out to follow the steps of Lemma 7.1 almost identically. In particular, all the conditions for extrema are exactly the same, with the only difference being that cosine becomes hyperbolic cosine. It is an awesome exercise in calculus to check this for yourself. Do it. Unless you have something better to do.

]]>

More than 15 years ago, Microsoft decided to jump into the quantum computing business, betting big on topological quantum computing as the next big thing. The new website of Microsoft’s Station Q shows that keeping a low profile is no longer an option. This is a sentiment that Google clearly shared, when back in 2013, they decided to promote their new partnership with NASA Ames and D-Wave, known as the Quantum A.I. Lab, through a YouTube video that went viral (disclosure: they do own YouTube.) In fact, IQIM worked with Google at the time to get kids excited about the quantum world by developing qCraft, a mod introducing quantum physics into the world of Minecraft. Then, a few months ago, IBM unveiled the Quantum Experience website, which captured the public’s imagination by offering a do-it-yourself opportunity to run an algorithm on a 5-qubit quantum chip in the cloud.

But, looking at the opportunities for investment in academic groups working on quantum computing, companies like Microsoft were/are investing heavily in experimental labs across the pond, such as Leo Kouwenhoven’s group at TU Delft and Charlie Marcus’ group in Copenhagen, with smaller investments here in the US. This may just reflect the fact that the best efforts to build topological qubits are in Europe, but it still raises the question of why a fantastic idea like topologically protected Majorana zero modes, by Yale University’s Nick Read and Dmitry Green, which inspired the now famous Majorana wire paper by Alexei Kitaev when he was a researcher at Microsoft’s Redmond research lab, and whose transition from theory to experiment took off with contributions from Maryland and IQIM researchers, was outsourced to European labs for experimental verification and further development. The one example of a large investment in a US academic research group has been Google’s hiring of John Martinis away from UCSB. In fact, John and I met a couple of years ago to discuss investment into his superconducting quantum computing efforts, because government funding for academic efforts to actually build a quantum computer was lacking. China was investing, Canada was investing, Europe went a little crazy, but the US was relying on visionary agencies like IARPA, DARPA and the NSF to foot the bill (without which Physics Frontiers Centers like IQIM wouldn’t be around). In short, there was no top-down policy directive to focus national attention and inter-agency Federal funding on winning the quantum supremacy race.

Until now.

The National Science and Technology Council, which is chaired by the President of the United States and “is the principal means within the executive branch to coordinate science and technology policy across the diverse entities that make up the Federal research and development enterprise”, just released the following report:

Advancing Quantum Information Science: National Challenges and Opportunities

The White House blog post does a good job at describing the high-level view of what the report is about and what the policy recommendations are. There is mention of quantum sensors and metrology, of the promise of quantum computing to material science and basic science, and they even go into the exciting connections between quantum error-correcting codes and emergent spacetime, by IQIM’s Pastawski, et al.

But the big news is that the report recommends significant and sustained investment in Quantum Information Science. The blog post reports that the administration intends “to engage academia, industry, and government in the upcoming months to … exchange views on key needs and opportunities, and consider how to maintain vibrant and robust national ecosystems for QIS research and development and for high-performance computing.”

Personally, I am excited to see how the fierce competition at the academic, industrial and now international level will lead to a race for quantum supremacy. The rivals are all worthy of respect, especially because they are vying for supremacy not just over each other, but over a problem so big and so interesting, that anyone’s success is everyone’s success. After all, anyone can quantum, and if things go according to plan, we will soon have the first generation of kids trained on hourofquantum.com (it doesn’t exist yet), as well as hourofcode.com. Until then, quantum chess and qCraft will have to do.

]]>

The mathematical physicist John Baez presented a colloquium at Cal State LA this May.^{1} The talk’s title: “My Favorite Number.” The advertisement image: A purple “24” superimposed atop two egg cartons.

The colloquium concerned string theory. String theorists attempt to reconcile Einstein’s general relativity with quantum mechanics. Relativity concerns the large and the fast, like the sun and light. Quantum mechanics concerns the small, like atoms. Relativity and quantum mechanics individually suggest that space-time consists of four dimensions: up-down, left-right, forward-backward, and time. String theory suggests that space-time has more than four dimensions. Counting dimensions leads theorists to John Baez’s favorite number.

His topic struck me as bold, simple, and deep. As an otherworldly window onto the pedestrian. John Baez became, when I saw the colloquium ad, a hero of mine.

And a tough act to follow.

I presented Cal State LA’s physics colloquium the week after John Baez. My title: “Quantum steampunk: Quantum information applied to thermodynamics.” Steampunk is a literary, artistic, and film genre. Stories take place during the 1800s—the Victorian era; the Industrial era; an age of soot, grime, innovation, and adventure. Into the 1800s, steampunkers transplant modern and beyond-modern technologies: automata, airships, time machines, etc. Example steampunk works include Will Smith’s 1999 film *Wild Wild West*. Steampunk weds the new with the old.

So does quantum information applied to thermodynamics. Thermodynamics budded off from the Industrial Revolution: The steam engine crowned industrial technology. Thinkers wondered how efficiently engines could run. Thinkers continue to wonder. But the steam engine no longer crowns technology; quantum physics (with other discoveries) does. Quantum information scientists study the roles of information, measurement, and correlations in heat, energy, entropy, and time. We wed the new with the old.

What image could encapsulate my talk? I couldn’t lean on egg cartons. I proposed a steampunk warrior—cravatted, begoggled, and spouting electricity. The proposal met with a polite cough of an email. Not all department members, Milan Mijic pointed out, had heard of steampunk.

Milan is a Cal State LA professor and my erstwhile host. We toured the palm-speckled campus around colloquium time. What, he asked, can quantum information contribute to thermodynamics?

Heat offers an example. Imagine a classical (nonquantum) system of particles. The particles carry kinetic energy, or energy of motion: They jiggle. Particles that bump into each other can exchange energy. We call that energy *heat*. Heat vexes engineers, breaking transistors and lowering engines’ efficiencies.

Like heat, *work* consists of energy. Work has more “orderliness” than the heat transferred by random jiggles. Examples of work exertion include the compression of a gas: A piston forces the particles to move in one direction, in concert. Consider, as another example, driving electrons around a circuit with an electric field. The field forces the electrons to move in the same direction. Work and heat account for all the changes in a system’s energy. So states the First Law of Thermodynamics.
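The First Law’s bookkeeping can be made concrete with a toy calculation. The sketch below uses invented numbers (they are not from the post): it computes the work done on an ideal gas during a slow isothermal compression and checks that heat and work balance.

```python
import math

# Toy First Law bookkeeping for an ideal gas, slowly compressed at
# constant temperature. Sign convention: Q is heat added to the gas,
# W is work done on the gas, so Delta U = Q + W.
n = 1.0              # moles of gas (invented for illustration)
R = 8.314            # gas constant, J/(mol K)
T = 300.0            # temperature, K
V_i, V_f = 2.0, 1.0  # initial and final volumes; only the ratio matters

# Work done on the gas during isothermal compression:
# W = n R T ln(V_i / V_f), positive because V_i > V_f.
W = n * R * T * math.log(V_i / V_f)

# An ideal gas at constant temperature keeps its internal energy fixed,
# so the gas dumps heat into the environment: Q = -W.
Q = -W
delta_U = Q + W
print(f"W = {W:.1f} J, Q = {Q:.1f} J, Delta U = {delta_U:.1f} J")
```

Compressing the piston pushes roughly 1.7 kJ of work into the gas, and the same amount leaves as heat, exactly the orderly-versus-jiggly split described above.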

Suppose that the system is quantum. It doesn’t necessarily have a well-defined energy. But we can stick the system in an electric field, and the system can exchange motional-type energy with other systems. How should we define “work” and “heat”?

Quantum information offers insights, such as via entropies. Entropies quantify how “mixed” or “disordered” states are. Disorder grows as heat suffuses a system. Entropies help us extend the First Law to quantum theory.
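As an illustrative sketch (the example states and the helper function are mine, not the post’s), the von Neumann entropy behind this claim can be computed directly: a pure state is perfectly ordered, while a maximally mixed qubit carries one full bit of entropy.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log2 rho), in bits."""
    evals = np.linalg.eigvalsh(rho)       # eigenvalues of the density matrix
    evals = evals[evals > 1e-12]          # drop zeros: 0 log 0 = 0 by convention
    s = -np.sum(evals * np.log2(evals))
    return float(s) if s > 0 else 0.0     # clamp the -0.0 a pure state yields

pure = np.array([[1.0, 0.0],
                 [0.0, 0.0]])   # a pure qubit state: no disorder
mixed = np.eye(2) / 2           # maximally mixed qubit: maximal disorder

print(von_neumann_entropy(pure))   # prints 0.0
print(von_neumann_entropy(mixed))  # prints 1.0
```

As heat suffuses a system, its state drifts from the pure end of this scale toward the mixed end, which is how entropies let the First Law carry over to quantum theory.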

So I explained during the colloquium. Rarely have I relished engaging with an audience as much as I relished engaging with Cal State LA’s. Attendees made eye contact, posed questions, commented after the talk, and wrote notes. A student in a corner appeared to be writing homework solutions. But a presenter couldn’t have asked for more from the rest. One exclamation arrested me like a coin in the cogs of a grandfather clock.

I’d peppered my slides with steampunk art: paintings, drawings, stills from movies. The peppering had staved off boredom as I’d created the talk. I hoped that the peppering would stave off my audience’s boredom. I apologized about the trimmings.

“No!” cried a woman near the front. “It’s lovely!”

I was about to discuss experiments by Jukka Pekola’s group. Pekola’s group probes quantum thermodynamics using electronic circuits. The group measures heat by counting the electrons that hop from one part of the circuit to another. *Single-electron transistors* track tunneling (quantum movements) of single particles.

Heat complicates engineering, calculations, and California living. Heat scrambles signals, breaks devices, and lowers efficiencies. Quantum heat can evade definition. Thermodynamicists grind their teeth over heat.

“No!” the woman near the front had cried. “It’s lovely!”

She was referring to steampunk art. But her exclamation applied to my subject. Heat has not only practical importance, but also fundamental: Heat influences every law of thermodynamics. Thermodynamic law underpins much of physics as 24 underpins much of string theory. Lovely, I thought, indeed.

Cal State LA offered a new view of my subfield, an otherworldly window onto the pedestrian. The more pedestrian an idea—the more often the idea surfaces, the more of our world the idea accounts for—the deeper the physics. Heat seems as pedestrian as a Pokémon Go player. But maybe, someday, I’ll present an idea as simple, bold, and deep as the number 24.

*With gratitude to Milan Mijic, and to Cal State LA’s Department of Physics and Astronomy, for their hospitality.*

^{1}For nonacademics: A typical physics department hosts several presentations per week. A *seminar* relates research that the speaker has undertaken. The audience consists of department members who specialize in the speaker’s subfield. A department’s astrophysicists might host a Monday seminar; its quantum theorists, a Wednesday seminar; etc. One *colloquium* happens per week. Listeners gather from across the department. The speaker introduces a subfield, like the correction of errors made by quantum computers. Course lectures target students. Endowed lectures, often named after donors, target researchers.

]]>