About Nicole Yunger Halpern

I'm pursuing a physics PhD with the buccaneers of Quantum Frontiers. Before moving to Caltech, I studied at Dartmouth College and the Perimeter Institute for Theoretical Physics. I apply quantum-information tools to thermodynamics and statistical mechanics (the study of heat, work, information, and time), particularly at small scales. I like my quantum information physical, my math algebraic, and my spins rotated but not stirred.

Tripping over my own inner product

A scrape stood out on the back of my left hand. The scrape had turned greenish-purple, I noticed while opening the lecture-hall door. I’d jounced the hand against my dining-room table while standing up after breakfast. The table’s corners form ninety-degree angles. The backs of hands do not.

Earlier, when presenting a seminar, I’d forgotten to reference papers by colleagues. Earlier, I’d offended an old friend without knowing how. Some people put their feet in their mouths. I felt liable to swallow a clog.

The lecture was for Ph 219: Quantum Computation. I was TAing (working as a teaching assistant for) the course. John Preskill was discussing quantum error correction.

Computers suffer from errors as humans do: Imagine setting a hard drive on a table. Coffee might spill on the table (as it probably would have if I’d been holding a mug near the table that week). If the table is in my California dining room, an earthquake might judder the table. Juddering bangs the hard drive against the wood, breaking molecular bonds and deforming the hardware. The information stored in computers degrades.

How can we protect information? By encoding it—by translating the message into a longer, redundant message. An earthquake might judder the encoded message. We can reverse some of the damage by error-correcting.

Different types of math describe different codes. John introduced a type of math called symplectic vector spaces. “Symplectic vector space” sounds to me like a garden of spiny cacti (on which I’d probably have pricked fingers that week). Symplectic vector spaces help us translate between the original and encoded messages.


Symplectic vector space?

Say that an earthquake has juddered our hard drive. We want to assess how the earthquake corrupted the encoded message and to error-correct. Our encryption scheme dictates which operations we should perform. Each possible operation, we represent with a mathematical object called a vector. A vector can take the form of a list of numbers.

We construct the code’s vectors like so. Say that our quantum hard drive consists of seven phosphorus nuclei atop a strip of silicon. Each nucleus has two observables, or measurable properties. Let’s call the observables Z and X.

Suppose that we should measure the first nucleus’s Z. The first number in our symplectic vector is 1. If we shouldn’t measure the first nucleus’s Z, the first number is 0. If we should measure the second nucleus’s Z, the second number is 1; if not, 0; and so on for the other nuclei. We’ve assembled the first seven numbers in our vector. The final seven numbers dictate which nuclei’s Xs we measure. An example vector looks like this: ( 1, \, 0, \, 1, \, 0, \, 1, \, 0, \, 1 \; | \; 0, \, 0, \, 0, \, 0, \, 0, \, 0, \, 0 ).

The vector dictates that we measure four Zs and no Xs.
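For readers who like to see things concretely, here's a minimal sketch (in Python, my choice rather than anything in the post) of the example vector above:

```python
# The first seven entries mark which nuclei's Z observables we measure;
# the last seven mark which nuclei's X observables we measure.
z_part = [1, 0, 1, 0, 1, 0, 1]  # measure Z on nuclei 1, 3, 5, and 7
x_part = [0, 0, 0, 0, 0, 0, 0]  # measure no X observables
vector = z_part + x_part        # the 14-entry symplectic vector

print(sum(z_part), "Zs and", sum(x_part), "Xs")  # 4 Zs and 0 Xs
```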


Symplectic vectors represent the operations we should perform to correct errors.

A vector space is a collection of vectors. Many problems—not only codes—involve vector spaces. Have you used Google Maps? Google illustrates the step that you should take next with an arrow. We can represent that arrow with a vector. A vector, recall, can take the form of a list of numbers. The step’s list of two numbers indicates whether you should walk ( \text{Northward or not} \; | \; \text{Westward or not} ).1


I’d forgotten about my scrape by this point in the lecture. John’s next point wiped even cacti from my mind.

Say you want to know how similar two vectors are. You usually calculate an inner product. A vector v tends to have a large inner product with any vector w that points parallel to v.


Parallel vectors tend to have a large inner product.

The vector v tends to have an inner product of zero with any vector w that points perpendicularly. Such v and w are said to annihilate each other. By the end of a three-hour marathon of a research conversation, we might say that v and w “destroy” each other. v is orthogonal to w.


Two orthogonal vectors, having an inner product of zero, annihilate each other.

You might expect a vector v to have a huge inner product with itself, since v points parallel to v. Quantum-code vectors defy expectations. In a symplectic vector space, John said, “you can be orthogonal to yourself.”
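We can check John's claim in a few lines. The sketch below (my Python illustration) uses the standard symplectic inner product for codes, which pairs each vector's Z half with the other vector's X half, over the field {\mathbb Z}_2:

```python
# Symplectic inner product over Z_2: pair one vector's Z half
# with the other vector's X half, and vice versa.
def symplectic_inner_product(v, w):
    n = len(v) // 2
    vz, vx = v[:n], v[n:]
    wz, wx = w[:n], w[n:]
    total = sum(a * b for a, b in zip(vz, wx)) + sum(a * b for a, b in zip(vx, wz))
    return total % 2

v = [1, 0, 1, 0, 1, 0, 1] + [0, 0, 0, 0, 0, 0, 0]
print(symplectic_inner_product(v, v))  # 0: v is orthogonal to itself
```

Every vector annihilates itself under this product: pairing (z | x) with itself gives z·x + x·z = 2(z·x), which vanishes modulo 2.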

A symplectic vector2 can annihilate itself, destroy itself, stand in its own way. A vector can oppose itself, contradict itself, trip over its own feet. I felt like I was tripping over my feet that week. But I’m human. A vector is a mathematical ideal. If a mathematical ideal could be orthogonal to itself, I could allow myself space to err.


Tripping over my own inner product.

Lloyd Alexander wrote one of my favorite books, the children’s novel The Book of Three. The novel features a stout old farmer called Coll. Coll admonishes an apprentice who’s burned his fingers: “See much, study much, suffer much.” We smart while growing smarter.

An ant-sized scar remains on the back of my left hand. The scar has been fading, or so I like to believe. I embed references to colleagues’ work in seminar Powerpoints, so that I don’t forget to cite anyone. I apologized to the friend, and I know about symplectic vector spaces. We all deserve space to err, provided that we correct ourselves. Here’s to standing up more carefully after breakfast.


1Not that I advocate for limiting each coordinate to one bit in a Google Maps vector. The two-bit assumption simplifies the example.

2Not only symplectic vectors are orthogonal to themselves, John pointed out. Consider a string of bits that contains an even number of ones. One example is (0, 0, 0, 0, 1, 1). Each such string has a bit-wise inner product, over the field {\mathbb Z}_2, of zero with itself.

Upending my equilibrium

Few settings foster equanimity like Canada’s Banff International Research Station (BIRS). Mountains tower above the center, softened by pines. Mornings have a crispness that would turn air fresheners evergreen with envy. The sky looks designed for a laundry-detergent label.


Doesn’t it?

One day into my visit, equilibrium shattered my equanimity.

I was participating in the conference “Beyond i.i.d. in information theory.” What “beyond i.i.d.” means is explained in these articles.  I was to present about resource theories for thermodynamics. Resource theories are simple models developed in quantum information theory. The original thermodynamic resource theory modeled systems that exchange energy and information.

Imagine a quantum computer built from tiny, cold, quantum circuits. An air particle might bounce off the circuit. The bounce can slow the particle down, transferring energy from particle to circuit. The bounce can entangle the particle with the circuit, transferring quantum information from computer to particle.

Suppose that particles bounced off the computer for ages. The computer would thermalize, or reach equilibrium: The computer’s energy would flatline. The computer would reach a state called the canonical ensemble. The canonical ensemble looks like this:  e^{ - \beta H }  / { Z }.

Joe Renes and I had extended these resource theories. Thermodynamic systems can exchange quantities other than energy and information. Imagine white-bean soup cooling on a stovetop. Gas condenses on the pot’s walls, and liquid evaporates. The soup exchanges not only heat, but also particles, with its environment. Imagine letting the soup cool for ages. It would thermalize to the grand canonical ensemble, e^{ - \beta (H - \mu N) } / { Z }. Joe and I had modeled systems that exchange diverse thermodynamic observables.*

What if, fellow beyond-i.i.d.-er Jonathan Oppenheim asked, those observables didn’t commute with each other?

Mathematical objects called operators represent observables. Let \hat{H} represent a system’s energy, and let \hat{N} represent the number of particles in the system. The operators fail to commute if multiplying them in one order differs from multiplying them in the opposite order: \hat{H}  \hat{N}  \neq  \hat{N}  \hat{H}.
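A two-line check of noncommutation, with Pauli matrices standing in for \hat{H} and \hat{N} (my illustrative stand-ins; the operators above are abstract):

```python
import numpy as np

# Pauli Z and Pauli X as stand-ins for two noncommuting observables.
H = np.array([[1, 0], [0, -1]])  # plays the role of H-hat
N = np.array([[0, 1], [1, 0]])   # plays the role of N-hat

print(np.array_equal(H @ N, N @ H))  # False: the products differ
```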

Suppose that our quantum circuit has observables represented by noncommuting operators \hat{H} and \hat{N}. The circuit cannot have a well-defined energy and a well-defined particle number simultaneously. Physicists call this inability the Uncertainty Principle. Uncertainty and noncommutation infuse quantum mechanics as a Cashmere Glow™ infuses a Downy fabric softener.


Quantum uncertainty and noncommutation.

I glowed at Jonathan: All the coolness in Canada couldn’t have pleased me more than finding someone interested in that question.** Suppose that a quantum system exchanges observables \hat{Q}_1 and \hat{Q}_2 with the environment. Suppose that \hat{Q}_1 and \hat{Q}_2 don’t commute, like components \hat{S}_x and \hat{S}_y of quantum angular momentum. Would the system thermalize? Would the thermal state have the form e^{ \mu_1 \hat{Q}_1 + \mu_2 \hat{Q}_2 } / { Z }? Could we model the system with a resource theory?

Jonathan proposed that we chat.

The chat sucked in beyond-i.i.d.-ers Philippe Faist and Andreas Winter. We debated strategies while walking to dinner. We exchanged results on the conference building’s veranda. We huddled over a breakfast table after colleagues had pushed their chairs back. Information flowed from chalkboard to notebook; energy flowed in the form of coffee; food particles flowed onto the floor as we brushed crumbs from our notebooks.


Exchanges of energy and particles.

The idea morphed and split. It crystallized months later. We characterized, in three ways, the thermal state of a quantum system that exchanges noncommuting observables with its environment.

First, we generalized the microcanonical ensemble. The microcanonical ensemble is the thermal state of an isolated system. An isolated system exchanges no observables with any other system. The quantum computer and the air molecules can form an isolated system. So can the white-bean soup and its kitchen. Our quantum system and its environment form an isolated system. But they cannot necessarily occupy a microcanonical ensemble, thanks to noncommutation.

We generalized the microcanonical ensemble. The generalization involves approximation, unlikely measurement outcomes, and error tolerances. The microcanonical ensemble has a simple definition—sharp and clean as Banff air. We relaxed the definition to accommodate noncommutation. If the microcanonical ensemble resembles laundry detergent, our generalization resembles fabric softener.


Suppose that our system and its environment occupy this approximate microcanonical ensemble. Tracing out (mathematically ignoring) the environment yields the system’s thermal state. The thermal state basically has the form we expected, \gamma = e^{ \sum_j  \mu_j \hat{Q}_j } / { Z }.
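As an illustration (mine, not the paper's), we can construct a state of this form for a single spin-1/2 whose exchanged charges are the noncommuting \hat{S}_x and \hat{S}_y, with made-up values of the \mu_j:

```python
import numpy as np

# Spin-1/2 charges S_x and S_y (in units of hbar); the mu values are arbitrary.
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])
mu1, mu2 = 0.3, -0.7

A = mu1 * Sx + mu2 * Sy            # Hermitian, since Sx and Sy are
vals, vecs = np.linalg.eigh(A)     # exponentiate via the eigendecomposition
expA = vecs @ np.diag(np.exp(vals)) @ vecs.conj().T
gamma = expA / np.trace(expA)      # dividing by the partition function Z

print(np.trace(gamma).real)        # approximately 1: a valid quantum state
```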

This exponential state, we argued, follows also from time evolution. The white-bean soup equilibrates upon exchanging heat and particles with the kitchen air for ages. Our quantum system can exchange observables \hat{Q}_j with its environment for ages. The system equilibrates, we argued, to the state \gamma. The argument relies on a quantum-information tool called canonical typicality.

Third, we defined a resource theory for thermodynamic exchanges of noncommuting observables. In a thermodynamic resource theory, the thermal states are the worthless states: From a thermal state, one can’t extract energy usable to lift a weight or to power a laptop. The worthless states, we showed, have the form of \gamma.

Three paths lead to the form \gamma of the thermal state of a quantum system that exchanges noncommuting observables with its environment. We published the results this summer.

Not only was Team Banff spilling coffee over \gamma. So were teams at Imperial College London and the University of Bristol. Our conclusions overlap, suggesting that everyone calculated correctly. Our methodologies differ, generating openings for exploration. The mountain passes between our peaks call out for mapping.

So does the path to physical reality. Do these thermal states form in labs? Could they? Cold atoms offer promise for realizations. In addition to experiments and simulations, master equations merit study. Dynamical typicality, Team Banff argued, suggests that \gamma results from equilibration. Master equations model equilibration. Does some Davies-type master equation have \gamma as its fixed point? Email me if you have leads!


Experimentalists, can you realize the thermal state e^{ \sum_j \mu_j \hat{Q}_j } / Z whose charges \hat{Q}_j don’t commute?


A photo of Banff could illustrate Merriam-Webster’s entry for “equanimity.” Banff equanimity deepened our understanding of quantum equilibrium. But we wouldn’t have understood quantum equilibrium if questions hadn’t shattered our tranquility. Give me the disequilibrium of recognizing problems, I pray, and the equilibrium to solve them.


*By “observable,” I mean “property that you can measure.”

**Teams at Imperial College London and Bristol asked that question, too. More pleasing than three times the coolness in Canada!

Bringing the heat to Cal State LA

John Baez is a tough act to follow.

The mathematical physicist presented a colloquium at Cal State LA this May.1 The talk’s title: “My Favorite Number.” The advertisement image: A purple “24” superimposed atop two egg cartons.


The colloquium concerned string theory. String theorists attempt to reconcile Einstein’s general relativity with quantum mechanics. Relativity concerns the large and the fast, like the sun and light. Quantum mechanics concerns the small, like atoms. Relativity and quantum mechanics individually suggest that space-time consists of four dimensions: up-down, left-right, forward-backward, and time. String theory suggests that space-time has more than four dimensions. Counting dimensions leads theorists to John Baez’s favorite number.

His topic struck me as bold, simple, and deep. As an otherworldly window onto the pedestrian. John Baez became, when I saw the colloquium ad, a hero of mine.

And a tough act to follow.

I presented Cal State LA’s physics colloquium the week after John Baez. My title: “Quantum steampunk: Quantum information applied to thermodynamics.” Steampunk is a literary, artistic, and film genre. Stories take place during the 1800s—the Victorian era; the Industrial era; an age of soot, grime, innovation, and adventure. Into the 1800s, steampunkers transplant modern and beyond-modern technologies: automata, airships, time machines, etc. Example steampunk works include Will Smith’s 1999 film Wild Wild West. Steampunk weds the new with the old.

So does quantum information applied to thermodynamics. Thermodynamics budded off from the Industrial Revolution: The steam engine crowned industrial technology. Thinkers wondered how efficiently engines could run. Thinkers continue to wonder. But the steam engine no longer crowns technology; quantum physics (with other discoveries) does. Quantum information scientists study the roles of information, measurement, and correlations in heat, energy, entropy, and time. We wed the new with the old.


What image could encapsulate my talk? I couldn’t lean on egg cartons. I proposed a steampunk warrior—cravatted, begoggled, and spouting electricity. The proposal met with a polite cough of an email. Not all department members, Milan Mijic pointed out, had heard of steampunk.


Milan is a Cal State LA professor and my erstwhile host. We toured the palm-speckled campus around colloquium time. What, he asked, can quantum information contribute to thermodynamics?

Heat offers an example. Imagine a classical (nonquantum) system of particles. The particles carry kinetic energy, or energy of motion: They jiggle. Particles that bump into each other can exchange energy. We call that energy heat. Heat vexes engineers, breaking transistors and lowering engines’ efficiencies.

Like heat, work consists of energy. Work has more “orderliness” than the heat transferred by random jiggles. Examples of work exertion include the compression of a gas: A piston forces the particles to move in one direction, in concert. Consider, as another example, driving electrons around a circuit with an electric field. The field forces the electrons to move in the same direction. Work and heat account for all the changes in a system’s energy. So states the First Law of Thermodynamics.
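In symbols (a standard textbook statement, not a quotation from the talk; sign conventions vary), the First Law reads

\Delta E  =  Q  +  W,

where \Delta E is the change in the system’s energy, Q the heat that flows into the system, and W the work performed on the system.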

Suppose that the system is quantum. It doesn’t necessarily have a well-defined energy. But we can stick the system in an electric field, and the system can exchange motional-type energy with other systems. How should we define “work” and “heat”?

Quantum information offers insights, such as via entropies. Entropies quantify how “mixed” or “disordered” states are. Disorder grows as heat suffuses a system. Entropies help us extend the First Law to quantum theory.
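One such entropy is the von Neumann entropy, S(\rho) = - \mathrm{Tr} ( \rho \log \rho ). A sketch (my Python illustration, not material from the talk):

```python
import numpy as np

# Von Neumann entropy in bits, computed from a density matrix's eigenvalues.
def von_neumann_entropy(rho):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]                      # convention: 0 log 0 = 0
    return -np.sum(vals * np.log2(vals)) + 0.0     # + 0.0 avoids printing -0.0

pure = np.array([[1.0, 0.0], [0.0, 0.0]])          # a pure, perfectly ordered state
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])         # the maximally mixed state

print(von_neumann_entropy(pure))   # 0.0 bits: no disorder
print(von_neumann_entropy(mixed))  # 1.0 bit: maximal disorder for one qubit
```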


So I explained during the colloquium. Rarely have I relished engaging with an audience as much as I relished engaging with Cal State LA’s. Attendees made eye contact, posed questions, commented after the talk, and wrote notes. A student in a corner appeared to be writing homework solutions. But a presenter couldn’t have asked for more from the rest. One exclamation arrested me like a coin in the cogs of a grandfather clock.

I’d peppered my slides with steampunk art: paintings, drawings, stills from movies. The peppering had staved off boredom as I’d created the talk. I hoped that the peppering would stave off my audience’s boredom. I apologized about the trimmings.

“No!” cried a woman near the front. “It’s lovely!”

I was about to discuss experiments by Jukka Pekola’s group. Pekola’s group probes quantum thermodynamics using electronic circuits. The group measures heat by counting the electrons that hop from one part of the circuit to another. Single-electron transistors track tunneling (quantum movements) of single particles.

Heat complicates engineering, calculations, and California living. Heat scrambles signals, breaks devices, and lowers efficiencies. Quantum heat can evade definition. Thermodynamicists grind their teeth over heat.

“No!” the woman near the front had cried. “It’s lovely!”

She was referring to steampunk art. But her exclamation applied to my subject. Heat has not only practical importance, but also fundamental: Heat influences every law of thermodynamics. Thermodynamic law underpins much of physics as 24 underpins much of string theory. Lovely, I thought, indeed.

Cal State LA offered a new view of my subfield, an otherworldly window onto the pedestrian. The more pedestrian an idea—the more often the idea surfaces, the more of our world the idea accounts for—the deeper the physics. Heat seems as pedestrian as a Pokémon Go player. But maybe, someday, I’ll present an idea as simple, bold, and deep as the number 24.


A window onto Cal State LA.

With gratitude to Milan Mijic, and to Cal State LA’s Department of Physics and Astronomy, for their hospitality.

1For nonacademics: A typical physics department hosts several presentations per week. A seminar relates research that the speaker has undertaken. The audience consists of department members who specialize in the speaker’s subfield. A department’s astrophysicists might host a Monday seminar; its quantum theorists, a Wednesday seminar; etc. One colloquium happens per week. Listeners gather from across the department. The speaker introduces a subfield, like the correction of errors made by quantum computers. Course lectures target students. Endowed lectures, often named after donors, target researchers.

What matters to me, and why?

Students at my college asked every Tuesday. They gathered in a white, windowed room near the center of campus. “We serve,” read advertisements, “soup, bread, and food for thought.” One professor or visitor would discuss human rights, family,  religion, or another pepper in the chili of life.

I joined occasionally. I listened by the window, in the circle of chairs that ringed the speaker. Then I ventured from college into physics.

The questions “What matters to you, and why?” have chased me through physics. I ask experimentalists and theorists, professors and students: Why do you do science? Which papers catch your eye? Why have you devoted to quantum information more years than many spouses devote to marriages?

One physicist answered with another question. Chris Jarzynski works as a professor at the University of Maryland. He studies statistical mechanics—how particles typically act and how often particles act atypically; how materials shine, how gases push back when we compress them, and more.

“How,” Chris asked, “should we quantify precision?”

Chris had in mind nonequilibrium fluctuation theorems. Out-of-equilibrium systems have large-scale properties, like temperature, that change significantly.1 Examples include white-bean soup cooling at a “What matters” lunch. The soup’s temperature drops to room temperature as the system approaches equilibrium.


Nonequilibrium. Tasty, tasty nonequilibrium.

Some out-of-equilibrium systems obey fluctuation theorems. Fluctuation theorems are equations derived in statistical mechanics. Imagine a DNA molecule floating in a watery solution. Water molecules buffet the strand, which twitches. But the strand’s shape doesn’t change much. The DNA is in equilibrium.

You can grab the strand’s ends and stretch them apart. The strand will leave equilibrium as its length changes. Imagine pulling the strand to some predetermined length. You’ll have exerted energy.

How much? The amount will vary if you repeat the experiment. Why? This trial began with the DNA curled this way; that trial began with the DNA curled that way. During this trial, the water batters the molecule more; during that trial, less. These discrepancies block us from predicting how much energy you’ll exert. But suppose you pick a number W. We can form predictions about the probability that you’ll have to exert an amount W of energy.

How do we predict? Using nonequilibrium fluctuation theorems.

Fluctuation theorems matter to me, as Quantum Frontiers regulars know. Why? Because I’ve written enough fluctuation-theorem articles to test even a statistical mechanic’s patience. More seriously, why do fluctuation theorems matter to me?

Fluctuation theorems fill a gap in the theory of statistical mechanics. Fluctuation theorems relate nonequilibrium processes (like the cooling of soup) to equilibrium systems (like room-temperature soup). Physicists can model equilibrium. But we know little about nonequilibrium. Fluctuation theorems bridge from the known (equilibrium) to the unknown (nonequilibrium).


Experiments take place out of equilibrium. (Stretching a DNA molecule changes the molecule’s length.) So we can measure properties of nonequilibrium processes. We can’t directly measure properties of equilibrium processes, which we can’t perform experimentally. But we can measure an equilibrium property indirectly: We perform nonequilibrium experiments, then plug our data into fluctuation theorems.


Which equilibrium property can we infer about? A free-energy difference, denoted by ΔF. Every equilibrated system (every room-temperature soup) has a free energy F. F represents the energy that the system can exert, such as the energy available to stretch a DNA molecule. Imagine subtracting one system’s free energy, F1, from another system’s free energy, F2. The subtraction yields a free-energy difference, ΔF = F2 – F1. We can infer the value of a ΔF from experiments.
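One way the inference can work, sketched with invented numbers (the Gaussian work distribution, the parameter values, and the choice of units are my assumptions): the Jarzynski equality, \langle e^{-\beta W} \rangle  =  e^{-\beta \Delta F}, relates the average of e^{-\beta W} over trials to \Delta F.

```python
import numpy as np

# Toy estimate of DeltaF from simulated work measurements, via the
# Jarzynski equality <exp(-beta W)> = exp(-beta DeltaF).
rng = np.random.default_rng(0)
beta = 1.0
delta_F = 2.0   # the "true" value our estimate should recover
sigma = 1.0

# For Gaussian work statistics, the equality holds exactly when
# mean(W) = DeltaF + beta * sigma^2 / 2.
W = rng.normal(delta_F + beta * sigma**2 / 2, sigma, size=100_000)

estimate = -np.log(np.mean(np.exp(-beta * W))) / beta
print(estimate)  # close to 2.0; more trials tighten the estimate
```

Finitely many trials leave the estimate imprecise, which is just the gap that Chris's question targets.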

How should we evaluate those experiments? Which experiments can we trust, and which need repeating?

Those questions mattered little to me, before I met Chris Jarzynski. Bridging equilibrium with nonequilibrium mattered to me, and bridging theory with experiment. Not experimental nitty-gritty.

I deserved a dunking in white-bean soup.


Suppose you performed infinitely many trials—stretched a DNA molecule infinitely many times. In each trial, you measured the energy exerted. You processed your data, then substituted into a fluctuation theorem. You could infer the exact value of ΔF.

But we can’t perform infinitely many trials. Imprecision mars our inference about ΔF. How does the imprecision relate to the number of trials performed?2

Chris and I adopted an information-theoretic approach. We quantified precision with a parameter \delta. Suppose you want to estimate ΔF with some precision. How many trials should you expect to need to perform? We bounded the number N_\delta of trials, using an entropy. The bound tightens an earlier estimate of Chris’s. If you perform N_\delta trials, you can estimate ΔF with a percent error that we estimated. We illustrated our results by modeling a gas.

I’d never appreciated the texture and richness of precision. But richness precision has: A few decimal places distinguish Albert Einstein’s general theory of relativity from Isaac Newton’s 17th-century mechanics. Particle physicists calculate constants of nature to many decimal places. Such a calculation earned a nod on physicist Julian Schwinger’s headstone. Precision serves as the bread and soup of much physics. I’d sniffed the importance of precision, but not tasted it, until questioned by Chris Jarzynski.


The questioning continues. My college has discontinued its “What matters” series. But I ask scientist after scientist—thoughtful human being after thoughtful human being—“What matters to you, and why?” Asking, listening, reading, calculating, and self-regulating sharpen my answers to those questions. My answers often squish beneath the bread knife in my cutlery drawer of criticism. Thank goodness that repeating trials can reduce our errors.


1Or large-scale properties that will change. Imagine connecting the ends of a charged battery with a wire. Charge will flow from terminal to terminal, producing a current. You can measure, every minute, how quickly charge is flowing: You can measure how much current is flowing. The current won’t change much, for a while. But the current will die off as the battery nears depletion. A large-scale property (the current) appears constant but will change. Such a capacity to change characterizes nonequilibrium steady states (NESSes). NESSes form our second example of nonequilibrium states. Many-body localization forms a third, quantum example.

2Readers might object that scientists have tools for quantifying imprecision. Why not apply those tools? Because ΔF equals a logarithm, which is nonlinear. Other authors’ proposals appear in references 1-13 of our paper. Charlie Bennett addressed a related problem with his “acceptance ratio.” (Bennett also blogged about evil on Quantum Frontiers last month.)

Quantum braiding: It’s all in (and on) your head.

Morning sunlight illuminated John Preskill’s lecture notes. The notes concern Caltech’s quantum-computation course, Ph 219. I’m TAing (the teaching assistant for) Ph 219. I previewed lecture material one sun-kissed Sunday.

Pasadena sunlight spilled through my window. So did the howling of a dog that’s deepened my appreciation for Billy Collins’s poem “Another reason why I don’t keep a gun in the house.” My desk space warmed up, and I unbuttoned my jacket. I underlined a phrase, braided my hair so my neck could cool, and flipped a page.

I flipped back. The phrase concerned a mathematical statement called “the Yang-Baxter relation.” A sunbeam had winked on in my mind: The Yang-Baxter relation described my hair.

The Yang-Baxter relation belongs to a branch of math called “topology.” Topology resembles geometry in its focus on shapes. Topologists study spheres, doughnuts, knots, and braids.

Topology describes some quantum physics. Scientists are harnessing this physics to build quantum computers. Alexei Kitaev largely dreamed up the harness. Alexei, a Caltech professor, is teaching Ph 219 this spring.1 His computational scheme works like this.

We can encode information in radio signals, in letters printed on a page, in the pursing of one’s lips as one passes a howling dog’s owner, and in quantum particles. Imagine three particles on a tabletop.


Consider pushing the particles around like peas on a dinner plate. You could push peas 1 and 2 until they swapped places. The swap represents a computation, in Alexei’s scheme.2

The diagram below shows how the peas move. Imagine slicing the figure into horizontal strips. Each strip would show one instant in time. Letting time run amounts to following the diagram from bottom to top.


Arrows copied from John Preskill’s lecture notes. Peas added by the author.

Imagine swapping peas 1 and 3.


Humor me with one more swap, an interchange of 2 and 3.


Congratulations! You’ve modeled a significant quantum computation. You’ve also braided particles.


The author models a quantum computation.

Let’s recap: You began with peas 1, 2, and 3. You swapped 1 with 2, then 1 with 3, and then 2 with 3. The peas end up ordered oppositely to the way they began: as 3, 2, 1.

You could, instead, morph 1-2-3 into 3-2-1 via a different sequence of swaps. That sequence, or braid, appears below.


Congratulations! You’ve begun proving the Yang-Baxter relation. You’ve shown that each braid turns 1-2-3 into 3-2-1.
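We can verify the bookkeeping in a few lines of Python (my sketch; since the post's figure isn't reproduced here, the alternative sequence below is one valid choice: swap 2 with 3, then 1 with 3, then 1 with 2):

```python
# Swap two peas, labeled by the numbers printed on them, in an ordering.
def swap(order, a, b):
    order = list(order)
    i, j = order.index(a), order.index(b)
    order[i], order[j] = order[j], order[i]
    return order

# The sequence from the text: 1 with 2, then 1 with 3, then 2 with 3.
first = swap(swap(swap([1, 2, 3], 1, 2), 1, 3), 2, 3)
# An alternative sequence: 2 with 3, then 1 with 3, then 1 with 2.
second = swap(swap(swap([1, 2, 3], 2, 3), 1, 3), 1, 2)

print(first, second)  # [3, 2, 1] [3, 2, 1]: both braids reverse the order
```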

The relation states also that the two braids are topologically equivalent: Imagine standing atop pea 2 during the first braiding. You’d see peas 1 and 3 circle around you counterclockwise. You’d see the same circling if you stood atop pea 2 during the second braiding.

That Sunday morning, I looked at John’s swap diagrams. I looked at the hair draped over my left shoulder. I looked at John’s swap diagrams.

“Yang-Baxter relation” might sound, to nonspecialists, like a mouthful of tweed. It might sound like a sneeze in a musty library. But an eight-year-old could grasp half the relation. When I braid my hair, I pass my left hand over the back of my neck. Then, I pass my right hand over. But I could have passed the right hand first, then the left. The braid would have ended the same way. The braidings would look identical to a beetle hiding atop what had begun as the middle hunk of hair.


The Yang-Baxter relation.

I tried to keep reading John’s lecture notes, but the analogy mushroomed. Imagine spinning one pea atop the table.


A 360° rotation returns the pea to its initial orientation. You can’t distinguish the pea’s final state from its first. But a quantum particle’s state can change during a 360° rotation. Physicists illustrate such rotations with corkscrews.
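The sign flip can be checked concretely for a spin-1/2 particle (my sketch; the rotation operator about the z-axis is e^{-i \theta \hat{S}_z}, in units where \hbar = 1):

```python
import numpy as np

# Rotate a spin-1/2 particle by a full 360 degrees about the z-axis.
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
angle = 2 * np.pi

vals, vecs = np.linalg.eigh(Sz)  # exponentiate via the eigendecomposition
rotation = vecs @ np.diag(np.exp(-1j * angle * vals)) @ vecs.conj().T

print(np.allclose(rotation, -np.eye(2)))  # True: a full turn yields minus the identity
```

A classical pea shrugs off a full turn; the spin-1/2 state picks up a minus sign, which only a second full turn undoes.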



A quantum corkscrew (“twisted worldribbon,” in technical jargon)

Like the corkscrews formed as I twirled my hair around a finger. I hadn’t realized that I was fidgeting till I found John’s analysis.


I gave up on his lecture notes as the analogy sprouted legs.

I’ve never mastered the fishtail braid. What computation might it represent? What about the French braid? You begin French-braiding by selecting a clump of hair. You add strands to the clump while braiding. The addition brings to mind particles created (and annihilated) during a topological quantum computation.

Ancient Greek statues wear elaborate hairstyles, replete with braids and twists.  Could you decode a Greek hairdo? Might it represent the first 18 digits in pi? How long an algorithm could you run on Rapunzel’s hair?

Call me one bobby pin short of a bun. But shouldn’t a scientist find inspiration in every fiber of nature? The sunlight spilling through a window illuminates no less than the hair spilling over a shoulder. What grows on a quantum physicist’s head informs what grows in it.


1Alexei and John trade off on teaching Ph 219. Alexei recommends the notes that John wrote while teaching in previous years.

2When your mother ordered you to quit playing with your food, you could have objected, “I’m modeling computations!”

little by little and gate by gate

Washington state was drizzling on me. I was dashing from a shuttle to Building 112 on Microsoft’s campus. Microsoft has headquarters near Seattle. The state’s fir trees refreshed me. The campus’s vastness awed me. The conversations planned for the day enthused me. The drizzle dampened me.

Building 112 houses QuArC, one of Microsoft’s research teams. “QuArC” stands for “Quantum Architectures and Computation.” Team members develop quantum algorithms and codes. QuArC members write, as their leader Dr. Krysta Svore says, “software for computers that don’t exist.”


Small quantum computers exist. Large ones have eluded us like gold at the end of a Washington rainbow. Large quantum computers could revolutionize cybersecurity, materials engineering, and fundamental physics. Quantum computers are growing, in labs across the world. When they mature, the computers will need software.

Software consists of instructions. Computers follow instructions as we do. Suppose you want to find and read the poem “anyone lived in a pretty how town,” by 20th-century American poet e e cummings. You follow steps—for example:

1) Wake up your computer.
2) Type your password.
3) Hit “Enter.”
4) Kick yourself for entering the wrong password.
5) Type the right password.
6) Hit “Enter.”
7) Open a web browser.
8) Navigate to Google.
9) Type “anyone lived in a pretty how town e e cummings” into the search bar.
10) Hit “Enter.”
11) Click the Academy of American Poets’ link.
12) Exclaim, “Really? April is National Poetry Month?”
13) Read about National Poetry Month for four-and-a-half minutes.
14) Remember that you intended to look up a poem.
15) Return to the Academy of American Poets’ “anyone lived” webpage.
16) Read the poem.

We break tasks into chunks executed sequentially. So do software writers. Microsoft researchers break up tasks intended for quantum computers to perform.

Your computer completes tasks by sending electrons through circuits. Quantum computers will have circuits. A circuit contains wires, which carry information. The wires run through circuit components called gates. Gates manipulate the information in the wires. A gate can, for instance, add the number carried by this wire to the number carried by that wire.

Running a circuit amounts to completing a task, like hunting a poem. Computer engineers break each circuit into wires and gates, as we broke poem-hunting into steps 1-16.1
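To make the wires-and-gates picture concrete, here is a toy sketch of my own (not QuArC’s software), which builds a tiny two-qubit quantum circuit gate by gate, representing each gate as a matrix:

```python
import numpy as np

# Two standard gates, written as matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)  # controlled-NOT gate
I = np.eye(2)

# The wires carry the state; the gates act on it in sequence,
# like steps 1-16 of the poem hunt.
state = np.array([1, 0, 0, 0], dtype=float)   # both qubits start in |00>
state = np.kron(H, I) @ state                 # gate 1: Hadamard on qubit 1
state = CNOT @ state                          # gate 2: CNOT on both qubits

# The circuit ends in a Bell pair, (|00> + |11>) / sqrt(2).
print(state)
```

Two gates suffice here; QuArC-scale algorithms chain together many more, but the principle—one gate at a time—is the same.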

Circuits hearten me, because decomposing tasks heartens me. Suppose I demanded that you read a textbook in a week, or create a seminar in a day, or crack a cybersecurity system. You’d gape like a visitor to Washington who’s realized that she’s forgotten her umbrella.


Suppose I demanded instead that you read five pages, or create one Powerpoint slide, or design one element of a quantum circuit. You might gape. But you’d have more hope.2 Life looks more manageable when broken into circuit elements.

Circuit decomposition—and life decomposition—brings to mind “anyone lived in a pretty how town.” The poem concerns two characters who revel in everyday events. Laughter, rain, and stars mark their time. The more the characters attune to nature’s rhythm, the more vibrantly they live:3

          little by little and was by was

          all by all and deep by deep
          and more by more they dream their sleep

Those lines play in my mind when a seminar looms, or a trip to Washington coincident with a paper deadline, or a quantum circuit I’ve no idea how to parse. Break down the task, I tell myself. Inch by inch, we advance. Little by little and drop by drop, step by step and gate by gate.

IBM circuit

Not what e e cummings imagined when composing “anyone lived in a pretty how town”

Unless you’re dashing through raindrops to gate designers at Microsoft. I don’t recommend inching through Washington’s rain. But I would have dashed in a drought. What sees us through everyday struggles—the inching of science—if not enthusiasm? We tackle circuits and struggles because, beyond the drizzle, lie ideas and conversations that energize us to run.


e e cummings

With thanks to QuArC members for their time and hospitality.

1One might object that Steps 4 and 14 don’t belong in the instructions. But software involves error correction.

2Of course you can design a quantum-circuit element. Anyone can quantum.

3Even after the characters die.

March madness and quantum memory

Madness seized me this March. It pounced before newspaper and Facebook feeds began buzzing about basketball.1 I haven’t bought tickets or bet on teams. I don’t obsess over jump-shot statistics. But madness infected me two weeks ago. I began talking with condensed-matter physicists.

Condensed-matter physicists study collections of many particles. Example collections include magnets and crystals. And the semiconductors in the iPhones that report NCAA updates.

Caltech professor Gil Refael studies condensed matter. He specializes in many-body localization. By “many-body,” I mean “involving lots of quantum particles.” By “localization,” I mean “each particle anchors itself to one spot.” We’d expect these particles to spread out, like the eau de hotdog that wafts across a basketball court. But Gil’s particles stay put.

Hot-dog smell

How many-body-localized particles don’t behave.
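The staying-put can be simulated. Below is a sketch of my own using a single-particle (Anderson-localization) toy model—a much simpler cousin of true many-body localization, which involves interacting particles—on a chain of sites. A particle on a clean chain spreads like eau de hotdog; crank up the disorder, and it anchors near its starting site:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # sites on a one-dimensional chain

def spread(disorder):
    """Evolve a particle that starts at the chain's center;
    return the RMS distance it travels by a late time."""
    # Tight-binding Hamiltonian: hopping between neighboring sites,
    # plus random on-site energies of strength `disorder`.
    H = np.diag(disorder * rng.uniform(-1, 1, n))
    H += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    vals, vecs = np.linalg.eigh(H)
    psi0 = np.zeros(n)
    psi0[n // 2] = 1.0
    t = 200.0  # evolve for a long time
    psi_t = vecs @ (np.exp(-1j * vals * t) * (vecs.T @ psi0))
    prob = np.abs(psi_t) ** 2
    sites = np.arange(n)
    return np.sqrt(prob @ (sites - n // 2) ** 2)

spread_clean = spread(0.0)       # no disorder: the particle wanders far
spread_disordered = spread(8.0)  # strong disorder: the particle stays put
print(spread_clean, spread_disordered)
```

With no disorder the spread approaches the chain’s size; with strong disorder it stays of order one site—the particle dribbles in place.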

Experts call many-body localization “MBL.” I’ve accidentally been calling many-body localization “MLB.” Hence the madness. You try injecting baseball into quantum discussions without sounding one out short of an inning.2

I wouldn’t have minded if the madness had erupted in October. The World Series began in October. The World Series involves Major League Baseball, what normal people call “the MLB.” The MLB dominates October; the NCAA dominates March. Preoccupation with the MLB during basketball season embarrasses me. I feel like I’ve bet on the last team that I could remember winning the championship, then realized that that team had last won in 2002.

March madness has been infecting my thoughts about many-body localization. I keep envisioning a localized particle as dribbling a basketball in place, opponents circling, fans screaming, “Go for it!” Then I recall that I’m pondering MBL…I mean, MLB…or the other way around. The dribbler gives way to a baseball player who refuses to abandon first base for second. Then I recall that I should be pondering particles, not playbooks.

Baseball diamond

Localized particles.

Recollection holds the key to MBL’s importance. Colleagues of Gil’s want to build quantum computers. Computers store information in memories. Memories must retain their contents; information mustn’t dribble away.

Consider recording halftime scores. You could encode the scores in the locations of the particles that form eau de hotdog. (Imagine you have advanced technology that manipulates scent particles.) If Duke had scored one point, you’d put this particle here; if Florida had scored two, you’d put that particle there. The particles—as smells too often do—would drift. You’d lose the information you’d encoded. Better to paint the scores onto scorecards. Dry paint stays put, preserving information.

The quantum particles studied by Gil stay put. They inspire scientists who develop memories for quantum computers. Quantum computation is gunning for a Most Valuable Player plaque in the technology hall of fame. Many-body localized systems could contain Most Valuable Particles.

MVP medal

Remembering the past, some say, can help one read the future. I don’t memorize teams’ records. I can’t advise you about whom to root for. But prospects for quantum memories are brightening. Bet on quantum information science.

1Non-American readers: University basketball teams compete in a tournament each March. The National Collegiate Athletic Association (NCAA) hosts the tournament. Fans glue themselves to TVs, tweet exaltations and frustrations, and excommunicate friends who support opposing teams.

2Without being John Preskill.