Generally speaking

My high-school calculus teacher had a mustache like a walrus’s and shoulders like a rower’s. At 8:05 AM, he would demand my class’s questions about our homework. Students would yawn, and someone’s hand would drift into the air.

“I have a general question,” the hand’s owner would begin.

“Only private questions from you,” my teacher would snap. “You’ll be a general someday, but you’re not a colonel, or even a captain, yet.”

Then his eyes would twinkle; his voice would soften; and, after the student asked the question, his answer would epitomize why I’ve chosen a life in which I use calculus more often than laundry detergent.

Though I witnessed the “general” trap many times, I fell into it only once. Little wonder: I relish generalization as other people relish hiking or painting or Michelin-worthy relish. When inferring general principles from examples, I abstract away details as though they’re tomato stains. My veneration of generalization led me to quantum information (QI) theory. One abstract theory can model many physical systems: electrons, superconductors, ion traps, etc.

Little wonder that generalizing a QI model swallowed my summer.

QI has shed light on statistical mechanics and thermodynamics, which describe energy, information, and efficiency. Models called resource theories describe small systems’ energies, information, and efficiencies. Resource theories help us calculate a quantum system’s value—what you can and can’t create from a quantum system—if you can manipulate systems in only certain ways.

Suppose you can perform only operations that preserve energy. According to the Second Law of Thermodynamics, systems evolve toward equilibrium. Equilibrium amounts roughly to stasis: Averages of properties like energy remain constant.

Out-of-equilibrium systems have value because you can suck energy from them to power laundry machines. How much energy can you draw, on average, from a system in a constant-temperature environment? Technically: How much “work” can you draw? We denote this average work by ⟨W⟩. According to thermodynamics, ⟨W⟩ equals the change ΔF in the system’s Helmholtz free energy. The Helmholtz free energy is a thermodynamic property similar to the energy stored in a coiled spring.

One reason to study thermodynamics?

Suppose you want to calculate more than the average extractable work. How much work will you probably extract during some particular trial? Though statistical physics offers no answer, resource theories do. One answer derived from resource theories resembles ∆F mathematically but involves one-shot information theory, which I’ve discussed elsewhere.

If you average this one-shot extractable work, you recover ⟨W⟩ = ΔF. “Helmholtz” resource theories recapitulate statistical-physics results while offering new insights about single trials.
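As a toy illustration of where ΔF comes from: the Helmholtz free energy has the textbook form F = −kT ln Z, with Z the canonical partition function. The two-level energies and units below are made up purely for illustration; this is a sketch, not a calculation from the papers discussed here.

```python
import math

def helmholtz_free_energy(energies, temperature, k_B=1.0):
    """Helmholtz free energy F = -kT ln Z for given energy levels."""
    beta = 1.0 / (k_B * temperature)
    Z = sum(math.exp(-beta * E) for E in energies)  # canonical partition function
    return -k_B * temperature * math.log(Z)

# Illustrative two-level system whose energy gap shrinks (made-up numbers).
T = 1.0
F_initial = helmholtz_free_energy([0.0, 2.0], T)
F_final = helmholtz_free_energy([0.0, 0.5], T)

# Average work extractable from the change in free energy (sign convention:
# positive means work comes out).
avg_work = F_initial - F_final
print(avg_work)  # → positive: the shrinking gap lets us extract work
```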

Helmholtz resource theories sit atop a silver-tasseled pillow in my heart. Why not, I thought, spread the joy to the rest of statistical physics? Why not generalize thermodynamic resource theories?

The average extractable work ⟨W⟩ equals ΔF if heat can leak into your system. If heat and particles can leak, ⟨W⟩ equals the change in your system’s grand potential. The grand potential, like the Helmholtz free energy, is a free energy that resembles the energy in a coiled spring. The grand potential characterizes Bose-Einstein condensates, low-energy quantum systems that may have applications to metrology and quantum computation. If your system responds to a magnetic field, or has mass and occupies a gravitational field, or has other properties, ⟨W⟩ equals the change in another free energy.
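For readers who like symbols: the free energies mentioned here have standard textbook forms (notation mine, not the post’s), in terms of the internal energy E, temperature T, entropy S, chemical potential μ, and particle number N:

```latex
\begin{align}
  F    &= E - TS          && \text{(Helmholtz free energy: heat exchange only)} \\
  \Phi &= E - TS - \mu N  && \text{(grand potential: heat and particle exchange)}
\end{align}
```

Each extra quantity the environment can exchange subtracts another term, which is why each interaction pattern picks out its own free energy.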

A collaborator and I designed resource theories that describe heat-and-particle exchanges. In our paper “Beyond heat baths: Generalized resource theories for small-scale thermodynamics,” we propose that different thermodynamic resource theories correspond to different interactions, environments, and free energies. I detailed the proposal in “Beyond heat baths II: Framework for generalized thermodynamic resource theories.”

“II” generalizes enough to satisfy my craving for patterns and universals. “II” generalizes enough to merit a hand-slap of a pun from my calculus teacher. We can test abstract theories only by applying them to specific systems. If thermodynamic resource theories describe situations as diverse as heat-and-particle exchanges, magnetic fields, and polymers, some specific system should shed light on resource theories’ accuracy.

If you find such a system, let me know. Much as generalization pleases aesthetically, the detergent is in the details.

The cost and yield of moving from (quantum) state to (quantum) state

The countdown had begun.

In ten days, I’d move from Florida, where I’d spent the summer with family, to Caltech. Unfolded boxes leaned against my dresser, and suitcases yawned on the floor. I was working on a paper. Even if I’d turned around from my desk, I wouldn’t have seen the stacked books and folded sheets. I’d have seen Lorenz curves, because I’d drawn Lorenz curves all week, and the curves seemed imprinted on my eyeballs.

Using Lorenz curves, we illustrate how much we know about a quantum state. Say you have an electron, you’ll measure it using a magnet, and you can’t predict any measurement’s outcome. Whether you orient the magnet up-and-down, left-to-right, etc., you haven’t a clue what number you’ll read out. We represent this electron’s state by a straight line from (0, 0) to (1, 1).


Say you know the electron’s state. Say you know that, if you orient the magnet up-and-down, you’ll read out +1. This state, we call “pure.” We represent it by a tented curve.


The more you know about a state, the more the state’s Lorenz curve deviates from the straight line.


If Curve A fails to dip below Curve B, we know at least as much about State A as about State B. We can transform State A into State B by manipulating and/or discarding information.
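The curves above can be sketched numerically. A state’s Lorenz curve comes from sorting its outcome probabilities in decreasing order and cumulating them; “Curve A never dips below Curve B” is the majorization condition. This is a minimal sketch (my function names, not standard library ones):

```python
import numpy as np

def lorenz_curve(probs):
    """Cumulative sums of the probabilities, sorted in decreasing order.
    A know-nothing state gives the straight line from (0,0) to (1,1);
    a pure state gives the tented curve that jumps straight to 1."""
    p = np.sort(np.asarray(probs, dtype=float))[::-1]
    return np.concatenate(([0.0], np.cumsum(p)))

def majorizes(p, q):
    """True iff p's Lorenz curve never dips below q's (equal dimensions):
    we then know at least as much about p, and can turn p into q by
    manipulating and/or discarding information."""
    return bool(np.all(lorenz_curve(p) >= lorenz_curve(q) - 1e-12))

uniform = [0.5, 0.5]   # the straight line: no clue about the outcome
pure = [1.0, 0.0]      # the tented curve: outcome known for certain
print(majorizes(pure, uniform))   # → True
print(majorizes(uniform, pure))   # → False
```

The asymmetry in the last two lines is the whole point: information can be thrown away, not conjured.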


By the time I’d drawn those figures, I’d listed the items that needed packing. A coauthor had moved from North America to Europe around the same time. If he could hop continents without impeding the paper, I could hop states. I unzipped the suitcases, packed a box, and returned to my desk.

Say Curve A dips below Curve B. We know too little about State A to transform it into State B. But we might combine State A with a state we know lots about. The latter state, C, might be pure. We have so much information about A + C that the amalgam can turn into B.


What’s the least amount of information we need about C to ensure that A + C can turn into B? That number, we call the “cost of transforming State A into State B.”
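One crude way to play with this cost on a laptop: measure C in pure ancilla qubits. In the toy accounting below (my simplification, not the definition in the papers), each ancilla qubit starts pure and is returned maximally mixed, and we brute-force the smallest number of qubits for which the amalgam’s Lorenz curve dominates the target’s:

```python
import numpy as np

def majorizes(p, q):
    """p majorizes q: p's Lorenz curve never dips below q's (equal dims)."""
    cp = np.cumsum(np.sort(np.asarray(p, dtype=float))[::-1])
    cq = np.cumsum(np.sort(np.asarray(q, dtype=float))[::-1])
    return bool(np.all(cp >= cq - 1e-12))

def transformation_cost(p_A, p_B, max_qubits=20):
    """Toy cost of turning A into B: the least number k of pure ancilla
    qubits such that A plus the pure ancilla majorizes B plus a
    maximally mixed ancilla of the same size."""
    for k in range(max_qubits + 1):
        pure = np.zeros(2**k); pure[0] = 1.0   # k pure qubits: all weight on one outcome
        unif = np.full(2**k, 2.0**-k)          # k maximally mixed qubits
        if majorizes(np.kron(p_A, pure), np.kron(p_B, unif)):
            return k
    return None  # not achievable within the search range

print(transformation_cost([0.9, 0.1], [0.6, 0.4]))  # → 0 (A already majorizes B)
print(transformation_cost([0.6, 0.4], [0.9, 0.1]))  # → 1 (one pure qubit suffices)
```

The brute force is inefficient, but it makes the trade vivid: the less Curve A dominates Curve B, the more pure ancilla you must burn.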

We call it that usually. But late in the evening, after I’d miscalculated two transformation costs and deleted four curves, days before my flight, I didn’t type the cost’s name into emails to coauthors. I typed “the cost of turning A into B” or “the cost of moving from state to state.”

Don’t sweat the epsilons…and it’s all epsilons

I’d come to Barnes and Noble to study and to immerse myself in the bustle. I needed reminding that humans other than those on my history exam existed. When I ran out of tea and of names to review, I stood, stretched, and browsed the shelves. A blue-bound book caught my eye: Don’t Sweat the Small Stuff…and it’s all small stuff.

Richard Carlson wrote that book for people like me. We have packing lists, grocery lists, and laundry lists of to-do lists. We transcribe lectures. We try to rederive equations that we should just use. Call us “detail-oriented”; call us “conscientious”; we’re boring as toast, and we have earlier bedtimes. When urged to relax, we try. We might not succeed, but we try hard.

For example, I do physics instead of math. Mathematicians agonize over what-ifs: “What if this bit of the fraction reaches one while that bit goes negative and the other goes loop-the-loop? We’d be dividing by zero!” Divisions by zero atom-bomb calculations. Since dividing by a tiny number amounts to multiplying by a large number, dividing by zero amounts to multiplying by infinity. While mathematicians chew their nails over infinities, physicists often assume we needn’t. We use math to represent physical systems like pendulums and ponytails.1 Ponytails have properties, like lacking infinite masses, that don’t smack of the apocalypse. Since those properties don’t, neither does the math that represents those properties. To justify assumptions that our math “behaves nicely,” we use the jargon, “the field goes to zero at the boundary,” “the coupling’s renormalized,” and “it worked last time.”

I tried not to sweat the small stuff. I tried to shrug off the question marks at calculations’ edges. Sometimes, I succeeded. Then I began a Master’s thesis about epsilons.

Self-help for calculus addicts.

In many physics problems, the Greek letter epsilon (ε) means “-ish.” The butcher sold you epsilon-close to a pound of beef? He tipped the scale a tad in your favor. Your temperature dropped from 103 to epsilon-close to normal? Stay in bed this afternoon, and you should recover by tomorrow.

For half a year, I’ve used epsilons to describe transformations between quantum states. To visualize the transformations, say you have a fistful of coins. Each coin consists of gold and aluminum. The portion of the coin that’s gold varies from coin to coin. I want a differently-sized fistful of coins, each with a certain gold content. After melting down your fistful, can you cast the fistful I want? Can you cast a fistful that’s epsilon-close to the fistful I want? I calculated answers to those questions, after substituting “quantum states” for “fistfuls” and a property called “purity” for “gold.”2

You might expect epsilon-close conversions to require less effort than exact conversions: Butchers weigh out approximately a pound of beef more quickly than they weigh a pound. But epsilon-close math requires more effort than exact math. Introducing epsilons into calculations, you introduce another number to keep track of. As that number approaches zero, approximate conversions become exact. If that number approaches zero while in a denominator, you atom-bomb calculations with infinities. The infinities remind me of geysers in a water park of quantum theory.
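Here is the flavor of what the extra bookkeeping looks like. The relaxation below is one illustrative choice of mine, not the thesis’s definition: declare p to “epsilon-majorize” q when p’s Lorenz curve dips below q’s by at most epsilon. Exact conversion is the epsilon = 0 special case:

```python
import numpy as np

def lorenz_heights(p):
    """Heights of the Lorenz curve: cumulative sums of sorted probabilities."""
    return np.cumsum(np.sort(np.asarray(p, dtype=float))[::-1])

def eps_majorizes(p, q, eps):
    """Crude 'epsilon-ish' relaxation (an illustrative choice, not the
    thesis's definition): p counts as converting into q up to eps if
    p's Lorenz curve dips below q's by at most eps."""
    return bool(np.all(lorenz_heights(p) >= lorenz_heights(q) - eps))

p = [0.58, 0.42]
q = [0.60, 0.40]
print(eps_majorizes(p, q, eps=0.0))    # → False: exact conversion fails
print(eps_majorizes(p, q, eps=0.05))   # → True: epsilon-close conversion succeeds
```

The geysers appear when you ask how quantities behave as eps shrinks to zero, with eps lurking in a denominator somewhere.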

Have you visited a water park where geysers erupt every few minutes? Have you found a geyser head that looks dead, and crouched to check it? Epsilons resemble dead-looking geyser heads. Just as geyser heads rise only inches from the ground, epsilons have values close to zero. Say you’ve divided by epsilon, and you’re lowering its value to naught. Hitch up your swimsuit, lower your head, and squint at the faucet. Farther you crouch, and farther, till SPLAT! Water shoots up your left nostril.

Infinities have been shooting up my left nostril for months.

Rocking back on your heels, you need a towel. Dividing by an epsilon that approaches zero, I need an advisor. An advisor who knows mounds of calculus, who corrects without crushing, and who doesn’t mind my bombarding him with questions once a week. I have one, thank goodness—an advisor, not a towel.3 I wouldn’t trade him for fifty fistfuls of gold coins.

Towel in hand, I tiptoed through the water park of epsilons. I learned how quickly geysers erupt, where they appear, and how to disable some. I learned about smoothed distributions, limits superior, and Asymptotic Equipartition Properties. Though soaked after crossing the park, I survived. I submitted my thesis last week. And I have the right—should I find the chutzpah—to toss off the word “epsilonification” like a spelling-bee champ.

Had I not sweated the epsilons, I wouldn’t have finished the thesis. Should I discard Richard Carlson’s advice? I can’t say, having returned to my history review instead of reading his book. But I don’t view epsilons as troubles to sweat or not. Why not view epsilons as geysers in the water park of quantum theory? Who doesn’t work up a sweat in a park? But I wouldn’t rather leave. And maybe—if enough geysers shoot up our left nostrils—we’ll learn a smidgeon about Old Faithful.

1 I’m not kidding about ponytails.

2 Quantum whizzes: I explored a resource theory like that of pure bipartite entanglement. Instead of entanglement or gold, nonuniformity (distance from the maximally mixed state) is a scarce resource. The uniform (maximally mixed) state has no worth, like aluminum. This “resource theory of nonuniformity” models thermodynamic systems whose Hamiltonians are trivial (H = 0).

3 Actually, I have two advisors, and I’m grateful for both. But one helped cure my epsilons. N.B. I have not begun working at Caltech.