What matters to me, and why?

Students at my college asked every Tuesday. They gathered in a white, windowed room near the center of campus. “We serve,” read advertisements, “soup, bread, and food for thought.” One professor or visitor would discuss human rights, family, religion, or another pepper in the chili of life.

I joined occasionally. I listened by the window, in the circle of chairs that ringed the speaker. Then I ventured from college into physics.

The questions “What matters to you, and why?” have chased me through physics. I ask experimentalists and theorists, professors and students: Why do you do science? Which papers catch your eye? Why have you devoted to quantum information more years than many spouses devote to marriages?

One physicist answered with another question. Chris Jarzynski works as a professor at the University of Maryland. He studies statistical mechanics—how particles typically act and how often particles act atypically; how materials shine, how gases push back when we compress them, and more.

“How,” Chris asked, “should we quantify precision?”

Chris had in mind nonequilibrium fluctuation theorems. Out-of-equilibrium systems have large-scale properties, like temperature, that change significantly.1 Examples include white-bean soup cooling at a “What matters” lunch. The soup’s temperature drops to room temperature as the system approaches equilibrium.

[Image: steaming soup]

Nonequilibrium. Tasty, tasty nonequilibrium.

Some out-of-equilibrium systems obey fluctuation theorems. Fluctuation theorems are equations derived in statistical mechanics. Imagine a DNA molecule floating in a watery solution. Water molecules buffet the strand, which twitches. But the strand’s shape doesn’t change much. The DNA is in equilibrium.

You can grab the strand’s ends and stretch them apart. The strand will leave equilibrium as its length changes. Imagine pulling the strand to some predetermined length. You’ll have exerted energy.

How much? The amount will vary if you repeat the experiment. Why? This trial began with the DNA curled this way; that trial began with the DNA curled that way. During this trial, the water batters the molecule more; during that trial, less. These discrepancies block us from predicting how much energy you’ll exert. But suppose you pick a number W. We can form predictions about the probability that you’ll have to exert an amount W of energy.

How do we predict? Using nonequilibrium fluctuation theorems.

Fluctuation theorems matter to me, as Quantum Frontiers regulars know. Why? Because I’ve written enough fluctuation-theorem articles to test even a statistical mechanic’s patience. More seriously, why do fluctuation theorems matter to me?

Fluctuation theorems fill a gap in the theory of statistical mechanics. Fluctuation theorems relate nonequilibrium processes (like the cooling of soup) to equilibrium systems (like room-temperature soup). Physicists can model equilibrium. But we know little about nonequilibrium. Fluctuation theorems bridge from the known (equilibrium) to the unknown (nonequilibrium).

[Image: bridge (theory)]

Experiments take place out of equilibrium. (Stretching a DNA molecule changes the molecule’s length.) So we can measure properties of nonequilibrium processes. Equilibrium processes, by contrast, are idealizations that we can’t perform experimentally, so we can’t measure their properties directly. But we can measure an equilibrium property indirectly: we perform nonequilibrium experiments, then plug our data into fluctuation theorems.

[Image: bridge (experiment)]

Which equilibrium property can we infer? A free-energy difference, denoted by ΔF. Every equilibrated system (every room-temperature soup) has a free energy F. F represents the energy that the system can exert, such as the energy available to stretch a DNA molecule. Imagine subtracting one system’s free energy, F1, from another system’s free energy, F2. The subtraction yields a free-energy difference, ΔF = F2 – F1. We can infer the value of a ΔF from experiments.
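For concreteness, the best-known such theorem is Chris’s own, Jarzynski’s equality. It relates the work W exerted across many stretching trials to the free-energy difference ΔF:

\[
\left\langle e^{-W/k_{\mathrm{B}}T} \right\rangle \;=\; e^{-\Delta F/k_{\mathrm{B}}T},
\]

wherein T denotes the temperature, k_B denotes Boltzmann’s constant, and the angle brackets denote an average over infinitely many trials.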

How should we evaluate those experiments? Which experiments can we trust, and which need repeating?

Those questions mattered little to me, before I met Chris Jarzynski. Bridging equilibrium with nonequilibrium mattered to me, and bridging theory with experiment. Not experimental nitty-gritty.

I deserved a dunking in white-bean soup.


Suppose you performed infinitely many trials—stretched a DNA molecule infinitely many times. In each trial, you measured the energy exerted. You processed your data, then substituted into a fluctuation theorem. You could infer the exact value of ΔF.

But we can’t perform infinitely many trials. Imprecision mars our inference about ΔF. How does the imprecision relate to the number of trials performed?2

Chris and I adopted an information-theoretic approach. We quantified precision with a parameter δ. Suppose you want to estimate ΔF with precision δ. How many trials should you expect to need? We bounded the number N_δ of trials required, using an entropy. The bound tightens an earlier estimate of Chris’s. If you perform N_δ trials, you can estimate ΔF with a percent error that we calculated. We illustrated our results by modeling a gas.
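To see how the error shrinks with the trial number, here is a minimal numerical sketch. It illustrates the general idea, not the method in our paper: it assumes Gaussian-distributed work values, for which Jarzynski’s equality fixes ΔF in closed form, and tracks the percent error of the estimate as the trial count grows.

```python
import numpy as np

# Minimal sketch: estimate Delta F from simulated work measurements
# via Jarzynski's equality. Gaussian work statistics are assumed,
# for which Delta F = mu - beta * sigma**2 / 2 exactly.
rng = np.random.default_rng(seed=0)
beta = 1.0            # inverse temperature, 1 / (k_B T)
mu, sigma = 5.0, 1.0  # mean and spread of the work distribution
delta_f_exact = mu - beta * sigma**2 / 2

for n_trials in [10, 100, 1_000, 10_000, 100_000]:
    work = rng.normal(mu, sigma, size=n_trials)  # one work value per trial
    # Estimator: Delta F ~ -(1/beta) * ln of the average of exp(-beta W)
    delta_f_est = -np.log(np.mean(np.exp(-beta * work))) / beta
    pct_error = 100 * abs(delta_f_est - delta_f_exact) / abs(delta_f_exact)
    print(f"{n_trials:>7} trials: estimate {delta_f_est:.3f}, "
          f"percent error {pct_error:.2f}%")
```

The convergence is slow because, as footnote 2 notes, ΔF sits inside a logarithm: rare low-work trials dominate the exponential average, so repeating trials pays off sluggishly.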

I’d never appreciated the texture and richness of precision. But richness precision has: A few decimal places distinguish Albert Einstein’s general theory of relativity from Isaac Newton’s 17th-century mechanics. Particle physicists calculate constants of nature to many decimal places. Such a calculation earned a nod on physicist Julian Schwinger’s headstone. Precision serves as the bread and soup of much physics. I’d sniffed the importance of precision, but not tasted it, until questioned by Chris Jarzynski.

[Image: Julian Schwinger’s headstone]

The questioning continues. My college has discontinued its “What matters” series. But I ask scientist after scientist—thoughtful human being after thoughtful human being—“What matters to you, and why?” Asking, listening, reading, calculating, and self-regulating sharpen my answers to those questions. My answers often squish beneath the bread knife in my cutlery drawer of criticism. Thank goodness that repeating trials can reduce our errors.

[Image: bread knife]

1Or large-scale properties that will change. Imagine connecting the ends of a charged battery with a wire. Charge will flow from terminal to terminal, producing a current. You can measure, every minute, how quickly charge is flowing: You can measure how much current is flowing. The current won’t change much, for a while. But the current will die off as the battery nears depletion. A large-scale property (the current) appears constant but will change. Such a capacity to change characterizes nonequilibrium steady states (NESSes). NESSes form our second example of nonequilibrium states. Many-body localization forms a third, quantum example.

2Readers might object that scientists have tools for quantifying imprecision. Why not apply those tools? Because ΔF equals a logarithm, which is nonlinear. Other authors’ proposals appear in references 1-13 of our paper. Charlie Bennett addressed a related problem with his “acceptance ratio.” (Bennett also blogged about evil on Quantum Frontiers last month.)

Toward physical realizations of thermodynamic resource theories

“This is your arch-nemesis.”

The thank-you slide of my presentation remained onscreen, and the question-and-answer session had begun. I was presenting a seminar about thermodynamic resource theories (TRTs), models developed by quantum-information theorists for small-scale exchanges of heat and work. The audience consisted of condensed-matter physicists who studied graphene and photonic crystals. I was beginning to regret my topic’s abstractness.

The question-asker pointed at a listener.

“This is an experimentalist,” he continued, “your arch-nemesis. What implications does your theory have for his lab? Does it have any? Why should he care?”

I could have answered better. I apologized that quantum-information theorists, reared on the rarefied air of Dirac bras and kets, had developed TRTs. I recalled the baby steps with which science sometimes migrates from theory to experiment. I could have advocated for bounding, with idealizations, efficiencies achievable in labs. I should have invoked the connections being developed with fluctuation results, statistical mechanical theorems that have withstood experimental tests.

The crowd looked unconvinced, but I scored one point: The experimentalist was not my arch-nemesis.

“My new friend,” I corrected the questioner.

His question has burned in my mind for two years. Experiments have inspired, but not guided, TRTs. TRTs have yet to drive experiments. Can we strengthen the connection between TRTs and the natural world? If so, what tools must resource theorists develop to predict outcomes of experiments? If not, are resource theorists doing physics?

[Video: Steve Jobs’s Q&A after “Antennagate” (2010): http://everystevejobsvideo.com/steve-jobs-qa-session-excerpt-following-antennagate-2010/]

A Q&A more successful than mine.

I explore answers to these questions in a paper released today. Ian Durham and Dean Rickles were kind enough to request a contribution for a book of conference proceedings. The conference, “Information and Interaction: Eddington, Wheeler, and the Limits of Knowledge,” took place at the University of Cambridge (including a graveyard thereof), thanks to FQXi (the Foundational Questions Institute).

What, I asked my advisor, does one write for conference proceedings?

“Proceedings are a great opportunity to get something off your chest,” John said.

That seminar Q&A had sat on my chest, like a pet cat who half-smothers you while you’re sleeping, for two years. Theorists often justify TRTs with experiments.* Experimentalists, an argument goes, are probing the limits of physics. Conventional statistical mechanics describes these regimes poorly. To understand these experiments, and to apply them to technologies, we must explore TRTs.

Does that argument not merit testing? If experimentalists observe the extremes predicted with TRTs, then the justifications for, and the timeliness of, TRT research will grow.


Something to get off your chest. Like the contents of a conference-proceedings paper, according to my advisor.

You’ve read the paper’s introduction, the first eight paragraphs of this blog post. (Who wouldn’t want to begin a paper with a mortifying anecdote?) Later in the paper, I introduce TRTs and their role in one-shot statistical mechanics, the analysis of work, heat, and entropies on small scales. I discuss whether TRTs can be realized and whether physicists should care. I identify eleven opportunities for shifting TRTs toward experiments. Three opportunities concern what merits realizing and how, in principle, we can realize it. Six adjustments to TRTs could improve TRTs’ realism. Two more-out-there opportunities, though less critical to realizations, could diversify the platforms with which we might realize TRTs.

One opportunity is the physical realization of thermal embezzlement. TRTs, like thermodynamic laws, dictate how systems can and cannot evolve. Suppose that a state R cannot transform into a state S: R ↛ S. An ancilla C, called a catalyst, might facilitate the transformation: R + C ↦ S + C. Catalysts act like engines used to extract work from a pair of heat baths.

Engines degrade, so a realistic transformation might yield S + C̃, wherein C̃ resembles C. For certain definitions of “resembles,”** TRTs imply, one can extract arbitrary amounts of work by negligibly degrading C. Detecting the degradation—the work extraction’s cost—is difficult. Extracting arbitrary amounts of work at a difficult-to-detect cost contradicts the spirit of thermodynamic law.

The spirit, not the letter. Embezzlement seems physically realizable, in principle. Detecting embezzlement could push experimentalists’ abilities to distinguish between close-together states C and \tilde{C}. I hope that that challenge, and the chance to violate the spirit of thermodynamic law, attracts researchers. Alternatively, theorists could redefine “resembles” so that C doesn’t rub the law the wrong way.
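For the mathematically curious: the “resembles” in footnote ** is quantified, as the footnote says, with the trace distance between the catalyst’s initial and final quantum states,

\[
d\big(\rho_C, \rho_{\tilde{C}}\big) \;=\; \tfrac{1}{2}\,\big\|\rho_C - \rho_{\tilde{C}}\big\|_1.
\]

Embezzlement exploits definitions under which this distance stays negligibly small while the work extracted grows arbitrarily large.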

[Image: the laws of thermodynamics, game version: http://www.eoht.info/page/Laws+of+thermodynamics+(game+version)]

The paper’s broadness evokes a caveat of Arthur Eddington’s. In 1927, Eddington presented Gifford Lectures entitled The Nature of the Physical World. Being a physicist, he admitted, “I have much to fear from the expert philosophical critic.” Specializing in TRTs, I have much to fear from the expert experimental critic. The paper is intended to point out, and to initiate responses to, the lack of physical realizations of TRTs. Some concerns are practical; some, philosophical. I expect and hope that the discussion will continue…preferably with more cooperation and charity than during that Q&A.

If you want to continue the discussion, drop me a line.

*So do theorists-in-training. I have.

**A definition that involves the trace distance.

Reading the sub(linear) text

Physicists are not known for finesse. “Even if it cost us our funding,” I’ve heard a physicist declare, “we’d tell you what we think.” Little wonder I irked the porter who directed me toward central Cambridge.

The University of Cambridge consists of colleges as the US consists of states. Each college has a porter’s lodge, where visitors check in and students beg for help after locking their keys in their rooms. And where physicists ask for directions.

Last March, I ducked inside a porter’s lodge that bustled with deliveries. The woman behind the high wooden desk volunteered to help me, but I asked too many questions. By my fifth, her pointing at a map had devolved to jabbing.

Read the subtext, I told myself. Leave.

Or so I would have told myself, if not for that afternoon.

That afternoon, I’d visited Cambridge’s CMS, which merits every letter in “Centre for Mathematical Sciences.” Home to Isaac Newton’s intellectual offspring, the CMS consists of eight soaring, glass-walled, blue-topped pavilions. Their majesty walloped me as I turned off the road toward the gatehouse. So did the congratulatory letter from Queen Elizabeth II that decorated the route to the restroom.


I visited Nilanjana Datta, an affiliated lecturer of Cambridge’s Faculty of Mathematics, and her student, Felix Leditzky. Nilanjana and Felix specialize in entropies and one-shot information theory. Entropies quantify uncertainties and efficiencies. Imagine compressing many copies of a message into the smallest possible number of bits (units of memory). How few bits can you use per copy? That number, we call the optimal compression rate. It shrinks as the number of copies compressed grows. As the number of copies approaches infinity, that compression rate drops toward a number called the message’s Shannon entropy. If the message is quantum, the compression rate approaches the von Neumann entropy.
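As a toy illustration (mine, not Nilanjana and Felix’s), the sketch below computes the Shannon entropy of a hypothetical message source, the optimal number of bits per copy in the infinite-copy limit:

```python
import numpy as np

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                      # 0 log 0 = 0 by convention
    return -np.sum(p * np.log2(p))

# Hypothetical four-letter source; the first letter is likeliest.
probs = [0.5, 0.25, 0.125, 0.125]
h = shannon_entropy(probs)
print(f"Optimal asymptotic rate: {h:.3f} bits per copy")  # 1.750
# Compressing a billion copies costs roughly 1e9 * h bits, plus
# sublinear corrections (the second-order asymptotics discussed below).
```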

Good luck squeezing infinitely many copies of a message onto a hard drive. How efficiently can we compress fewer copies? According to one-shot information theory, the answer involves entropies other than Shannon’s and von Neumann’s. In addition to describing data compression, entropies describe the charging of batteries, the concentration of entanglement, the encrypting of messages, and other information-processing tasks.

Speaking of compressing messages: Suppose one-shot information theory posted status updates on Facebook. Suppose that that panel on your Facebook page’s right-hand side showed news weightier than celebrity marriages. The news feed might read, “TRENDING: One-shot information theory: Second-order asymptotics.”

Second-order asymptotics, I learned at the CMS, concerns how the optimal compression rate decays as the number of copies compressed grows. Imagine compressing a billion copies of a quantum message ρ. The number of bits needed about equals a billion times the von Neumann entropy HvN(ρ). Since a billion is less than infinity, 1,000,000,000 HvN(ρ) bits won’t suffice. Can we estimate the compression rate more precisely?

The question reminds me of gas stations’ hidden pennies. The last time I passed a station’s billboard, some number like $3.65 caught my eye. Each gallon cost about $3.65, just as each copy of ρ costs about HvN(ρ) bits. But a 9/10, writ small, followed the $3.65. If I’d budgeted $3.65 per gallon, I couldn’t have filled my tank. If you budget HvN(ρ) bits per copy of ρ, you can’t compress all your copies.

Suppose some station’s owner hatches a plan to promote business. If you buy one gallon, you pay $3.654. The more you purchase, the more the final digit drops from four. By cataloguing receipts, you calculate how a tank’s cost varies with the number of gallons, n. The cost equals $3.65 × n to a first approximation. To a second approximation, the cost might equal $3.65 × n + a√n, wherein a represents some number of cents. Compute a, and you’ll have computed the gas’s second-order asymptotics.

Nilanjana and Felix computed a’s associated with data compression and other quantum tasks. Second-order asymptotics met information theory when Strassen combined them in nonquantum problems. These problems developed under attention from Hayashi, Han, Polyanskiy, Poor, Verdú, and others. Tomamichel and Hayashi, as well as Li, introduced quantumness.

In the total-cost expression, $3.65 × n depends on n directly, or “linearly.” The second term depends on √n. As the number of gallons grows, so does √n, but √n grows more slowly than n. The second term is called “sublinear.”
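In symbols, and only schematically (conventions vary across tasks and papers): to compress n copies of ρ with failure probability at most ε, the optimal number of bits expands as

\[
n\,H_{\mathrm{vN}}(\rho) \;+\; \sqrt{n\,V(\rho)}\;\Phi^{-1}(1-\varepsilon) \;+\; O(\log n),
\]

wherein V(ρ) denotes an information variance and Φ^{-1} denotes the inverse of the Gaussian cumulative distribution function. For small ε, the √n term is positive, so the per-copy rate exceeds HvN(ρ) but sinks toward it as n grows, just as the per-gallon price sinks toward $3.65.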

Which is the word that rose to mind in the porter’s lodge. I told myself, Read the sublinear text.

Little wonder I irked the porter. At least—thanks to quantum information, my mistake, and facial expressions’ contagiousness—she smiled.


With thanks to Nilanjana Datta and Felix Leditzky for explanations and references; to Nilanjana, Felix, and Cambridge’s Centre for Mathematical Sciences for their hospitality; and to porters everywhere for providing directions.

Steampunk quantum

A dark-haired man leans over a marble balustrade. In the ballroom below, his assistants tinker with animatronic elephants that trumpet and with potions for improving black-and-white photographs. The man is an inventor near the turn of the 20th century. Cape swirling about him, he watches technology wed fantasy.

Welcome to the steampunk genre. A stew of science fiction and Victorianism, steampunk has invaded literature, film, and the Wall Street Journal. A few years after James Watt improved the steam engine, protagonists build animatronics, clone cats, and time-travel. At sci-fi conventions, top hats and blast goggles distinguish steampunkers from superheroes.


The closest the author has come to dressing steampunk.

I’ve never read steampunk other than H. G. Wells’s The Time Machine—and other than the scene recapped above. The scene features in The Wolsenberg Clock, a novel by Canadian poet Jay Ruzesky. The novel caught my eye at an Ontario library.

In Ontario, I began researching the intersection of quantum information with thermodynamics. Thermodynamics is the study of energy, efficiency, and entropy. Entropy quantifies uncertainty about a system’s small-scale properties, given large-scale properties. Consider a room of air molecules. Knowing that the room has a temperature of 75°F, you don’t know whether some molecule is skimming the floor, poking you in the eye, or elsewhere. Ambiguities in molecules’ positions and momenta endow the gas with entropy. Whereas entropy suggests lack of control, work is energy that accomplishes tasks.
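In symbols: a gas whose microstates i occur with probabilities p_i carries the Gibbs entropy

\[
S \;=\; -k_{\mathrm{B}} \sum_i p_i \ln p_i,
\]

wherein k_B denotes Boltzmann’s constant. The less the temperature and other large-scale data pin down the microstate, the flatter the distribution and the larger S.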