Some like it cold.

When I reached IBM’s Watson research center, I’d barely seen Aaron in three weeks. Aaron, an experimentalist pursuing a physics PhD at Caltech, eats dinner with me and other friends most Fridays. During those three weeks, the group would gather on a sidewalk in the November dusk. Light would spill from a lamppost, and we’d tuck our hands into our pockets against the chill. Aaron’s wife would shake her head.

“The fridge is running,” she’d explain.

Aaron cools down mechanical devices to near absolute zero. Absolute zero is the lowest temperature possible,1 lower than outer space’s temperature. Cold magnifies certain quantum behaviors. Researchers observe those behaviors in small systems, such as nanoscale devices (devices about 10^-9 meters long). Aaron studies few-centimeter-long devices. Offsetting the devices’ size with cold might coax them into exhibiting quantum behaviors.

The cooling sounds as painless as teaching a cat to play fetch. Aaron lowers his fridge’s temperature in steps. Each step involves checking for leaks: A mix of two fluids—two types of helium—cools the fridge. One type of helium costs about $800 per liter. Lose too much helium, and you’ve lost your shot at graduating. Each leak requires Aaron to warm the fridge, then re-cool it. He hauled helium and pampered the fridge for ten days before the temperature reached 10 millikelvins (0.01 kelvins above absolute zero). He then worked like…well, like a grad student to check for quantum behaviors.

Aaron came to mind at IBM.

“How long does cooling your fridge take?” I asked Nick Bronn.

Nick works at Watson, IBM’s research center in Yorktown Heights, New York. Watson has sweeping architecture frosted with glass and stone. The building reminded me of Fred Astaire: decades-old, yet classy. I found Nick outside the cafeteria, nursing a coffee. He had sandy hair, more piercings than I, and a mandate to build a quantum computer.


IBM Watson

“Might I look around your lab?” I asked.

“Definitely!” Nick fished out an ID badge; grabbed his coffee cup; and whisked me down a wide, window-paneled hall.

Different researchers, across the world, are building quantum computers from different materials. IBMers use superconductors, materials that conduct electricity without resistance. Their qubits are tiny superconducting circuits, which function only at low temperatures, so IBM has seven closet-sized fridges. Different teams use different fridges to tackle different challenges to computing.

Nick found a fridge that wasn’t running. He climbed half-inside, pointed at metallic wires and canisters, and explained how they work. I wondered how his cooling process compared to Aaron’s.

“You push a button.” Nick shrugged. “The fridge cools in two days.”

IBM, I learned, has dry fridges. Aaron uses a wet fridge. Dry and wet fridges operate differently, though both require helium. Aaron’s wet fridge vibrates less, jiggling his experiment less. Jiggling relates to transferring heat. Heat suppresses the quantum behaviors Aaron hopes to observe.

Heat and warmth manifest in many ways, in physics. Count Rumford, an 18th-century American-Brit, conjectured the relationship between heat and jiggling. He noticed that drilling holes into cannons immersed in water boils the water. The drill bits rotated (moved in circles), transferring energy of movement to the cannons, which heated up. Heat enraptures me because it relates to entropy, a measure of disorderliness and ignorance. The flow of heat helps explain why time flows in just one direction.

A physicist friend of mine writes papers, he says, when catalyzed by “blinding rage.” He reads a paper by someone else, whose misunderstandings anger him. His wrath boils over into a research project.

Warmth manifests as the welcoming of a visitor into one’s lab. Nick didn’t know me from Fred Astaire, but he gave me the benefit of the doubt. He let me pepper him with questions and invited more questions.

Warmth manifests as a 500-word disquisition on fridges. I asked Aaron, via email, about how his cooling compares to IBM’s. I expected two sentences and a link to Wikipedia, since Aaron works 12-hour shifts. But he took pity on his theorist friend. He also warmed to his subject. Can’t you sense the zeal in “Helium is the only substance in the world that will naturally isotopically separate (neat!)”? No knowledge of isotopic separation required.

Many quantum scientists like it cold. But understanding, curiosity, and teamwork fire us up. Anyone under the sway of those elements of science likes it hot.

With thanks to Aaron and Nick. Thanks also to John Smolin and IBM Watson’s quantum-computing-theory team for their hospitality.

1In many situations. Some systems, like small magnets, can access negative temperatures.

Life, cellular automata, and mentoring

One night last July, IQIM postdoc Ning Bao emailed me a photo. He’d found a soda can that read, “Share a Coke with Patrick.”

Ning and I were co-mentoring two Summer Undergraduate Research Fellows, or SURFers. One mentee received Ning’s photo: Caltech physics major Patrick Rall.

“Haha,” Patrick emailed back. “I’ll share a Coke.”

Patrick, Ning, and I shared the intellectual equivalent of a six-pack last summer. We shared papers, meals, frustrations, hopes, late-night emails (from Patrick and Ning), 7-AM emails (from me), and webcomic strips. Now a senior, Patrick is co-authoring a paper about his SURF project.

The project grew from the question “What would happen if we quantized Conway’s Game of Life?” (For readers unfamiliar with the game, I’ll explain below.) Lessons we learned about the Game of Life overlapped with lessons I learned about life, as a first-time mentor. The soda fountain of topics contained the following flavors.


Update rules: Till last spring, I’d been burrowing into two models for out-of-equilibrium physics. PhD students burrow as no prairie dogs can. But, given five years in Caltech’s grassland, I wanted to explore. I wanted an update.

Ning and I had trespassed upon quantum game theory months earlier. Consider a nonquantum game, such as the Prisoner’s Dilemma or an election. Suppose that players have physical systems, such as photons (particles of light), that occupy superposed or entangled states. These quantum resources can change the landscape of the game’s possible outcomes. These changes clarify how we can harness quantum mechanics to process, transmit, and secure information.

How might quantum resources change Conway’s Game of Life, or GoL? British mathematician John Conway invented the game in 1970. Imagine a square board divided into smaller squares, or cells. On each cell sits a white or a black tile. Black represents a living organism; white represents a lack thereof.

Conway modeled population dynamics with an update rule. If prairie dogs overpopulate a field, some die from overcrowding. If a black cell borders more than three black neighbors, a white tile replaces the black. If separated from its pack, a prairie dog dies from isolation. If a black tile borders too few black neighbors, we exchange the black for a white. Mathematics columnist Martin Gardner detailed the rest of Conway’s update rule in this 1970 article.

Updating the board repeatedly evolves the population. Black and white shapes might flicker and undulate. Space-ship-like shapes can glide across the board. A simple update rule can generate complex outcomes—including, I found, frustrations, hopes, responsibility for another human’s contentment, and more meetings than I’d realized could fit in one summer.
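Conway’s update rule is compact enough to sketch in a few lines of Python. This is my own minimal illustration of the classical game, not anyone’s research code; it represents a board as a set of (row, column) coordinates of black (live) cells.

```python
# Conway's update rule: a black cell with two or three black neighbors
# survives; a white cell with exactly three black neighbors turns black;
# every other cell ends up white.
from collections import Counter

def step(board):
    """Apply one update to a board given as a set of live-cell coordinates."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in board
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in board)
    }

# One of the space-ship-like shapes: a glider, which reappears one cell
# down and one cell to the right every four updates.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
board = glider
for _ in range(4):
    board = step(board)
assert board == {(r + 1, c + 1) for (r, c) in glider}
```

The set-of-cells representation keeps the board unbounded, so gliders can glide forever without hitting an edge.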

Prairie dogs

Modeled by Conway’s Game of Life. And by PhD students.

Initial conditions: The evolution depends on the initial state, on how you distribute white and black tiles when preparing the board. Imagine choosing the initial state randomly from all the possibilities. White likely mingles with about as much black. The random initial condition might not generate eye-catchers such as gliders. The board might fade to, and remain, one color.*

Enthusiasm can fade as research drags onward. Project Quantum GoL has continued gliding due to its initial condition: The spring afternoon on which Ning, Patrick, and I observed the firmness of each other’s handshakes; Patrick walked Ning and me through a CV that could have intimidated a postdoc; and everyone tried to soothe everyone else’s nerves but occasionally avoided eye contact.

I don’t mean that awkwardness sustained the project. The awkwardness faded, as exclamation points and smiley faces crept into our emails. I mean that Ning and I had the fortune to entice Patrick. We signed up a bundle of enthusiasm, creativity, programming skills, and determination. That determination perpetuated the project through the summer and beyond. Initial conditions can determine a system’s evolution.

Long-distance correlations:  “Sure, I’d love to have dinner with you both! Thank you for the invitation!”

Lincoln Carr, a Colorado School of Mines professor, visited in June. Lincoln’s group, I’d heard, was exploring quantum GoLs.** He studies entanglement (quantum correlations) in many-particle systems. When I reached out, Lincoln welcomed our SURF group to collaborate.

I relished coordinating his visit with the mentees. How many SURFers could say that a professor had visited for his or her sake? When I invited Patrick to dinner with Lincoln, Patrick lit up like a sunrise over grasslands.

Our SURF group began skyping with Mines every Wednesday. We brainstorm, analyze, trade code, and kvetch with Mines student Logan Hillberry and colleagues. They offer insights about condensed matter; Patrick, about data processing and efficiency; I, about entanglement theory; and Ning, about entropy and time evolution.

We’ve learned together about long-range entanglement, about correlations between far-apart quantum systems. Thank goodness for Skype and email that correlate far-apart research groups. Everyone would have learned less alone.


Long-distance correlations between quantum states and between research groups

Time evolution: Logan and Patrick simulated quantum systems inspired by Conway’s GoL. Each researcher coded a simulation, or mathematical model, of a quantum system. They agreed on a nonquantum update rule; Logan quantized it in one way (constructed one quantum analog of the rule); and Patrick quantized the rule another way. They chose initial conditions, let their systems evolve, and waited.

In July, I noticed that Patrick brought a hand-sized green spiral notepad to meetings. He would synopsize his progress, and brainstorm questions, on the notepad before arriving. He jotted suggestions as we talked.

The notepad began guiding meetings in July. Patrick now steers discussions, ticking items off his agenda. The agenda I’ve typed remains minimized on my laptop till he finishes. My agenda contains few points absent from his, and his contains points not in mine.

Patrick and Logan are comparing their results. Behaviors of their simulations, they’ve found, depend on how they quantized their update rule. One might expect the update rule to determine a system’s evolution. One might expect the SURF program’s template to determine how research and mentoring skills evolve. But how we implement update rules matters.


Caltech’s 2015 quantum-information-theory Summer Undergraduate Research Fellows and mentors

Life: I’ve learned, during the past six months, about Conway’s Game of Life, simulations, and many-body entanglement. I’ve learned how to suggest references and experts when I can’t answer a question. I’ve learned that editing SURF reports by hand costs me less time than editing electronically. I’ve learned where Patrick and his family vacation, that he’s studying Chinese, and how undergrads regard on-campus dining. Conway’s Game of Life has expanded this prairie dog’s view of the grassland more than expected.

I’ll drink a Coke to that.

Glossary: Conway’s GoL is a cellular automaton. A cellular automaton consists of a board whose tiles change according to some update rule. Different cellular automata correspond to different board shapes, to boards of different dimensions, to different types of tiles, and to different update rules.

*Reversible cellular automata have greater probabilities (than the GoL has) of updating random initial states through dull-looking evolutions.

**Others have pondered quantum variations on Conway’s GoL.

Quantum Information: Episode II: The Tools’ Applications

Monday dawns. Headlines report that “Star Wars: Episode VII” has earned more money, during its opening weekend, than I hope to earn in my lifetime. Trading the newspaper for my laptop, I learn that a friend has discovered ThinkGeek’s BB-8 plushie. “I want one!” she exclaims in a Facebook post. “Because BB-8 definitely needs to be hugged.”

BB-8 plays sidekick to Star Wars hero Poe Dameron. The droid has a spherical body covered with metallic panels and lights. Mr. Gadget and Frosty the Snowman could have spawned such offspring. BB-8 has captured viewers’ hearts, and its chirps have captured cell-phone ringtones.


ThinkGeek’s BB-8 plushie

Still, I scratch my head over my friend’s Facebook post. Hugged? Why would she hug…

Oh. Oops.

I’ve mentally verbalized “BB-8” as “BB84.” BB84 denotes an application of quantum theory to cryptography. Cryptographers safeguard information from eavesdroppers and tampering. I’ve been thinking that my friend wants to hug a safety protocol.

Charles Bennett and Gilles Brassard invented BB84 in 1984. Imagine wanting to tell someone a secret. Suppose I wish to coordinate, with a classmate, the purchase of a BB-8 plushie for our friend the droid-hugger. Suppose that the classmate and I can communicate only via a public channel on which the droid-hugger eavesdrops.

Cryptographers advise me to send my classmate a key. A key is a random string of letters, such as CCCAAACCABACA. I’ll encode my message with the string, with which my classmate will decode the message.
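Here is a toy version of that encoding, in Python. It is my own illustration, not the BB84 paper’s scheme: shift each message letter by the matching key letter, modulo the alphabet size, one-time-pad style.

```python
# Toy key-based cipher over the post's three-letter alphabet.
ALPHABET = "ABC"  # the example key uses only A, B, and C

def encode(message, key):
    """Shift each message letter forward by the corresponding key letter."""
    return "".join(
        ALPHABET[(ALPHABET.index(m) + ALPHABET.index(k)) % len(ALPHABET)]
        for m, k in zip(message, key)
    )

def decode(ciphertext, key):
    """Undo the shifts; only someone holding the key can do this."""
    return "".join(
        ALPHABET[(ALPHABET.index(c) - ALPHABET.index(k)) % len(ALPHABET)]
        for c, k in zip(ciphertext, key)
    )

secret = "ABCABCABCABCA"   # a hypothetical message of mine
key = "CCCAAACCABACA"      # the key from the text
assert decode(encode(secret, key), key) == secret
```

Because the key is random, the ciphertext looks random to anyone who lacks the key, which is why transmitting the key securely is the whole problem.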


I have to transmit the key via the public channel. But the droid-hugger eavesdrops on the public channel. Haven’t we taken one step forward and one step back? Why would the key secure our information?

Because quantum-information science enables me to transmit the key without the droid-hugger’s obtaining it. I won’t transmit random letters; I’ll transmit quantum states. That is, I’ll transmit physical systems, such as photons (particles of light), whose properties encode quantum information.

A nonquantum letter has a value, such as A or B or C. Each letter has one and only one value, regardless of whether anyone knows what value the letter has. You can learn the value by measuring (looking at) the letter. We can’t necessarily associate such a value with a quantum state. Imagine my classmate measuring a state I send. Which value the measurement device outputs depends on chance and on how my classmate looks at the state.

If the droid-hugger intercepts and measures the state, she’ll change it. My classmate and I will notice such changes. We’ll scrap our key and repeat the BB84 protocol until the droid-hugger quits eavesdropping.
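The protocol’s logic can be caricatured in classical code. The sketch below is my own construction (real BB84 manipulates photon polarizations, not Python integers); it models bases as bits and shows how an intercept-resend eavesdropper leaves detectable errors in the sifted key.

```python
import random

def bb84_sift(n, eavesdrop=False, rng=random):
    """Simulate n transmissions; return the sender's and receiver's
    sifted keys (bits kept where their bases happened to match)."""
    key_a, key_b = [], []
    for _ in range(n):
        bit, basis_a = rng.randrange(2), rng.randrange(2)
        state_bit, state_basis = bit, basis_a  # the transmitted "state"
        if eavesdrop:
            # Intercept-resend: Eve measures in a random basis. A wrong
            # basis randomizes her outcome, and she resends in her basis.
            eve_basis = rng.randrange(2)
            if eve_basis != state_basis:
                state_bit = rng.randrange(2)
            state_basis = eve_basis
        basis_b = rng.randrange(2)
        # A measurement in the wrong basis yields a random outcome.
        measured = state_bit if basis_b == state_basis else rng.randrange(2)
        if basis_a == basis_b:  # bases are compared over the public channel
            key_a.append(bit)
            key_b.append(measured)
    return key_a, key_b

# Without an eavesdropper, the sifted keys agree exactly.
a, b = bb84_sift(200, rng=random.Random(1))
assert a == b

# With Eve intercepting, about a quarter of the sifted bits disagree,
# so the sender and receiver notice the tampering.
a, b = bb84_sift(2000, eavesdrop=True, rng=random.Random(1))
assert a != b
```

The quarter comes from two coin flips: Eve picks the wrong basis half the time, and each wrong-basis interception randomizes the receiver’s bit half the time.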

BB84 launched quantum cryptography, the safeguarding of information with quantum physics. Today’s quantum cryptographers rely on BB84 as you rely, when planning a holiday feast, on a carrot-cake recipe that passed your brother’s taste test on his birthday. Quantum cryptographers construct protocols dependent on lines like “The message sender and receiver are assumed to share a key distributed, e.g., via the BB84 protocol.”

BB84 has become a primitive task, a solved problem whose results we invoke in more-complicated problems. Other quantum-information primitives include (warning: jargon ahead) entanglement distillation, entanglement dilution, quantum data compression, and quantum-state merging. Quantum-information scientists solved many primitive problems during the 1990s and early 2000s. You can apply those primitives, even if you’ve forgotten how to prove them.


A primitive task, like quantum-entanglement distillation

Those primitives appear to darken quantum information’s horizons. The spring before I started my PhD, an older physicist asked me why I was specializing in quantum information theory. Haven’t all the problems been solved? he asked. Isn’t quantum information theory “dead”?

Imagine discovering how to power plasma blades with kyber crystals. Would you declare, “Problem solved” and relegate your blades to the attic? Or would you apply your tool to defending freedom?


Primitive quantum-information tools are unknotting problems throughout physics—in computer science; chemistry; optics (the study of light); thermodynamics (the study of work, heat, and efficiency); and string theory. My advisor has tracked how uses of “entanglement,” a quantum-information term, have swelled in high-energy-physics papers.

A colleague of that older physicist views quantum information theory as a toolkit, a perspective, a lens through which to view science. During the 1700s, the calculus invented by Isaac Newton and Gottfried Leibniz revolutionized physics. Emmy Noether (1882–1935) recast physics in terms of symmetries and conservation laws. (If the forces acting on a system don’t change in time, for example, the system doesn’t gain or lose energy. A constant force is invariant under, or symmetric with respect to, the progression of time. This symmetry implies that the system’s energy is conserved.) We can cast physics instead (jargon ahead) in terms of the minimization of a free energy or an action.
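The parenthetical argument compresses into one line of Hamiltonian mechanics (my gloss, for readers who enjoy such things): along any trajectory, dH/dt = \{H, H\} + \partial H / \partial t = \partial H / \partial t, since the Poisson bracket of the Hamiltonian H with itself vanishes. If the dynamics are symmetric under time translation (\partial H / \partial t = 0), then dH/dt = 0: the energy is conserved.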

Quantum information theory, this physicist predicted, will revolutionize physics as calculus, symmetries, conservation, and free energy have. Quantum-information tools such as entropies, entanglement, and qubits will bleed into subfields of physics as Lucasfilm has bled into the fanfiction, LEGO, and Halloween-costume markets.

BB84, and the solution of other primitives, have not killed quantum information. They’ve empowered it to spread—thankfully, to this early-career quantum information scientist. Never mind BB-8; I’d rather hug BB84. Perhaps I shall. Engineers have realized technologies that debuted on Star Trek; quantum mechanics has secured key sharing; bakers have crafted cakes shaped like the Internet; and a droid’s popularity rivals R2D2’s. Maybe next Monday will bring a BB84 plushie.


The author hugging the BB84 paper and a plushie. On my wish list: a combination of the two.

Discourse in Delft

A camel strolled past, yards from our window in the Applied-Sciences Building.

I hadn’t expected to see camels at TU Delft, aka the Delft University of Technology, in Holland. I breathed, “Oh!” and turned to watch until the camel followed its turbaned leader out of sight. Nelly Ng, the PhD student with whom I was talking, followed my gaze and laughed.

Nelly works in Stephanie Wehner’s research group. Stephanie—a quantum cryptographer, information theorist, thermodynamicist, and former Caltech postdoc—was kind enough to host me for half August. I arrived at the same time as TU Delft’s first-year undergrads. My visit coincided with their orientation. The orientation involved coffee hours, team-building exercises, and clogging the cafeteria whenever the Wehner group wanted lunch.

And, as far as I could tell, a camel.

Not even a camel could unseat Nelly’s and my conversation. Nelly, postdoc Mischa Woods, and Stephanie are the Wehner-group members who study quantum and small-scale thermodynamics. I study quantum and small-scale thermodynamics, as Quantum Frontiers stalwarts might have tired of hearing. The four of us exchanged perspectives on our field.

Mischa knew more than Nelly and I about clocks; Nelly knew more about catalysis; and I knew more about fluctuation relations. We’d read different papers. We’d proved different theorems. We explained the same phenomena differently. Nelly and I—with Mischa and Stephanie, when they could join us—questioned and answered each other almost perpetually, those two weeks.

We talked in our offices, over lunch, in the group discussion room, and over tea at TU Delft’s Quantum Café. We emailed. We talked while walking. We talked while waiting for Stephanie to arrive so that she could talk with us.


The site of many a tête-à-tête.

The copiousness of the conversation drained me. I’m an introvert, formerly “the quiet kid” in elementary school. Early some mornings in Delft, I barricaded myself in the visitors’ office. Late some nights, I retreated to my hotel room or to a canal bank. I’d exhausted my supply of communication; I had no more words for anyone. Which troubled me, because I had to finish a paper. But I regret not one discussion, for three reasons.

First, we relished our chats. We laughed together, poked fun at ourselves, commiserated about calculations, and confided about what we didn’t understand.

We helped each other understand, second. As I listened to Mischa or as I revised notes about a meeting, a camel would stroll past a window in my understanding. I’d see what I hadn’t seen before. Mischa might be explaining which quantum states represent high-quality clocks. Nelly might be explaining how a quantum state ξ can enable a state ρ to transform into a state σ. I’d breathe, “Oh!” and watch the mental camel follow my interlocutor through my comprehension.

Nelly’s, Mischa’s, and Stephanie’s names appear in the acknowledgements of the paper I’d worried about finishing. The paper benefited from their explanations and feedback.

Third, I left Delft with more friends than I’d had upon arriving. Nelly, Mischa, and I grew to know each other, to trust each other, to enjoy each other’s company. At the end of my first week, Nelly invited Mischa and me to her apartment for dinner. She provided pasta; I brought apples; and Mischa brought a sweet granola-and-seed mixture. We tasted and enjoyed more than we would have separately.


Dinner with Nelly and Mischa.

I’ve written about how Facebook has enhanced my understanding of, and participation in, science. Research involves communication. Communication can challenge us, especially many of us drawn to science. Let’s shoulder past the barrier. Interlocutors point out camels—and hot-air balloons, and lemmas and theorems, and other sources of information and delight—that I wouldn’t spot alone.

With gratitude to Stephanie, Nelly, Mischa, the rest of the Wehner group (with whom I enjoyed talking), QuTech and TU Delft.

During my visit, Stephanie and Delft colleagues unveiled the “first loophole-free Bell test.” Their paper sent shockwaves (AKA camels) throughout the quantum community. Scott Aaronson explains the experiment here.

Toward physical realizations of thermodynamic resource theories

“This is your arch-nemesis.”

The thank-you slide of my presentation remained onscreen, and the question-and-answer session had begun. I was presenting a seminar about thermodynamic resource theories (TRTs), models developed by quantum-information theorists for small-scale exchanges of heat and work. The audience consisted of condensed-matter physicists who studied graphene and photonic crystals. I was beginning to regret my topic’s abstractness.

The question-asker pointed at a listener.

“This is an experimentalist,” he continued, “your arch-nemesis. What implications does your theory have for his lab? Does it have any? Why should he care?”

I could have answered better. I apologized that quantum-information theorists, reared on the rarefied air of Dirac bras and kets, had developed TRTs. I recalled the baby steps with which science sometimes migrates from theory to experiment. I could have advocated for bounding, with idealizations, efficiencies achievable in labs. I should have invoked the connections being developed with fluctuation results, statistical mechanical theorems that have withstood experimental tests.

The crowd looked unconvinced, but I scored one point: The experimentalist was not my arch-nemesis.

“My new friend,” I corrected the questioner.

His question has burned in my mind for two years. Experiments have inspired, but not guided, TRTs. TRTs have yet to drive experiments. Can we strengthen the connection between TRTs and the natural world? If so, what tools must resource theorists develop to predict outcomes of experiments? If not, are resource theorists doing physics?

A Q&A more successful than mine.

I explore answers to these questions in a paper released today. Ian Durham and Dean Rickles were kind enough to request a contribution for a book of conference proceedings. The conference, “Information and Interaction: Eddington, Wheeler, and the Limits of Knowledge,” took place at the University of Cambridge (including a graveyard thereof), thanks to FQXi (the Foundational Questions Institute).

What, I asked my advisor, does one write for conference proceedings?

“Proceedings are a great opportunity to get something off your chest,” John said.

That seminar Q&A had sat on my chest, like a pet cat who half-smothers you while you’re sleeping, for two years. Theorists often justify TRTs with experiments.* Experimentalists, an argument goes, are probing limits of physics. Conventional statistical mechanics describe these regimes poorly. To understand these experiments, and to apply them to technologies, we must explore TRTs.

Does that argument not merit testing? If experimentalists observe the extremes predicted with TRTs, then the justifications for, and the timeliness of, TRT research will grow.

Something to get off your chest. Like the contents of a conference-proceedings paper, according to my advisor.

You’ve read the paper’s introduction, the first eight paragraphs of this blog post. (Who wouldn’t want to begin a paper with a mortifying anecdote?) Later in the paper, I introduce TRTs and their role in one-shot statistical mechanics, the analysis of work, heat, and entropies on small scales. I discuss whether TRTs can be realized and whether physicists should care. I identify eleven opportunities for shifting TRTs toward experiments. Three opportunities concern what merits realizing and how, in principle, we can realize it. Six adjustments to TRTs could improve TRTs’ realism. Two more-out-there opportunities, though less critical to realizations, could diversify the platforms with which we might realize TRTs.

One opportunity is the physical realization of thermal embezzlement. TRTs, like thermodynamic laws, dictate how systems can and cannot evolve. Suppose that a state R cannot transform into a state S: R \not\mapsto S. An ancilla C, called a catalyst, might facilitate the transformation: R + C \mapsto S + C. Catalysts act like engines used to extract work from a pair of heat baths.

Engines degrade, so a realistic transformation might yield S + \tilde{C}, wherein \tilde{C} resembles C. For certain definitions of “resembles,”** TRTs imply, one can extract arbitrary amounts of work by negligibly degrading C. Detecting the degradation—the work extraction’s cost—is difficult. Extracting arbitrary amounts of work at a difficult-to-detect cost contradicts the spirit of thermodynamic law.

The spirit, not the letter. Embezzlement seems physically realizable, in principle. Detecting embezzlement could push experimentalists’ abilities to distinguish between close-together states C and \tilde{C}. I hope that that challenge, and the chance to violate the spirit of thermodynamic law, attracts researchers. Alternatively, theorists could redefine “resembles” so that C doesn’t rub the law the wrong way.
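For the curious, a concrete catalyst family behind such results (my addition; the standard reference is van Dam and Hayden’s “embezzling states,” though the paper may invoke a different construction) is | \mu(n) \rangle = \frac{1}{\sqrt{H_n}} \sum_{j=1}^{n} \frac{1}{\sqrt{j}} \, |j\rangle |j\rangle, with H_n = \sum_{j=1}^{n} \frac{1}{j}. Starting from | \mu(n) \rangle alone, local operations can produce | \mu(n) \rangle together with a fixed entangled pair, up to an error, measured in trace distance, that shrinks as n grows. The larger the catalyst, the harder the theft is to detect.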

The paper’s broadness evokes a caveat of Arthur Eddington’s. In 1927, Eddington presented Gifford Lectures entitled The Nature of the Physical World. Being a physicist, he admitted, “I have much to fear from the expert philosophical critic.” Specializing in TRTs, I have much to fear from the expert experimental critic. The paper is intended to point out, and to initiate responses to, the lack of physical realizations of TRTs. Some concerns are practical; some, philosophical. I expect and hope that the discussion will continue…preferably with more cooperation and charity than during that Q&A.

If you want to continue the discussion, drop me a line.

*So do theorists-in-training. I have.

**A definition that involves the trace distance.

Bits, Bears, and Beyond in Banff: Part Deux

You might remember that about one month ago, Nicole blogged about the conference Beyond i.i.d. in information theory and told us about bits, bears, and beyond in Banff. I was very pleased that Nicole did so, because this conference has become one of my favorites in recent years (ok, it’s my favorite). You can look at her post to see what is meant by “Beyond i.i.d.” The focus of the conference includes cutting-edge topics in quantum Shannon theory, and the conference still has a nice “small world” feel to it (for example, the most recent edition and the first one featured a music session from participants). Here is a picture of some of us having a fun time:


Will Matthews, Felix Leditzky, me, and Nilanjana Datta (facing away) singing “Jamaica Farewell”.

The Beyond i.i.d. series has shaped a lot of the research going on in this area and has certainly affected my own research directions. The first Beyond i.i.d. was held in Cambridge, UK in January 2013, organized by Nilanjana Datta and Renato Renner. It had a clever logo, featuring cyclists of various sorts biking one after another, the first few looking the same and the ones behind them breaking out of the i.i.d. pattern:

The first Beyond i.i.d. logo.

It was also at the Cambridge edition that the famous entropy zoo first appeared, which has now been significantly updated, based on recent progress in the area. The next Beyond i.i.d. happened in Singapore in May 2014, organized by Marco Tomamichel, Vincent Tan, and Stephanie Wehner. (Stephanie was a recent “quantum woman” for her work on a loophole-free Bell experiment.)

The tradition continued this past summer in beautiful Banff, Canada. I hope that it goes on for a long time. At least I have next year’s to look forward to, in beachy Barcelona in the summertime, planned (as of now) to take place just one week before Alexander Holevo presents the Shannon lecture at the ISIT 2016 conference, also in Barcelona. (By the way, this is the first time that a quantum information theorist has won the prestigious Shannon award.)

So why am I blabbing on and on about the Beyond i.i.d. conference if Nicole already wrote a great summary of the Banff edition this past summer? Well, she didn’t have room in her blog post to cover one of my favorite topics that was discussed at my favorite conference, so she graciously invited me to discuss it here.

The driving question of my new favorite topic is “What is the right notion of a quantum Markov chain?” The past year or so has seen some remarkable progress in this direction. To motivate it, let’s go back to bears, and specifically bear attacks (as featured in Nicole’s post). In Banff, the locals there told us that they had never heard of a bear attacking a group of four or more people who hike together. But let’s suppose that Alice, Bob, and Eve ignore this advice and head out together for a hike in the mountains. Also, in a different group are 50 of Alice’s sisters, but the park rangers are focusing their binoculars on the group of three (Alice, Bob, and Eve), observing their movements, because they are concerned that a group of three will run into trouble.

In the distance, there is a clever bear observing the movements of Alice, Bob, and Eve, and he notices some curious things. If he looks at Alice and Bob’s movements alone, they appear to take each step randomly, but for the most part together. That is, their steps appear correlated. He records their movements for some time and estimates a probability distribution p(a,b) that characterizes their movements. However, if he considers the movements of Alice, Bob, and Eve all together, he realizes that Alice and Bob are really taking their steps based on what Eve does, who in turn is taking her steps completely at random. So at this point the bear surmises that Eve is the mediator of the correlations observed between Alice and Bob’s movements, and when he writes down an estimate for the probability distribution p(a,b,e) characterizing all three of their movements, he notices that it factors as p(a,b,e) = p(a|e) p(b|e) p(e). That is, the bear sees that the distribution forms a Markov chain.

“What is an important property of such a Markov chain?”, asks the bear. Well, neglecting Alice’s movements (summing over the a variable), the probability distribution reduces to p(b|e) p(e), because p(a|e) is a conditional probability distribution. A characteristic of a Markov probability distribution is that one could reproduce the original distribution p(a,b,e) simply by acting on the e variable of p(b|e) p(e) with the conditional probability distribution p(a|e). So the bear realizes that it would be possible for Alice to be lost and subsequently replaced by Eve calling in one of Alice’s sisters, such that nobody else would notice anything different from before: it would appear as if the movements of all three were unchanged once this replacement occurs. Salivating at his realization, the bear takes Eve briefly aside without any of the others noticing. The bear explains that he will not eat Eve and will instead eat Alice if Eve can call in one of Alice’s sisters and direct her movements to be chosen according to the distribution p(a|e). Eve, realizing that her options are limited (ok, ok, maybe there are other options…), makes a deal with the bear. So the bear promptly eats Alice, and Eve draws in one of Alice’s sisters, whom Eve then directs to walk according to the distribution p(a|e). This process repeats, going on and on, and all the while, the park rangers, focusing exclusively on the movements of the party of three, don’t think anything of what’s going on, because they observe that the joint distribution p(a,b,e) describing the movements of “Alice,” Bob, and Eve never seems to change (let’s assume that the actions of the bear and Eve are very fast :) ). So the bear is very satisfied after eating Alice and some of her sisters, and Eve is pleased not to be eaten, at the same time never having cared too much for Alice or any of her sisters.
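The bear’s recovery trick is easy to verify numerically. Here is a minimal sketch (the distributions are made up for illustration): build a distribution that factors as p(a|e) p(b|e) p(e), sum out the a variable, then act on the e variable alone with the conditional p(a|e) and confirm that the original distribution reappears.

```python
import numpy as np

# Illustrative (made-up) conditionals; two possible values each for a, b, e.
p_e = np.array([0.6, 0.4])                    # p(e)
p_a_given_e = np.array([[0.7, 0.3],           # row e=0: p(a|e=0)
                        [0.2, 0.8]])          # row e=1: p(a|e=1)
p_b_given_e = np.array([[0.5, 0.5],
                        [0.9, 0.1]])

# Markov-chain distribution p(a,b,e) = p(a|e) p(b|e) p(e), indexed [a, b, e].
p_abe = np.einsum('ea,eb,e->abe', p_a_given_e, p_b_given_e, p_e)

# "Lose" Alice: sum over a, leaving p(b,e) = p(b|e) p(e).
p_be = p_abe.sum(axis=0)

# Recover: act on the e variable alone with the conditional p(a|e).
recovered = np.einsum('ea,be->abe', p_a_given_e, p_be)

print(np.allclose(recovered, p_abe))  # True: the rangers notice nothing
```

The recovery touches only the e variable, which is exactly why the rangers, watching the joint distribution, see no change.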

A natural question arises: “What could Alice and Bob do to prevent this terrible situation from arising, in which Alice and so many of her sisters get eaten without the park rangers noticing anything?” Well, Alice and Bob could attempt to coordinate their movements independently of Eve’s. Even better, before heading out on a hike, they could make sure to have brought along several entangled pairs of particles (and perhaps some bear spray). If Alice and Bob choose their movements according to the outcomes of measurements of the entangled pairs, then it would be impossible for Alice to be eaten and the park rangers not to notice. That is, the distribution describing their movements could never be described by a Markov chain distribution of the form p(a|e) p(b|e) p(e). Thus, in such a scenario, as soon as the bear attacks Alice and then Eve replaces her with one of her sisters, the park rangers would immediately notice something different about the movements of the party of three and then figure out what is going on. So at least Alice could save her sisters…

What is the lesson here? A similar scenario is faced in quantum key distribution. Eve and other attackers (such as a bear) might try to steal the contents of Alice’s system and replace them with something else, in an attempt to go undetected. If the situation is described by classical physics, this would be possible if Eve had access to a “hidden variable” that dictates the actions of Alice and Bob. But according to Bell’s theorem or the monogamy of entanglement, it is impossible for a hidden-variable strategy to mimic the outcomes of measurements performed on sufficiently entangled particles.

Since we never have perfectly entangled particles or ones whose distributions exactly factor as Markov chains, it would be ideal to quantify, for a given three-party quantum state of Alice, Bob, and Eve, how well one could recover from the loss of the Alice system by Eve performing a recovery channel on her system alone. This would help us to better understand the approximate cases that we expect to arise in realistic scenarios. At the same time, we would have a clearer understanding of what constitutes an approximate quantum Markov chain.

Now due to recent results of Fawzi and Renner, we know that this quantification of quantum non-Markovianity is possible by using the conditional quantum mutual information (CQMI), a fundamental measure of information in quantum information theory. We already knew that the CQMI is non-negative when evaluated for any three-party quantum state, due to the strong subadditivity inequality, but now we can say more than that: If the CQMI is small, then Eve can recover well from the loss of Alice, implying that the reduced state of Alice and Bob’s system could not have been too entangled in the first place. Relatedly, if Eve cannot recover well from the loss of Alice, then the CQMI cannot be small. The CQMI is the quantity underlying the squashed entanglement measure, which in turn plays a fundamental role in characterizing the performance of realistic quantum key distribution systems.
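For the classical analogue of the story, the CQMI is concrete enough to compute directly: I(A;B|E) = H(A,E) + H(B,E) - H(E) - H(A,B,E). The sketch below (function names and numbers are mine, chosen for illustration) shows that the CQMI vanishes for a Markov-chain distribution and is strictly positive when Alice and Bob are correlated beyond what Eve mediates.

```python
import numpy as np

def shannon(p):
    """Shannon entropy (in bits) of a probability array."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cqmi(p_abe):
    """Classical conditional mutual information I(A;B|E) of p(a,b,e)."""
    h_ae = shannon(p_abe.sum(axis=1))       # H(A,E)
    h_be = shannon(p_abe.sum(axis=0))       # H(B,E)
    h_e = shannon(p_abe.sum(axis=(0, 1)))   # H(E)
    h_abe = shannon(p_abe)                  # H(A,B,E)
    return h_ae + h_be - h_e - h_abe

# Markov chain p(a|e) p(b|e) p(e): I(A;B|E) = 0.
p_e = np.array([0.6, 0.4])
p_a_e = np.array([[0.7, 0.3], [0.2, 0.8]])
p_b_e = np.array([[0.5, 0.5], [0.9, 0.1]])
markov = np.einsum('ea,eb,e->abe', p_a_e, p_b_e, p_e)
print(cqmi(markov))  # ~ 0

# A and B perfectly correlated, E independent of both: I(A;B|E) = 1 bit.
corr = np.zeros((2, 2, 2))
corr[0, 0, :] = corr[1, 1, :] = 0.25
print(cqmi(corr))
```

Strong subadditivity guarantees the returned value is never negative; the Fawzi–Renner result is the quantum refinement of the "small CQMI implies good recovery" intuition.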

Since the original results of Fawzi and Renner appeared on the arXiv, this topic has seen much activity in “quantum information land.” Here are some papers related to this topic, which have appeared in the past year or so (apologies if I missed your paper!):

Renyi generalizations of the conditional quantum mutual information
Fidelity of recovery, geometric squashed entanglement, and measurement recoverability
Renyi squashed entanglement, discord, and relative entropy differences
Squashed entanglement, k-extendibility, quantum Markov chains, and recovery maps
Quantum Conditional Mutual Information, Reconstructed States, and State Redistribution
Multipartite quantum correlations and local recoverability
Monotonicity of quantum relative entropy and recoverability
Quantum Markov chains, sufficiency of quantum channels, and Renyi information measures
The Fidelity of Recovery is Multiplicative
Universal recovery map for approximate Markov chains
Recoverability in quantum information theory
Strengthened Monotonicity of Relative Entropy via Pinched Petz Recovery Map
Universal recovery from a decrease of quantum relative entropy

Some of the papers are admittedly by my collaborators and me, but hey, please forgive me: I’ve been excited about the topic. We now know simpler proofs of the original Fawzi-Renner results and extensions of them that apply to the quantum relative entropy as well. Since the quantum relative entropy is such a fundamental quantity in quantum information theory, some of the above papers have sweeping ramifications for many foundational statements in quantum information theory, including entanglement theory, quantum distinguishability, the Holevo bound, quantum discord, multipartite information measures, etc. Beyond i.i.d. had a day of talks dedicated to the topic, and I think we will continue seeing further developments in this area.

Bits, bears, and beyond in Banff

Another conference about entropy. Another graveyard.

Last year, I blogged about the University of Cambridge cemetery visited by participants in the conference “Eddington and Wheeler: Information and Interaction.” We’d lectured each other about entropy, a quantification of decay, of the march of time. Then we marched to an overgrown graveyard, where scientists who’d lectured about entropy decades earlier were decaying.

This July, I attended the conference “Beyond i.i.d. in information theory.” The acronym “i.i.d.” stands for “independent and identically distributed,” which requires its own explanation. The conference took place at BIRS, the Banff International Research Station, in Canada. Locals pronounce “BIRS” as “burrs,” the spiky plant bits that stick to your socks when you hike. (I had thought that one pronounces “BIRS” as “beers,” over which participants in quantum conferences debate about the Measurement Problem.) Conversations at “Beyond i.i.d.” dinner tables ranged from mathematical identities to the hiking for which most tourists visit Banff to the bears we’d been advised to avoid while hiking. So let me explain the meaning of “i.i.d.” in terms of bear attacks.


The BIRS conference center. Beyond here, there be bears.

Suppose that, every day, exactly one bear attacks you as you hike in Banff. Every day, you have a probability p_1 of facing down a black bear, a probability p_2 of facing down a grizzly, and so on. These probabilities form a distribution {p_i} over the set of possible events (of possible attacks). We call the type of attack that occurs on a given day a random variable. The distribution associated with each day equals the distribution associated with each other day. Hence the variables are identically distributed. The Monday distribution doesn’t affect the Tuesday distribution and so on, so the variables are independent.
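The i.i.d. setup is easy to simulate. In the sketch below (the attack probabilities are made up), each day’s attack is drawn independently from the same distribution {p_i}, and the empirical frequencies drift toward {p_i} as the days pile up.

```python
import numpy as np

rng = np.random.default_rng(42)

bears = ['black bear', 'grizzly', 'polar bear']
p = np.array([0.7, 0.25, 0.05])   # made-up attack distribution {p_i}

# One attack per day; every day is drawn independently from the same distribution.
days = rng.choice(len(bears), size=100_000, p=p)

# Empirical frequencies approach {p_i} as the number of days grows.
freqs = np.bincount(days, minlength=len(bears)) / days.size
print(dict(zip(bears, freqs.round(3))))
```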

Information theorists quantify efficiencies with which i.i.d. tasks can be performed. Suppose that your mother expresses concern about your hiking. She asks you to report which bear harassed you on which day. You compress your report into the fewest possible bits, or units of information. Consider the limit as the number of days approaches infinity, called the asymptotic limit. The number of bits required per day approaches a function, called the Shannon entropy H_S, of the distribution:

Number of bits required per day → H_S({p_i}).

The Shannon entropy describes many asymptotic properties of i.i.d. variables. Similarly, the von Neumann entropy H_vN describes many asymptotic properties of i.i.d. quantum states.
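To put a number on the compression rate, here is the Shannon entropy of an illustrative (made-up) bear-attack distribution:

```python
import numpy as np

# Made-up {p_i}: black bear, grizzly, polar bear.
p = np.array([0.7, 0.25, 0.05])

# H_S({p_i}) = -sum_i p_i * log2(p_i), in bits per day.
h_shannon = -np.sum(p * np.log2(p))
print(f"{h_shannon:.3f} bits per day")   # well under log2(3) ≈ 1.585 bits
```

Because black-bear attacks dominate, your report compresses well below the log2(3) bits per day a naive encoding of three attack types would need.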

But you don’t hike for infinitely many days. The rate of black-bear attacks ebbs and flows. If you stumbled into grizzly land on Friday, you’ll probably avoid it, and have a lower grizzly-attack probability, on Saturday. Into how few bits can you compress a set of nonasymptotic, non-i.i.d. variables?

We answer such questions in terms of ɛ-smooth α-Rényi entropies, the sandwiched Rényi relative entropy, the hypothesis-testing entropy, and related beasts. These beasts form a zoo diagrammed by conference participant Philippe Faist. I wish I had his diagram on a placemat.

Entropy zoo
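One corner of the zoo is concrete enough to compute: the α-Rényi entropy H_α(p) = log2(Σ_i p_i^α) / (1 - α), which recovers the Shannon entropy in the limit α → 1 and the min-entropy at α = ∞. A sketch (the function and the distribution are mine, for illustration):

```python
import numpy as np

def renyi(p, alpha):
    """alpha-Renyi entropy in bits: H_alpha(p) = log2(sum_i p_i**alpha) / (1 - alpha)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isinf(alpha):                 # min-entropy: H_inf = -log2(max_i p_i)
        return -np.log2(p.max())
    if np.isclose(alpha, 1.0):          # alpha -> 1 limit is the Shannon entropy
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

# Made-up bear-attack distribution {p_i}: black bear, grizzly, polar bear.
p = [0.7, 0.25, 0.05]
for alpha in (0.0, 0.5, 1.0, 2.0, np.inf):
    print(f"H_{alpha} = {renyi(p, alpha):.3f} bits")
```

The printed values decrease as α grows, a monotonicity that holds for every distribution and is one reason the different zoo animals answer different operational questions.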

“Beyond i.i.d.” participants define these entropies, generalize the entropies, probe the entropies’ properties, and apply the entropies to physics. Want to quantify the efficiency with which you can perform an information-processing task or a thermodynamic task? An entropy might hold the key.

Many highlights distinguished the conference; I’ll mention a handful.  If the jargon upsets your stomach, skip three paragraphs to Thermodynamic Thursday.

Aram Harrow introduced a resource theory that resembles entanglement theory but whose agents pay to communicate classically. Why, I interrupted him, define such a theory? The backstory involves a wager against quantum-information pioneer Charlie Bennett (more precisely, against an opinion of Bennett’s). For details, and for a quantum version of The Princess and the Pea, watch Aram’s talk.

Graeme Smith and colleagues “remove[d] the . . . creativity” from proofs that certain entropic quantities satisfy subadditivity. Subadditivity is a property that facilitates proofs and that offers physical insights into applications. Graeme & co. designed an algorithm for checking whether entropic quantity Q satisfies subadditivity. Just add water; no innovation required. How appropriate, conference co-organizer Mark Wilde observed. BIRS has the slogan “Inspiring creativity.”

Patrick Hayden applied one-shot entropies to AdS/CFT and emergent spacetime, enthused about elsewhere on this blog. Debbie Leung discussed approximations to Haar-random unitaries. Gilad Gour compared resource theories.


Conference participants graciously tolerated my talk about thermodynamic resource theories. I closed my eyes to symbolize the ignorance quantified by entropy. Not really; the photo didn’t turn out as well as hoped, despite the photographer’s goodwill. But I could have closed my eyes to symbolize entropic ignorance.

Thermodynamics and resource theories dominated Thursday. Thermodynamics is the physics of heat, work, entropy, and stasis. Resource theories are simple models for transformations, like from a charged battery and a Tesla car at the bottom of a hill to an empty battery and a Tesla atop a hill.


My advisor’s Tesla. No wonder I study thermodynamic resource theories.

Philippe Faist, diagrammer of the Entropy Zoo, compared two models for thermodynamic operations. I introduced a generalization of resource theories for thermodynamics. Last year, Joe Renes of ETH and I broadened thermo resource theories to model exchanges of not only heat, but also particles, angular momentum, and other quantities. We calculated work in terms of the hypothesis-testing entropy. Though our generalization won’t surprise Quantum Frontiers diehards, the magic tricks in my presentation might.

At twilight on Thermodynamic Thursday, I meandered down the mountain from the conference center. Entropies hummed in my mind like the mosquitoes I slapped from my calves. Rising from scratching a bite, I confronted the Banff Cemetery. Half-wild greenery framed the headstones that bordered the gravel path I was following. Thermodynamicists have associated entropy with the passage of time, with deterioration, with a fate we can’t escape. I seem unable to escape from brushing past cemeteries at entropy conferences.

Not that I mind, I thought while scratching the bite in Pasadena. At least I escaped attacks by Banff’s bears.


With thanks to the conference organizers and to BIRS for the opportunity to participate in “Beyond i.i.d. 2015.”