# Yes, seasoned scientists do extraordinary science.

Imagine that you earned tenure and your field’s acclaim decades ago. Perhaps you received a Nobel Prize. Perhaps you’re directing an institute for science that you helped invent. Do you still do science? Does mentoring youngsters, advising the government, raising funds, disentangling logistics, presenting keynote addresses at conferences, chairing committees, and hosting visitors dominate the time you dedicate to science? Or do you dabble, attend seminars, and read, following progress without spearheading it?

People have asked whether my colleagues do science when weighed down with laurels. The end of August illustrates my answer.

At the end of August, I participated in the eighth Conference on Quantum Information and Quantum Control (CQIQC) at Toronto’s Fields Institute. CQIQC bestows laurels called “the John Stewart Bell Prize” on quantum-information scientists. John Stewart Bell revolutionized our understanding of entanglement, strong correlations that quantum particles can share and that power quantum computing. Aephraim Steinberg, vice-chair of the selection committee, bestowed this year’s award. The award, he emphasized, recognizes achievements accrued during the past six years. This year’s co-winners have been leading quantum information theory for decades. But the past six years earned the winners their prize.

Peter Zoller co-helms IQOQI in Innsbruck. (You can probably guess what the acronym stands for. Hint: The name contains “Quantum” and “Institute.”) Ignacio Cirac is a director of the Max Planck Institute of Quantum Optics near Munich. Both winners presented recent work about quantum many-body physics at the conference. You can watch videos of their talks here.

Peter discussed how a lab in Austria and a lab across the world can check whether they’ve prepared the same quantum state. One lab might have trapped ions, while the other has ultracold atoms. The experimentalists might not know which states they’ve prepared, and the experimentalists might have prepared the states at different times. Create multiple copies of the states, Peter recommended, measure the copies randomly, and play mathematical tricks to calculate correlations.
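The randomized-measurement idea can be illustrated in a toy simulation. The sketch below is my own illustration, not the labs' code: it relies on the published identity that averaging $2^N \sum_{s, s'} (-2)^{-D[s, s']} P_1(s) P_2(s')$ over random local unitaries, where $D$ is the Hamming distance between measurement-outcome strings, yields the state overlap ${\rm Tr}(\rho_1 \rho_2)$. Exact outcome probabilities stand in for finite measurement statistics, so only the average over unitaries is simulated.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(d, rng):
    """Haar-random d-by-d unitary via QR decomposition of a Ginibre matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases          # fix the column phases so the distribution is Haar

def born_probs(rho, U):
    """Exact computational-basis outcome probabilities after rotating by U."""
    return np.real(np.diag(U @ rho @ U.conj().T))

def dm(psi):
    return np.outer(psi, psi.conj())

# Two 2-qubit states "prepared on different platforms"
rho1 = dm(np.array([1, 0, 0, 1]) / np.sqrt(2))    # Bell state
rho2 = dm(np.array([1.0, 0, 0, 0]))               # |00>

dim = 4
# Hamming distances between 2-bit outcome strings, and the (-2)^(-D) weights
D = np.array([[bin(s ^ t).count('1') for t in range(dim)] for s in range(dim)])
weights = (-2.0) ** (-D)

est = 0.0
n_unitaries = 3000
for _ in range(n_unitaries):
    # Both "labs" apply the same random local rotations, then measure
    U = np.kron(haar_unitary(2, rng), haar_unitary(2, rng))
    p1, p2 = born_probs(rho1, U), born_probs(rho2, U)
    est += dim * p1 @ weights @ p2
est /= n_unitaries

exact = np.real(np.trace(rho1 @ rho2))
print(est, exact)   # the estimate converges to Tr(rho1 rho2) = 0.5
```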

Ignacio expounded upon how to simulate particle physics on a quantum computer formed from ultracold atoms trapped by lasers. For expert readers: Simulate matter fields with fermionic atoms and gauge fields with bosonic atoms. Give the optical lattice the field theory’s symmetries. Translate the field theory’s Lagrangian into Hamiltonian language using Kogut and Susskind’s prescription.

Even before August, I’d collected an arsenal of seasoned scientists who continue to revolutionize their fields. Frank Wilczek shared a physics Nobel Prize for theory undertaken during the 1970s. He and colleagues helped explain matter’s stability: They clarified how close-together quarks (subatomic particles) fail to attract each other, though quarks draw together when far apart. Why stop after cofounding one subfield of physics? Frank spawned another in 2012. He proposed the concept of a time crystal, which is like table salt, except extended across time instead of across space. Experimentalists realized a variation on Frank’s prediction in 2018, and time crystals have exploded across the scientific literature.1

Rudy Marcus is 96 years old. He received a chemistry Nobel Prize, for elucidating how electrons hop between molecules during reactions, in 1992. I took a nonequilibrium-statistical-mechanics course from Rudy four years ago. Ever since, whenever I’ve seen him, he’s asked for the news in quantum information theory. Rudy’s research group operates at Caltech, and you won’t find “Emeritus” in the title on his webpage.

My PhD supervisor, John Preskill, received tenure at Caltech for particle-physics research performed before 1990. You might expect the rest of his career to form an afterthought. But he helped establish quantum computing, starting in the mid-1990s. During the past few years, he co-midwifed the subfield of holographic quantum information theory, which concerns black holes, chaos, and the unification of quantum theory with general relativity. Watching a subfield emerge during my PhD left a mark like a tree on a bicyclist (or would have, if such a mark could uplift instead of injure). John hasn’t helped create subfields only by garnering resources and encouraging youngsters. Several papers by John and collaborators—about topological quantum matter, black holes, quantum error correction, and more—have transformed swaths of physics during the past 15 years. Nor does John stamp his name on many papers: Most publications by members of his group don’t list him as a coauthor.

Do my colleagues do science after laurels pile up on them? The answer sounds to me, in many cases, more like a roar than like a “yes.” Much science done by senior scientists inspires no less than the science that established them. Beyond their results, their enthusiasm inspires. Never mind receiving a Bell Prize. Here’s to working toward deserving a Bell Prize every six years.

With thanks to the Fields Institute, the University of Toronto, Daniel F. V. James, Aephraim Steinberg, and the rest of the conference committee for their invitation and hospitality.

You can find videos of all the conference’s talks here. My talk is shown here.

1To scientists, I recommend this Physics Today perspective on time crystals. Few articles have awed and inspired me during the past year as much as this review did.

# Quantum conflict resolution

If only my coauthors and I had quarreled.

I was working with Tony Bartolotta, a PhD student in theoretical physics at Caltech, and Jason Pollack, a postdoc in cosmology at the University of British Columbia. They acted as the souls of consideration. We missed out on dozens of opportunities to bicker—about the paper’s focus, who undertook which tasks, which journal to submit to, and more. Bickering would have spiced up the story behind our paper, because the paper concerns disagreement.

Quantum observables can disagree. Observables are measurable properties, such as position and momentum. Suppose that you’ve measured a quantum particle’s position and obtained an outcome $x$. If you measure the position immediately afterward, you’ll obtain $x$ again. Suppose that, instead of measuring the position again, you measure the momentum. All the possible outcomes are equally probable. You can’t predict the outcome.

The particle’s position can have a well-defined value, or the momentum can have a well-defined value, but the observables can’t have well-defined values simultaneously. Furthermore, if you measure the position, you randomize the outcome of a momentum measurement. Position and momentum disagree.

How should we quantify the disagreement of two quantum observables, $\hat{A}$ and $\hat{B}$? The question splits physicists into two camps. Pure quantum information (QI) theorists use uncertainty relations, whereas condensed-matter and high-energy physicists prefer out-of-time-ordered correlators. Let’s meet the camps in turn.

Heisenberg intuited an uncertainty relation that Robertson formalized during the 1920s,

$\Delta \hat{A} \, \Delta \hat{B} \geq \frac{1}{2} \left| \langle [\hat{A}, \hat{B}] \rangle \right|$.

Imagine preparing a quantum state $| \psi \rangle$ and measuring $\hat{A}$, then repeating this protocol in many trials. Each trial has some probability $p_a$ of yielding the outcome $a$. Different trials will yield different $a$’s. We quantify the spread in $a$ values with the standard deviation $\Delta \hat{A} = \sqrt{ \langle \psi | \hat{A}^2 | \psi \rangle - \langle \psi | \hat{A} | \psi \rangle^2 }$. We define $\Delta \hat{B}$ analogously. For position and momentum, the bound works out to $\hbar/2$, wherein $\hbar$ denotes the reduced Planck constant, a number that characterizes our universe as the electron’s mass does.

$[\hat{A}, \hat{B}]$ denotes the observables’ commutator. The numbers that we use in daily life commute: $7 \times 5 = 5 \times 7$. Quantum theory represents observables $\hat{A}$ and $\hat{B}$ with operators, which don’t necessarily commute. The commutator $[\hat{A}, \hat{B}] = \hat{A} \hat{B} - \hat{B} \hat{A}$ quantifies how little $\hat{A}$ and $\hat{B}$ resemble 7 and 5.

Robertson’s uncertainty relation means, “If you can predict an $\hat{A}$ measurement’s outcome precisely, you can’t predict a $\hat{B}$ measurement’s outcome precisely, and vice versa. The uncertainties must multiply to at least some number. The number depends on how much $\hat{A}$ fails to commute with $\hat{B}$.” The higher an uncertainty bound (the greater the inequality’s right-hand side), the more the operators disagree.
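A quick numerical sanity check of Robertson’s relation, using two Pauli operators and a randomly drawn qubit state (a toy sketch of my own, not tied to any experiment):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Y = np.array([[0, -1j], [1j, 0]])               # Pauli Y

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                      # random normalized state

def stddev(op, psi):
    """Standard deviation of an observable in the state |psi>."""
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ op @ psi).real
    return np.sqrt(mean_sq - mean**2)

lhs = stddev(X, psi) * stddev(Y, psi)
comm = X @ Y - Y @ X
rhs = 0.5 * abs(np.vdot(psi, comm @ psi))

print(lhs >= rhs)   # True: the uncertainty product respects the bound
```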

Heisenberg and Robertson explored operator disagreement during the 1920s. They wouldn’t have seen eye to eye with today’s QI theorists. For instance, QI theorists consider how we can apply quantum phenomena, such as operator disagreement, to information processing. Information processing includes cryptography. Quantum cryptography benefits from operator disagreement: An eavesdropper must observe, or measure, a message. The eavesdropper’s measurement of one observable can “disturb” a disagreeing observable. The message’s sender and intended recipient can detect the disturbance and so detect the eavesdropper.

How efficiently can one perform an information-processing task? The answer usually depends on an entropy $H$, a property of quantum states and of probability distributions. Uncertainty relations cry out for recasting in terms of entropies. So QI theorists have devised entropic uncertainty relations, such as

$H (\hat{A}) + H( \hat{B} ) \geq - \log c. \qquad (^*)$

The entropy $H( \hat{A} )$ quantifies the difficulty of predicting the outcome $a$ of an $\hat{A}$ measurement. $H( \hat{B} )$ is defined analogously. $c$ is called the overlap. It quantifies your ability to predict what happens if you prepare your system with a well-defined $\hat{A}$ value, then measure $\hat{B}$. For further analysis, check out this paper. Entropic uncertainty relations have blossomed within QI theory over the past few years.
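For a concrete instance, consider a qubit measured in the Z basis or the X basis. The overlap is $c = 1/2$, so the bound is one bit, whatever the state; a quick check confirms this (the inequality used here is the Maassen–Uffink form of the relation):

```python
import numpy as np

def shannon_bits(p):
    """Shannon entropy of a probability distribution, in bits."""
    p = p[p > 1e-15]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                       # random qubit state

pZ = np.abs(psi) ** 2                            # Z-basis outcome probabilities
x_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
pX = np.abs(x_basis @ psi) ** 2                  # X-basis outcome probabilities

bound = -np.log2(0.5)                            # overlap c = 1/2, so 1 bit
total = shannon_bits(pZ) + shannon_bits(pX)
print(total >= bound)   # True for every state
```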

Pure QI theorists, we’ve seen, quantify operator disagreement with entropic uncertainty relations. Physicists at the intersection of condensed matter and high-energy physics prefer out-of-time-ordered correlators (OTOCs). I’ve blogged about OTOCs so many times that Quantum Frontiers regulars will be able to guess the next two paragraphs.

Consider a quantum many-body system, such as a chain of qubits. Imagine poking one end of the system, such as by flipping the first qubit upside-down. Let the operator $\hat{W}$ represent the poke. Suppose that the system evolves chaotically for a time $t$ afterward, the qubits interacting. Information about the poke spreads through many-body entanglement, or scrambles.

Imagine measuring an observable $\hat{V}$ of a few qubits far from the $\hat{W}$ qubits. A little information about $\hat{W}$ migrates into the $\hat{V}$ qubits. But measuring $\hat{V}$ reveals almost nothing about $\hat{W}$, because most of the information about $\hat{W}$ has spread across the system. $\hat{V}$ disagrees with $\hat{W}$, in a sense. Actually, $\hat{V}$ disagrees with $\hat{W}(t)$. The $(t)$ represents the time evolution.

The OTOC’s smallness reflects how much $\hat{W}(t)$ disagrees with $\hat{V}$ at any instant $t$. At early times, $t \approx 0$, the operators agree, and the OTOC $\approx 1$. At late times, the operators disagree loads, and the OTOC $\approx 0$.
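Here’s a minimal numerical illustration (a toy model of my own choosing, with couplings picked to make the dynamics chaotic): the infinite-temperature OTOC for a four-qubit mixed-field Ising chain, with $\hat{W}$ and $\hat{V}$ Pauli operators at opposite ends of the chain.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op_at(op, site, n):
    """Embed a single-qubit operator at position `site` in an n-qubit chain."""
    factors = [I2] * n
    factors[site] = op
    return reduce(np.kron, factors)

n = 4
# Mixed-field Ising chain, a standard chaotic toy model (couplings assumed)
H = sum(op_at(Z, i, n) @ op_at(Z, i + 1, n) for i in range(n - 1))
H = H + sum(1.05 * op_at(X, i, n) + 0.5 * op_at(Z, i, n) for i in range(n))

evals, evecs = np.linalg.eigh(H)

def evolve(op, t):
    """Heisenberg-picture operator W(t) = U(t)^dagger W U(t)."""
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    return U.conj().T @ op @ U

W0 = op_at(X, 0, n)       # the "poke" at one end of the chain
V = op_at(X, n - 1, n)    # the probe at the far end

def otoc(t):
    """Infinite-temperature OTOC: Re Tr[W(t) V W(t) V] / 2^n."""
    Wt = evolve(W0, t)
    return np.real(np.trace(Wt @ V @ Wt @ V)) / 2**n

f0, f5 = otoc(0.0), otoc(5.0)
print(f0)   # 1.0: at t = 0, W acts far from V, so the two commute
print(f5)   # smaller once information has scrambled across the chain
```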

Different camps of physicists, we’ve seen, quantify operator disagreement with different measures: Today’s pure QI theorists use entropic uncertainty relations. Condensed-matter and high-energy physicists use OTOCs. Trust physicists to disagree about what “quantum operator disagreement” means.

I want peace on Earth. I conjectured, in 2016 or so, that one could reconcile the two notions of quantum operator disagreement. One must be able to prove an entropic uncertainty relation for scrambling, wouldn’t you think?

You might try substituting $\hat{W}(t)$ for the $\hat{A}$ in Ineq. ${(^*)}$, and $\hat{V}$ for the $\hat{B}$. You’d expect the uncertainty bound to tighten—the inequality’s right-hand side to grow—when the system scrambles. Scrambling—the condensed-matter and high-energy-physics notion of disagreement—would coincide with a high uncertainty bound—the pure-QI-theory notion of disagreement. The two notions of operator disagreement would agree. But the bound I’ve described doesn’t reflect scrambling. Nor do similar bounds that I tried constructing. I banged my head against the problem for about a year.

The sky brightened when Jason and Tony developed an interest in the conjecture. Their energy and conversation enabled us to prove an entropic uncertainty relation for scrambling, published this month.1 We tested the relation in computer simulations of a qubit chain. Our bound tightens when the system scrambles, as expected: The uncertainty relation reflects the same operator disagreement as the OTOC. We reconciled two notions of quantum operator disagreement.

As Quantum Frontiers regulars will anticipate, our uncertainty relation involves weak measurements and quasiprobability distributions: I’ve been studying their roles in scrambling over the past three years, with colleagues for whose collaborations I have the utmost gratitude. I’m grateful to have collaborated with Tony and Jason. Harmony helps when you’re tackling (quantum operator) disagreement—even if squabbling would spice up your paper’s backstory.

1Thanks to Communications Physics for publishing the paper. For pedagogical formatting, read the arXiv version.

# What distinguishes quantum thermodynamics from quantum statistical mechanics?

Yoram Alhassid asked the question at the end of my Yale Quantum Institute colloquium last February. I knew two facts about Yoram: (1) He belongs to Yale’s theoretical-physics faculty. (2) His PhD thesis’s title—“On the Information Theoretic Approach to Nuclear Reactions”—ranks among my three favorites.1

Over the past few months, I’ve grown to know Yoram better. He had reason to ask about quantum statistical mechanics, because his research is up to its ears in the field. If forced to synopsize quantum statistical mechanics in five words, I’d say, “study of many-particle quantum systems.” Examples include gases of ultracold atoms. If given another five words, I’d add, “Calculate and use partition functions.” A partition function is a measure of the number of states, or configurations, accessible to the system. Calculate a system’s partition function, and you can calculate the system’s average energy, the average number of particles in the system, how the system responds to magnetic fields, etc.
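As a minimal illustration of the recipe (a toy two-level system of my own choosing, in units where $k_B = 1$):

```python
import numpy as np

def partition_function(energies, beta):
    """Z = sum over accessible states of exp(-beta * E)."""
    return np.sum(np.exp(-beta * np.asarray(energies)))

def average_energy(energies, beta):
    """<E> = -d(ln Z)/d(beta), computed directly from the Boltzmann weights."""
    E = np.asarray(energies)
    w = np.exp(-beta * E)
    return np.sum(E * w) / np.sum(w)

energies = [-1.0, 1.0]   # a single spin in a unit magnetic field
beta = 2.0               # inverse temperature
print(partition_function(energies, beta))
print(average_energy(energies, beta))   # -tanh(2), about -0.964
```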

My colloquium concerned quantum thermodynamics, which I’ve blogged about many times. So I should have been able to distinguish quantum thermodynamics from its neighbors. But the answer I gave Yoram didn’t satisfy me. I mulled over the exchange for a few weeks, then emailed Yoram a 502-word essay. The exercise grew my appreciation for the question and my understanding of my field.

An adaptation of the email appears below. The adaptation should suit readers who’ve majored in physics, but don’t worry if you haven’t. Bits of what distinguishes quantum thermodynamics from quantum statistical mechanics should come across to everyone—as should, I hope, the value of question-and-answer sessions:

One distinction is a return to the operational approach of 19th-century thermodynamics. Thermodynamicists such as Sadi Carnot wanted to know how effectively engines could operate. Their practical questions led to fundamental insights, such as the Carnot bound on an engine’s efficiency. Similarly, quantum thermodynamicists often ask, “How can this state serve as a resource in thermodynamic tasks?” This approach helps us identify what distinguishes quantum theory from classical mechanics.

For example, quantum thermodynamicists found an advantage in charging batteries via nonlocal operations. Another example is the “MBL-mobile” that I designed with collaborators. Many-body localization (MBL), we found, can enhance an engine’s reliability and scalability.

Asking, “How can this state serve as a resource?” leads quantum thermodynamicists to design quantum engines, ratchets, batteries, etc. We analyze how these devices can outperform classical analogues, identifying which aspects of quantum theory power the outperformance. This question and these tasks contrast with the questions and tasks of many non-quantum-thermodynamicists who use statistical mechanics. They often calculate response functions and (e.g., ground-state) properties of Hamiltonians.

These goals of characterizing what nonclassicality is and what it can achieve in thermodynamic contexts resemble upshots of quantum computing and cryptography. As a 21st-century quantum information scientist, I understand what makes quantum theory quantum partially by understanding which problems quantum computers can solve efficiently and classical computers can’t. Similarly, I understand what makes quantum theory quantum partially by understanding how much more work you can extract from a singlet $\frac{1}{ \sqrt{2} } ( | 0 1 \rangle - |1 0 \rangle )$ (a maximally entangled state of two qubits) than from a product state in which the reduced states have the same forms as in the singlet, $\frac{1}{2} ( | 0 \rangle \langle 0 | + | 1 \rangle \langle 1 | )$.
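To make the comparison concrete: under the simplest idealization (a trivial Hamiltonian and access to a heat bath at temperature $T$), the work extractable from a state $\rho$ on a $d$-dimensional system is governed by the von Neumann entropy,

$W_{\rm extract}(\rho) = k_B T \, [ \ln d - S(\rho) ], \qquad S(\rho) = - {\rm Tr} ( \rho \ln \rho ).$

The singlet is pure, so $S = 0$ and $W = 2 k_B T \ln 2$ (here $d = 4$). The product of maximally mixed qubits has $S = 2 \ln 2$, so $W = 0$: entanglement accounts for the entire gap.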

As quantum thermodynamics shares its operational approach with quantum information theory, quantum thermodynamicists use mathematical tools developed in quantum information theory. An example consists of generalized entropies. Entropies quantify the optimal efficiency with which we can perform information-processing and thermodynamic tasks, such as data compression and work extraction.

Most statistical-mechanics researchers use just the Shannon and von Neumann entropies, $H_{\rm Sh}$ and $H_{\rm vN}$, and perhaps the occasional relative entropy. These entropies quantify optimal efficiencies in large-system limits, e.g., as the number of messages compressed approaches infinity and in the thermodynamic limit.

Other entropic quantities have been defined and explored over the past two decades, in quantum and classical information theory. These entropies quantify the optimal efficiencies with which tasks can be performed (i) if the number of systems processed or the number of trials is arbitrary, (ii) if the systems processed share correlations, (iii) in the presence of “quantum side information” (if the system being used as a resource is entangled with another system, to which an agent has access), or (iv) if you can tolerate some probability $\varepsilon$ that you fail to accomplish your task. Instead of limiting ourselves to $H_{\rm Sh}$ and $H_{\rm vN}$, we use also “$\varepsilon$-smoothed entropies,” Rényi divergences, hypothesis-testing entropies, conditional entropies, etc.
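As a small taste, here’s a sketch of one such family, the Rényi entropies $H_\alpha$, which recover the Shannon entropy as $\alpha \to 1$ and the min-entropy as $\alpha \to \infty$:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha of a distribution p, in bits.
    H_1 is the Shannon entropy; H_inf is the min-entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isinf(alpha):
        return -np.log2(p.max())
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
h_half = renyi_entropy(p, 0.5)
h_one = renyi_entropy(p, 1.0)     # Shannon entropy: 1.75 bits for this p
h_two = renyi_entropy(p, 2.0)     # collision entropy
h_min = renyi_entropy(p, np.inf)  # min-entropy: 1 bit
print(h_half, h_one, h_two, h_min)  # nonincreasing in alpha
```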

Another hallmark of quantum thermodynamics is results’ generality and simplicity. Thermodynamics characterizes a system with a few macroscopic observables, such as temperature, volume, and particle number. The simplicity of some quantum thermodynamics served a chemist collaborator and me, as explained in the introduction of https://arxiv.org/abs/1811.06551.

Yoram’s question reminded me of one reason why, as an undergrad, I adored studying physics in a liberal-arts college. I ate dinner and took walks with students majoring in economics, German studies, and Middle Eastern languages. They described their challenges, which I analyzed with the physics mindset that I was acquiring. We then compared our approaches. Encountering other disciplines’ perspectives helped me recognize what tools I was developing as a budding physicist. How can we know our corner of the world without stepping outside it and viewing it as part of a landscape?

1The title epitomizes clarity and simplicity. And I have trouble resisting anything advertised as “the information-theoretic approach to such-and-such.”

# Introducing a new game: Quantum TiqTaqToe

### A passing conversation with my supervisor

Video games have been a part of my life for about as long as I can remember: from Paperboy and The Last Ninja on the Commodore 64, when I was barely old enough to operate a keyboard, to Mario Kart 8 and Zelda on the Nintendo Switch now that I’m a postdoc at Caltech working on quantum computing and condensed-matter physics. Up until recently, I kept my two lives separate: my love of video games and my career in quantum physics.

The realization that I could combine quantum physics with games came during an entertaining discussion with my current supervisor, Gil Refael. Gil and I were brainstorming approaches to develop a quantum version of Tetris. Instead of stopping and laughing it off, or even keeping the idea on the horizon, Gil suggested that we talk to Spyridon (Spiros) Michalakis for some guidance.

This is not the story of Quantum Tetris (yet), but rather the story of how we made a quantum version of a much older, and possibly more universally known, game. It is a new game that Spiros and I have been testing at elementary schools.

And so I am super excited to be able to finally present to you: Quantum TiqTaqToe! As of right now, the app is available both for Android devices and iPhone/iPad.

### Previous quantum games

Gil and I knew that Spiros had been involved in prior quantum games (most notably qCraft and Quantum Chess), so he seemed like the perfect contact point. He was conveniently located on the same campus, and even in the same department. But more importantly, he was curious about the idea and eager to talk.

After introducing the idea of Quantum Tetris, Spiros came up with an alternative approach. Seeing as this was going to be my first attempt at creating a video game, not to mention building a game from the ground up with quantum physics, he proposed to put me in touch with Chris Cantwell and help him improve the AI for Quantum Chess.

I thought long and hard about this proposition. Like five seconds. It was an amazing opportunity. I would get to look under the hood of a working and incredibly sophisticated video game, unlike any game ever made: the only game in the world I knew of that was truly based on quantum physics. And I would be solving a critical problem that I would have to deal with eventually: adapting a conventional, classical rules-based game AI to a quantum game.

### Fun and Games

My first focus was to jump on Quantum Chess full-force, with the aim of helping Chris implement a new AI player for the game. After evaluating some possible chess-playing AI engines, including state-of-the-art players based on Google’s AlphaZero, we landed on Stockfish as our best candidate for integration. The AI is currently hot-swappable, though, so users can try to develop their own!

While some of the work for implementing the AI could be done directly using Chris’s C++ implementation of Quantum Chess, other aspects required me to learn the program he had used to develop the user interface: Unity, a free game-development program that I would highly recommend trying out and playing around with.

This experience was essential to the birth of Quantum TiqTaqToe. In my quest to understand Unity and Quantum Games, I set out to implement a “simple” game to get a handle on how all the different game components worked together. Having a game based on quantum mechanics is one thing; making sure it is fun to play requires an entirely different skill set.

### Perspective

Classic Tic-Tac-Toe is a game in which two players, called X and O, take turns in placing their symbols on a 3×3 grid. The first player to get 3 of their symbols in a line (diagonally, vertically or horizontally) wins. The game goes as far back as ancient Egypt, and evidence of the game has been found on roof tiles dating to 1300 BC [1].

Many variations of the game have existed across many cultures. The first print reference to a game called “tick-tack-toe” was in 1884. In the US the game was renamed “tic-tac-toe” sometime in the 20th century. Here’s a random fun fact: in Dutch, the game is most often referred to as “Butter-Cheese-and-Eggs” [2]. In 1952, computer scientist Alexander S. Douglas at the University of Cambridge turned it into one of the first computer games, featuring an AI player that could play perfect games against a human opponent.

Combinatorics has determined that, of the 138 essentially different final boards, the player who moves first wins 91 and the second player wins 44. However, if both players play optimally, looking ahead through all the possible future outcomes, neither player ever wins: the game always ends in a draw, in one of only 3 board combinations.
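The optimal-play claim is easy to verify by brute force. Here’s a standard minimax search over the full game tree (my own sketch in Python, not the game’s code):

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
         (0, 4, 8), (2, 4, 6)]               # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a line, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """+1 if X wins under optimal play, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w == 'X': return 1
    if w == 'O': return -1
    if ' ' not in board: return 0
    nxt = 'O' if player == 'X' else 'X'
    scores = [value(board[:i] + player + board[i+1:], nxt)
              for i, s in enumerate(board) if s == ' ']
    return max(scores) if player == 'X' else min(scores)

print(value(' ' * 9, 'X'))   # 0: optimal play always ends in a draw
```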

In Quantum TiqTaqToe, with the current ruleset, we don’t yet know if a winning strategy exists.

I explicitly refer to the current ruleset because we currently limit the amount of quantumness in the game. We want to make sure the game is fun to play and ‘graspable’ for now. In addition, it turns out there already is a game called Quantum TicTacToe, developed by Allan Goff [3]. That version of TicTacToe has similar concepts but has a different set of rules.

### The Game

A typical game of Quantum TiqTaqToe will look very much like regular Tic-Tac-Toe until one of the players decides to make a quantum move:

At this point, the game board enters into a superposition. The X is in each position with 50/50 chance; in one universe the X is on the left and in the other it is on the right. Neither player knows how things will play out. And the game only gets more interesting from here. The opponent can choose to place their O in a superposition between an empty square and a square occupied by a quantum X.

Et voilà, player O has entangled his fate with his opponent’s. Once the two squares become entangled, the only outcomes are X-O or O-X, each with probability ½. Interestingly, since the game is fully quantum, the phase between the two entangled outcomes can in principle be leveraged to create interesting plays through destructive and constructive interference. The app features a simple tutorial (to be updated) that teaches you these moves and a few others. There are boards that classically result in a draw but are quantumly “winnable”.

### A quick note on the quantumness

The squares in TiqTaqToe are all fully quantum. I represent them as qutrits (like qubits, but instead of having states 0 and 1 my qutrits have states 0, 1 and 2), and moves made by the players are unitary operations acting on them. So the game consists of these essential elements:

1. The squares of the 3×3 grid are turned into qutrits (Empty, X, O). Each move is a unitary gate operation on those qutrits. I’ll leave the details of the math out, but for the case of qubits check out Chris’ detailed writeup on Quantum Chess [4].
2. Quantum TiqTaqToe allows you to select two squares in the grid, providing you with the option of creating a superposition or an entangled state. For the sake of simplicity (i.e. keeping the game fun to play and ‘graspable’ for now), no more than 3 squares can be involved in a given entangled state.

I chose to explicitly track sets of qutrits that share a Hilbert space. The entire quantum state of the game combines these sets with classical strings of the form “XEEOXEOXE”, indicating that the first square is an X, the second is Empty, etc.
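To make the idea concrete, here’s a rough numpy sketch of a two-square set sharing a Hilbert space (illustrative only; the app’s actual implementation differs):

```python
import numpy as np

# Basis levels per square (qutrit): 0 = Empty, 1 = X, 2 = O.
EMPTY, X, O = 0, 1, 2

def basis(*levels):
    """Product basis state of several qutrits, e.g. basis(X, EMPTY)."""
    state = np.zeros(3 ** len(levels), dtype=complex)
    idx = 0
    for level in levels:
        idx = 3 * idx + level
    state[idx] = 1.0
    return state

# A quantum X move over squares (i, j): equal superposition of
# "X on i, j empty" and "i empty, X on j".
superposed_X = (basis(X, EMPTY) + basis(EMPTY, X)) / np.sqrt(2)

# O entangles with the quantum X by moving into the same pair of squares;
# the only surviving outcomes are X-O and O-X.
entangled = (basis(X, O) + basis(O, X)) / np.sqrt(2)

probs = np.abs(entangled) ** 2
print(probs[probs > 0])   # two outcomes, each with probability 1/2
```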

### Victory in the multiverse

So, when does the game end if these quantum states are in play? In Quantum TiqTaqToe, the board collapses to a single classical state as soon as it is full (i.e. every square is non-empty). The resulting state is randomly chosen from all the possible outcomes, with a probability that is equal to the (square of the) wave-function amplitude (basic quantum mechanics). If there is a winner after the collapse, the game ends. Otherwise, the game continues until either there is a winner or until there are no more moves to be made (ending in a draw). On top of this, players get the option to forfeit their move for the opportunity to cause a partial collapse of the state, by using the collapse-mode. Future versions may include other ways of collapse, including one that does not involve rolling dice! [5]
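The collapse itself is just Born-rule sampling. A minimal sketch (hypothetical code, not the app’s):

```python
import numpy as np

def collapse(state, rng):
    """Sample one classical board, with Born-rule probabilities |amplitude|^2."""
    probs = np.abs(state) ** 2
    probs = probs / probs.sum()        # guard against rounding drift
    return rng.choice(len(state), p=probs)

# Toy state: two classical boards survive, each with amplitude 1/sqrt(2)
# (the indices stand in for full board strings such as "XEEOXEOXE").
state = np.zeros(4, dtype=complex)
state[1] = state[2] = 1 / np.sqrt(2)

rng = np.random.default_rng(42)
outcomes = [collapse(state, rng) for _ in range(2000)]
frac = outcomes.count(1) / len(outcomes)
print(frac)   # close to 0.5: each branch is chosen about half the time
```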

### Can you beat the quantum AI?

Due to quantum physics and the collapse of the state, the game is inherently statistical. So instead of asking: “Can I beat my opponent in a game of Quantum TiqTaqToe?” one should ask “If I play 100 games against my opponent, can I consistently win more than 50 of them?”

You can test your skill against the in-game quantum AI to see if you’ve indeed mastered Quantum TiqTaqToe yet. At the hardest setting, winning even 30% of the time after, say, 20 games may be extraordinary. The implementation of this AI, by the way, would have been a blog-post by itself. For the curious, I can say it is based on the ExpectiMiniMax algorithm. As of the moment of this writing, the hardest AI setting is not available in the app yet. Keep your eyes out for an update soon though!
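ExpectiMiniMax extends minimax with chance nodes, which average over random outcomes (here, collapses) instead of choosing a best move. A toy sketch of the algorithm (illustrative only; the in-game AI is far more elaborate):

```python
class Node:
    """A game-tree node: 'max' and 'min' nodes hold child nodes;
    'chance' nodes hold (probability, child) pairs; leaves hold a value."""
    def __init__(self, kind, children=None, value=None):
        self._kind, self._children, self._value = kind, children or [], value
    def is_terminal(self): return not self._children
    def value(self): return self._value
    def kind(self): return self._kind
    def children(self): return self._children

def expectiminimax(node, depth):
    if depth == 0 or node.is_terminal():
        return node.value()
    if node.kind() == 'max':
        return max(expectiminimax(c, depth - 1) for c in node.children())
    if node.kind() == 'min':
        return min(expectiminimax(c, depth - 1) for c in node.children())
    # Chance node: expected value over stochastic outcomes (a collapse)
    return sum(p * expectiminimax(c, depth - 1) for p, c in node.children())

# MAX chooses between a sure 0.1 and a 50/50 gamble between +1.0 and -0.4
leaf = lambda v: Node('leaf', value=v)
gamble = Node('chance', children=[(0.5, leaf(1.0)), (0.5, leaf(-0.4))])
root = Node('max', children=[leaf(0.1), gamble])
print(expectiminimax(root, 3))   # 0.3: the gamble's expectation beats 0.1
```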

### The future

“Perhaps kids who grow up playing quantum games will acquire a visceral understanding of quantum phenomena that our generation lacks.” – John Preskill, in his recent article [6].

From the get-go, Quantum TiqTaqToe (and Quantum Chess) have had outreach as a core motivation. Perhaps future quantum engineers and quantum programmers will look back on their youth and remember playing Quantum TiqTaqToe as I remember my Commodore 64 games. I am convinced that these small steps into the realm of Quantum Games are only just the beginning of an entirely new genre of fun and useful games.

In the meantime, we are hard at work implementing an Online mode so you can play with your fellow human friends remotely too. This online mode, plus the option of fighting a strong quantum AI, will be unlockable in-game through a small fee (unless you are an educator who wishes to introduce quantum physics in class through this game; those use cases are fee-free, courtesy of IQIM and the NSF). Each purchase will go toward supporting the future development of new Quantum TiqTaqToe features, as well as other exciting Quantum Games (Tetris, anyone?).

Just in case you missed it: the app is available both for Android devices and iPhone/iPad right now.

I really hope you enjoy the game, and perhaps use it to get your friends and family excited about quantum physics. Oh, and start practicing! You never know if the online mode will bring along with it a real Quantum TiqTaqToe Tournament down the road 😉

### References

[2] The origin of this name in Dutch isn’t really certain as far as I know. Allegedly, it is a left-over from the period in which butter, cheese, and eggs were sold at the door (so was milk, but that was handled separately since it was sold daily). The salesman had a list with columns for each of these three products and would jot down a cross or a zero whenever a customer at an address bought or declined a product. Three crosses in a row would earn him praise from the boss.

# The importance of being open

Barcelona refused to stay indoors this May.

Merchandise spilled outside shops onto the streets, restaurateurs parked diners under trees, and ice-cream cones begged to be eaten on park benches. People thronged the streets, markets filled public squares, and the scents of flowers wafted from vendors’ stalls. I couldn’t blame the city. Its sunshine could have drawn Merlin out of his crystal cave. Insofar as a city lives, Barcelona epitomized a quotation by thermodynamicist Ilya Prigogine: “The main character of any living system is openness.”

Prigogine (1917–2003), who won the Nobel Prize for chemistry, had brought me to Barcelona. I was honored to receive, at the Joint European Thermodynamics Conference (JETC) there, the Ilya Prigogine Prize for a thermodynamics PhD thesis. The JETC convenes and awards the prize biennially; the last conference had taken place in Budapest. Barcelona suited the legacy of a thermodynamicist who illuminated open systems.

The conference center. Not bad, eh?

Ilya Prigogine began his life in Russia, grew up partially in Germany, settled in Brussels, and worked at American universities. His nobelprize.org biography reveals a mind open to many influences and disciplines: Before entering university, his “interest was more focused on history and archaeology, not to mention music, especially piano.” Yet Prigogine pursued chemistry.

He helped extend thermodynamics outside equilibrium. Thermodynamics is the study of energy, order, and time’s arrow in terms of large-scale properties, such as temperature, pressure, and volume. Many physicists think that thermodynamics describes only equilibrium. Equilibrium is a state of matter in which (1) large-scale properties remain mostly constant and (2) stuff (matter, energy, electric charge, etc.) doesn’t flow in any particular direction much. Apple pies reach equilibrium upon cooling on a countertop. When I’ve described my research as involving nonequilibrium thermodynamics, some colleagues have asked whether I’ve used an oxymoron. But “nonequilibrium thermodynamics” appears in Prigogine’s Nobel Lecture.

Ilya Prigogine

Another Nobel laureate, Lars Onsager, helped extend thermodynamics a little outside equilibrium. He imagined poking a system gently, as by putting a pie on a lukewarm stovetop or a magnet in a weak magnetic field. (Experts: Onsager studied the linear-response regime.) You can read about his work in my blog post “Long live Yale’s cemetery.” Systems poked slightly out of equilibrium tend to return to equilibrium: Equilibrium is stable. Systems flung far from equilibrium, as Prigogine showed, can behave differently.

A system can stay far from equilibrium by interacting with other systems. Imagine placing an apple pie atop a blistering stove. Heat will flow from the stove through the pie into the air. The pie will stay out of equilibrium due to interactions with what we call a “hot reservoir” (the stove) and a “cold reservoir” (the air). Systems (like pies) that interact with other systems (like stoves and air), we call “open.”

You and I are open: We inhale air, ingest food and drink, expel waste, and radiate heat. Matter and energy flow through us; we remain far from equilibrium. A bumper sticker in my high-school chemistry classroom encapsulated our status: “Old chemists don’t die. They come to equilibrium.” We remain far from equilibrium—alive—because our environment provides food and absorbs heat. If I’m an apple pie, the yogurt that I ate at breakfast serves as my stovetop, and the living room in which I breakfasted serves as the air above the stove. We live because of our interactions with our environments, because we’re open. Hence Prigogine’s claim, “The main character of any living system is openness.”

The author

JETC 2019 fostered openness. The conference sessions spanned length scales and mass scales, from quantum thermodynamics to biophysics to gravitation. One could arrive as an expert in cell membranes and learn about astrophysics.

I remain grateful for the prize-selection committee’s openness. The topics of earlier winning theses include desalination, colloidal suspensions, and falling liquid films. If you tipped those topics into a tube, swirled them around, and capped the tube with a kaleidoscope glass, you might glimpse my thesis’s topic, quantum steampunk. Also, of the nine foregoing Prigogine Prize winners, only one had earned his PhD in the US. I’m grateful for the JETC’s consideration of something completely different.

When Prigogine said, “openness,” he referred to exchanges of energy and mass. Humans can exhibit openness also to ideas. The JETC honored Prigogine’s legacy in more ways than one. Here’s hoping I live up to their example.

# Thermodynamics of quantum channels

You would hardly think that a quantum channel could have any sort of thermodynamic behavior. We were surprised, too.

How do the laws of thermodynamics apply in the quantum regime? Thanks to novel ideas introduced in the context of quantum information, scientists have been able to develop new ways to characterize the thermodynamic behavior of quantum states. If you’re a Quantum Frontiers regular, you have certainly read about these advances in Nicole’s captivating posts on the subject.

Asking the same question for quantum channels, however, turned out to be more challenging than expected. A quantum channel is a way of representing how an input state can change into an output state according to the laws of quantum mechanics. Let’s picture it as a box with an input state and an output state, like so:

A computing gate, the building block of quantum computers, is described by a quantum channel. Or, if Alice sends a photon to Bob over an optical fiber, then the whole process is represented by a quantum channel. Thus, by studying quantum channels directly we can derive statements that are valid regardless of the physical platform used to store and process the quantum information—ion traps, superconducting qubits, photonic qubits, NV centers, etc.

We asked the following question: If I’m given a quantum channel, can I transform it into another, different channel by using something like a miniature heat engine? If so, how much work do I need to spend in order to accomplish this task? The answer is tricky because of a few aspects in which quantum channels are more complicated than quantum states.

In this post, I’ll try to give some intuition behind our results, which were developed with the help of Mario Berta and Fernando Brandão, and which were recently published in Physical Review Letters.

First things first, let’s worry about how to study the thermodynamic behavior of miniature systems.

## Thermodynamics of small stuff

One of the important ideas that quantum information brought to thermodynamics is the idea of a resource theory. In a resource theory, we declare that there are certain kinds of states that are available for free, and that there are a set of operations that can be carried out for free. In a resource theory of thermodynamics, when we say “for free,” we mean “without expending any thermodynamic work.”

Here, the free states are those in thermal equilibrium at a fixed given temperature, and the free operations are those quantum operations that preserve energy and that introduce no noise into the system (we call those unitary operations). Faced with a task such as transforming one quantum state into another, we may ask whether or not it is possible to do so using the freely available operations. If that is not possible, we may then ask how much thermodynamic work we need to invest, in the form of additional energy at the input, in order to make the transformation possible.

Interestingly, the amount of work needed to go from one state ρ to another state σ might be unrelated to the work required to go back from σ to ρ. Indeed, the freely allowed operations can’t always be reversed; the reverse process usually requires a different sequence of operations, incurring an overhead. There is a mathematical framework to understand these transformations and this reversibility gap, in which generalized entropy measures play a central role. To avoid going down that road, let’s instead consider the macroscopic case in which we have a large number n of independent particles that are all in the same state ρ, a joint state which we denote by $\rho^{\otimes n}$. Then something magical happens: This macroscopic state can be reversibly converted to and from another macroscopic state $\sigma^{\otimes n}$, in which all particles are in some other state σ. That is, the work invested in the transformation from $\rho^{\otimes n}$ to $\sigma^{\otimes n}$ can be entirely recovered by performing the reverse transformation.

If this rings a bell, that is because this is precisely the kind of thermodynamics that you will find in your favorite textbook. There is an optimal, reversible way of transforming one thermodynamic state into another, and the optimal work cost of the transformation is the difference of a corresponding quantity known as the thermodynamic potential. Here, the thermodynamic potential is the free energy $F(\rho) = \mathrm{Tr}(H\rho) - k_B T\, S(\rho)$. Therefore, the optimal work cost per copy $w$ of transforming $\rho^{\otimes n}$ into $\sigma^{\otimes n}$ is given by the difference in free energy, $w = F(\sigma) - F(\rho)$.
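To make the bookkeeping concrete, here is a minimal numerical sketch of this work-cost formula. The qubit Hamiltonian, temperature, and units (with $k_B = 1$ and entropy in nats) are my own illustrative choices, not anything from the paper:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy, in nats."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]     # discard zero eigenvalues
    return -np.sum(evals * np.log(evals))

def free_energy(rho, H, T):
    """F(rho) = Tr(H rho) - T * S(rho), with k_B = 1."""
    return np.trace(H @ rho).real - T * entropy(rho)

# Illustrative qubit: energy gap 1, temperature 1 (arbitrary units)
H = np.diag([0.0, 1.0])
T = 1.0
rho = np.diag([0.5, 0.5])     # maximally mixed state
sigma = np.diag([1.0, 0.0])   # ground state

# Optimal work cost per copy of transforming rho into sigma
w = free_energy(sigma, H, T) - free_energy(rho, H, T)
print(w)  # ln 2 - 1/2 ≈ 0.1931
```

Swapping ρ and σ flips the sign of $w$, which is exactly the reversibility the macroscopic limit buys us.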

## From quantum states to quantum channels

Can we repeat the same story for quantum channels? Suppose that we’re given a channel $\mathcal{N}$, which we picture as above as a box that transforms an input state into an output state. Using the freely available thermodynamic operations, can we “transform” $\mathcal{N}$ into another channel $\mathcal{M}$? That is, can we wrap this box with some kind of procedure that uses free thermodynamic operations to pre-process the input and post-process the output, such that the overall new process corresponds (approximately) to the quantum channel $\mathcal{M}$? We might picture the situation like this:

Let us first simplify the question by supposing we don’t have a channel to start off with. How can we implement the channel $\mathcal{N}$ from scratch, using only free thermodynamic operations and some invested work? That simple question led to pages and pages of calculations, lots of coffee, a few sleepless nights, and then more coffee. After finally overcoming several technical obstacles, we found that in the macroscopic limit of many copies of the channel, the corresponding amount of work per copy is given by the maximum difference of free energy F between the input and output of the channel. We decided to call this quantity the thermodynamic capacity of the channel: $T(\mathcal{N}) = \max_{\rho}\,\bigl[ F(\mathcal{N}(\rho)) - F(\rho) \bigr]$.
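The maximization over input states can be estimated numerically. Below is a rough sketch of my own (not code from the paper): it estimates the capacity of an amplitude-damping channel by sampling random input states; the channel, decay probability, Hamiltonian, and temperature are all illustrative assumptions, and a serious computation would exploit the problem's structure rather than brute-force sampling:

```python
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log(ev))

def free_energy(rho, H, T):
    """F(rho) = Tr(H rho) - T * S(rho), with k_B = 1."""
    return np.trace(H @ rho).real - T * entropy(rho)

def apply_channel(kraus, rho):
    """Apply a channel given by its Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus)

# Amplitude-damping channel with decay probability gamma (illustrative)
gamma = 0.3
kraus = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]]),
         np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])]

H = np.diag([0.0, 1.0])   # toy qubit Hamiltonian
T = 1.0                   # temperature, k_B = 1

# Crude estimate of the thermodynamic capacity: sample random input states
rng = np.random.default_rng(0)
best = -np.inf
for _ in range(20000):
    v = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = v @ v.conj().T
    rho /= np.trace(rho).real        # random density matrix
    gain = free_energy(apply_channel(kraus, rho), H, T) - free_energy(rho, H, T)
    best = max(best, gain)
print(best)  # estimated thermodynamic capacity (small and positive here)
```

For this channel the maximum is attained by a slightly excited, mostly mixed input state: damping such a state costs little energy while noticeably lowering its entropy.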

Intuitively, an implementation of $\mathcal{N}$ must be prepared to expend an amount of work corresponding to the worst possible transformation of an input state to its corresponding output state. It’s kind of obvious in retrospect. However, what is nontrivial is that one can find a single implementation that works for all input states.

It turned out that this quantity had already been studied before. An earlier paper by Navascués and García-Pintos had shown that it was exactly this quantity that characterized the amount of work per copy that could be extracted by “consuming” many copies of a process provided as black boxes.

To our surprise, we realized that Navascués and García-Pintos’s result implied that the transformation of $\mathcal{N}$ into $\mathcal{M}$ is reversible. There is a simple procedure to convert $\mathcal{N}^{\otimes n}$ into $\mathcal{M}^{\otimes n}$ at a cost per copy that equals $T(\mathcal{M}) - T(\mathcal{N})$. The procedure consists in first extracting $T(\mathcal{N})$ work per copy by consuming the first set of channels, and then preparing $\mathcal{M}^{\otimes n}$ from scratch at a work cost of $T(\mathcal{M})$ per copy.

Clearly, the reverse transformation yields back all the work invested in the forward transformation, making the transformation reversible. That’s because we could have started with $\mathcal{M}$’s and finished with $\mathcal{N}$’s instead of the opposite, and the associated work cost per copy would be $T(\mathcal{N}) - T(\mathcal{M})$. Thus the transformation is, indeed, reversible.

In turn, this implies that in the many-copy regime, quantum channels have a macroscopic thermodynamic behavior. That is, there is a thermodynamic potential—the thermodynamic capacity—that quantifies the minimal work required to transform one macroscopic set of channels into another.

## Prospects for the thermodynamic capacity

Resource theories that are reversible are pretty rare. Reversibility is a coveted property because a reversible resource theory is one in which we can easily understand exactly which transformations are possible. Other than the thermodynamic resource theory of states mentioned above, most instances of a resource theory—especially resource theories of channels—typically produce the kind of overheads in the conversion cost that spoil reversibility. So it’s rather exciting when you do find a new reversible resource theory of channels.

Quantum information theorists, especially those working on the theory of quantum communication, care a lot about characterizing the capacity of a channel. This is the maximal amount of information that can be transmitted through a channel. Even though in our case we’re talking about a different kind of capacity—one where we transmit thermodynamic energy and entropy, rather than quantum bits of messages—there are some close parallels between the two settings from which both fields of quantum communication and quantum thermodynamics can profit. Our result draws deep inspiration from the so-called quantum reverse Shannon theorem, an important result in quantum communication that tells us how two parties can communicate using one kind of channel if they have access to another kind of channel. On the other hand, the thermodynamic capacity at zero energy is a quantity that was already studied in quantum communication, but it was not clear what that quantity represented concretely. This quantity gained even more importance as it was identified as the entropy of a channel. Now, we see that this quantity has a thermodynamic interpretation. Also, the thermodynamic capacity has a simple definition, is relatively easy to compute, and is additive—all desirable properties that other measures of capacity of a quantum channel do not necessarily share.

We still have a few rough edges that I hope we can resolve sooner or later. In fact, there is an important caveat that I have avoided mentioning so far—our argument only holds for special kinds of channels, those that do the same thing regardless of when they are applied in time. (Those channels are called time-covariant.) A lot of channels that we’re used to studying have this property, but we think it should be possible to prove a version of our result for any general quantum channel. In fact, we do have another argument that works for all quantum channels, but it uses a slightly different thermodynamic framework which might not be physically well-grounded.

That’s all very nice, I can hear you think, but is this useful for any quantum computing applications? The truth is, we’re still pretty far from founding a new quantum start-up. The levels of heat dissipation in quantum logic elements are still orders of magnitude away from the fundamental limits that we study in the thermodynamic resource theory.

Rather, our result teaches us about the interplay of quantum channels and thermodynamic concepts. We not only have gained useful insight on the structure of quantum channels, but also developed new tools for how to analyze them. These will be useful to study more involved resource theories of channels. And still, in the future when quantum technologies will perhaps approach the thermodynamically reversible limit, it might be good to know how to implement a given quantum channel in such a way that good accuracy is guaranteed for any possible quantum input state, and without any inherent overhead due to the fact that we don’t know what the input state is.

Thermodynamics, a theory developed to study gases and steam engines, has turned out to be relevant from the most obvious to the most unexpected of situations—chemical reactions, electromagnetism, solid state physics, black holes, you name it. Trust the laws of thermodynamics to surprise you again by applying to a setting you’d never imagined them to, like quantum channels.

# Quantum information in quantum cognition

Some research topics, says conventional wisdom, a physics PhD student shouldn’t touch with an iron-tipped medieval lance: sinkholes in the foundations of quantum theory. Problems so hard, you’d have a snowball’s chance of achieving progress. Problems so obscure, you’d have a snowball’s chance of convincing anyone to care about progress. Whether quantum physics could influence cognition much.

Quantum physics influences cognition insofar as (i) quantum physics prevents atoms from imploding and (ii) implosion would inhibit atoms from contributing to cognition. But most physicists believe that useful entanglement can’t survive in brains. Entanglement consists of correlations shareable by quantum systems and stronger than any achievable by classical systems. Useful entanglement dies quickly in hot, wet, random environments.

Brains form such environments. Imagine injecting entangled molecules A and B into someone’s brain. Water, ions, and other particles would bombard the molecules. The higher the temperature, the heavier the bombardment. The bombardiers would entangle with the molecules via electric and magnetic fields. Each molecule can share only so much entanglement. The more A entangled with the environment, the less A could remain entangled with B. A would come to share a tiny amount of entanglement with each of many particles. Such tiny amounts couldn’t accomplish much. So quantum physics seems unlikely to affect cognition significantly.

Do not touch.

Yet my PhD advisor, John Preskill, encouraged me to consider whether the possibility interested me.

Try some completely different research, he said. Take a risk. If it doesn’t pan out, fine. People don’t expect much of grad students, anyway. Have you seen Matthew Fisher’s paper about quantum cognition?

Matthew Fisher is a theoretical physicist at the University of California, Santa Barbara. He has plaudits out the wazoo, many for his work on superconductors. A few years ago, Matthew developed an interest in biochemistry. He knew that most physicists doubt whether quantum physics could affect cognition much. But suppose that it could, he thought. How could it? Matthew reverse-engineered a mechanism, in a paper published by Annals of Physics in 2015.

A PhD student shouldn’t touch such research with a ten-foot radio antenna, says conventional wisdom. But I trust John Preskill in a way in which I trust no one else on Earth.

I’ll look at the paper, I said.

Matthew proposed that quantum physics could influence cognition as follows. Experimentalists have performed quantum computation using one hot, wet, random system: that of nuclear magnetic resonance (NMR). NMR is the process that underlies magnetic resonance imaging (MRI), a technique used to image people’s brains. A common NMR system consists of high-temperature liquid molecules. The molecules consist of atoms whose nuclei have a quantum property called spin. The nuclear spins encode quantum information (QI).

Nuclear spins, Matthew reasoned, might store QI in our brains. He catalogued the threats that could damage the QI. Hydrogen ions, he concluded, would threaten the QI most. They could entangle with (decohere) the spins via dipole-dipole interactions.

How can a spin avoid the threats? First, by having a quantum number $s = 1/2$. Such a quantum number zeroes out the nuclei’s electric quadrupole moments. Electric-quadrupole interactions can’t decohere such spins. Which biologically prevalent atoms have $s = 1/2$ nuclear spins? Phosphorus and hydrogen. Hydrogen suffers from other vulnerabilities, so phosphorus nuclear spins store QI in Matthew’s story. The spins serve as qubits, or quantum bits.

How can a phosphorus spin avoid entangling with other spins via magnetic dipole-dipole interactions? Such interactions depend on the spins’ orientations relative to their positions. Suppose that the phosphorus occupies a small molecule that tumbles in biofluids. The nucleus’s position changes randomly. The interaction can average out over tumbles.

The molecule contains atoms other than phosphorus. Those atoms have nuclei whose spins can interact with the phosphorus spins, unless every threatening spin has a quantum number $s = 0$. Which biologically prevalent atoms have $s = 0$ nuclear spins? Oxygen and calcium. The phosphorus should therefore occupy a molecule with oxygen and calcium.

Matthew designed this molecule to block decoherence. Then, he found the molecule in the scientific literature. The structure, ${\rm Ca}_9 ({\rm PO}_4)_6$, is called a Posner cluster or a Posner molecule. I’ll call it a Posner, for short. Posners appear to exist in simulated biofluids, fluids created to mimic the fluids in us. Posners are believed to exist in us and might participate in bone formation. According to Matthew’s estimates, Posners might protect phosphorus nuclear spins for up to 1-10 days.

Posner molecule (image courtesy of Swift et al.)

How can Posners influence cognition? Matthew proposed the following story.

Adenosine triphosphate (ATP) is a molecule that fuels biochemical reactions. “Triphosphate” means “containing three phosphate ions.” Phosphate (${\rm PO}_4^{3-}$) consists of one phosphorus atom and three oxygen atoms. Two of an ATP molecule’s phosphates can break off while remaining joined to each other.

The phosphate pair can drift until encountering an enzyme called pyrophosphatase. The enzyme can break the pair into independent phosphates. Matthew, with Leo Radzihovsky, conjectured that, as the pair breaks, the phosphorus nuclear spins are projected onto a singlet. This state, represented by $\frac{1}{ \sqrt{2} } ( | \uparrow \downarrow \rangle - | \downarrow \uparrow \rangle )$, is maximally entangled.
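The singlet's maximal entanglement is easy to verify numerically. Here is a minimal sketch (my own, using plain numpy): tracing out either spin of a singlet leaves the other maximally mixed, the hallmark of maximal entanglement:

```python
import numpy as np

# Two-spin singlet (|ud> - |du>)/sqrt(2), conjectured for the phosphorus pair
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

rho = np.outer(singlet, singlet.conj())                  # full two-spin state
rho_A = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))   # trace out spin B

# A maximally entangled pair leaves each spin maximally mixed:
print(np.round(rho_A, 10))       # 0.5 * identity matrix
evals = np.linalg.eigvalsh(rho_A)
ent = -np.sum(evals * np.log2(evals))
print(ent)                       # entanglement entropy: 1 bit
```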

Imagine many entangled phosphates in a biofluid. Six phosphates can join nine calcium ions to form a Posner molecule. The Posner can share up to six singlets with other Posners. Clouds of entangled Posners can form.

One clump of Posners can enter one neuron while another clump enters another neuron. The protein VGLUT, or BNPI, sits in cell membranes and has the potential to ferry Posners in. The neurons will share entanglement. Imagine two Posners, P and Q, approaching each other in a neuron N. Quantum-chemistry calculations suggest that the Posners can bind together. Suppose that P shares entanglement with a Posner P’ in a neuron N’, while Q shares entanglement with a Posner Q’ in N’. The entanglement, with the binding of P to Q, can raise the probability that P’ binds to Q’.

Bound-together Posners will move slowly, having to push much water out of the way. Hydrogen and magnesium ions can latch onto the slow molecules easily. The Posners’ negatively charged phosphates will attract the ${\rm H}^+$ and ${\rm Mg}^{2+}$ as the phosphates attract the Posner’s ${\rm Ca}^{2+}$. The hydrogen and magnesium can dislodge the calcium, breaking apart the Posners. Calcium will flood neurons N and N’. Calcium floods a neuron’s axon terminal (the end of the neuron) when an electrical signal reaches the axon. The flood induces the neuron to release neurotransmitters. Neurotransmitters are chemicals that travel to the next neuron, inducing it to fire. So entanglement between phosphorus nuclear spins in Posner molecules might stimulate coordinated neuron firing.

Does Matthew’s story play out in the body? We can’t know till running experiments and analyzing the results. Experiments have begun: Last year, the Heising-Simons Foundation granted Matthew and collaborators \$1.2 million to test the proposal.

Suppose that Matthew conjectures correctly, John challenged me, or correctly enough. Posner molecules store QI. Quantum systems can process information in ways in which classical systems, like laptops, can’t. How adroitly can Posners process QI?

I threw away my iron-tipped medieval lance in year five of my PhD. I left Caltech for a five-month fellowship, bent on returning with a paper with which to answer John. I did, and Annals of Physics published the paper this month.

I had the fortune to interest Elizabeth Crosson in the project. Elizabeth, now an assistant professor at the University of New Mexico, was working as a postdoc in John’s group. Both of us are theorists who specialize in QI theory. But our backgrounds, skills, and specialties differ. We complemented each other while sharing a doggedness that kept us emailing, GChatting, and Google-hangout-ing at all hours.

Elizabeth and I translated Matthew’s biochemistry into the mathematical language of QI theory. We dissected Matthew’s narrative into a sequence of biochemical steps. We ascertained how each step would transform the QI encoded in the phosphorus nuclei. Each transformation, we represented with a piece of math and with a circuit-diagram element. (Circuit-diagram elements are pictures strung together to form circuits that run algorithms.) The set of transformations, we called Posner operations.

Imagine that you can perform Posner operations, by preparing molecules, trying to bind them together, etc. What QI-processing tasks can you perform? Elizabeth and I found applications to quantum communication, quantum error detection, and quantum computation. Our results rest on the assumption—possibly inaccurate—that Matthew conjectures correctly. Furthermore, we characterized what Posners could achieve if controlled. Randomness, rather than control, would direct Posners in biofluids. But what can happen in principle offers a starting point.

First, QI can be teleported from one Posner to another, while suffering noise.1 This noisy teleportation doubles as superdense coding: A trit is a random variable that assumes one of three possible values. A bit is a random variable that assumes one of two possible values. You can teleport a trit from one Posner to another effectively, while transmitting a bit directly, with help from entanglement.

Second, Matthew argued that Posners’ structures protect QI. Scientists have developed quantum error-correcting and -detecting codes to protect QI. Can Posners implement such codes, in our model? Yes: Elizabeth and I (with help from erstwhile Caltech postdoc Fernando Pastawski) developed a quantum error-detection code accessible to Posners. One Posner encodes a logical qutrit, the quantum version of a trit. The code detects any error that slams any of the Posner’s six qubits.

Third, how complicated an entangled state can Posner operations prepare? A powerful one, we found: Suppose that you can measure this state locally, such that earlier measurements’ outcomes affect which measurements you perform later. You can perform any quantum computation. That is, Posner operations can prepare a state that fuels universal measurement-based quantum computation.

Finally, Elizabeth and I quantified effects of entanglement on the rate at which Posners bind together. Imagine preparing two Posners, P and P’, that share entanglement only with other particles. If the Posners approach each other with the right orientation, they have a 33.6% chance of binding, in our model. Now, suppose that every qubit in P is maximally entangled with a qubit in P’. The binding probability can rise to 100%.

Elizabeth and I recast as a quantum circuit a biochemical process discussed in Matthew Fisher’s 2015 paper.

I feared that other scientists would pooh-pooh our work as crazy. To my surprise, enthusiasm flooded in. Colleagues cheered the risk taken on a challenge in an emerging field that perks up our ears. Besides, Elizabeth’s and my work is far from crazy. We don’t assert that quantum physics affects cognition. We imagine that Matthew conjectures correctly, acknowledging that he might not, and explore his proposal’s implications. Being neither biochemists nor experimentalists, we restrict our claims to QI theory.

Maybe Posners can’t protect coherence for long enough. Would inaccuracy of Matthew’s conjecture beach our whale of research? No. Posners prompted us to propose ideas and questions within QI theory. For instance, our quantum circuits illustrate interactions (unitary gates, to experts) interspersed with measurements implemented by the binding of Posners. The circuits partially motivated a subfield that emerged last summer and is picking up speed: Consider interspersing random unitary gates with measurements. The unitaries tend to entangle qubits, whereas the measurements disentangle. Which influence wins? Does the system undergo a phase transition from “mostly entangled” to “mostly unentangled” at some measurement frequency? Researchers from Santa Barbara to Colorado; MIT; Oxford; Lancaster, UK; Berkeley; Stanford; and Princeton have taken up the challenge.
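That competition is easy to mock up at small scale. Below is a toy simulation of my own devising (not any research group's code; the system size, circuit depth, and measurement rates are arbitrary choices): a brickwork circuit of Haar-random two-qubit gates, with each qubit measured probabilistically after every layer. Eight qubits can't show a sharp phase transition, only the trend that heavier monitoring suppresses entanglement:

```python
import numpy as np

def haar_unitary(rng, d=4):
    """Random unitary from the Haar measure, via QR decomposition."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_gate(psi, U, q, n):
    """Apply a two-qubit gate U to qubits q, q+1 of an n-qubit state."""
    psi = np.moveaxis(psi.reshape([2] * n), [q, q + 1], [0, 1])
    shape = psi.shape
    psi = (U @ psi.reshape(4, -1)).reshape(shape)
    return np.moveaxis(psi, [0, 1], [q, q + 1]).reshape(-1)

def measure_z(psi, q, n, rng):
    """Projectively measure qubit q in the Z basis and renormalize."""
    psi = np.moveaxis(psi.reshape([2] * n), q, 0).copy()
    p0 = np.sum(np.abs(psi[0]) ** 2)
    if rng.random() < p0:
        psi[1] = 0.0
        psi /= np.sqrt(p0)
    else:
        psi[0] = 0.0
        psi /= np.sqrt(1.0 - p0)
    return np.moveaxis(psi, 0, q).reshape(-1)

def half_chain_entropy(psi, n):
    """Von Neumann entropy (in bits) of the left half of the chain."""
    s = np.linalg.svd(psi.reshape(2 ** (n // 2), -1), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

def run_circuit(n, p_meas, depth, rng):
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    for layer in range(depth):
        for q in range(layer % 2, n - 1, 2):   # brickwork gate pattern
            psi = apply_gate(psi, haar_unitary(rng), q, n)
        for q in range(n):                     # probabilistic monitoring
            if rng.random() < p_meas:
                psi = measure_z(psi, q, n, rng)
    return half_chain_entropy(psi, n)

rng = np.random.default_rng(7)
n = 8
s_weak = run_circuit(n, 0.05, 4 * n, rng)   # rare measurements
s_heavy = run_circuit(n, 0.90, 4 * n, rng)  # frequent measurements
print(s_weak, s_heavy)  # heavy monitoring yields far less entanglement
```

Sweeping `p_meas` over many trajectories and system sizes is how the entangled-to-unentangled transition is actually probed.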

A physics PhD student, conventional wisdom says, shouldn’t touch quantum cognition with a Swiss guard’s halberd. I’m glad I reached out: I learned much, contributed to science, and had an adventure. Besides, if anyone disapproves of daring, I can blame John Preskill.

Annals of Physics published “Quantum information in the Posner model of quantum cognition” here. You can find the arXiv version here and can watch a talk about our paper here.

1Experts: The noise arises because, if two Posners bind, they effectively undergo a measurement. This measurement transforms a subspace of the two-Posner Hilbert space as a coarse-grained Bell measurement. A Bell measurement yields one of four possible outcomes, or two bits. Discarding one of the bits amounts to coarse-graining the outcome. Quantum teleportation involves a Bell measurement. Coarse-graining the measurement introduces noise into the teleportation.