# Building a Koi pond with Lie algebras

When I was growing up, one of my favourite places was the shabby all-you-can-eat buffet near our house. We’d walk in, my mom would approach the hostess to explain that, despite my being abnormally large for my age, I qualified for kids-eat-free, and I would peel away to stare at the Koi pond. The display of different fish rolling over one another was bewitching. Ten-year-old me would have been giddy to build my own Koi pond, and now I finally have. Granted, I built mine using Lie algebras.

The different fish swimming in the Koi pond are, in many ways, like charges being exchanged between subsystems. A “charge” is any globally conserved quantity. Examples of charges include energy, particles, electric charge, or angular momentum. Consider a system consisting of a cup of coffee in your office. The coffee will dynamically exchange charges with your office in the form of heat energy. Still, the total energy of the coffee and office is conserved (assuming your office walls are really well insulated). In this example, we had one type of charge (heat energy) and two subsystems (coffee and office). Consider now a closed system consisting of many subsystems and many different types of charges. The closed system is like the finite Koi pond, with the different types of charges playing the role of the different fish species. The charges can move around locally, but the total amount of each charge is globally fixed, like how the fish swim around but can’t escape the pond. Also, the presence of one type of charge can alter another’s movement, just as a big fish might block a little one’s path.

Unfortunately, the Koi pond analogy reaches its limit when we move to quantum charges. Classically, charges commute. This means that we can simultaneously determine the amount of each charge in our system at each given moment. In quantum mechanics, this isn’t necessarily true. In other words, classically, I can simultaneously count the number of glossy fish and matte fish. But, in quantum mechanics, I can’t.

So why does this matter? Subsystems exchanging charges are prevalent in thermodynamics. Quantum thermodynamics extends thermodynamics to include small systems and quantum effects. Noncommutation underlies many important quantum phenomena. Hence, studying the exchange of noncommuting charges is pivotal in understanding quantum thermodynamics. Consequently, noncommuting charges have emerged as a rapidly growing subfield of quantum thermodynamics. Many interesting results have been discovered from no longer assuming that charges commute (such as these). Until recently, most of these discoveries have been theoretical. Bridging these discoveries to experimental reality requires Hamiltonians (functions that tell you how your system evolves in time) that move charges locally but conserve them globally. As recently as last year, it was unknown whether these Hamiltonians exist, what they look like in general, how to build them, and for which charges they can be found.

Nicole Yunger Halpern (NIST physicist, my co-advisor, and Quantum Frontiers blogger) and I developed a prescription for building Koi ponds for noncommuting charges. Our prescription allows you to systematically build Hamiltonians that overtly move noncommuting charges between subsystems while conserving the charges globally. These Hamiltonians are built using Lie algebras, abstract mathematical tools that can describe many physical quantities (including everything in the Standard Model of particle physics, as well as the space-time metric). Our results were recently published in npj QI. We hope that our prescription will bolster the efforts to bridge the results of noncommuting charges to experimental reality.

In the end, a little group theory was all I needed for my Koi pond. Maybe I’ll build a treehouse next with calculus or a remote control car with combinatorics.

So much to do, so little time. Tending to one task inevitably comes at the cost of another, so how does one decide how to spend one’s time? In the first few years of my PhD, I balanced problem sets, literature reviews, and group meetings, but to the detriment of my hobbies. I have played drums my entire life, but I largely fell out of practice in graduate school. Recently, I made time to play with a group of musicians, even landing a couple of gigs in downtown Austin, Texas, “live music capital of the world.” I have found that attending to my non-physics interests makes my research hours more productive and less taxing. Finding the right balance of on- versus off-time has been key to my success as my PhD enters its final year.

Of course, life within physics is also full of tradeoffs. My day job is as an experimentalist. I use tightly focused laser beams, known as optical tweezers, to levitate micrometer-sized glass spheres. I monitor a single microsphere’s motion as it undergoes collisions with air molecules, and I study the system as an environmental sensor of temperature, fluid flow, and acoustic waves. By night, however, I am a computational physicist. I code simulations of interacting qubits subject to kinetic constraints, so-called quantum cellular automata (QCA). My QCA work started a few years ago for my Master’s degree, but my interest in the subject persists. I recently co-authored one paper summarizing the work so far and another detailing an experimental implementation.

QCA, the subject of this post, are themselves tradeoff-aware systems. To see what I mean, first consider their classical counterparts, cellular automata. In their simplest construction, the system is a one-dimensional string of bits. Each bit takes a value of 0 or 1 (white or black). The bitstring changes in discrete time steps based on a simultaneously-applied local update rule: the state of each bit, together with the states of its two nearest neighbors, determines the next state of the central bit. Put another way, over a timestep a bit either flips, i.e., changes 0 to 1 or 1 to 0, or remains unchanged, depending on the state of that bit’s local neighborhood. Thus, by choosing a particular rule, one encodes a tradeoff between activity (bit flips) and inactivity (bit remains unchanged). Despite their simple construction, cellular automata dynamics are diverse; they can produce fractals and encryption-quality random numbers. One rule even has the ability to run arbitrary computer algorithms, a property known as universal computation.
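One synchronous update can be sketched in a few lines of Python; packing the eight-entry truth table into a Wolfram rule number is a standard convention, assumed here rather than taken from the post:

```python
def step(bits, rule):
    """One synchronous update of an elementary cellular automaton.
    `rule` is a Wolfram rule number (0-255) encoding the truth table;
    neighborhoods wrap around at the boundaries."""
    n = len(bits)
    out = []
    for i in range(n):
        # Read the 3-bit neighborhood (left, center, right) as an integer 0-7
        neighborhood = (bits[(i - 1) % n] << 2) | (bits[i] << 1) | bits[(i + 1) % n]
        # The corresponding bit of `rule` is the next state of the center cell
        out.append((rule >> neighborhood) & 1)
    return out

# Rule 30: a single live cell grows into an irregular, chaotic triangle
state = [0] * 7
state[3] = 1
for _ in range(3):
    state = step(state, 30)
    print(state)
```

Changing the rule number changes which neighborhoods trigger a flip, which is exactly the activity-inactivity tradeoff described above.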

In QCA, bits are promoted to qubits. Instead of being just 0 or 1 like a bit, a qubit can be a continuous combination of both 0 and 1, a property called superposition. In QCA, whether a qubit’s two neighbors are 0 or 1 determines whether or not the qubit changes. For example, when in an active neighborhood configuration, a qubit can be coded to change from 0 to “0 plus 1” or from 1 to “0 minus 1”. This is already a head-scratcher, but things get even weirder. If a qubit’s neighbors are in a superposition, then the center qubit can become entangled with those neighbors. Entanglement correlates qubits in a way that is not possible with classical bits.
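As a toy sketch of my own (not the update rules from the papers), the conditional update of a qubit flanked by two neighbors can be written as a projector-controlled Hadamard:

```python
import numpy as np

# Toy three-qubit update (qubit order: left ⊗ middle ⊗ right).
# The middle qubit gets a Hadamard iff its neighbors hold one 1 and one 0.
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
I2 = np.eye(2)
P = [np.diag([1., 0.]), np.diag([0., 1.])]  # projectors onto |0>, |1>

# Sum over the four neighbor configurations (l, r); "active" means l != r
U = sum(np.kron(np.kron(P[l], H if l != r else I2), P[r])
        for l in range(2) for r in range(2))

# |011>: neighbors (0, 1) are active, so the middle qubit 1 maps to
# H|1> = (|0> - |1>)/sqrt(2), giving the superposition (|001> - |011>)/sqrt(2)
ket = np.zeros(8)
ket[0b011] = 1.0
out = U @ ket
print(out)
```

If the neighbors start in superposition, applying the same `U` entangles the middle qubit with them, which is the "even weirder" behavior described above.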

Do QCA support the emergent complexity observed in their classical cousins? What are the effects of a continuous state space, superposition, and entanglement? My colleagues and I attacked these questions by re-examining many-body physics tools through the lens of complexity science. Singing the lead, we have a workhorse of quantum and solid-state physics: two-point correlations. Singing harmony we have the bread-and-butter of network analysis: complex-network measures. The duet between the two tells the story of structured correlations in QCA dynamics.

In a bit more detail, at each QCA timestep we calculate the mutual information between each qubit i and each other qubit j. Doing so reveals how much there is to learn about one qubit by measuring another, including effects of quantum entanglement. Visualizing each qubit as a node, the mutual information can be depicted as weighted links between nodes: the more correlated two qubits are, the more strongly they are linked. The collection of nodes and links makes a network. Some QCA form unstructured, randomly-linked networks while others are highly structured.
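Here is a minimal illustrative sketch (not the analysis code from the papers) of how such link weights can be computed from a quantum state, using the standard identity $I(i:j) = S_i + S_j - S_{ij}$ for von Neumann entropies of reduced states:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy of a density matrix, in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep, n):
    """Reduce an n-qubit density matrix to the qubits listed in `keep`."""
    rho = rho.reshape([2] * (2 * n))
    qubits = list(range(n))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        k = len(qubits)               # number of qubits still present
        pos = qubits.index(q)
        rho = np.trace(rho, axis1=pos, axis2=pos + k)  # trace out qubit q
        qubits.remove(q)
    k = len(qubits)
    return rho.reshape(2 ** k, 2 ** k)

def mutual_info(rho, i, j, n):
    """I(i:j) = S_i + S_j - S_ij: the weight of the link between nodes i and j."""
    return (vn_entropy(partial_trace(rho, [i], n))
            + vn_entropy(partial_trace(rho, [j], n))
            - vn_entropy(partial_trace(rho, [i, j], n)))

# Example: in a 3-qubit GHZ state, every pair of qubits shares exactly 1 bit
n = 3
psi = np.zeros(2 ** n)
psi[0] = psi[-1] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)
weights = {(i, j): mutual_info(rho, i, j, n)
           for i in range(n) for j in range(i + 1, n)}
print(weights)
```

The dictionary of pairwise weights is precisely the weighted adjacency data of the mutual-information network described above.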

Complex-network measures are designed to highlight certain structural patterns within a network. Historically, these measures have been used to study diverse networked systems like friend groups on Facebook, biomolecule pathways in metabolism, and functional connectivity in the brain. Remarkably, the most structured QCA networks we observed quantitatively resemble those of the complex systems just mentioned, despite the QCA’s simple construction and quantum unitary dynamics.

What’s more, the particular QCA that generate the most complex networks are those that balance the activity-inactivity trade-off. From this observation, we formulate what we call the Goldilocks principle: QCA that generate the most complexity are those that change a qubit if and only if the qubit’s neighbors contain an equal number of 1’s and 0’s. The Goldilocks rules are neither too inactive nor too active, balancing the tradeoff to be “just right.”  We demonstrated the Goldilocks principle for QCA with nearest-neighbor constraints as well as QCA with nearest-and-next-nearest-neighbor constraints.

To my delight, the scientific conclusions of my QCA research resonate with broader lessons-learned from my time as a PhD student: Life is full of trade-offs, and finding the right balance is key to achieving that “just right” feeling.

# The shape of MIP* = RE

There’s a famous parable about a group of blind men encountering an elephant for the very first time. The first blind man, who had his hand on the elephant’s side, said that it was like an enormous wall. The second blind man, wrapping his arms around the elephant’s leg, exclaimed that surely it was a gigantic tree trunk. The third, feeling the elephant’s tail, declared that it must be a thick rope. Vehement disagreement ensued, but after a while the blind men came to realize that, while each person was partially correct, there is much more to the elephant than initially thought.

Last month, Zhengfeng, Anand, Thomas, John and I posted MIP* = RE to arXiv. The paper feels very much like the elephant of the fable — and not just because of the number of pages! To a computer scientist, the paper is ostensibly about the complexity of interactive proofs. To a quantum physicist, it is talking about mathematical models of quantum entanglement. To the mathematician, there is a claimed resolution to a long-standing problem in operator algebras. Like the blind men of the parable, each is feeling a small part of a new phenomenon. How do the wall, the tree trunk, and the rope all fit together?

I’ll try to trace the outline of the elephant: it starts with a mystery in quantum complexity theory, curves through the mathematical foundations of quantum mechanics, and arrives at a deep question about operator algebras.

# The rope: The complexity of nonlocal games

In 2004, computer scientists Cleve, Hoyer, Toner, and Watrous were thinking about a funny thing called nonlocal games. A nonlocal game $G$ involves three parties: two cooperating players named Alice and Bob, and someone called the verifier. The verifier samples a pair of random questions $(x,y)$ and sends $x$ to Alice (who responds with answer $a$), and $y$ to Bob (who responds with answer $b$). The verifier then uses some function $D(x,y,a,b)$ that tells her whether the players win, based on their questions and answers.

All three parties know the rules of the game before it starts, and Alice and Bob’s goal is to maximize their probability of winning the game. The players aren’t allowed to communicate with each other during the game, so it’s a nontrivial task for them to coordinate an optimal strategy (i.e., how they should individually respond to the verifier’s questions) before the game starts.

The most famous example of a nonlocal game is the CHSH game (which has made several appearances on this blog already): in this game, the verifier sends a uniformly random bit $x$ to Alice (who responds with a bit $a$) and a uniformly random bit $y$ to Bob (who responds with a bit $b$). The players win if $a \oplus b = x \wedge y$ (in other words, the sum of their answer bits is equal to the product of the input bits modulo $2$).

What is Alice and Bob’s maximum winning probability? Well, it depends on what type of strategy they use. If they use a strategy that can be modeled by classical physics, then their winning probability cannot exceed $75\%$ (we call this the classical value of CHSH). On the other hand, if they use a strategy based on quantum physics, Alice and Bob can do better by sharing two quantum bits (qubits) that are entangled. During the game each player measures their own qubit (where the measurement depends on their received question) to obtain answers that win the CHSH game with probability $\cos^2(\pi/8) \approx 0.854\ldots$ (we call this the quantum value of CHSH). So even though the entangled qubits don’t allow Alice and Bob to communicate with each other, entanglement gives them a way to win with higher probability! In technical terms, their responses are more correlated than what is possible classically.
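The quantum value can be checked numerically. The observables below are the standard optimal CHSH strategy, and for $\pm 1$-valued observables the winning probability on questions $(x,y)$ is $(1 + (-1)^{xy}\langle A_x \otimes B_y\rangle)/2$:

```python
import numpy as np

# Shared state |phi+> = (|00> + |11>)/sqrt(2) and the standard optimal
# CHSH observables (all Hermitian, with eigenvalues ±1)
Z = np.array([[1., 0.], [0., -1.]])
X = np.array([[0., 1.], [1., 0.]])
phi = np.array([1., 0., 0., 1.]) / np.sqrt(2)

A = [Z, X]                                        # Alice's observable for x = 0, 1
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]  # Bob's observable for y = 0, 1

# Average P(win | x, y) = (1 + (-1)^{x·y} <A_x ⊗ B_y>) / 2 over uniform inputs
p_win = sum((1 + (-1) ** (x * y) * (phi @ np.kron(A[x], B[y]) @ phi)) / 2
            for x in range(2) for y in range(2)) / 4

print(p_win, np.cos(np.pi / 8) ** 2)  # both ≈ 0.8536
```

Each correlator $\langle A_x \otimes B_y \rangle$ comes out to $\pm 1/\sqrt{2}$ with exactly the sign the win condition wants, which is where the $\cos^2(\pi/8)$ comes from.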

The CHSH game comes from physics, and was originally formulated not as a game involving Alice and Bob, but rather as an experiment involving two spatially separated devices to test whether stronger-than-classical correlations exist in nature. These experiments are known as Bell tests, named after John Bell. In 1964, he proved that correlations from quantum entanglement cannot be explained by any “local hidden variable theory” — in other words, a classical theory of physics.[1] He then showed that a Bell test, like the CHSH game, gives a simple statistical test for the presence of nonlocal correlations between separated systems. Since the 1960s, numerous Bell tests have been conducted experimentally, and the verdict is clear: nature does not behave classically.

Cleve, Hoyer, Toner and Watrous noticed that nonlocal games/Bell tests can be viewed as a kind of multiprover interactive proof. In complexity theory, interactive proofs are protocols where some provers are trying to convince a verifier of a solution to a long, difficult computation, and the verifier is trying to efficiently determine if the solution is correct. In a Bell test, one can think of the provers as instead trying to convince the verifier of a physical statement: that they possess quantum entanglement.

With the computational lens trained firmly on nonlocal games, it then becomes natural to ask about their complexity. Specifically, what is the complexity of approximating the optimal winning probability in a given nonlocal game $G$? In complexity-speak, this is phrased as a question about characterizing the class MIP* (pronounced “M-I-P star”). This is also a well-motivated question for an experimentalist conducting Bell tests: at the very least, they’d want to determine (a) whether quantum players can do better than classical players, and (b) what the best possible quantum strategy can achieve.

Studying this question in the case of classical players led to some of the most important results in complexity theory, such as MIP = NEXP and the PCP Theorem. Indeed, the PCP Theorem says that it is NP-hard to approximate the classical value of a nonlocal game (i.e. the maximum winning probability of classical players) to within constant additive accuracy (say $\pm \frac{1}{10}$). Thus, assuming that P is not equal to NP, we shouldn’t expect a polynomial-time algorithm for this. However, it is easy to see that there is a “brute force” algorithm for this problem: by taking exponential time to enumerate over all possible deterministic player strategies, one can exactly compute the classical value of nonlocal games.
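For CHSH, this brute-force enumeration is tiny: a deterministic strategy is just a pair of two-entry answer tables, and shared randomness can never beat the best deterministic strategy:

```python
from itertools import product

# A deterministic classical strategy is a pair of answer functions a(x), b(y);
# each is a 2-entry truth table, so there are only 4 x 4 = 16 strategies.
best = 0.0
for a in product([0, 1], repeat=2):      # Alice's answers (a(0), a(1))
    for b in product([0, 1], repeat=2):  # Bob's answers (b(0), b(1))
        # Count wins of the CHSH condition a XOR b == x AND y over the 4 inputs
        wins = sum((a[x] ^ b[y]) == (x & y)
                   for x in range(2) for y in range(2))
        best = max(best, wins / 4)

print(best)  # 0.75, the classical value of CHSH
```

For a general game the same enumeration runs over exponentially many answer tables, which is why the classical brute force takes exponential time but still terminates.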

When considering games with entangled players, however, it’s not even clear if there’s a similar “brute force” algorithm that solves this in any amount of time — forget polynomial time; even if we allow ourselves exponential, doubly-exponential, or Ackermann-function amounts of time, we still don’t know how to solve this quantum value approximation problem. The problem is that there is no known upper bound on the amount of entanglement that is needed for players to play a nonlocal game. For example, for a given game $G$, does an optimal quantum strategy require one qubit, ten qubits, or $10^{10^{10}}$ qubits of entanglement? Without any upper bound, a “brute force” algorithm wouldn’t know how big of a quantum strategy to search for — it would keep enumerating over bigger and bigger strategies in hopes of finding a better one.

Thus the problem of approximating the quantum value may not even be solvable in principle! But could it really be uncomputable? Perhaps we just haven’t found the right mathematical tool to give an upper bound on the dimension — maybe we just need to come up with some clever variant of, say, Johnson-Lindenstrauss or some other dimension reduction technique.[2]

In 2008, there was promising progress towards an algorithmic solution for this problem. Two papers [DLTW, NPA] (appearing on arXiv on the same day!) showed that an algorithm based on semidefinite programming can produce a sequence of numbers that converge to something called the commuting operator value of a nonlocal game.[3] If one could show that the commuting operator value and the quantum value of a nonlocal game coincide, then this would yield an algorithm for solving this approximation problem!

Asking whether the commuting operator value and the quantum value are the same, however, immediately brings us to the precipice of some deep mysteries in mathematical physics and operator algebras, far removed from computer science and complexity theory. This takes us to the next part of the elephant.

# The tree: mathematical foundations of locality

The mystery about the quantum value versus the commuting operator value of nonlocal games has to do with two different ways of modeling Alice and Bob in quantum mechanics. As I mentioned earlier, quantum physics predicts that the maximum winning probability in, say, the CHSH game when Alice and Bob share entanglement is approximately 85%. As with any physical theory, these predictions are made using some mathematical framework — formal rules for modeling physical experiments like the CHSH game.

In a typical quantum information theory textbook, players in the CHSH game are usually modelled in the following way: Alice’s device is described by a state space $\mathcal{H}_A$ (all the possible states the device could be in), a particular state $|\psi_A\rangle$ from $\mathcal{H}_A$, and a set of measurement operators $\mathcal{M}_A$ (operations that can be performed by the device). It’s not necessary to know what these things are formally; the important feature is that these three things are enough to make any prediction about Alice’s device — when treated in isolation, at least. Similarly, Bob’s device can be described using its own state space $\mathcal{H}_B$, state $|\psi_B\rangle$, and measurement operators $\mathcal{M}_B$.

In the CHSH game though, one wants to make predictions about Alice’s and Bob’s devices together. Here the textbooks say that Alice and Bob are jointly described by the tensor product formalism, which is a natural mathematical way of “putting separate spaces together”. Their state space is denoted by $\mathcal{H}_A \otimes \mathcal{H}_B$. The joint state $|\psi_{AB}\rangle$ describing the devices comes from this tensor product space. When Alice and Bob independently make their local measurements, this is described by a measurement operator from the tensor product of operators from $\mathcal{M}_A$ and $\mathcal{M}_B$. The strange correlations of quantum mechanics arise when their joint state $|\psi_{AB}\rangle$ is entangled, i.e. it cannot be written as a well-defined state on Alice’s side combined with a well-defined state on Bob’s side (even though the state space itself is two independent spaces combined together!)

The tensor product model works well; it satisfies natural properties you’d want from the CHSH experiment, such as the constraint that Alice and Bob can’t instantaneously signal to each other. Furthermore, predictions made in this model match up very accurately with experimental results!

This is not the whole story, though. The tensor product formalism works very well in non-relativistic quantum mechanics, where things move slowly and energies are low. To describe more extreme physical scenarios — like when particles are being smashed together at near-light speeds in the Large Hadron Collider — physicists turn to the more powerful quantum field theory. However, the notion of spatiotemporal separation in relativistic settings gets especially tricky. In particular, when trying to describe quantum mechanical systems, it is no longer evident how to assign Alice and Bob their own independent state spaces, and thus it’s not clear how to put relativistic Alice and Bob in the tensor product framework!

In quantum field theory, locality is instead described using the commuting operator model. Instead of assigning Alice and Bob their own individual state spaces and then tensoring them together to get a combined space, the commuting operator model stipulates that there is just a single monolithic space $\mathcal{H}$ for both Alice and Bob. Their joint state is described using a vector $|\psi\rangle$ from $\mathcal{H}$, and Alice and Bob’s measurement operators both act on $\mathcal{H}$. The constraint that they can’t communicate is captured by the fact that Alice’s measurement operators commute with Bob’s operators. In other words, the order in which the players perform their measurements on the system does not matter: Alice measuring before Bob, or Bob measuring before Alice, both yield the same statistical outcomes. Locality is enforced through commutativity.

The commuting operator framework contains the tensor product framework as a special case,[4] so it’s more general. Could the commuting operator model allow for correlations that can’t be captured by the tensor product model, even approximately?[5][6] This question is known as Tsirelson’s problem, named after the late mathematician Boris Tsirelson.
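The containment is easy to check concretely: an Alice operator of the form $A \otimes I$ always commutes with a Bob operator of the form $I \otimes B$, so every tensor product strategy is in particular a commuting operator strategy.

```python
import numpy as np

# Any Alice operator A ⊗ I commutes with any Bob operator I ⊗ B,
# so tensor product strategies automatically satisfy the commuting
# operator model's locality constraint.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
A = A + A.T                       # arbitrary Hermitian observable for Alice
B = rng.standard_normal((2, 2))
B = B + B.T                       # arbitrary Hermitian observable for Bob

AI = np.kron(A, np.eye(2))        # Alice's operator on the joint space
IB = np.kron(np.eye(2), B)        # Bob's operator on the joint space
print(np.allclose(AI @ IB, IB @ AI))  # True: the commutator vanishes
```

Tsirelson’s problem asks about the converse direction: whether commuting operators that do *not* arise this way can produce genuinely new correlations.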

There is a simple but useful way to phrase this question using nonlocal games. What we call the “quantum value” of a nonlocal game $G$ (denoted by $\omega^* (G)$) really refers to the supremum of success probabilities over tensor product strategies for Alice and Bob. If they use strategies from the more general commuting operator model, then we call their maximum success probability the commuting operator value of $G$ (denoted by $\omega^{co}(G)$). Since tensor product strategies are a special case of commuting operator strategies, we have the relation $\omega^* (G) \leq \omega^{co}(G)$ for all nonlocal games $G$.

Could there be a nonlocal game $G$ whose tensor product value is different from its commuting operator value? With tongue-in-cheek: is there a game $G$ that Alice and Bob could succeed at better if they were using quantum entanglement at near-light speeds? It is difficult to find even a plausible candidate game for which the quantum and commuting operator values may differ. The CHSH game, for example, has the same quantum and commuting operator value; this was proved by Tsirelson.

If the tensor product and the commuting operator models are the same (i.e., the “positive” resolution of Tsirelson’s problem), then as I mentioned earlier, this has unexpected ramifications: there would be an algorithm for approximating the quantum value of nonlocal games.

How does this algorithm work? It comes in two parts: a procedure to search from below, and one to search from above. The “search from below” algorithm computes a sequence of numbers $\alpha_1,\alpha_2,\alpha_3,\ldots$ where $\alpha_d$ is (approximately) the best winning probability when Alice and Bob use a $d$-qubit tensor product strategy. For fixed $d$, the number $\alpha_d$ can be computed by enumerating over (a discretization of) the space of all possible $d$-qubit strategies. This takes a doubly-exponential amount of time in $d$ — but at least this is still a finite time! This naive “brute force” algorithm will slowly plod along, computing a sequence of better and better winning probabilities. We’re guaranteed that in the limit as $d$ goes to infinity, the sequence $\{ \alpha_d\}$ converges to the quantum value $\omega^* (G)$. Of course the issue is that the “search from below” procedure never knows how close it is to the true quantum value.

This is where the “search from above” comes in. This is an algorithm that computes a different sequence of numbers $\beta_1,\beta_2,\beta_3,\ldots$ where each $\beta_d$ is an upper bound on the commuting operator value $\omega^{co}(G)$, and as $d$ goes to infinity, $\beta_d$ eventually converges to $\omega^{co}(G)$. Moreover, each $\beta_d$ can be computed by a technique known as semidefinite optimization; this was shown by the two papers I mentioned.

Let’s put the pieces together. If the quantum and commuting operator values of a game $G$ coincide (i.e. $\omega^* (G) = \omega^{co}(G)$), then we can run the “search from below” and “search from above” procedures in parallel, interleaving the computation of the $\{\alpha_d\}$ and $\{ \beta_d\}$. Since both are guaranteed to converge to the quantum value, at some point the upper bound $\beta_d$ will come within some $\epsilon$ to the lower bound $\alpha_d$, and thus we would have homed in on (an approximation of) $\omega^* (G)$. There we have it: an algorithm to approximate the quantum value of games.
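The interleaving can be sketched in a few lines, with `alpha` and `beta` as hypothetical stand-ins for the two procedures (computing them is, of course, the hard part):

```python
def approximate_quantum_value(alpha, beta, eps):
    """Interleave the two searches until the bounds meet.

    alpha(d): lower bound from (discretized) d-qubit tensor product
    strategies; beta(d): upper bound from level d of the SDP hierarchy.
    Both are hypothetical callables standing in for the real procedures.
    The loop halts iff the two sequences converge to the same value --
    exactly what a positive resolution of Tsirelson's problem would
    guarantee.
    """
    d = 1
    while True:
        lo, hi = alpha(d), beta(d)
        if hi - lo <= eps:
            return (lo + hi) / 2
        d += 1

# Toy sequences converging to a common value of 0.85 from both sides:
value = approximate_quantum_value(lambda d: 0.85 - 1 / d,
                                  lambda d: 0.85 + 1 / d,
                                  eps=1e-3)
print(value)
```

If the two limits differ, `hi - lo` never drops below the gap and the loop runs forever, which is precisely why MIP* = RE (below) rules out the coincidence of the two values.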

All that remains to do, surely, is to solve Tsirelson’s problem in the affirmative (that commuting operator correlations can be approximated by tensor product correlations), and then we could put this pesky question about the quantum value to rest. Right?

# The wall: Connes’ embedding problem

At the end of the 1920s, polymath extraordinaire John von Neumann formulated the first rigorous mathematical framework for the recently developed quantum mechanics. This framework, now familiar to physicists and quantum information theorists everywhere, posits that quantum states are vectors in a Hilbert space, and measurements are linear operators acting on those spaces. It didn’t take long for von Neumann to realize that there was a much deeper theory of operators on Hilbert spaces waiting to be discovered. With Francis Murray, in the 1930s he started to develop a theory of “rings of operators” — today these are called von Neumann algebras.

The theory of operator algebras has since flourished into a rich and beautiful area of mathematics. It remains inseparable from mathematical physics, but has established deep connections with subjects such as knot theory and group theory. One of the most important goals in operator algebras has been to provide a classification of von Neumann algebras. In their series of papers on the subject, Murray and von Neumann first showed that classifying von Neumann algebras reduces to understanding their factors, the atoms out of which all von Neumann algebras are built. Then, they showed that factors of von Neumann algebras come in one of three species: type $I$, type $II$, and type $III$. Type $I$ factors were completely classified by Murray and von Neumann, and they made much progress on characterizing certain type $II$ factors. However, progress stalled until the 1970s, when Alain Connes provided a classification of type $III$ factors (work for which he would later receive the Fields Medal). In the same 1976 classification paper, Connes made a casual remark about something called type $II_1$ factors:[7]

> We now construct an embedding of $N$ into $\mathcal{R}$. Apparently such an embedding ought to exist for all $II_1$ factors.

This line, written in almost a throwaway manner, eventually came to be called “Connes’ embedding problem”: does every separable $II_1$ factor embed into an ultrapower of the hyperfinite $II_1$ factor? It seems that Connes surmised that it does (and thus this is also called “Connes’ embedding conjecture”). Since 1976, this problem has grown into a central question of operator algebras, with numerous equivalent formulations and consequences across mathematics.

In 2010, two papers (again appearing on the arXiv on the same day!) showed that the reach of Connes’ embedding conjecture extends back to the foundations of quantum mechanics. If Connes’ embedding problem has a positive answer (i.e. an embedding exists), then Tsirelson’s problem (i.e. whether commuting operator correlations can be approximated by tensor product correlations) also has a positive answer! Later it was shown by Ozawa that Connes’ embedding problem is in fact equivalent to Tsirelson’s problem.

Remember that our approach to compute the value of nonlocal games hinged on obtaining a positive answer to Tsirelson’s problem. The sequence of papers [NPA, DLTW, Fritz, JNPPSW] together show that resolving — one way or another — whether this search-from-below, search-from-above algorithm works would essentially settle Connes’ embedding conjecture. What started as a funny question at the periphery of computer science and quantum information theory has morphed into an attack on one of the central problems in operator algebras.

# MIP* = RE

We’ve now come back to where we started: the complexity of nonlocal games. Let’s take a step back and try to make sense of the elephant.

Even to a complexity theorist, “MIP* = RE” may appear esoteric. The complexity classes MIP* and RE refer to a bewildering grab bag of concepts: there’s Alice, Bob, Turing machines, verifiers, interactive proofs, quantum entanglement. What is the meaning of the equality of these two classes?

First, it says that the Halting problem has an interactive proof involving quantum entangled provers. In the Halting problem, you want to decide whether a Turing machine $M$, if you started running it, would eventually terminate with a well-defined answer, or if it would get stuck in an infinite loop. Alan Turing showed that this problem is undecidable: there is no algorithm that can solve this problem in general. Loosely speaking, the best thing you can do is to just flick on the power switch to $M$, and wait to see if it eventually stops. If $M$ gets stuck in an infinite loop — well, you’re going to be waiting forever.

MIP* = RE shows that, with the help of all-powerful Alice and Bob, a time-limited verifier can run an interactive proof to “shortcut” the waiting. Given the Turing machine $M$’s description (its “source code”), the verifier can efficiently compute a description of a nonlocal game $G_M$ whose behavior reflects that of $M$. If $M$ does eventually halt (which could happen after a million years), then there is a strategy for Alice and Bob that causes the verifier to accept with probability $1$. In other words, $\omega^* (G_M) = 1$. If $M$ gets stuck in an infinite loop, then no matter what strategy Alice and Bob use, the verifier always rejects with high probability, so $\omega^* (G_M)$ is close to $0$.

By playing this nonlocal game, the verifier can obtain statistical evidence that $M$ is a Turing machine that eventually terminates. If the verifier plays $G_M$ and the provers win, then the verifier should believe that it is likely that $M$ halts. If they lose, then the verifier concludes there isn’t enough evidence that $M$ halts.[8] The verifier never actually runs $M$ in this game; she has offloaded the task to Alice and Bob, who we can assume are computational gods capable of performing million-year-long computations instantly. For them, the challenge is instead to convince the verifier that if she were to wait millions of years, she would witness the termination of $M$. Incredibly, the amount of work put in by the verifier in the interactive proof is independent of the time it takes for $M$ to halt!

The fact that the Halting problem has an interactive proof seems borderline absurd: if the Halting problem is unsolvable, why should we expect it to be verifiable? Although complexity theory has taught us that there can be a large gap between the complexity of verification versus search, it has always been a difference of efficiency: if solutions to a problem can be efficiently verified, then solutions can also be found (albeit at drastically higher computational cost). MIP* = RE shows that, with quantum entanglement, there can be a chasm of computability between verifying solutions and finding them.

Now let’s turn to the non-complexity consequences of MIP* = RE. The fact that we can encode the Halting problem into nonlocal games immediately tells us that there is no algorithm whatsoever to approximate the quantum value. Suppose there were an algorithm that could approximate $\omega^* (G)$. Then, using the transformation from Turing machines to nonlocal games mentioned above, we could use this algorithm to solve the Halting problem, which is impossible.
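The contradiction can be sketched in a few lines of Python. Here `approximate_value` and `game_from_turing_machine` are hypothetical stand-ins for the assumed approximation algorithm and the Turing-machine-to-game transformation; the toy mocks at the bottom are purely illustrative and not real nonlocal games:

```python
def halts(machine_description, approximate_value, game_from_turing_machine):
    """If `approximate_value` could estimate the quantum value of any
    nonlocal game to within 1/4, this routine would decide the Halting
    problem -- which is impossible, so no such algorithm can exist."""
    G = game_from_turing_machine(machine_description)
    estimate = approximate_value(G)  # assumed accurate to within 1/4
    # If M halts, omega*(G) = 1, so the estimate is at least 3/4;
    # if M loops forever, omega*(G) is close to 0, so the estimate is below 3/4.
    return estimate >= 0.75

# Toy mocks, just to exercise the logic above.
toy_game = lambda m: m
toy_value = lambda g: 1.0 if g == "halting machine" else 0.1
print(halts("halting machine", toy_value, toy_game))  # True
print(halts("looping machine", toy_value, toy_game))  # False
```

The point is that all the hard work hides inside `approximate_value`; the wrapper around it is trivial, which is exactly why no such subroutine can exist.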

Now the dominoes start to fall. This means that, in particular, the proposed “search-from-below”/“search-from-above” algorithm cannot succeed in approximating $\omega^* (G)$. There must be a game $G$, then, for which the quantum value is different from the commuting operator value. But this implies Tsirelson’s problem has a negative answer, and therefore Connes’ embedding conjecture is false.

We’ve only sketched the barest of outlines of this elephant, and yet it is quite challenging to hold it in the mind’s eye all at once [9]. This story is intertwined with some of the most fundamental developments of the past century: modern quantum mechanics, operator algebras, and computability theory were birthed in the 1930s. Einstein, Podolsky and Rosen wrote their landmark paper questioning the nature of quantum entanglement in 1935, and John Bell discovered his famous test and inequality in 1964. Connes formulated his conjecture in the ’70s, Tsirelson made his contributions to the foundations of quantum mechanics in the ’80s, and at about the same time computer scientists were inventing the theory of interactive proofs and probabilistically checkable proofs (PCPs).

We haven’t said anything about the proof of MIP* = RE yet (this may be the subject of future blog posts), but it is undeniably a product of complexity theory. The language of interactive proofs and Turing machines is not just convenient but necessary: at its heart MIP* = RE is the classical PCP Theorem, with the help of quantum entanglement, recursed to infinity.

What is going on in this proof? What parts of it are fundamental, and which parts are unnecessary? What is the core of it that relates to Connes’ embedding conjecture? Are there other consequences of this uncomputability result? These are questions to be explored in the coming days and months, and the answers we find will be fascinating.

Acknowledgments. Thanks to William Slofstra and Thomas Vidick for helpful feedback on this post.

1. This is why quantum correlations are called “nonlocal”, and why we call the CHSH game a “nonlocal game”: it is a test for nonlocal behavior.
2. A reasonable hope would be that, for every nonlocal game $G$, there is a generic upper bound on the number of qubits needed to approximate the optimal quantum strategy (e.g., a game $G$ with $Q$ possible questions and $A$ possible answers would require at most, say, $2^{O(Q \cdot A)}$ qubits to play optimally).
3. In those papers, they called it the field theoretic value.
4. The space $\mathcal{H}$ can be broken down into the tensor product $\mathcal{H}_A \otimes \mathcal{H}_B$, and Alice’s measurements only act on the $\mathcal{H}_A$ space and Bob’s measurements only act on the $\mathcal{H}_B$ space. In this case, Alice’s measurements clearly commute with Bob’s.
5. In a breakthrough work in 2017, Slofstra showed that the tensor product framework is not exactly the same as the commuting operator framework: there is a nonlocal game $G$ where players using commuting operator strategies can win with probability $1$, but when they use a tensor-product strategy they can only win with probability strictly less than $1$. However, the perfect commuting operator strategy can be approximated by tensor-product strategies arbitrarily well, so the quantum value and the commuting operator value of $G$ are the same.
6. The commuting operator model is motivated by attempts to develop a rigorous mathematical framework for quantum field theory from first principles (see, for example, algebraic quantum field theory (AQFT)). In the “vanilla” version of AQFT, tensor product decompositions between causally independent systems do not exist a priori, but mathematical physicists often consider AQFTs augmented with an additional “split property”, which does imply tensor product decompositions. Thus in such AQFTs, Tsirelson’s problem has an affirmative answer.
7. Type $II_1$ is pronounced “type two one”.
8. This is not the same as evidence that $M$ loops forever!
9. At least, speaking for myself.

# The entangled fabric of space

We live in the information revolution. We translate everything into vast sequences of ones and zeroes. From our personal email to our work documents, from our heart rates to our credit rates, from our preferred movies to our movie preferences, all things information are represented using this minimal {0,1} alphabet which our digital helpers “understand” and process. Many of us physicists are now taking this information revolution to heart and embracing the “It from qubit” motto. Our dream: to understand space, time and gravity as emergent features in a world made of information – quantum information.

Over the past two years, I have been obsessively trying to understand this profound perspective more rigorously. Recently, John Preskill and I took a further step in this direction in our paper Quantum code properties from holographic geometries. In it, we make progress in interpreting features of the holographic approach to quantum gravity in terms of quantum information constructs.

In this post I would like to present some context for this work through analogies which hopefully help intuitively convey the general ideas. While still containing some technical content, this post is not likely to satisfy those readers seeking a precise in-depth presentation. To you I can only recommend the masterfully delivered lecture notes on gravity and entanglement by Mark Van Raamsdonk.

## Entanglement as a cat’s cradle

A cat’s cradle serves as a crude metaphor for quantum mechanical entanglement. The full image provides a complete description of the string and how it is laced in a stable configuration around the two hands. However, this lacing does not describe a stable configuration of half the string on one hand. The string would become disentangled and fall if we were to suddenly remove one of the hands or cut through the middle.

Of all the concepts needed to explain emergent spacetime, maybe the most difficult is that of quantum entanglement. While the word seems to convey some kind of string wound up in a complicated way, it is actually a property that can characterize information in quantum mechanical systems. In particular, it applies to a system for which we have a complete description as a whole, but are only capable of describing certain statistical properties of its parts. In other words, our knowledge of the whole loses predictive power when we are only concerned with the parts. Entanglement entropy is a measure of information that quantifies this.

While our metaphor for entanglement is quite crude, it will serve the purpose of this post. Namely, to illustrate one of the driving premises for the holographic approach to quantum gravity, that the very structure of spacetime is emergent and built up from entanglement entropy.

## Knit and crochet your way into the manifolds

But let us bring back our metaphors and try to convey the content of this identification. For this, we resort to the unlikely worlds of knitting and crochet. Indeed, by a planned combination of individual loops and stitches, these traditional crafts are capable of approximating any kind of surface (2D Riemannian surface would be the technical term).

Here I have presented some examples with uniform curvature $R$: flat in green, positive curvature (ball) in yellow, and negative curvature (coral reef) in purple. While actual practitioners may be more interested in getting the shape right on hats and socks for loved ones, for us the point is that, if we take a step back, these objects built of simple loops, hooks and stitches can end up looking a lot like the smooth surfaces a physicist might use to describe 2D space. This is cute, but can we push this metaphor even further?

Well, first of all, although the pictures above only represent 2D surfaces, we can expect that a similar approach should allow approximating 3D and even higher-dimensional objects (again, the technical term is Riemannian manifolds). It would just make things much harder to present in a picture. These woolen structures are, in fact, quite reminiscent of tensor networks, a modern mathematical construct widely used in the field of quantum information. There too, we combine basic building blocks (tensors) through simple operations (tensor index contraction) to build a more complex composite object. In the tensor network world, the structure of the network (how its nodes are connected to other nodes) generically determines the entanglement structure of the resulting object.

This regular tensor network layout was used to describe hyperbolic space, which is similar to the purple crochet. However, the two a priori look quite dissimilar because the network is drawn in the Poincaré disk model, where tensors further from the center look smaller. Another difference is that the high degree of regularity is achieved at the expense of having very few tensors per curvature radius (compared to its purple crochet cousin). However, planarity and regularity don’t seem to be essential, so the crochet probably provides a better intuitive picture.

Roughly speaking, tensor networks are ingenious ways of encoding (quantum) inputs into (quantum) outputs. In particular, if you enter some input at the boundary of your tensor network, the tensors do the work of processing that information throughout the network so that if you ask for an output at any one of the nodes in the bulk of the tensor network, you get the right encoded answer. In other words, the information we input into the tensor network begins its journey at the dangling edges found at the boundary of the network and travels through the bulk edges by exploiting them as information bridges between the nodes of the network.

In the figure representing the cat’s cradle, these dangling input edges can be thought of as the fingers holding the wool. Now, if we partition these edges into two disjoint sets (say, the fingers on the left hand and the fingers on the right hand, respectively), there will be some amount of entanglement between them. How much? In general, we cannot say, but under certain assumptions we find that it is proportional to the minimum cut through the network. Imagine you had an incredible number of fingers holding your wool structure. Now separate these fingers arbitrarily into two subsets $L$ and $R$ (we may call them left hand and right hand, although there is nothing particularly left- or right-handed about them). By pulling the left hand and right hand apart, the wool might stretch until at some point it breaks. How many threads will break? Well, the question is analogous to the entanglement one. We might expect, however, that a minimal number of threads break such that each hand can go its own way. This is what we call the minimal cut. In tensor networks, entanglement entropy is always bounded above by such a minimal cut, and it has been confirmed that under certain conditions entanglement also reaches, or approximates, this bound. In this respect, our wool analogy seems to be working out.
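The minimal cut can be computed with a standard max-flow algorithm: by max-flow/min-cut duality, the maximum number of edge-disjoint threads running from $L$ to $R$ equals the size of the minimal cut. Here is a minimal sketch in Python (Edmonds-Karp) on a made-up unit-capacity “wool” network; the graph itself is purely illustrative:

```python
from collections import deque

def max_flow(graph, source, sink):
    """Edmonds-Karp max-flow. On a unit-capacity network this equals the
    minimum number of edges (threads) that must be cut to separate
    source from sink, i.e. the minimal cut."""
    # Build residual capacities, adding reverse edges with capacity 0.
    cap = {u: dict(vs) for u, vs in graph.items()}
    for u, vs in graph.items():
        for v in vs:
            cap.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from source to sink.
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow
        # Recover the path, find its bottleneck, update residuals.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        flow += bottleneck

# Toy "wool" network: L connects to R through a two-edge bottleneck.
net = {
    "L": {"a": 1, "b": 1, "c": 1},
    "a": {"m": 1}, "b": {"m": 1}, "c": {"n": 1},
    "m": {"R": 1}, "n": {"R": 1},
}
print(max_flow(net, "L", "R"))  # 2
```

Even though three threads leave the left hand, only two can reach the right hand independently, so pulling the hands apart breaks just two threads: the minimal cut.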

## Holography

Holography, in the context of black holes, was sparked by a profound observation of Jacob Bekenstein and Stephen Hawking, which identified the surface area of a black hole horizon (in Planck units) with its entropy, or information content:

$S_{BH} = \frac{k A_{BH}}{4\ell_p^2}$.

Here, $S_{BH}$ is the entropy associated to the black hole, $A_{BH}$ is its horizon area, $\ell_p$ is the Planck length and $k$ is Boltzmann’s constant.
Why is this equation such a big deal? Well, there are many reasons, but let me emphasize one. For theoretical physicists, it is common to get rid of physical units by relating them through universal constants. For example, the theory of special relativity allows us to identify units of distance with units of time through the equation $d=ct$, using the speed of light $c$, and to identify mass and energy through the famous $E=mc^2$. By considering the Bekenstein-Hawking entropy, units of area are being swept away altogether! They are being identified with dimensionless units of information (one square meter is roughly $1.4\times10^{69}$ bits according to the Bousso bound).
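That figure is easy to check: converting the Bekenstein-Hawking entropy from nats to bits, one square meter carries $A/(4\ell_p^2 \ln 2)$ bits of information. A quick sanity check in Python, using the CODATA value of the Planck length:

```python
import math

# Planck length in meters (CODATA value, an external input to this sketch).
l_p = 1.616255e-35

# Bits per square meter implied by S = k*A/(4*l_p^2):
# the entropy in nats is A/(4*l_p^2); divide by ln(2) to convert to bits.
bits_per_m2 = 1.0 / (4 * l_p**2 * math.log(2))
print(f"{bits_per_m2:.2e}")  # ~1.38e+69
```

The result, about $1.4\times10^{69}$ bits per square meter, matches the number quoted above.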

Initially, the identification of area and information was proposed to reconcile black holes with the laws of thermodynamics. However, this has turned out to be the main hint leading to the holographic principle, according to which states that describe a certain volume of space in a theory of quantum gravity can also be thought of as being represented at the lower-dimensional boundary of the given volume. This idea, put forth by Gerard ‘t Hooft, was later given a more precise interpretation by Leonard Susskind and subsequently by Juan Maldacena through the celebrated AdS/CFT correspondence. I will not dwell on the details of the AdS/CFT correspondence, as I am not an expert myself. However, this correspondence gave S. Ryu and T. Takayanagi (RT) a setting to vastly generalize the identification of area as an information quantity. They proposed identifying the area of minimal surfaces in the bulk (remember the minimal cut?) with entanglement entropy in the boundary theory.

Roughly speaking, if we were to split the boundary into two regions, left $L$ and right $R$, it should be possible to also partition the bulk so that each piece of the bulk has either $L$ or $R$ in its boundary. Ryu and Takayanagi proposed that the area of the smallest surface $\chi_R=\chi_L$ which splits the bulk in this way would be proportional to the entanglement entropy between the two parts:

$S_L = S_R = \frac{|\chi_L|}{4G} =\frac{|\chi_R|}{4G}$.

It turns out that some quantum field theory states admit such a geometric interpretation. Many high-energy theory colleagues have ideas about when this is possible and what the necessary conditions are. By far the best-studied setting for this holographic duality is AdS/CFT, where Ryu and Takayanagi first checked their proposal. Here, the entanglement features of the lowest energy state of a conformal field theory are matched to surfaces in a hyperbolic space (like the purple crochet and the tensor network presented above). However, other geometries have been shown to match the RT prediction with respect to the entanglement properties of different states. The key point here is that the boundary states do not have any geometry per se. They just manifest different amounts of entanglement when partitioned in different ways.

## Emergence

The holographic program suggests that bulk geometry emerges from the entanglement properties of the boundary state. Spacetime materializes from the information structure of the boundary instead of being a fundamental structure as in general relativity. Am I saying that we should strip everything physical, including space, in favor of ones and zeros? Well, first of all, it is not just me who is pushing this approach. Secondly, no one is claiming that we should start making all our physical reasoning in terms of ones and zeros.

Let me give an example. We know that the sea is composed mostly of water molecules. The observation of waves that travel, superpose and break can be labeled an emergent phenomenon. However, to a surfer, a wave is much more real than the water molecules composing it, and the fact that it is emergent is of no practical consequence when trying to predict where a wave will break. A proficient physicist, armed with tools from statistical mechanics (there are more than $10^{25}$ molecules per liter), could probably derive a macroscopic model for waves from the microscopic theory of particles. In the process of learning what the surfer already understood, he would identify elements of the microscopic theory which become irrelevant for such questions. Such details could be whether the sea has an odd or even number of molecules, or the presence of a few fish.

In the case of holography, each square meter corresponds to $1.4\times10^{69}$ bits of entanglement. We don’t even have words to describe anything close to this outrageously large exponent, which leaves plenty of room for emergence. Even taking all the information on the internet – estimated at $10^{22}$ bits (10 zettabits) – we can’t match the area equivalent of the smallest known particle. The fact that there are so many orders of magnitude makes it difficult to extrapolate our understanding of the geometric domain to the information domain and vice versa. This is precisely the realm where techniques such as those from statistical mechanics successfully get rid of irrelevant details.

High-energy theorists and people with a background in general relativity tend to picture things in a continuum language. For example, Riemannian and Lorentzian manifolds, used respectively to describe space and spacetime, are part of their daily bread and butter. In contrast, most of information theory is usually applied to deal with discrete elements such as bits, elementary circuit gates, etc. Nevertheless, I believe it is fruitful to straddle this cultural divide to the benefit of both parties. In a way, the convergence we are seeking is analogous to the one achieved by the kinetic theory of gases, which allowed the unification of thermodynamics with classical mechanics.

## So what did we do?

The remarkable success of the geometric RT prediction for different bulk geometries, such as the BTZ black holes, and the generality of the entanglement result for its random tensor network cousins emboldened us to take the RT prescription beyond its usual domain of application. We considered applying it to arbitrary space-like Riemannian manifolds that can be approximated by a smoothly knit fabric.

Furthermore, we went on to consider the implications that such assumptions would have when the corresponding geometries are interpreted as error-correcting codes. In fact, our work elaborates on the perspective of A. Almheiri, X. Dong and D. Harlow (ADH) where quantum error-correcting code properties of AdS/CFT were laid out; it is hard to overemphasize the influence of this work. Our work considers general geometries and identifies properties a code associated to a specific holographic geometry should satisfy.

In the cat’s cradle/fabric metaphor for holography, the fingers at the boundary constitute the boundary theory without gravity, and the resulting fabric represents a bulk geometry in the corresponding bulk gravitational theory. Bulk observables may be represented in different ways on the boundary, but not arbitrarily. This raises the question of which parts of the bulk correspond to which parts of the boundary. In general, there is not a one-to-one mapping. However, if we partition the boundary into two parts $L$ and $R$, we expect to be able to split the bulk into two corresponding regions ${\mathcal E}[L]$ and ${\mathcal E}[R]$. This is the content of the entanglement wedge hypothesis, which is our other main assumption. In our metaphor, one could imagine that we pull the left fingers up and the right fingers down (taking care not to get hurt). At some point, the fabric breaks through $\chi_R$ into two pieces. In the setting we are concerned with, these pieces maintain part of the original structure, which tells us which bulk information was available in one piece of the boundary and which part was available in the other.

Although we do not produce new explicit examples of such codes, we worked our way toward developing a language which translates between the holographic/geometric perspective and the coding-theory perspective. We specifically build upon the language of operator algebra quantum error correction (OAQEC), which allows focusing individually on different parts of the logical message. In doing so, we identified several coding-theoretic bounds and quantities, some of which we found to be applicable beyond the context of holography. A particularly noteworthy one is a strengthening of the quantum Singleton bound, which expresses a trade-off between how much logical information can be packed in a code, how much physical space is used for encoding this information, and how well protected the information is from erasures.
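For reference, the baseline being strengthened here is the usual quantum Singleton bound: an $[[n,k,d]]$ code storing $k$ logical qubits in $n$ physical qubits with distance $d$ must satisfy

$k \le n - 2(d-1)$.

The strengthened, price-dependent version we derive is stated precisely in the paper itself; the inequality above is only the familiar starting point.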

One of the central observations of ADH highlights how quantum codes have properties from both classical error-correcting codes and secret sharing schemes. On the one hand, logical encoded information should be protected from loss of small parts of the carrier, a property quantified by the code distance. On the other hand, the logical encoded information should not become accessible until a sufficiently large part of the carrier is available to us. This is quantified by the threshold of a corresponding secret sharing scheme. We call this quantity price as it identifies how much of the carrier we would need before someone could reconstruct the message faithfully. In general, it is hard to balance these two competing requirements; a statement which can be made rigorous. This kind of complementarity has long been recognized in quantum cryptography. However, we found that according to holographic predictions, codes admitting a geometric interpretation achieve a remarkable optimality in the trade-off between these features.

Our exploration of alternative geometries is rewarded by the following guidelines:

In uberholography, bulk observables are accessible in a Cantor-type, fractal-shaped subregion of the boundary. This is illustrated on the Poincaré disk presentation of a negatively curved bulk.

• Hyperbolic geometries predict a fixed polynomial scaling for code distance. This is illustrated by a feature we call uberholography. We use this name because there is an excess of holography wherein bulk observables can be represented on intricate subsets of the boundary which have fractal dimension even smaller than the boundary itself.
• Hyperbolic geometries suggest the possibility of decoding procedures which are local on the boundary geometry. This property may be connected to the locality of the corresponding boundary field theory.
• Flat and positive curvature geometries may lead to codes with better parameters in terms of distance and rates (ratio of logical information to physical information). A hemisphere reaches optimum parameters, saturating coding bounds.

Seven iterations of a ternary Cantor set (dark line) on the unit interval. Each iteration is obtained by punching holes in the previous one, and the set obtained in the limit is a fractal.

Present-day quantum computers are far from the number of qubits required to invoke an emergent geometry. Nevertheless, it is exhilarating to take a step back and consider how the properties of codes, and of information in general, may be interpreted geometrically. On the other hand, I find that the quantum code language we adapt to the context of holography might eventually serve as a useful tool for distinguishing which boundary features are relevant or irrelevant for the emergent properties of the holographic dual. Ours is but one contribution in a very active field. However, the one thing I am certain about is that these are exciting times to be doing physics.

# Quantum Chess

Two years ago, as a graduate student in Physics at USC, I began work on a game whose mechanics were based on quantum mechanics. When I had a playable version ready, my graduate adviser, Todd Brun, put me in contact with IQIM’s Spiros Michalakis, who had already worked with Google to design qCraft, a mod introducing quantum mechanics into Minecraft. Spiros must have seen potential in my clunky prototype, and our initial meeting turned into weekly brainstorming lunches at Caltech’s Chandler cafeteria. More than a year later, the game had evolved into Quantum Chess, and we began talking about including a video showing some gameplay at an upcoming Caltech event celebrating Feynman’s quantum legacy. The next few months were a whirlwind. Somehow this video turned into a Quantum Chess battle for the future of humanity, between Stephen Hawking and Paul Rudd. And it was being narrated by Keanu Reeves! The video, called Anyone Can Quantum, and directed by Alex Winter, premiered at Caltech’s One Entangled Evening on January 26, 2016, and has since gone viral. If you haven’t watched it, now would be a good time to do so (if you are at work, be prepared to laugh quietly).

So, what exactly is Quantum Chess and how does it make use of quantum physics? It is a modern take on the centuries-old game of strategy that endows each chess piece with quantum powers. You don’t need to know quantum mechanics to play the game. On the other hand, understanding the rules of chess might help [1]. But if you already know the basics of regular chess, you can just start playing. Over time, your brain will get used to some of the strange quantum behavior of the chess pieces and the battles you wage in Quantum Chess will make regular chess look like tic-tac-toe [2].

In this post, I will discuss the concept of quantum superposition and how it plays a part in the game. There will be more posts to follow that will discuss entanglement, interference, and quantum measurement [3].

In Quantum Chess, players have the ability to perform quantum moves in addition to the standard chess moves. Each time a player chooses to move a piece, they can indicate whether they want to perform a standard move or a quantum move. A quantum move creates a superposition of boards. If any of you ever saw Star Trek 3D Chess, you can think of this in a similar way.

Star Trek 3D Chess

There are multiple boards on which pieces exist. However, in Quantum Chess, the number of possible boards is not fixed; it can increase or decrease. All possible boards exist in a superposition. The player is presented with a single board that represents the entire superposition. In Quantum Chess, any individual move acts on all boards at the same time. Each time a player makes a quantum move, the number of possible boards present in the superposition doubles. Let’s look at some pictures that might clarify things.

The Quantum Chess board begins in the same configuration as standard chess.

All pawns move the same as they would in standard chess, but all other pieces get a choice of two movement types: standard or quantum. Standard moves act exactly as they would in standard chess. Quantum moves, however, create superpositions. Let’s look at an example of a quantum move for the white queen.

In this diagram, we see what happens when we perform a quantum move of the white queen from D1 to D3. We get two possible boards. On one board the queen did not move at all. On the other, the queen did move. Each board has a 50% chance of “existence”. Showing every possible board, though, would get quite complicated after just a few moves. So, the player view of the game is a single board. After the same quantum queen move, the player sees this:

The teal colored “fill” of each queen shows the probability of finding the queen in that space; the same queen, existing in different locations on the board. The queen is in a superposition of being in two places at once. On their next turn, the player can choose to move any one of their pieces.

So, let’s talk about moving the queen, again. You may be wondering, “What happens if I want to move a piece that is in a superposition?” The queen exists in two spaces. You choose which of those two positions you would like to move from, and you can perform the same standard or quantum moves from that space. Let’s look at trying to perform a standard move, instead of a quantum move, on the queen that now exists in a superposition. The result would be as follows:

The move acts on all boards in the superposition. On any board where the queen is in space D3, it will be moved to B5. On any board where the queen is still in space D1, it will not be moved. There is a 50% chance that the queen is still in space D1 and a 50% chance that it is now located in B5. The player view, as illustrated below, would again be a 50/50 superposition of the queen’s position. This was just an example of a standard move on a piece in a superposition, but a quantum move would work similarly.

Some of you might have noticed that the quantum move basically gives you a 50% chance to pass your turn. Not a very exciting thing to do for most players. That’s why I’ve given the quantum move an added bonus. With a quantum move, you can choose a target space that is up to two standard moves away! For example, the queen could choose a target that is forward two spaces and then left two spaces. Normally, this would take two turns: the first to move from D1 to D3 and the second to move from D3 to B3. A quantum move gives you a 50% chance to move from D1 to B3 in a single turn!
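These mechanics can be captured in a toy simulator (a hypothetical sketch for illustration, not the game’s actual implementation): each board in the superposition is a frozenset of (square, piece) pairs mapped to its probability, a quantum move splits every branch where the piece sits on the source square, and a standard move acts on every branch at once.

```python
from collections import defaultdict

def quantum_move(superposition, piece, src, dst):
    """Each board where `piece` is on `src` splits into a 'stayed' branch
    and a 'moved' branch with equal probability; other boards pass through."""
    new = defaultdict(float)
    for board, prob in superposition.items():
        placed = dict(board)
        if placed.get(src) == piece:
            moved = dict(placed)
            del moved[src]
            moved[dst] = piece
            new[frozenset(placed.items())] += prob / 2
            new[frozenset(moved.items())] += prob / 2
        else:
            new[board] += prob
    return dict(new)

def standard_move(superposition, piece, src, dst):
    """A standard move acts on all boards: wherever `piece` is on `src`,
    it moves to `dst`; boards where it is elsewhere are unchanged."""
    new = defaultdict(float)
    for board, prob in superposition.items():
        placed = dict(board)
        if placed.get(src) == piece:
            del placed[src]
            placed[dst] = piece
        new[frozenset(placed.items())] += prob
    return dict(new)

def prob_at(superposition, square, piece):
    """Total probability of finding `piece` on `square`."""
    return sum(p for b, p in superposition.items() if dict(b).get(square) == piece)

# Start with the queen on D1 with certainty, then replay the examples above.
sup = {frozenset({("D1", "Q")}): 1.0}
sup = quantum_move(sup, "Q", "D1", "D3")   # queen now 50/50 on D1 and D3
sup = standard_move(sup, "Q", "D3", "B5")  # acts on every branch at once
print(prob_at(sup, "D1", "Q"))  # 0.5
print(prob_at(sup, "B5", "Q"))  # 0.5
```

Note that this sketch tracks only probabilities; the real game evolves amplitudes unitarily, which is what makes interference effects possible.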

Let’s look at a quantum queen move from D1 to B3.

Just like the previous quantum move we looked at, we get a 50% probability that the move was successful and a 50% probability that nothing happened. As a player, we would see the board below.

There is a 50% chance the queen completed two standard moves in one turn! Don’t worry though, things are not just random. The fact that the board is a superposition of boards and that movement is unitary (just a fancy word for how quantum things evolve) can lead to some interesting effects. I’ll end this post here. Now, I hope I’ve given you some idea of how superposition is present in Quantum Chess. In the next post I’ll go into entanglement and a bit more on the quantum move!

Notes:

[1] For those who would like to know more about chess, here is a good link.

[2] If you would like to see a public release of Quantum Chess (and get a copy of the game), consider supporting the Kickstarter campaign.

[3] I am going to be describing aspects of the game in terms of probability and multiple board states. For those with a scientific or technical understanding of how quantum mechanics works, this may not appear to be very quantum. I plan to go into a more technical description of the quantum aspects of the game in a later post. Also, a reminder to the non-scientific audience. You don’t need to know quantum mechanics to play this game. In fact, you don’t even need to know what I’m going to be describing here to play! These posts are just for those with an interest in how concepts like superposition, entanglement, and interference can be related to how the game works.

# IQIM Presents …”my father”

Debaleena Nandi at Caltech

Following the IQIM teaser, which was made to offer a wider perspective on the scientist, to highlight the normalcy behind the perception of brilliance, and to celebrate the common human struggles to achieve greatness, we decided to do individual vignettes of some of the characters you saw in the video.

We start with Debaleena Nandi, a grad student in Prof. Jim Eisenstein’s lab, whose journey from Jadavpur University in West Bengal, India, to the graduate school and research facility at the Indian Institute of Science, Bangalore, and on to Caltech has seen many obstacles. We focus on the essentials of an environment needed to manifest the quest for “the truth”, as Debaleena says. We begin with her days as a child, when her father, who worked double shifts, sat by her through the days and nights as she pursued her homework.

She highlights what she feels is the only way to grow: working on what is lacking, developing that missing tool in your skill set, that asset that others might have by birth but that you need to build through hard work.

Debaleena’s motto: to realize and face your shortcomings is the only way to achievement.

As we build Debaleena up, we also build up the identity of Caltech through its breathtaking architecture, which oscillates from Spanish to Gothic to modern. Both Debaleena and Caltech are revealed slowly, bit by bit.

This series is about dissecting high achievers: seeing the day-to-day steps, the bit by bit that adds up to the often overwhelming, impressive presence of Caltech’s science. We attempt to break it down into smaller vignettes that help us appreciate the amount of discipline, intent and passion that goes into making cutting-edge researchers.

Presenting the emotional alongside the rational is something this series aspires to achieve. It honors and celebrates human limitations surrounding limitless boundaries, discoveries and possibilities.

Stay tuned for more vignettes in the IQIM Presents “My _______” Series.

But for now, here is the video. Watch, like and share!

© Parveen Shah Production 2014

# Can a game teach kids quantum mechanics?

Five months ago, I received an email and then a phone call from Google’s Creative Lab Executive Producer, Lorraine Yurshansky. Lo, as she prefers to be called, is not your average thirty-year-old. She has produced award-winning short films like Peter at the End (starring Napoleon Dynamite, aka Jon Heder), launched the wildly popular Maker Camp on Google+ and had time to run a couple of New York marathons as a warm-up to all of that. So why was she interested in talking to a quantum physicist?

You may remember reading about Google’s recent collaboration with NASA and D-Wave, on using NASA’s supercomputing facilities along with a D-Wave Two machine to solve optimization problems relevant to both Google (Glass, for example) and NASA (analysis of massive data sets). It was natural for Google, then, to want to promote this new collaboration through a short video about quantum computers. The video appeared last week on Google’s YouTube channel:

This is a very exciting collaboration in my view. Google has opened its doors to quantum computation and this has some powerful consequences. And it is all because of D-Wave. But, let me put my perspective in context, before Scott Aaronson unleashes the hounds of BQP on me.

Two years ago, Science magazine’s 2010 Breakthrough of the Year winner, Aaron O’Connell, and I decided to ask Google Ventures for \$10,000,000 to start a quantum computing company based on technology Aaron had developed as a graduate student in John Martinis’ group at UCSB. The idea we pitched was that a hand-picked team of top experimentalists and theorists from around the world would prototype new designs to achieve longer coherence times and greater connectivity between superconducting qubits, faster than in any academic environment. Google didn’t bite. At the time, I thought the reason behind the rejection was this: Google wants a real quantum computer now, not just a ten-year plan for how to make one based on superconducting X-mon qubits that may or may not work.

I was partially wrong. The reason for the rejection was not a lack of proof that our efforts would pay off eventually – it was a lack of any prototype on which Google could run algorithms relevant to their work. In other words, Aaron and I didn’t have something that Google could use right away. But D-Wave did, and Google had already been dating D-Wave One for at least three years before marrying D-Wave Two this May. Quantum computation has much to offer Google, so I am excited to see this relationship blossom (whether it be D-Wave or Pivit Inc that builds the first quantum computer). Which brings me back to that phone call five months ago…

Lorraine: Hi Spiro. Have you heard of Google’s collaboration with NASA on the new Quantum Artificial Intelligence Lab?

Me: Yes. It is all over the news!

Lo: Indeed. Can you help us design a mod for Minecraft to get kids excited about quantum mechanics and quantum computers?

Me: Minecraft? What is Minecraft? Is it like Warcraft or Starcraft?

Lo: (Omg, he doesn’t know Minecraft!?! How old is this guy?) Ahh, yeah, it is a game where you build cool structures by mining different kinds of blocks in this sandbox world. It is popular with kids.

Me: Oh, okay. Let me check out the game and see what I can come up with.

After looking at the game I realized three things:
1. The game has a fan base in the tens of millions.
2. There is an annual convention (Minecon) devoted to this game alone.
3. I had no idea how to incorporate quantum mechanics within Minecraft.

Lo and I decided that it would be better to bring in some outside help if we were to design a new mod for Minecraft. Enter E-Line Media and TeacherGaming, two companies dedicated to making games that focus on balancing the educational aspect with gameplay (which influences how addictive the game is). Over the next three months, producers, writers, game designers and coder-extraordinaire Dan200 came together to create a mod for Minecraft. But we quickly came to a crossroads: make a quantum simulator based on Dan200’s popular ComputerCraft mod, or focus on gameplay and a high-level representation of quantum mechanics within Minecraft?

The answer was not so easy at first, especially because I kept pushing for more authenticity (I asked Dan200 to create Hadamard and CNOT gates, but thankfully he and Scot Bayless – a legend in the gaming world – ignored me.) In the end, I would like to think that we went with the best of both worlds, given the time constraints we were operating under (a group of us are attending Minecon 2013 to showcase the new mod in two weeks) and the young audience we are trying to engage. For example, we decided that to prepare a pair of entangled qubits within Minecraft, you would use the Essence of Entanglement, an object crafted using the Essence of Superposition (Hadamard gate, yay!) and Quantum Dust placed in a CNOT configuration on a crafting table (don’t ask for more details). And when it came to Quantum Teleportation within the game, two entangled quantum computers would need to be placed at different parts of the world, each one with four surrounding pylons representing an encoding/decoding mechanism. Of course, on top of each pylon made of obsidian (and its far-away partner), you would need to place a crystal, as the required classical side-channel. As an authorized quantum mechanic, I allowed myself to bend quantum mechanics, but I could not bring myself to mess with Special Relativity.
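For the curious, the real-world recipe that the Essence of Entanglement loosely mirrors is short: start two qubits in |00⟩, apply a Hadamard to the first, then a CNOT with the first qubit as control. A minimal sketch in Python with NumPy (the variable names here are mine, not anything from the mod):

```python
import numpy as np

# Single-qubit basis state |0>
zero = np.array([1, 0], dtype=complex)

# Hadamard gate: maps |0> to (|0> + |1>)/sqrt(2)
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

# CNOT gate (first qubit controls, second is the target)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00>, apply H to the first qubit, then CNOT
state = np.kron(zero, zero)
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state

print(state.real)  # ≈ [0.707, 0, 0, 0.707]: the Bell state (|00> + |11>)/sqrt(2)
```

The output has equal amplitude on |00⟩ and |11⟩ and none on the mixed terms, which is exactly the “both coins agree, but neither is decided” behavior the in-game entangled blocks are meant to evoke.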

As the mod launched two days ago, I am not sure how successful it will be. All I know is that the team behind its development is full of superstars, dedicated to making sure that John Preskill wins this bet (50 years from now):

The plan for the future is to upload a variety of posts and educational resources on qcraft.org discussing the science behind the high-level concepts presented within the game, at a level that middle-schoolers can appreciate. So, if you play Minecraft (or you have kids over the age of 10), download qCraft now and start building. It’s a free addition to Minecraft.

# Quantum mechanics – it’s all in our mind!

Last week was the final week of classes, and I brought my Ph12b class, aka baby-quantum, to a conclusion. Just like the last time I taught the class, I concluded with what should make the students honor the quantum gods – the EPR paradox and Bell’s inequality. Even before these common conundrums of quantum mechanics, the students had already picked up on the trouble with measurement theory and had started hammering me with questions on the “many-worlds interpretation”. The many-worlds interpretation, pioneered by Everett, stipulates that whenever a quantum measurement is made of a state in a quantum superposition, the universe will split into several copies, with each possible result realized in one of the copies. All results come to pass, but if we are cats, in some universes we won’t survive to meow about it.
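To make the Bell’s-inequality punchline concrete: for a spin singlet, the correlation between measurements along angles a and b is E(a, b) = −cos(a − b), and the CHSH combination of four such correlators reaches 2√2, beating the bound of 2 that any local hidden-variable theory must obey. A quick numerical check (the angles below are the standard optimal choices, not anything specific to the lecture):

```python
import numpy as np

# Singlet-state correlation for measurement angles a and b
def E(a, b):
    return -np.cos(a - b)

# CHSH combination: S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
a, a2 = 0.0, np.pi / 2          # Alice's two settings
b, b2 = np.pi / 4, -np.pi / 4   # Bob's two settings
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

print(abs(S))  # ≈ 2.828 = 2*sqrt(2), above the classical bound of 2
```

No assignment of pre-existing answers to all four questions can push |S| past 2, which is exactly what makes the quantum value so unsettling for the students.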

Questions on the many-worlds interpretation always make me think back to my early student days, when I also obsessed over these issues. In fact, I got so frustrated with the question that I started having heretical thoughts: What if it is all in our minds? What if the quantum superposition is always there, but evolution made consciousness zoom in on one possible outcome? Maybe hunting a duck is just easier if the duck is not in a superposition of flying south and swimming in a pond. Of course, this requires that at least you and the duck, and probably other bystanders, all agree on which quantum reality it is that you are operating in. No problem – maybe evolution equipped all of our consciousnesses with the ability to zoom in on a common reality where all of us agree on the results of experiments, but there are other possibilities for this reality, which still live side by side with ‘our’ reality, since – hey – it’s all in our minds!