Quantum steampunk invades Scientific American

London, at an hour that made Rosalind glad she’d nicked her brother’s black cloak instead of wearing her scarlet one. The factory alongside her had quit belching smoke for the night, but it would start again soon. A noise caused her to draw back against the brick wall. Glancing up, she gasped. An oblong hulk was drifting across the sky. The darkness obscured the details, but she didn’t need to see; a brass-colored lock would be painted across the side. Mellator had launched his dirigible.

A variation on the paragraph above began the article that I sent to Scientific American last year. Clara Moskowitz, an editor, asked which novel I’d quoted the paragraph from. I’d made the text up, I confessed. 

Engine

Most of my publications, which wind up in physics journals, don’t read like novels. But I couldn’t resist when Clara invited me to write a feature about quantum steampunk, the confluence of quantum information and thermodynamics. Quantum Frontiers regulars will anticipate paragraphs two and three of the article:

Welcome to steampunk. This genre has expanded across literature, art and film over the past several decades. Its stories tend to take place near nascent factories and in grimy cities, in Industrial Age England and the Wild West—in real-life settings where technologies were burgeoning. Yet steampunk characters extend these inventions into futuristic technologies, including automata and time machines. The juxtaposition of old and new creates an atmosphere of romanticism and adventure. Little wonder that steampunk fans buy top hats and petticoats, adorn themselves in brass and glass, and flock to steampunk conventions. 

These fans dream the adventure. But physicists today who work at the intersection of three fields—quantum physics, information theory and thermodynamics—live it. Just as steampunk blends science-fiction technology with Victorian style, a modern field of physics that I call “quantum steampunk” unites 21st-century technology with 19th-century scientific principles. 

The Scientific American graphics team dazzled me. For years, I’ve been hankering to work with artists on visualizing quantum steampunk. I had an opportunity after describing an example of quantum steampunk in the article. The example is a quantum many-body engine that I designed with Christopher White, Sarang Gopalakrishnan, and Gil Refael, members of Caltech’s Institute for Quantum Information and Matter. Our engine is a many-particle system ratcheted between two phases accessible to quantum matter, analogous to liquid and solid. The engine can be realized with, e.g., ultracold atoms or trapped ions. Lasers would trap and control the particles. Clara, the artists, and I drew the engine, traded comments, and revised the figure tens of times. In early drafts, the lasers resembled the sketches in atomic physicists’ PowerPoints. By the final draft, the lasers had transformed into brass-and-glass beauties. They evoke the scientific instruments crafted through the early 1900s, before chunky gray aesthetics dulled labs.

MBL-mobile

Scientific American published the feature this month; you can read it in print or, here, online. Many thanks to Clara for the invitation, for shepherding the article into print, and for her enthusiasm. To repurpose the end of the article, “You’re reading about this confluence of old and new on Quantum Frontiers. But you might as well be holding a novel by H. G. Wells or Jules Verne.”

 

Figures courtesy of the Scientific American graphics team.

A new possibility for quantum networks

It has been roughly one year since Dr Jon Kindem and I finished at Caltech (JK graduating with a PhD and myself – JB – graduating from my postdoc to take up a junior faculty position at the University of Sydney). During our three-and-a-half-year overlap in the IQIM, we often told each other that we should write something for Quantum Frontiers. As two of the authors of a paper reporting a recent breakthrough for rare-earth ion spin qubits (Nature, 2020), it was now or never. Here we go…

Throughout 2019, telecommunication companies began deploying 5th-generation (5G) network infrastructure to make our wireless communication faster and more reliable, and to cope with greater capacity. This rollout of 5G technology promises to support up to 10x the number of devices, operating at speeds 10x faster than what is possible with 4th-generation (4G) networks. If you stop and think about the new opportunities 4G networks unlocked for working, shopping, connecting, and more, it is easy to see why some people are excited about the new world 5G networks might offer.

Classical networks like 5G and fiber optic networks (the backbone of the internet) share classical information: streams of bits (zeros and ones) that encode our conversations, tweets, music, podcasts, videos and anything else we communicate through our digital devices. Every improvement in the network hardware (for example an optical switch with less loss or a faster signal router) contributes to big changes in speed and capacity. The bottom line is that with enough advances, the network evolves to the point where things that were previously impossible (like downloading a movie in the late 90s) become instantaneous.

[Figure: spectrogram of a dial-up modem handshake, captured from a YouTube video. If you were using the internet in the 90s/00s, you would recognize this sound; if you are Gen Z or Gen Alpha, you’ll probably need to google “dial-up spectrogram”.]

Alongside the hype and advertising around 5G networks, we are part of the world-wide effort to develop a fundamentally different network (with a little less advertising, but similar amounts of hype). Rather than being a bigger, better version of 5G, this new effort aims to build a quantum internet: a set of technologies that will allow us to connect and share information at the quantum level. For insight into the quantum internet origin story, read this post about the pioneering experiments that took place at Caltech in Prof. Jeff Kimble’s group.

Quantum technologies operate using the counter-intuitive phenomena of quantum mechanics like superposition and entanglement. Quantum networks need to distribute this superposition and entanglement between different locations. This is a much harder task than distributing bits in a regular network because quantum information is extremely susceptible to loss and noise. If realized, this quantum internet could enable powerful quantum computing clusters, and create networks of quantum sensors that measure infinitesimally small fluctuations in their environment.

At this point it is worth asking the question:

Does the world really need a quantum internet?

This is an important question because a quantum internet is unlikely to improve any of the most common uses for the classical internet (internet facts and most popular searches).

2nd most viewed video on YouTube with almost 5 billion views as of April 2020.

We think there are at least three reasons why a quantum network is important:

  1. To build better quantum computers. The quantum internet will effectively transform small, isolated quantum processors into one much larger, more powerful computer. This could be a big boost in the race to scale up quantum computing.
  2. To build quantum-encrypted communication networks. The ability of quantum technology to make or break encryption is one of the earliest reasons why quantum technology was funded. A fully-fledged quantum computer should be very efficient at hacking commonly used encryption protocols, while ideal quantum encryption provides the basis for communications secured by the fundamental properties of physics.
  3. To push the boundaries of quantum physics and measurement sensitivity by increasing the length scale and complexity of entangled systems. The quantum internet can help turn thought experiments into real experiments.

The next question is: How do we build a quantum internet?

The starting point for most long-distance quantum network strategies is to base them on the state-of-the-art technology for current classical networks: sending information using light. (But that doesn’t rule out microwave networks for local area networks, as recent work from ETH Zurich has shown).

The technology that drives quantum networks is a set of interfaces that connect matter systems (like atoms) to photons at a quantum level. These interfaces need to efficiently exchange quantum information between matter and light, and the matter part needs to be able to store the information for a time that is much longer than the time it takes for the light to get to its destination in the network. We also need to be able to entangle the quantum matter systems to connect network links, and to process quantum information for error correction. This is a significant challenge that requires novel materials and unparalleled control of light to ultimately succeed.

State-of-the-art quantum networks are still elementary links compared to the complexity and scale of modern telecommunication. One of the most advanced platforms to have demonstrated a quantum network link consists of two atomic defects in diamonds separated by 1.3 km. The defects act as the quantum light-matter interface, allowing quantum information to be shared between the two remote devices. But these defects in diamond currently have limitations that prohibit the expansion of such a network. The central challenge is finding defects/emitters that are stable and robust to environmental fluctuations, while simultaneously connecting efficiently with light. While these emitters don’t have to be in solids, the allure of a scalable solid-state fabrication process akin to today’s semiconductor industry for integrated circuits is strong. This has motivated the research and development of a range of quantum light-matter interfaces in solids (for example, see recent work by Harvard researchers) aimed at meeting the simultaneous goals of efficiency and stability.

At Caltech, we were part of Prof. Andrei Faraon’s research group, which has put forward an appealing alternative to other solid-state technologies. The team uses rare-earth atoms embedded in crystals commonly used for lasers. JK joined as the group’s 3rd graduate student in 2013, while I joined as a postdoc in 2016.

The rare-earth elements are found in the part of the periodic table that people often forget about. The elements from cerium (Ce) to ytterbium (Yb) are the most commonly used for quantum technologies.

Rare-earth atoms have long been of interest for quantum technologies such as quantum memories for light because they are very stable and are excellent at preserving quantum information. But compared to other emitters, they only interact very weakly with light, which means that one usually needs large crystals with billions of atoms all working in harmony to make useful quantum interfaces. To overcome this problem, research in the Faraon group pioneered coupling these ions to nanoscale optical cavities like these ones:

These microscopic Toblerone-like structures are fabricated directly in the crystal that plays host to the rare-earth atoms. The periodic patterning effectively acts like two mirrors that form an optical cavity to confine light, which enhances the connection between light and the rare-earth atoms. In 2017, our group showed that the improved optical interaction in these cavities can be used to shrink optical quantum memories by orders of magnitude compared to previous demonstrations, and to manufacture them on-chip.

We have used this nanophotonic platform to open up new avenues for quantum networks based on single rare-earth atoms, a task that previously was exceptionally challenging because these atoms have very low brightness. We have worked with both neodymium and ytterbium atoms embedded in a commercially available laser crystal.

Ytterbium looks particularly promising. Working with Prof. Rufus Cone’s group at Montana State University, we showed that these ytterbium atoms absorb and emit light better than most other rare-earth atoms and that they can store quantum information long enough for extended networks (>10 ms) when cooled down to a few Kelvin (-272 degrees Celsius) [Kindem et al., Physical Review B, 98, 024404 (2018) – link to arXiv version].

By using the nanocavity to improve the brightness of these ytterbium atoms, we have now been able to identify and investigate their properties at the single atom level. We can precisely control the quantum state of the single atoms and measure them with high fidelity – both prerequisites for using these atoms in quantum information technologies. When combined with the long quantum information storage times, our work demonstrates important steps to using this system in a quantum network.

The next milestone is forming an optical link between two individual rare-earth atoms to build an elementary quantum network. This goal is in our sights and we are already working on optimizing the light-matter interface stability and efficiency. A more ambitious milestone is to provide interconnects for other types of qubits – such as superconducting qubits – to join the network. This requires a quantum transducer to convert between microwave signals and light. Rare-earth atoms are promising for transducer technologies (see recent work from the Faraon group), as are a number of other hybrid quantum systems (for example, optomechanical devices like the ones developed in the Painter group at Caltech).

It took roughly 50 years from the first message sent over ARPANET to the roll out of 5G technology.

So, when are we going to see the quantum internet?

The technology and expertise needed to build quantum links between cities are developing rapidly, with impressive progress made even between 2018 and 2020. Basic quantum network capabilities will likely be up and running in the next decade, which will be an exciting time for breakthroughs in fundamental and applied quantum science. Using single rare-earth atoms is relatively new, but this technology is also advancing quickly (for example, our ytterbium material was largely unstudied just three years ago). The discovery of new materials will continue to be important for pushing quantum technologies forward.

You can read more about this work in this summary article and this synopsis written by lead author JK (Caltech PhD 2019), or dive into the full paper published in Nature.

J. M. Kindem, A. Ruskuc, J. G. Bartholomew, J. Rochman, Y.-Q. Huan, and A. Faraon. Control and single-shot readout of an ion embedded in a nanophotonic cavity. Nature (2020).

Now is an especially exciting time for our field with the Thompson Lab at Princeton publishing a related paper on single rare-earth atom quantum state detection, in their case using erbium. Check out their article here.

Achieving superlubricity with graphene

Sometimes, experimental results spark enormous curiosity, inspiring a myriad of questions and ideas for further experimentation. In 2004, Geim and Novoselov, from The University of Manchester, isolated a single layer of graphene from bulk graphite with the “Scotch Tape Method”, for which they were awarded the 2010 Nobel Prize in Physics. This one experimental result has branched out countless times, serving as a source of inspiration in as many different fields. We are now in the midst of an array of branching-out in graphene research, and one of the branches gaining attention is the ultra low friction observed between graphene and other surfaces.

Much has been learned about graphene in the past 15 years through an immense amount of research, most of it in non-mechanical realms (e.g., electron transport measurements, thermal conductivity, pseudo-magnetic fields in strain engineering). However, superlubricity, a mechanical phenomenon, has become a focus for many research groups. Mechanical measurements have famously shown graphene’s tensile strength to be hundreds of times that of the strongest steel, indisputably placing it atop the list of construction materials best for a superhero suit. Superlubricity is a tribological property of graphene and is arguably as impressive as graphene’s tensile strength.

Tribology is the study of interacting surfaces in relative motion, including sources of friction and methods for its reduction. It’s not a recent discovery that coating a surface with graphite (many layers of graphene) can lower friction between two sliding surfaces. Current research examines the precise mechanisms, and the choices of surfaces, that minimize friction with single or several layers of graphene.

Research published in Nature Materials in 2018 measures friction between surfaces under constant load and sliding velocity. The experiment includes two kinds of junction: one consisting of two graphene surfaces (a homogeneous junction), and another consisting of graphene and hexagonal boron nitride (a heterogeneous junction). The research group measures friction using atomic force microscopy (AFM). The hexagonal boron nitride (or graphene, for a homogeneous junction) is fixed to the stage of the AFM while the graphene slides atop it. The load is held constant at 20 μN and the sliding velocity at 200 nm/s. Ultra low friction is observed for homogeneous junctions when the underlying crystalline lattices of the two surfaces are at a relative angle of 30 degrees. However, this ultra low friction state is very unstable: upon sliding, the surfaces rotate toward a locked-in lattice alignment. Friction varies with the relative angle between the two surfaces’ crystalline lattices, reaching its minimum (ultra low friction) at a relative angle of 30 degrees and its maximum when locked-in lattice alignment is reached upon sliding. While the lattices are aligned, shearing is rendered impossible with the experimental setup because of the relatively large friction.

Friction varies with the relative angle of the crystalline lattices and is, therefore, anisotropic. For example, the fact that it takes less force to split wood when an axe blade is applied parallel to its grain than when applied perpendicular to it illustrates the anisotropic nature of wood: the force required to split wood depends on the direction along which the force is applied. Frictional anisotropy is greater in homogeneous junctions because their tendency to rotate into a stuck, maximum-friction alignment is greater than that of heterogeneous junctions. In fact, heterogeneous junctions experience frictional anisotropy three orders of magnitude smaller than homogeneous junctions do. Heterogeneous junctions display much less frictional anisotropy because their lattices remain mismatched even when the angle between the lattice vectors is at a minimum. In other words, the graphene and hBN crystalline lattices are never commensurate, because the materials differ, and so the junction never experiences the lock-in of lattice alignment that homogeneous junctions do. Hence, heterogeneous junctions do not become stuck in the high-friction state that characterizes homogeneous ones, and they experience ultra low friction during sliding at all relative lattice angles.

Presumably, to increase applicability, upscaling to much larger loads will be necessary. A large-scale, cost-effective method to dramatically reduce friction would undoubtedly have an enormous impact on a great number of industries. Cost efficiency is a key component of realizing graphene’s potential impact, not only as it applies to superlubricity, but in all areas of application. As access to large amounts of affordable graphene increases, so will experiments in fabricating devices that exploit the extraordinary characteristics which have placed graphene and graphene-based materials on the front lines of materials research for the past couple of decades.

In the hour of darkness and peril and need

I recited the poem “Paul Revere’s Ride” to myself while walking across campus last week. 

A few hours earlier, I’d cancelled the seminar that I’d been slated to cohost two days later. In a few hours, I’d cancel the rest of the seminars in the series. Undergraduates would begin vacating their dorms within a day. Labs would shut down, and postdocs would receive instructions to work from home.

I memorized “Paul Revere’s Ride” after moving to Cambridge, following tradition: As a research assistant at Lancaster University in the UK, I memorized e. e. cummings’s “anyone lived in a pretty how town.” At Caltech, I memorized “Kubla Khan.” Another home called for another poem. “Paul Revere’s Ride” brooked no competition: Campus’s red bricks run into Boston, where Revere’s story began during the 1700s. 

Henry Wadsworth Longfellow, who lived a few blocks from Harvard, composed the poem. It centers on the British assault against the American colonies, at Lexington and Concord, on the eve of the Revolutionary War. A patriot learned of the British troops’ movements one night. He communicated the information to faraway colleagues by hanging lamps in a church’s belfry. His colleagues rode throughout the night, to “spread the alarm / through every Middlesex village and farm.” The riders included Paul Revere, a Boston silversmith.

The Boston-area bricks share their color with Harvard’s crest, crimson. So do the protrusions on the coronavirus’s surface in colored pictures. 


I couldn’t have designed a virus to suit Harvard’s website better.

The yard that I was crossing was about to “de-densify,” the red-brick buildings were about to empty, and my home was about to lock its doors. I’d watch regulations multiply, emails keep pace, and masks appear. Revere’s messenger friend, too, stood back and observed his home:

he climbed to the tower of the church,
Up the wooden stairs, with stealthy tread,
To the belfry-chamber overhead, [ . . . ]
By the trembling ladder, steep and tall,
To the highest window in the wall,
Where he paused to listen and look down
A moment on the roofs of the town,
And the moonlight flowing over all.

I commiserated also with Revere, waiting on tenterhooks for his message:

Meanwhile, impatient to mount and ride,
Booted and spurred, with a heavy stride,
On the opposite shore walked Paul Revere.
Now he patted his horse’s side,
Now gazed on the landscape far and near,
Then impetuous stamped the earth,
And turned and tightened his saddle-girth…

The lamps ended the wait, and Revere rode off. His mission carried a sense of urgency, yet led him to serenity that I hadn’t expected:

He has left the village and mounted the steep,
And beneath him, tranquil and broad and deep,
Is the Mystic, meeting the ocean tides…

The poem’s final stanza kicks. Its message carries as much relevance to the 21st century as Longfellow, writing about the 1700s during the 1800s, could have dreamed:

So through the night rode Paul Revere;
And so through the night went his cry of alarm
To every Middlesex village and farm,—
A cry of defiance, and not of fear,
A voice in the darkness, a knock at the door,
And a word that shall echo forevermore!
For, borne on the night-wind of the Past,
Through all our history, to the last,
In the hour of darkness and peril and need,
The people will waken and listen to hear
The hurrying hoof-beats of that steed,
And the midnight message of Paul Revere.

Reciting poetry clears my head. I can recite on autopilot, while processing other information or admiring my surroundings. But the poem usually wins my attention at last. The rhythm and rhyme sweep me along, narrowing my focus. Reciting “Paul Revere’s Ride” takes me 5-10 minutes. After finishing that morning, I repeated the poem, and began repeating it again, until arriving at my institute on the edge of Harvard’s campus.

Isolation can benefit theorists. Many of us need quiet to study, capture proofs, and disentangle ideas. Many of us need collaboration; but email, Skype, Google hangouts, and Zoom connect us. Many of us share and gain ideas through travel; but I can forgo a little car sickness, air turbulence, and waiting in lines. Many of us need results from experimentalist collaborators, but experimental results often take a long time to gather even in the absence of pandemics. Many of us are introverts who enjoy a little self-isolation.

 

April is National Poetry Month in the United States. I often celebrate by intertwining physics with poetry in my April blog post. Next month, though, I’ll have other news to report. Besides, as my walk demonstrated, we need poetry now.

Paul Revere found tranquility on the eve of a storm. Maybe, when the night clears and doors reopen, science born of the quiet will flood journals. Aren’t we fortunate, as physicists, to lead lives steeped in a kind of poetry?

The shape of MIP* = RE

There’s a famous parable about a group of blind men encountering an elephant for the very first time. The first blind man, who had his hand on the elephant’s side, said that it was like an enormous wall. The second blind man, wrapping his arms around the elephant’s leg, exclaimed that surely it was a gigantic tree trunk. The third, feeling the elephant’s tail, declared that it must be a thick rope. Vehement disagreement ensued, but after a while the blind men came to realize that, while each was partially correct, there was much more to the elephant than any of them had initially thought.


Last month, Zhengfeng, Anand, Thomas, John and I posted MIP* = RE to arXiv. The paper feels very much like the elephant of the fable — and not just because of the number of pages! To a computer scientist, the paper is ostensibly about the complexity of interactive proofs. To a quantum physicist, it is talking about mathematical models of quantum entanglement. To the mathematician, there is a claimed resolution to a long-standing problem in operator algebras. Like the blind men of the parable, each is feeling a small part of a new phenomenon. How do the wall, the tree trunk, and the rope all fit together?

I’ll try to trace the outline of the elephant: it starts with a mystery in quantum complexity theory, curves through the mathematical foundations of quantum mechanics, and arrives at a deep question about operator algebras.

The rope: The complexity of nonlocal games

In 2004, computer scientists Cleve, Hoyer, Toner, and Watrous were thinking about a funny thing called nonlocal games. A nonlocal game G involves three parties: two cooperating players named Alice and Bob, and someone called the verifier. The verifier samples a pair of random questions (x,y) and sends x to Alice (who responds with answer a), and y to Bob (who responds with answer b). The verifier then uses some function D(x,y,a,b) that tells her whether the players win, based on their questions and answers.

All three parties know the rules of the game before it starts, and Alice and Bob’s goal is to maximize their probability of winning the game. The players aren’t allowed to communicate with each other during the game, so it’s a nontrivial task for them to coordinate an optimal strategy (i.e., how they should individually respond to the verifier’s questions) before the game starts.

The most famous example of a nonlocal game is the CHSH game (which has made several appearances on this blog already): in this game, the verifier sends a uniformly random bit x to Alice (who responds with a bit a) and a uniformly random bit y to Bob (who responds with a bit b). The players win if a \oplus b = x \wedge y (in other words, if the sum of their answer bits, modulo 2, equals the product of their question bits).

What is Alice’s and Bob’s maximum winning probability? Well, it depends on what type of strategy they use. If they use a strategy that can be modeled by classical physics, then their winning probability cannot exceed 75\% (we call this the classical value of CHSH). On the other hand, if they use a strategy based on quantum physics, Alice and Bob can do better by sharing two quantum bits (qubits) that are entangled. During the game each player measures their own qubit (where the measurement depends on their received question) to obtain answers that win the CHSH game with probability \cos^2(\pi/8) \approx .854\ldots (we call this the quantum value of CHSH). So even though the entangled qubits don’t allow Alice and Bob to communicate with each other, entanglement gives them a way to win with higher probability! In technical terms, their responses are more correlated than what is possible classically.
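
To make these numbers concrete, here is a short Python sketch (my addition, not part of the original post) that evaluates the textbook optimal entangled strategy for CHSH — Alice measures the Pauli observables Z or X, Bob measures (Z \pm X)/\sqrt{2}, and the two share a Bell state — and checks that its winning probability matches \cos^2(\pi/8). The specific observables and the answer-bit convention (answer 0 for the +1 outcome) are the standard ones, assumed here for illustration.

```python
import itertools
import numpy as np

# Pauli observables
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def projectors(observable):
    """Projectors onto the +1 and -1 eigenspaces of a 2x2 observable,
    indexed by the answer bit (0 -> outcome +1, 1 -> outcome -1)."""
    vals, vecs = np.linalg.eigh(observable)  # eigenvalues sorted ascending: -1, +1
    minus = np.outer(vecs[:, 0], vecs[:, 0])
    plus = np.outer(vecs[:, 1], vecs[:, 1])
    return {0: plus, 1: minus}

# Canonical optimal strategy: Alice measures Z or X; Bob measures (Z +/- X)/sqrt(2)
alice = {0: projectors(Z), 1: projectors(X)}
bob = {0: projectors((Z + X) / np.sqrt(2)), 1: projectors((Z - X) / np.sqrt(2))}

# Shared entangled state: the Bell state (|00> + |11>)/sqrt(2)
psi = np.array([1., 0., 0., 1.]) / np.sqrt(2)
rho = np.outer(psi, psi)

win_probability = 0.0
for x, y in itertools.product([0, 1], repeat=2):        # uniformly random questions
    for a, b in itertools.product([0, 1], repeat=2):    # all possible answer pairs
        if (a ^ b) == (x & y):                          # CHSH winning condition
            win_probability += 0.25 * np.trace(rho @ np.kron(alice[x][a], bob[y][b]))

print(win_probability, np.cos(np.pi / 8) ** 2)          # both approximately 0.8536
```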

The CHSH game comes from physics, and was originally formulated not as a game involving Alice and Bob, but rather as an experiment involving two spatially separated devices to test whether stronger-than-classical correlations exist in nature. These experiments are known as Bell tests, named after John Bell. In 1964, he proved that correlations from quantum entanglement cannot be explained by any “local hidden variable theory” — in other words, a classical theory of physics.[1] He then showed that a Bell test, like the CHSH game, gives a simple statistical test for the presence of nonlocal correlations between separated systems. Since the 1960s, numerous Bell tests have been conducted experimentally, and the verdict is clear: nature does not behave classically.

Cleve, Hoyer, Toner and Watrous noticed that nonlocal games/Bell tests can be viewed as a kind of multiprover interactive proof. In complexity theory, interactive proofs are protocols where some provers are trying to convince a verifier of a solution to a long, difficult computation, and the verifier is trying to efficiently determine if the solution is correct. In a Bell test, one can think of the provers as instead trying to convince the verifier of a physical statement: that they possess quantum entanglement.

With the computational lens trained firmly on nonlocal games, it then becomes natural to ask about their complexity. Specifically, what is the complexity of approximating the optimal winning probability in a given nonlocal game G? In complexity-speak, this is phrased as a question about characterizing the class MIP* (pronounced “M-I-P star”). This is also a well-motivated question for an experimentalist conducting Bell tests: at the very least, they’d want to determine (a) whether quantum players can do better than classical players, and (b) what the best possible quantum strategy can achieve.

Studying this question in the case of classical players led to some of the most important results in complexity theory, such as MIP = NEXP and the PCP Theorem. Indeed, the PCP Theorem says that it is NP-hard to approximate the classical value of a nonlocal game (i.e. the maximum winning probability of classical players) to within constant additive accuracy (say \pm \frac{1}{10}). Thus, assuming that P is not equal to NP, we shouldn’t expect a polynomial-time algorithm for this. However it is easy to see that there is a “brute force” algorithm for this problem: by taking exponential time to enumerate over all possible deterministic player strategies, one can exactly compute the classical value of nonlocal games.
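
As a concrete illustration of that brute-force approach (a sketch added here, not code from any of the papers mentioned), the classical value of a small game such as CHSH can be computed exactly by looping over every pair of deterministic answer functions; shared randomness cannot help, because the value of a randomized strategy is a convex combination of deterministic ones.

```python
import itertools

def classical_value(num_questions, num_answers, wins):
    """Exact classical value of a two-player nonlocal game with uniformly random
    question pairs, by enumerating all deterministic strategies (exponential time).
    `wins(x, y, a, b)` plays the role of the verifier's decision function D."""
    best = 0.0
    for alice in itertools.product(range(num_answers), repeat=num_questions):
        for bob in itertools.product(range(num_answers), repeat=num_questions):
            p = sum(wins(x, y, alice[x], bob[y])
                    for x in range(num_questions)
                    for y in range(num_questions)) / num_questions ** 2
            best = max(best, p)
    return best

# CHSH: win iff a XOR b equals x AND y; the classical value is 3/4.
print(classical_value(2, 2, lambda x, y, a, b: (a ^ b) == (x & y)))  # 0.75
```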

When considering games with entangled players, however, it’s not even clear if there’s a similar “brute force” algorithm that solves this in any amount of time — forget polynomial time; even if we allow ourselves exponential, doubly-exponential, Ackermann function amount of time, we still don’t know how to solve this quantum value approximation problem. The problem is that there is no known upper bound on the amount of entanglement that is needed for players to play a nonlocal game. For example, for a given game G, does an optimal quantum strategy require one qubit, ten qubits, or 10^{10^{10}} qubits of entanglement? Without any upper bound, a “brute force” algorithm wouldn’t know how big of a quantum strategy to search for — it would keep enumerating over bigger and bigger strategies in hopes of finding a better one.

Thus approximating the quantum value may not even be solvable in principle! But could it really be uncomputable? Perhaps we just haven’t found the right mathematical tool to give an upper bound on the dimension — maybe we just need to come up with some clever variant of, say, Johnson-Lindenstrauss or some other dimension reduction technique.[2]

In 2008, there was promising progress towards an algorithmic solution for this problem. Two papers [DLTW, NPA] (appearing on arXiv on the same day!) showed that an algorithm based on semidefinite programming can produce a sequence of numbers that converge to something called the commuting operator value of a nonlocal game.[3] If one could show that the commuting operator value and the quantum value of a nonlocal game coincide, then this would yield an algorithm for solving this approximation problem!

Asking whether the commuting operator and quantum values are the same, however, immediately brings us to the precipice of some deep mysteries in mathematical physics and operator algebras, far removed from computer science and complexity theory. This takes us to the next part of the elephant.

The tree: mathematical foundations of locality

The mystery about the quantum value versus the commuting operator value of nonlocal games has to do with two different ways of modeling Alice and Bob in quantum mechanics. As I mentioned earlier, quantum physics predicts that the maximum winning probability in, say, the CHSH game when Alice and Bob share entanglement is approximately 85%. As with any physical theory, these predictions are made using some mathematical framework — formal rules for modeling physical experiments like the CHSH game.

In a typical quantum information theory textbook, players in the CHSH game are usually modelled in the following way: Alice’s device is described by a state space \mathcal{H}_A (all the possible states the device could be in), a particular state |\psi_A\rangle from \mathcal{H}_A, and a set of measurement operators \mathcal{M}_A (operations that can be performed by the device). It’s not necessary to know what these things are formally; the important feature is that these three things are enough to make any prediction about Alice’s device — when treated in isolation, at least. Similarly, Bob’s device can be described using its own state space \mathcal{H}_B, state |\psi_B\rangle, and measurement operators \mathcal{M}_B.

In the CHSH game though, one wants to make predictions about Alice’s and Bob’s devices together. Here the textbooks say that Alice and Bob are jointly described by the tensor product formalism, which is a natural mathematical way of “putting separate spaces together”. Their state space is denoted by \mathcal{H}_A \otimes \mathcal{H}_B. The joint state |\psi_{AB}\rangle describing the devices comes from this tensor product space. When Alice and Bob independently make their local measurements, this is described by a measurement operator from the tensor product of operators from \mathcal{M}_A and \mathcal{M}_B. The strange correlations of quantum mechanics arise when their joint state |\psi_{AB}\rangle is entangled, i.e. it cannot be written as a well-defined state on Alice’s side combined with a well-defined state on Bob’s side (even though the state space itself is two independent spaces combined together!)
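
To see what entanglement means concretely, consider (a standard textbook example, my addition rather than the original post’s) the Bell state |\psi_{AB}\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle). If it could be written as a product state (\alpha|0\rangle + \beta|1\rangle) \otimes (\gamma|0\rangle + \delta|1\rangle), expanding would give \alpha\gamma|00\rangle + \alpha\delta|01\rangle + \beta\gamma|10\rangle + \beta\delta|11\rangle. Matching coefficients requires \alpha\delta = \beta\gamma = 0 while \alpha\gamma = \beta\delta = 1/\sqrt{2}, which is impossible; hence the Bell state is entangled.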

The tensor product model works well; it satisfies natural properties you’d want from the CHSH experiment, such as the constraint that Alice and Bob can’t instantaneously signal to each other. Furthermore, predictions made in this model match up very accurately with experimental results!

This is not the whole story, though. The tensor product formalism works very well in non-relativistic quantum mechanics, where things move slowly and energies are low. To describe more extreme physical scenarios — like when particles are being smashed together at near-light speeds in the Large Hadron Collider — physicists turn to the more powerful quantum field theory. However, the notion of spatiotemporal separation in relativistic settings gets especially tricky. In particular, when trying to describe quantum mechanical systems, it is no longer evident how to assign Alice and Bob their own independent state spaces, and thus it’s not clear how to put relativistic Alice and Bob in the tensor product framework!

In quantum field theory, locality is instead described using the commuting operator model. Instead of assigning Alice and Bob their own individual state spaces and then tensoring them together to get a combined space, the commuting operator model stipulates that there is just a single monolithic space \mathcal{H} for both Alice and Bob. Their joint state is described using a vector |\psi\rangle from \mathcal{H}, and Alice and Bob’s measurement operators both act on \mathcal{H}. The constraint that they can’t communicate is captured by the fact that Alice’s measurement operators commute with Bob’s operators. In other words, the order in which the players perform their measurements on the system does not matter: Alice measuring before Bob, or Bob measuring before Alice, both yield the same statistical outcomes. Locality is enforced through commutativity.

The commuting operator framework contains the tensor product framework as a special case[4], so it’s more general. Could the commuting operator model allow for correlations that can’t be captured by the tensor product model, even approximately[5][6]? This question is known as Tsirelson’s problem, named after the late mathematician Boris Tsirelson.

There is a simple but useful way to phrase this question using nonlocal games. What we call the “quantum value” of a nonlocal game G (denoted by \omega^* (G)) really refers to the supremum of success probabilities over tensor product strategies for Alice and Bob. If they use strategies from the more general commuting operator model, then we call their maximum success probability the commuting operator value of G (denoted by \omega^{co}(G)). Since tensor product strategies are a special case of commuting operator strategies, we have the relation \omega^* (G) \leq \omega^{co}(G) for all nonlocal games G.

Could there be a nonlocal game G whose tensor product value is different from its commuting operator value? With tongue-in-cheek: is there a game G that Alice and Bob could succeed at better if they were using quantum entanglement at near-light speeds? It is difficult to find even a plausible candidate game for which the quantum and commuting operator values may differ. The CHSH game, for example, has the same quantum and commuting operator value; this was proved by Tsirelson.

If the tensor product and the commuting operator models are the same (i.e., the “positive” resolution of Tsirelson’s problem), then as I mentioned earlier, this has unexpected ramifications: there would be an algorithm for approximating the quantum value of nonlocal games.

How does this algorithm work? It comes in two parts: a procedure to search from below, and one to search from above. The “search from below” algorithm computes a sequence of numbers \alpha_1,\alpha_2,\alpha_3,\ldots where \alpha_d is (approximately) the best winning probability when Alice and Bob use a d-qubit tensor product strategy. For fixed d, the number \alpha_d can be computed by enumerating over (a discretization of) the space of all possible d-qubit strategies. This takes a doubly-exponential amount of time in d — but at least this is still a finite time! This naive “brute force” algorithm will slowly plod along, computing a sequence of better and better winning probabilities. We’re guaranteed that in the limit as d goes to infinity, the sequence \{ \alpha_d\} converges to the quantum value \omega^* (G). Of course the issue is that the “search from below” procedure never knows how close it is to the true quantum value.

This is where the “search from above” comes in. This is an algorithm that computes a different sequence of numbers \beta_1,\beta_2,\beta_3,\ldots where each \beta_d is an upper bound on the commuting operator value \omega^{co}(G), and furthermore as d goes to infinity, \beta_d eventually converges to \omega^{co}(G). Furthermore, each \beta_d can be computed by a technique known as semidefinite optimization; this was shown by the two papers I mentioned.

Let’s put the pieces together. If the quantum and commuting operator values of a game G coincide (i.e. \omega^* (G) = \omega^{co}(G)), then we can run the “search from below” and “search from above” procedures in parallel, interleaving the computation of the \{\alpha_d\} and \{ \beta_d\}. Since both are guaranteed to converge to the quantum value, at some point the upper bound \beta_d will come within some \epsilon of the lower bound \alpha_d, and thus we would have homed in on (an approximation of) \omega^* (G). There we have it: an algorithm to approximate the quantum value of games.
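
In pseudocode, the interleaving would look something like the sketch below (my addition). The helpers lower_bound_from_strategies (brute-force optimization over d-qubit tensor-product strategies) and upper_bound_from_sdp (level d of the semidefinite-programming hierarchy of [NPA, DLTW]) are hypothetical placeholders standing in for the procedures described above, not real library functions; the loop is only guaranteed to terminate if the two values coincide.

```python
def approximate_quantum_value(game, epsilon, lower_bound_from_strategies, upper_bound_from_sdp):
    """Sketch of the search-from-below / search-from-above algorithm.
    Terminates only if the quantum and commuting operator values of `game` coincide."""
    d = 1
    while True:
        alpha = lower_bound_from_strategies(game, d)  # -> omega*(G) from below
        beta = upper_bound_from_sdp(game, d)          # -> omega_co(G) from above
        if beta - alpha <= epsilon:
            return (alpha + beta) / 2                 # within epsilon of omega*(G)
        d += 1
```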

All that remains to do, surely, is to solve Tsirelson’s problem in the affirmative (that commuting operator correlations can be approximated by tensor product correlations), and then we could put this pesky question about the quantum value to rest. Right?

The wall: Connes’ embedding problem

At the end of the 1920s, polymath extraordinaire John von Neumann formulated the first rigorous mathematical framework for the recently developed quantum mechanics. This framework, now familiar to physicists and quantum information theorists everywhere, posits that quantum states are vectors in a Hilbert space, and measurements are linear operators acting on those spaces. It didn’t take long for von Neumann to realize that there was a much deeper theory of operators on Hilbert spaces waiting to be discovered. With Francis Murray, in the 1930s he started to develop a theory of “rings of operators” — today these are called von Neumann algebras.

The theory of operator algebras has since flourished into a rich and beautiful area of mathematics. It remains inseparable from mathematical physics, but has established deep connections with subjects such as knot theory and group theory. One of the most important goals in operator algebras has been to provide a classification of von Neumann algebras. In their series of papers on the subject, Murray and von Neumann first showed that classifying von Neumann algebras reduces to understanding their factors, the atoms out of which all von Neumann algebras are built. Then, they showed that factors of von Neumann algebras come in one of three species: type I, type II, and type III. Type I factors were completely classified by Murray and von Neumann, and they made much progress on characterizing certain type II factors. However, progress stalled until the 1970s, when Alain Connes provided a classification of type III factors (work for which he would later receive the Fields Medal). In the same 1976 classification paper, Connes makes a casual remark about something called type II_1 factors[7]:

We now construct an embedding of N into \mathcal{R}. Apparently such an embedding ought to exist for all II_1 factors.

This line, written in almost a throwaway manner, eventually came to be called “Connes’ embedding problem”: does every separable II_1 factor embed into an ultrapower of the hyperfinite II_1 factor? It seems that Connes surmised that it does (and thus this is also called “Connes’ embedding conjecture”). Since 1976, this problem has grown into a central question of operator algebras, with numerous equivalent formulations and consequences across mathematics.

In 2010, two papers (again appearing on the arXiv on the same day!) showed that the reach of Connes’ embedding conjecture extends back to the foundations of quantum mechanics. If Connes’ embedding problem has a positive answer (i.e. an embedding exists), then Tsirelson’s problem (i.e. whether commuting operator correlations can be approximated by tensor product correlations) also has a positive answer! Later it was shown by Ozawa that Connes’ embedding problem is in fact equivalent to Tsirelson’s problem.

Remember that our approach to compute the value of nonlocal games hinged on obtaining a positive answer to Tsirelson’s problem. The sequence of papers [NPA, DLTW, Fritz, JNPPSW] together show that resolving — one way or another — whether this search-from-below, search-from-above algorithm works would essentially settle Connes’ embedding conjecture. What started as a funny question at the periphery of computer science and quantum information theory has morphed into an attack on one of the central problems in operator algebras.

MIP* = RE

We’ve now ended back where we started: the complexity of nonlocal games. Let’s take a step back and try to make sense of the elephant.

Even to a complexity theorist, “MIP* = RE” may appear esoteric. The complexity classes MIP* and RE refer to a bewildering grab bag of concepts: there’s Alice, Bob, Turing machines, verifiers, interactive proofs, quantum entanglement. What is the meaning of the equality of these two classes?

First, it says that the Halting problem has an interactive proof involving quantum entangled provers. In the Halting problem, you want to decide whether a Turing machine M, if you started running it, would eventually terminate with a well-defined answer, or if it would get stuck in an infinite loop. Alan Turing showed that this problem is undecidable: there is no algorithm that can solve this problem in general. Loosely speaking, the best thing you can do is to just flick on the power switch to M, and wait to see if it eventually stops. If M gets stuck in an infinite loop — well, you’re going to be waiting forever.

MIP* = RE shows that, with the help of all-powerful Alice and Bob, a time-limited verifier can run an interactive proof to “shortcut” the waiting. Given the Turing machine M’s description (its “source code”), the verifier can efficiently compute a description of a nonlocal game G_M whose behavior reflects that of M. If M does eventually halt (which could happen after a million years), then there is a strategy for Alice and Bob that causes the verifier to accept with probability 1. In other words, \omega^* (G_M) = 1. If M gets stuck in an infinite loop, then no matter what strategy Alice and Bob use, the verifier always rejects with high probability, so \omega^* (G_M) is close to 0.

By playing this nonlocal game, the verifier can obtain statistical evidence that M is a Turing machine that eventually terminates. If the verifier plays G_M and the provers win, then the verifier should believe that it is likely that M halts. If they lose, then the verifier concludes there isn’t enough evidence that M halts.[8] The verifier never actually runs M in this game; she has offloaded the task to Alice and Bob, who we can assume are computational gods capable of performing million-year-long computations instantly. For them, the challenge is instead to convince the verifier that if she were to wait millions of years, she would witness the termination of M. Incredibly, the amount of work put in by the verifier in the interactive proof is independent of the time it takes for M to halt!

The fact that the Halting problem has an interactive proof seems borderline absurd: if the Halting problem is unsolvable, why should we expect it to be verifiable? Although complexity theory has taught us that there can be a large gap between the complexity of verification versus search, it has always been a difference of efficiency: if solutions to a problem can be efficiently verified, then solutions can also be found (albeit at drastically higher computational cost). MIP* = RE shows that, with quantum entanglement, there can be a chasm of computability between verifying solutions and finding them.

Now let’s turn to the non-complexity consequences of MIP* = RE. The fact that we can encode the Halting problem into nonlocal games also immediately tells us that there is no algorithm whatsoever to approximate the quantum value. Suppose there was an algorithm that could approximate \omega^* (G). Then, using the transformation from Turing machines to nonlocal games mentioned above, we could use this algorithm to solve the Halting problem, which is impossible.
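
Schematically (a sketch with hypothetical function names, not code from the paper), the argument runs as follows: feed the Turing machine’s description through the efficient machine-to-game transformation, ask the supposed approximation algorithm for the game’s value to within, say, ±1/4, and read off whether the machine halts — contradicting Turing’s theorem.

```python
def halts(machine_description, game_from_machine, approximate_value):
    """Sketch: an approximation algorithm for omega* would decide the Halting problem.

    game_from_machine: the efficient reduction from MIP* = RE, producing a game G_M
        with omega*(G_M) = 1 if M halts and omega*(G_M) close to 0 otherwise.
    approximate_value: a supposed algorithm returning omega*(G) to within +/- 1/4.
    """
    G_M = game_from_machine(machine_description)
    # Estimate is at least 3/4 if M halts, at most roughly 1/2 if it runs forever.
    return approximate_value(G_M) > 0.5
```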

Now the dominoes start to fall. This means that, in particular, the proposed “search-from-below”/”search-from-above” algorithm cannot succeed in approximating \omega^* (G). There must be a game G, then, for which the quantum value is different from the commuting operator value. But this implies Tsirelson’s problem has a negative answer, and therefore Connes’ embedding conjecture is false.

We’ve only sketched the barest of outlines of this elephant, and yet it is quite challenging to hold it in the mind’s eye all at once.[9] This story is intertwined with some of the most fundamental developments in the past century: modern quantum mechanics, operator algebras, and computability theory were birthed in the 1930s. Einstein, Podolsky and Rosen wrote their landmark paper questioning the nature of quantum entanglement in 1935, and John Bell discovered his famous test and inequality in 1964. Connes formulated his conjecture in the ’70s, Tsirelson made his contributions to the foundations of quantum mechanics in the ’80s, and about the same time computer scientists were inventing the theory of interactive proofs and probabilistically checkable proofs (PCPs).

We haven’t said anything about the proof of MIP* = RE yet (this may be the subject of future blog posts), but it is undeniably a product of complexity theory. The language of interactive proofs and Turing machines is not just convenient but necessary: at its heart MIP* = RE is the classical PCP Theorem, with the help of quantum entanglement, recursed to infinity.

What is going on in this proof? What parts of it are fundamental, and which parts are unnecessary? What is the core of it that relates to Connes’ embedding conjecture? Are there other consequences of this uncomputability result? These are questions to be explored in the coming days and months, and the answers we find will be fascinating.

Acknowledgments. Thanks to William Slofstra and Thomas Vidick for helpful feedback on this post.


  1. This is why quantum correlations are called “nonlocal”, and why we call the CHSH game a “nonlocal game”: it is a test for nonlocal behavior. 
  2. A reasonable hope would be that, for every nonlocal game G, there is a generic upper bound on the number of qubits needed to approximate the optimal quantum strategy (e.g., a game G with Q possible questions and A possible answers would require at most, say, 2^{O(Q \cdot A)} qubits to play optimally). 
  3. In those papers, they called it the field theoretic value. 
  4. The space \mathcal{H} can be broken down into the tensor product \mathcal{H}_A \otimes \mathcal{H}_B, and Alice’s measurements only act on the \mathcal{H}_A space and Bob’s measurements only act on the \mathcal{H}_B space. In this case, Alice’s measurements clearly commute with Bob’s. 
  5. In a breakthrough work in 2017, Slofstra showed that the tensor product framework is not exactly the same as the commuting operator framework; he showed that there is a nonlocal game G where players using commuting operator strategies can win with probability 1, but when they use a tensor-product strategy they can only win with probability strictly less than 1. However, the perfect commuting operator strategy can be approximated by tensor-product strategies arbitrarily well, so the quantum value and the commuting operator value of G are the same. 
  6. The commuting operator model is motivated by attempts to develop a rigorous mathematical framework for quantum field theory from first principles (see, for example, algebraic quantum field theory (AQFT)). In the “vanilla” version of AQFT, tensor product decompositions between causally independent systems do not exist a priori, but mathematical physicists often consider AQFTs augmented with an additional “split property”, which does imply tensor product decompositions. Thus in such AQFTs, Tsirelson’s problem has an affirmative answer. 
  7. Type II_1 is pronounced “type two one”. 
  8. This is not the same as evidence that M loops forever! 
  9. At least, speaking for myself. 

Sense, sensibility, and superconductors

Jonathan Monroe disagreed with his PhD supervisor—with respect. They needed to measure a superconducting qubit, a tiny circuit in which current can flow forever. The qubit emits light, which carries information about the qubit’s state. Jonathan and his supervisor, Kater Murch, intensify the light using an amplifier. They’d fabricated many amplifiers, but none had worked. Jonathan suggested changing their strategy—with a politeness to which Emily Post couldn’t have objected. Kater suggested repeating the protocol they’d performed many times.

“That’s the definition of insanity,” Kater admitted, “but I think experiment needs to involve some of that.”

I watched the exchange via Skype, with more interest than I’d have watched the Oscars with. Someday, I hope, I’ll be able to weigh in on such a debate, despite working as a theorist. Someday, I’ll have partnered with enough experimentalists to develop insight.

I’m partnering with Jonathan and Kater on an experiment that coauthors and I proposed in a paper blogged about here. The experiment centers on an uncertainty relation, an inequality of the sort immortalized by Werner Heisenberg in 1927. Uncertainty relations imply that, if you measure a quantum particle’s position, the particle’s momentum ceases to have a well-defined value. If you measure the momentum, the particle ceases to have a well-defined position. Our uncertainty relation involves weak measurements. Weakly measuring a particle’s position doesn’t disturb the momentum much and vice versa. We can interpret the uncertainty in information-processing terms, because we cast the inequality in terms of entropies. Entropies, described here, are functions that quantify how efficiently we can process information, such as by compressing data. Jonathan and Kater are checking our inequality, and exploring its implications, with a superconducting qubit.


I had too little experience to side with Jonathan or with Kater. So I watched, and I contemplated how their opinions would sound if expressed about theory. Do I try one strategy again and again, hoping to change my results without changing my approach? 

At the Perimeter Institute for Theoretical Physics, Masters students had to swallow half-a-year of course material in weeks. I questioned whether I’d ever understand some of the material. But some of that material resurfaced during my PhD. Again, I attended lectures about Einstein’s theory of general relativity. Again, I worked problems about observers in free-fall. Again, I calculated covariant derivatives. The material sank in. I decided never to question, again, whether I could understand a concept. I might not understand a concept today, or tomorrow, or next week. But if I dedicate enough time and effort, I chose to believe, I’ll learn.

My decision rested on experience and on classes, taught by educational psychologists, that I’d taken in college. I’d studied how brains change during learning and how breaks enhance the changes. Sense, I thought, underlay my decision—though expecting outcomes to change, while strategies remain static, sounds insane.


Does sense underlie Kater’s suggestion, likened to insanity, to keep fabricating amplifiers as before? He’s expressed cynicism many times during our collaboration: Experiment needs to involve some insanity. The experiment probably won’t work for a long time. Plenty more things will likely break. 

Jonathan and I agree with him. Experiments have a reputation for breaking, and Kater has a reputation for knowing experiments. Yet Jonathan—with professionalism and politeness—remains optimistic that other methods will prevail, that we’ll meet our goals early. I hope that Jonathan remains optimistic, and I fancy that Kater hopes, too. He prophesies gloom with a quarter of a smile, and his record speaks against him: A few months ago, I met a theorist who’d collaborated with Kater years before. The theorist marveled at the speed with which Kater had operated. A theorist would propose an experiment, and boom—the proposal would work.


Perhaps luck smiled upon the implementation. But luck dovetails with the sense that underlies Kater’s opinion: Experiments involve factors that you can’t control. Implement a protocol once, and it might fail because the temperature has risen too high. Implement the protocol again, and it might fail because a truck drove by your building, vibrating the tabletop. Implement the protocol again, and it might fail because you bumped into a knob. Implement the protocol a fourth time, and it might succeed. If you repeat a protocol many times, your environment might change, changing your results.

Sense also underlies Jonathan’s objections to Kater’s opinions. We boost our chances of succeeding if we keep trying. We derive the energy to keep trying from creativity and optimism. So rebelling against our PhD supervisors’ sense is sensible. I wondered, watching the Skype conversation, whether Kater the student had objected to prophecies of doom as Jonathan did. Kater exudes the soberness of a tenured professor but the irreverence of a Californian who wears his hair slightly long and who tattooed his wedding band on. Science thrives on the soberness and the irreverence.

Green cover

Who won Jonathan and Kater’s argument? Both, I think. Last week, they reported having fabricated amplifiers that work. The lab followed a protocol similar to their old one, but with more conscientiousness. 

I’m looking forward to watching who wins the debate about how long the rest of the experiment takes. Either way, check out Jonathan’s talk about our experiment if you attend the American Physical Society’s March Meeting. Jonathan will speak on Thursday, March 5, at 12:03, in room 106. Also, keep an eye out for our paper—which will debut once Jonathan coaxes the amplifier into synching with his qubit.

Interaction + Entanglement = Efficient Proofs of Halting

A couple of weeks ago, my co-authors Zhengfeng Ji (UTS Sydney), Henry Yuen (University of Toronto), Anand Natarajan, and John Wright (the latter two at Caltech’s IQIM, with John soon moving to UT Austin) and I posted a manuscript on the arXiv preprint server entitled

MIP*=RE

The magic of the single-letter formula quickly took effect, and our posting received some attention on the blogosphere (see links below). Within computer science, complexity theory is at an advantage in its ability to capture powerful statements in a few letters: who has not heard of P, NP, and, for readers of this blog, BQP and QMA? (In contrast, I am under no illusion that my vague attempt at a more descriptive title has, by the time you reach this line, all but vanished from the reader’s memory.)

Even accounting for this popularity, however, it is a safe bet that fewer of our readers have heard of MIP* or RE. Yet we are promised that the above-stated equality has great consequences for physics (“Tsirelson’s problem” in the study of nonlocality) and mathematics (“Connes’ embedding problem” in the theory of von Neumann algebras). How so — how can complexity-theoretic alphabet soup have any consequence for, on the one hand, physical reality, and on the other, abstract mathematics?

The goal of this post and the next one is to help the interested reader grasp the significance of interactive proofs (which hide behind the symbols MIP*) and undecidability (which lies behind RE) for quantum mechanics.
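As a quick, standard reminder (textbook material, not anything specific to our paper): a language L \subseteq \{0,1\}^* belongs to \text{RE}, the recursively enumerable languages, if there is a Turing machine that accepts exactly the strings in L; on strings outside L the machine may reject or run forever. The canonical \text{RE}-complete problem is the halting problem, deciding whether a given Turing machine halts. So the equality \text{MIP}^\star = \text{RE} says, in particular, that a polynomial-time classical verifier interacting with two entangled provers can be convinced that a given Turing machine halts, hence the title of this post.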

The bulk of the present post is an almost identical copy of a post I wrote for my personal blog. To avoid accusations of self-plagiarism, I will supplement it with a little picture and a story; see below. The post gives a very personal take on the research that led to the aforementioned result. In the next post, my co-author Henry Yuen has offered to give a more scientific introduction to the result and its significance.

Before proceeding, it is important to make clear that the research described in this post and the next has not been refereed or thoroughly vetted by the community. That process will take place over the coming months, and we should wait until it is completed before placing too much weight on the results. As an author, I am proud of my work; yet I am aware that due process must run its course before the claims can be considered established. As such, these posts represent only my opinion (and Henry’s) and not necessarily that of the wider scientific community.

For more popular introductions to our result, see the blog posts of Scott Aaronson, Dick Lipton, and Gil Kalai and reporting by Davide Castelvecchi for Nature and Emily Conover for Science.

Now for the personal post…and the promised picture. Isn’t it beautiful? The design is courtesy of Tony Metger and Alexandru Gheorghiu, the first a visiting student and the second a postdoctoral scholar at Caltech’s IQIM. While Tony and Andru came up with the idea, the execution is courtesy of the bakery employee, who graciously implemented the custom design (apparently writing equations on top of cakes is not common enough to be part of the standard offerings, so they had to go for the custom option). Although it is unclear whether the executioner grasped the full depth of the signs they were copying, note how perfect the execution is: not a single letter is out of place! Thanks to Tony, Andru, and the anonymous chef for the tasty souvenir.

Now for the story. In an earlier post on my personal research blog, I had reported on the beautiful recent result by Natarajan and Wright showing the astounding power of multi-prover interactive proofs with quantum provers sharing entanglement: in letters, \text{NEEXP} \subseteq \text{MIP}^\star. In the remainder of this post I will describe our follow-up work with Ji, Natarajan, Wright, and Yuen, telling the story from a personal point of view, with all the caveats that this implies: the “hard science” will be limited (but there could be a hint as to how “science”, to use a big word, “progresses”, to use an ill-defined one; see also the upcoming post by Henry Yuen for more), the story is far too long, and it might be mostly of interest to me only. It’s a one-sided story, but that has to be. (In particular below I may at times attribute credit in the form “X had this idea”. This is my recollection only, and it is likely to be inaccurate. Certainly I am ignoring a lot of important threads.) I wrote this because I enjoyed recollecting some of the best moments in the story just as much as some of the hardest; it is fun to look back and find meanings in ideas that initially appeared disconnected. Think of it as an example of how different lines of work can come together in unexpected ways; a case for open-ended research. It’s also an antidote against despair that I am preparing for myself: whenever I feel I’ve been stuck on a project for far too long, I’ll come back to this post and ask myself if it’s been 14 years yet — if not, then press on.

It likely comes as a surprise to me only that I am no longer fresh out of the cradle. My academic life started in earnest some 14 years ago, when in the Spring of 2006 I completed my Masters thesis in Computer Science under the supervision of Julia Kempe, at Orsay in France. I had met Julia the previous term: her class on quantum computing was, by far, the best-taught and most exciting course in the Masters program I was attending, and she had gotten me instantly hooked. Julia agreed to supervise my thesis, and suggested that I look into some interesting recent result by Stephanie Wehner that linked the study of entanglement and nonlocality in quantum mechanics to complexity-theoretic questions about interactive proof systems (specifically, this was Stephanie’s paper showing that \text{XOR-MIP}^\star \subseteq \text{QIP}(2)).

At the time the topic was very new. It had been initiated the previous year with a beautiful paper by Cleve et al. (that I have recommended to many a student since!) It was a perfect fit for me: the mathematical aspects of complexity theory and quantum computing connected to my undergraduate background, while the relative concreteness of quantum mechanics (it is a physical theory after all) spoke to my desire for real-world connection (not “impact” or even “application” — just “connection”). Once I got myself up to speed in the area (which consisted of three papers: the two I already mentioned, together with a paper by Kobayashi and Matsumoto where they studied interactive proofs with quantum messages), Julia suggested looking into the “entangled-prover” class \text{MIP}^\star introduced in the aforementioned paper by Cleve et al. Nothing was known about this class! Nothing besides the trivial inclusion of single-prover interactive proofs, IP, and the containment in…ALL, the trivial class that contains all languages.

Yet the characterization MIP=NEXP of its classical counterpart by Babai et al. in the 1990s had led to one of the most productive lines of work in complexity of the past few decades, through the PCP theorem and its use from hardness of approximation to efficient cryptographic schemes. Surely, studying \text{MIP}^\star had to be a productive direction? In spite of its well-established connection to classical complexity theory, via the formalism of interactive proofs, this was a real gamble. The study of entanglement from the complexity-theoretic perspective was entirely new, and bound to be fraught with difficulty; very few results were available and the existing lines of work, from the foundations of non-locality to more recent endeavors in device-independent cryptography, provided little other starting point than strong evidence that even the simplest examples came with many unanswered questions. But my mentor was fearless, and far from a novice when it came to breaking ground in new areas, having done pioneering work in areas ranging from quantum random walks to Hamiltonian complexity through adiabatic computation. Surely this would lead to something?

It certainly did. More sleepless nights than papers, clearly, but then the opposite would only indicate dullness. Julia’s question led to far more unexpected consequences than I, or I believe she, could have imagined at the time. I am writing this post to celebrate, in a personal way, the latest step in 15 years of research by dozens of researchers: today my co-authors and I uploaded to the quant-ph arXiv what we consider a complete characterization of the power of entangled-prover interactive proof systems by proving the equality \text{MIP}^\star = \text{RE}, the class of all recursively enumerable languages (a complete problem for RE is the halting problem). Without going too much into the result itself (if you’re interested, look for an upcoming post here that goes into the proof a bit more), and since this is a more personal post, I will continue on with some personal thoughts about the path that got us there.

When Julia & I started working on the question, our main source of inspiration was the work by Cleve et al. showing that the non-local correlations of entanglement had interesting consequences when seen through the lens of interactive proof systems in complexity theory. Since the EPR paper, a lot of work in understanding entanglement had already been accomplished in the Physics community, most notably by Mermin, Peres, Bell, and more recently the works in device-independent quantum cryptography by Acin, Pironio, Scarani and many others, stimulated by Ekert’s proposal for quantum key distribution and Mayers and Yao’s idea for “device-independent cryptography”. By then we certainly knew that “spooky action-at-a-distance” did not entail any faster-than-light communication, and indeed was not really “action-at-a-distance” in the first place but merely “correlation-at-a-distance”. What Cleve et al. recognized is that these “spooky correlations-at-a-distance” were sufficiently special so as to not only give numerically different values in “Bell inequalities”, the tool invented by Bell to evidence non-locality in quantum mechanics, but also have some potentially profound consequences in complexity theory.

In particular, examples such as the “Magic Square game” demonstrated that enough correlation could be gained from entanglement so as to defeat basic proof systems whose soundness relied only on the absence of communication between the provers, an assumption that until then had been wrongly equated with the assumption that any computation performed by the provers could be modeled entirely locally. I think that the fallacy of this implicit assumption came as a surprise to complexity theorists, who may still not have entirely internalized it. Yet the perfect quantum strategy for the Magic Square game provides a very concrete “counter-example” to the soundness of the “clause-vs-variable” game for 3SAT. Indeed this game, a reformulation by Aravind and Cleve-Mermin of a Bell Inequality discovered by Mermin and Peres in 1990, can be easily re-framed as a 3SAT system of equations that is not satisfiable, and yet is such that the associated two-player clause-vs-variable game has a perfect quantum strategy. It is this observation, made in the paper by Cleve et al., that gave the first strong hint that the use of entanglement in interactive proof systems could make many classical results in the area go awry.
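To make this concrete, here is a small illustrative sketch in Python (mine, not anything from the Cleve et al. paper) of the constraint system behind the Magic Square game, in one common parity formulation: nine binary variables arranged in a 3x3 grid, with each row constrained to even parity and each column to odd parity. A brute-force check over all 512 assignments confirms that no classical assignment satisfies all six constraints, yet the associated two-prover game has a perfect quantum strategy.

from itertools import product

def satisfies_all(bits):
    # Rows must have even parity, columns odd parity (one common
    # formulation of the Mermin-Peres Magic Square constraints).
    grid = [bits[0:3], bits[3:6], bits[6:9]]
    rows_ok = all(sum(row) % 2 == 0 for row in grid)
    cols_ok = all(sum(grid[i][j] for i in range(3)) % 2 == 1 for j in range(3))
    return rows_ok and cols_ok

solutions = [b for b in product((0, 1), repeat=9) if satisfies_all(b)]
print(len(solutions))  # 0: the constraint system is classically unsatisfiable

The parity argument behind the output is simple: summing all row constraints forces the total parity of the nine bits to be even, while summing all column constraints forces it to be odd.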

By importing the study of non-locality into complexity theory, Cleve et al. immediately brought it into the realm of asymptotic analysis. Complexity theorists don’t study fixed objects; they study families of objects that tend to have a uniform underlying structure and whose interesting properties manifest themselves “in the limit”. As a result of this new perspective, focus shifted from the study of single games or correlations to infinite families thereof. Some of the early successes of this translation include the “unbounded violations” that arose from translating asymptotic separations in communication complexity to the language of Bell inequalities and correlations (e.g. this paper). These early successes attracted the attention of some physicists working in foundations as well as some mathematical physicists, leading to a productive exploration that combined tools from quantum information, functional analysis and complexity theory.

The initial observations made by Cleve et al. had pointed to \text{MIP}^\star as a possibly interesting complexity class to study. Rather amazingly, nothing was known about it! They had shown that under strong restrictions on the verifier’s predicate (it should be an XOR of two answer bits), a collapse took place: by the work of Hastad, XOR-MIP equals NEXP, but \text{XOR-MIP}^\star is included in EXP. This seemed very fortuitous (the inclusion is proved via a connection with semidefinite programming that seems tied to the structure of XOR-MIP protocols): could entanglement induce a collapse of the entire, unrestricted class? We thought (at this point mostly Julia thought, because I had no clue) that this ought not to be the case, and so we set ourselves to show that the equality \text{MIP}^\star=\text{NEXP}, which would directly parallel Babai et al.’s characterization MIP=NEXP, holds. We tried to show this by introducing techniques to “immunize” games against entanglement: modify an interactive proof system so that its structure makes it “resistant” to the kind of “nonlocal powers” that can be used to defeat the clause-vs-variable game (witness the Magic Square). This was partially successful, and led to one of the papers I am most proud of — I am proud of it because I think it introduced elementary techniques (such as the use of the Cauchy-Schwarz inequality — inside joke — more seriously, basic things such as “prover-switching”, “commutation tests”, etc.) that are now routine manipulations in the area. The paper was a hard sell! It’s good to remember the first rejections we received. They were not unjustified: the main point of criticism was that we were only able to establish a hardness result for exponentially small completeness-soundness gap. A result for such a small gap in the classical setting follows directly from a very elementary analysis based on the Cook-Levin theorem. So then why did we have to write so many pages (and so many applications of Cauchy-Schwarz!) to arrive at basically the same result (with a ^\star)?

Eventually we got lucky and the paper was accepted to a conference. But the real problem, of establishing any non-trivial lower bound on the class \text{MIP}^\star with constant (or, in the absence of any parallel repetition theorem, inverse-polynomial) completeness-soundness gap, remained. By that time I had transitioned from a Masters student in France to a graduate student in Berkeley, and the problem (pre-)occupied me during some of the most difficult years of my Ph.D. I fully remember spending my first year entirely thinking about this (oh and sure, that systems class I had to pass to satisfy the Berkeley requirements), and then my second year — yet getting nowhere. (I checked the arXiv to make sure I’m not making this up: two full years, no posts.) I am forever grateful to my fellow student Anindya De for having taken me out of the cycle of torture by knocking on my door with one of the most interesting questions I have studied, which led me into quantum cryptography and quickly resulted in an enjoyable paper. It was good to feel productive again! (Though the paper had fun reactions as well: after putting it on the arXiv we quickly heard from experts in the area that we had solved an irrelevant problem, and that we had better learn about information theory — which we did, eventually leading to another paper, etc.) The project had distracted me and I set interactive proofs aside; clearly, I was stuck.

About a year later I visited IQC in Waterloo. I don’t remember in what context the visit took place. What I do remember is a meeting in the office of Tsuyoshi Ito, at the time a postdoctoral scholar at IQC. Tsuyoshi asked me to explain our result with Julia. He then asked a very pointed question: the bedrock for the classical analysis of interactive proof systems is the “linearity test” of Blum-Luby-Rubinfeld (BLR). Is there any sense in which we could devise a quantum version of that test?

What a question! This was great. At first it seemed fruitless: in what sense could one argue that quantum provers apply a “linear function”? Sure, quantum mechanics is linear, but that is beside the point. The linearity is a property of the prover’s answers as a function of their question. So what to make of the quantum state, the inherent randomness, etc.?
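To fix ideas, here is what the classical test looks like; this is my own illustrative sketch in Python, not anything from our papers. The BLR test queries a fixed Boolean function f at two random points and at their XOR and checks the linearity relation; functions that pass with high probability are close to linear. The difficulty Tsuyoshi’s question raised is that an entangled prover does not come with any such fixed f to query.

import random

def blr_linearity_test(f, n, trials=1000):
    # Classical Blum-Luby-Rubinfeld test: estimate how often
    # f(x) XOR f(y) == f(x XOR y) for uniformly random n-bit inputs.
    passes = 0
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) == f(x ^ y):
            passes += 1
    return passes / trials

# A parity function (linear over GF(2)) passes every trial;
# an OR of two bits (non-linear) fails a constant fraction of trials.
linear = lambda x: bin(x & 0b1011).count("1") % 2
nonlinear = lambda x: (x | (x >> 1)) & 1
print(blr_linearity_test(linear, n=8))     # 1.0
print(blr_linearity_test(nonlinear, n=8))  # noticeably below 1.0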

It took us a few months to figure it out. Once we got there however, the answer was relatively simple — the prover should be making a question-independent measurement that returns a linear function that it applies to its question in order to obtain the answer returned to the verifier — and it opened the path to our subsequent paper showing that the inclusion of NEXP in \text{MIP}^\star indeed holds. Tsuyoshi’s question about linearity testing had allowed us to make the connection with PCP techniques; from there to MIP=NEXP there was only one step to make, which is to analyze multi-linearity testing. That step was suggested by my Ph.D. advisor, Umesh Vazirani, who was well aware of the many pathways towards the classical PCP theorem, since the theorem had been obtained in great part by his former student Sanjeev Arora. It took a lot of technical work, yet conceptually a single question from my co-author had sufficed to take me out of a 3-year slumber.

This was in 2012, and I thought we were done. For some reason the converse inclusion, of \text{MIP}^\star in NEXP, seemed to resist our efforts, but surely it couldn’t resist much longer. Navascues et al. had introduced a hierarchy of semidefinite programs that seemed to give the right answer (technically they could only show convergence to a relaxation, the commuting value, but that seemed like a technicality; in particular, the values coincide when restricted to finite-dimensional strategies, which is all we computer scientists cared about). There were no convergence bounds on the hierarchy, yet at the same time commutative SDP hierarchies were being used to obtain very strong results in combinatorial optimization, and it seemed like it would only be a matter of time before someone came up with an analysis of the quantum case. (I had been trying to solve a related “dimension reduction problem” with Oded Regev for years, and we were making no progress; yet it seemed someone ought to!)

In Spring 2014, during an open questions session at a workshop at the Simons Institute in Berkeley, Dorit Aharonov suggested that I ask the question of the possible inclusion of QMA-EXP, the exponential-sized-proofs analogue of QMA, in \text{MIP}^\star. Being a stronger result than the inclusion of NEXP (under assumptions), wouldn’t it be a more natural “fully quantum” analogue of MIP=NEXP? Dorit’s suggestion was motivated by research on the “quantum PCP theorem”, which aims to establish similar hardness results in the realm of the local Hamiltonian problem; see e.g. this post for the connection. I had no idea how to approach the question — I also didn’t really believe the answer could be positive — but what can you do, if Dorit asks you something… So I reluctantly went to the board and asked the question. Joe Fitzsimons was in the audience, and he immediately picked it up! Joe had the fantastic idea of using quantum error-correction, or more specifically secret-sharing, to distribute a quantum proof among the provers. His enthusiasm overcame my skepticism, and we eventually showed the desired inclusion. Maybe \text{MIP}^\star was bigger than \text{NEXP} after all.

Our result, however, had a similar deficiency to the one with Julia, in that the completeness-soundness gap was exponentially small. Obtaining a result with a constant gap took a few more years of work and the fantastic energy and insights of a Ph.D. student at MIT, Anand Natarajan. Anand is the first person I know of to have had the courage to dive into the most technical aspects of the analysis of the aforementioned results, while also bringing in the insights of a “true quantum information theorist” that were supported by Anand’s background in Physics and upbringing in the group of Aram Harrow at MIT. (In contrast I think of myself more as a “raw” mathematician; I don’t really understand quantum states other than as positive-semidefinite matrices…not that I understand math either of course; I suppose I’m some kind of a half-baked mish-mash.) Anand had many ideas but one of the most beautiful ones led to what he poetically called the “Pauli braiding test”, a “truly quantum” analogue of the BLR linearity test that amounts to doing two linearity tests in conjugate bases and piecing the results together into a robust test for n-qubit entanglement (I wrote about our work on this here).

At approximately the same time, Zhengfeng Ji had another wonderful idea that was in some sense orthogonal to our work. (My interpretation of) Zhengfeng’s idea is that one can see an interactive proof system as a computation (verifier-prover-verifier) and use Kitaev’s circuit-to-Hamiltonian construction to transform the entire computation into a “quantum CSP” (in the same sense that the local Hamiltonian problem is a quantum analogue of classical constraint satisfaction problems (CSP)) that could then itself be verified by a quantum multi-prover interactive proof system…with exponential gains in efficiency! Zhengfeng’s result implied an exponential improvement in complexity compared to the result by Julia and myself, showing inclusion of NEEXP, instead of NEXP, in \text{MIP}^\star. However, Zhengfeng’s technique suffered from the same exponentially small completeness-soundness gap as we had, so that the best lower bound on \text{MIP}^\star per se remained NEXP.

Both works led to follow-ups. With Natarajan we promoted the Pauli braiding test into a “quantum low-degree test” that allowed us to show the inclusion of QMA-EXP into \text{MIP}^\star, with constant gap, thereby finally answering the question posed by Aharonov 4 years after it was asked. (I should also say that by then all results on \text{MIP}^\star started relying on a sequence of parallel repetition results shown by Bavarian, Yuen, and others; I am skipping this part.) In parallel, with Ji, Fitzsimons, and Yuen we showed that Ji’s compression technique could be “iterated” an arbitrary number of times. In fact, by going back to “first principles” and representing verifiers uniformly as Turing machines we realized that the compression technique could be used iteratively to (up to small caveats) give a new proof of the fact (first shown by Slofstra using an embedding theorem for finitely presented groups) that the zero-gap version of \text{MIP}^\star contains the halting problem. In particular, the entangled value is uncomputable! This was not the first time that uncomputability crops up in a natural problem in quantum computing (e.g. the spectral gap paper), yet it still surprises when it shows up. Uncomputable! How can anything be uncomputable!

As we were wrapping up our paper, Henry Yuen realized that our “iterated compression of interactive proof systems” was likely optimal, in the following sense. Even a mild improvement of the technique, in the form of a slower closing of the completeness-soundness gap through compression, would yield a much stronger result: undecidability of the constant-gap class \text{MIP}^\star. It was already known, by work of Navascues et al., Fritz, and others, that such a result would have consequences that, if not surprising, certainly seemed likely to take us out of our depth. In particular, undecidability of any language in \text{MIP}^\star would imply a negative resolution to a series of equivalent conjectures in functional analysis, from Tsirelson’s problem to Connes’ Embedding Conjecture through Kirchberg’s QWEP conjecture. While we liked our result, I don’t think that we believed it could resolve any conjecture(s) in functional analysis.

So we moved on. At least I moved on; I did some cryptography for a change. But Anand Natarajan and his co-author John Wright did not stop there. They had the last major insight in this story, which underlies their recent STOC best paper described in the previous post. Briefly, they were able to combine the two lines of work, by Natarajan & myself on low-degree testing and by Ji et al. on compression, to obtain a compression that is specially tailored to the existing \text{MIP}^\star protocol for NEXP and compresses that protocol without reducing its completeness-soundness gap. This then let them show Ji’s result that \text{MIP}^\star contains NEEXP, but this time with constant gap! The result received well-deserved attention. In particular, it is the first in this line of work to not suffer from any caveats (such as a closing gap, or randomized reductions, or some kind of “unfair” tweak on the model that one could attribute the gain in power to), and it implies an unconditional separation between MIP and \text{MIP}^\star.

As they were putting the last touches on their result, something happened: a path towards a much bigger result opened up. What Natarajan & Wright had achieved was a one-step gapless compression. In our iterated compression paper we had observed that iterated gapless compression would lead to \text{MIP}^\star=\text{RE}, implying negative answers to the aforementioned conjectures. So then?

I suppose it took some more work, but in some way all the ideas had been laid out in the previous 15 years of work in the complexity of quantum interactive proof systems; we just had to put it together. And so a decade after the characterization QIP = PSPACE of single-prover quantum interactive proof systems, we have arrived at a characterization of quantum multiprover interactive proof systems, \text{MIP}^\star = \text{RE}. With one author in common between the two papers: congratulations Zhengfeng!

Even though we just posted a paper, in a sense there is much more left to do. I am hopeful that our complexity-theoretic result will attract enough interest from the mathematical community, and especially from operator algebraists, for whom Connes’ embedding problem (CEP) is a central question, that some of them will be willing to devote time to understanding the result. I also recognize that much effort is needed on our own side to make it accessible in the first place! I don’t doubt that eventually complexity theory will not be needed to obtain the purely mathematical consequences; yet I am hopeful that some of the ideas may eventually find their way into the construction of interesting mathematical objects (such as, who knows, a non-hyperlinear group).

That was a good Masters project…thanks Julia!