Those students came to mind as I grew to know David Limmer. David is an assistant professor of chemistry at the University of California, Berkeley. He studies statistical mechanics far from equilibrium, using information theory. Though a theorist ardent about mathematics, he partners with experimentalists. He can pass as a physicist and keeps an eye on topics as far afield as black holes. According to his faculty page, I discovered while writing this article, he’s even three years older than I.

I met David in the final year of my PhD. I was looking ahead to postdocking, as his postdoc fellowship was fading into memory. The more we talked, the more I thought, I’d like to be like him.

I had the good fortune to collaborate with David on a paper published by *Physical Review A* this spring (as an Editors’ Suggestion!). The project has featured in *Quantum Frontiers* as the inspiration for a rewriting of “I’m a little teapot.”

We studied a molecule prevalent across nature and technologies. Such molecules feature in your eyes, solar-fuel-storage devices, and more. The molecule has two clumps of atoms. One clump may rotate relative to the other if the molecule absorbs light. The rotation switches the molecule from a “closed” configuration to an “open” configuration.

These molecular switches are small, quantum, and far from equilibrium; so modeling them is difficult. Making assumptions offers traction, but many of the assumptions disagreed with David. He wanted general, thermodynamic-style bounds on the probability that one of these molecular switches would switch. Then, he ran into me.

I traffic in mathematical models, developed in quantum information theory, called *resource theories*. We use resource theories to calculate which states can transform into which in thermodynamics, as a dime can transform into ten pennies at a bank. David and I modeled his molecule in a resource theory, then bounded the molecule’s probability of switching from “closed” to “open.” I accidentally composed a theme song for the molecule; you can sing along with this post.

That post didn’t mention what David and I discovered about quantum clocks. But what better backdrop for a mental trip to elementary school or to three years into the future?

I’ve blogged about autonomous quantum clocks (and ancient Assyria) before. Autonomous quantum clocks differ from quantum clocks of another type—the most precise clocks in the world. Scientists operate the latter clocks with lasers; autonomous quantum clocks need no operators. Autonomy benefits you if you want a machine, such as a computer or a drone, to operate independently. An autonomous clock in the machine ensures that, say, the computer applies the right logical gate at the right time.

What’s an autonomous quantum clock? First, what’s a clock? A clock has a degree of freedom (e.g., a pair of hands) that represents the time and that moves steadily. When the clock’s hands point to 12 PM, you’re preparing lunch; when the clock’s hands point to 6 PM, you’re reading *Quantum Frontiers*. An autonomous quantum clock has a degree of freedom that represents the time fairly accurately and moves fairly steadily. (The quantum uncertainty principle prevents a perfect quantum clock from existing.)

Suppose that the autonomous quantum clock constitutes one part of a machine, such as a quantum computer, that the clock guides. When the clock is in one quantum state, the rest of the machine undergoes one operation, such as one quantum logical gate. (Experts: The rest of the machine evolves under one Hamiltonian.) When the clock is in another state, the rest of the machine undergoes another operation (evolves under another Hamiltonian).

Physicists have been modeling quantum clocks using the resource theory with which David and I modeled our molecule. The math with which we represented our molecule, I realized, coincided with the math that represents an autonomous quantum clock.

Think of the molecular switch as a machine that operates (mostly) independently and that contains an autonomous quantum clock. The rotating clump of atoms constitutes the clock hand. As a hand rotates down a clock face, so do the nuclei rotate downward. The hand effectively points to 12 PM when the switch occupies its “closed” position. The hand effectively points to 6 PM when the switch occupies its “open” position.

The nuclei account for most of the molecule’s weight; electrons account for little. The electrons flit about the landscape shaped by the atomic clumps’ positions. The landscape governs the electrons’ behavior. So the electrons form the rest of the quantum machine controlled by the nuclear clock.

Experimentalists can create and manipulate these molecular switches easily. For instance, experimentalists can set the atomic clump moving—can “wind up” the clock—with ultrafast lasers. In contrast, the only other autonomous quantum clocks that I’d read about live in theory land. Can these molecules bridge theory to experiment? Reach out if you have ideas!

And check out David’s theory lab on Berkeley’s website and on Twitter. We all need older siblings to look up to.

Two weeks ago I participated in a scientific marathon, the Sciathon. The structure of this event roughly resembled a hackathon. I am sure many readers are familiar with the idea of a hackathon from personal experience. For those unfamiliar — a hackathon is an intense collaborative event, usually organized over the weekend, during which people with different backgrounds work in groups to create prototypes of functioning software or hardware. It was my first firsthand experience with a hackathon-like event!

The Sciathon was organized by the Lindau Nobel Laureate Meetings (more about the meetings with Nobel laureates, which happen annually in the lovely German town of Lindau, in another blogpost, I promise!). This year, unfortunately, the face-to-face meeting in Lindau was postponed until the summer of 2021. Instead, the Lindau Nobel Laureate Meetings alumni and this year’s would-be attendees had an opportunity to gather for the Sciathon, as well as the Online Science Days earlier this week, during which the best Sciathon projects were presented.

The participants of the Sciathon could choose to contribute new views, perspectives and solutions to three main topics: Lindau Guidelines, Communicating Climate Change and Capitalism After Corona. The first topic concerned an open, cooperative science community where data and knowledge are freely shared, the second — how scientists could show that the climate crisis is just as big a threat as the SARS-CoV-2 virus, and the last — how to remodel our current economic systems so that they are more robust to unexpected sudden crises. More detailed descriptions of each topic can be found on the official Sciathon webpage.

My group of ten eager scientists, mostly physicists, from master’s students to postdoctoral researchers, focused on the first topic. In particular, our goal was to develop a method of familiarizing high school students with the basics of quantum information and computation. We envisioned creating an online notebook, where an engaging story would be intertwined with interactive blocks of Python code utilizing the open-source quantum computing toolkit Qiskit. This hands-on approach would enable students to play with quantum systems described in the storyline by simply running the pre-programmed commands with a click of the mouse and then observe how “experiment” matches “the theory”. We decided to work with a system comprising one or two qubits and explain such fundamental concepts in quantum physics as superposition, entanglement and measurement. The last missing part was a captivating story.
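For the curious, here is a rough stand-in for the kind of interactive block we envisioned. The notebook itself used Qiskit; the sketch below uses only Python’s standard library (the function names are mine, invented for this post, not the notebook’s code) to prepare two qubits in a Bell state and sample measurements:

```python
import random

# State vector for two qubits, ordered |00>, |01>, |10>, |11>,
# with the first qubit as the left bit. Start in |00>.
state = [1.0, 0.0, 0.0, 0.0]

def hadamard_on_first(s):
    """Put the first qubit into superposition (awake + asleep)."""
    a = 2 ** -0.5
    return [a * (s[0] + s[2]), a * (s[1] + s[3]),
            a * (s[0] - s[2]), a * (s[1] - s[3])]

def cnot_first_controls_second(s):
    """Entangle: flip the second qubit whenever the first is |1>."""
    return [s[0], s[1], s[3], s[2]]

def measure(s, rng):
    """Sample one measurement outcome; the ket 'collapses'."""
    probs = [abs(amp) ** 2 for amp in s]
    r = rng.random()
    total = 0.0
    for outcome, p in zip(["00", "01", "10", "11"], probs):
        total += p
        if r < total:
            return outcome
    return "11"

# Prepare the Bell state (|00> + |11>) / sqrt(2), then measure repeatedly.
state = cnot_first_controls_second(hadamard_on_first(state))
rng = random.Random(0)
outcomes = [measure(state, rng) for _ in range(1000)]
print(sorted(set(outcomes)))
```

Only the correlated outcomes “00” and “11” can appear — both kittens awake or both asleep — which is exactly the strange correlation the story builds toward.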

The story we came up with involved two good friends from the lab, Miss Schrödinger and Miss Pauli, as well as their kittens, Alice and Bob. At first, Alice and Bob seemed to be ordinary cats; however, whenever they sipped quantum milk, they would turn into quantum cats, or as quantum physicists would say — kets. Do I have to remind the reader that a quantum cat, unlike an ordinary one, could be both awake and asleep at the same time?

Miss Schrödinger was a proud cat owner who not only loved her cat, but also would take hundreds of pictures of Alice and eagerly upload them on social media. Much to Miss Schrödinger’s surprise, none of the pictures showed Alice partly awake and partly asleep — the ket would always collapse to the cat awake or the cat asleep! Every now and then, Miss Pauli would come to visit Miss Schrödinger and bring her own cat Bob. While the good friends were chit-chatting over a cup of afternoon tea, the cats sipped a bit of quantum milk and started to play with a ball of wool, resulting in a cute mess of two kittens tangled up in wool. Every time after coming back home, Miss Pauli would take a picture of Bob and share it with Miss Schrödinger, who would obviously also take a picture of Alice. After a while, the young scientists started to notice some strange correlations between the states of their cats…

The adventures of Miss Schrödinger and her cat continue! For those interested, you can watch a short video about our project!

Overall, I can say that I had a lot of fun participating in the Sciathon. It was an intense yet extremely gratifying event. In addition to the obvious difficulty of racing against the clock, our group also had to struggle with coordinating video calls between group members scattered across three almost equidistant time zones — Eastern Australian, Central European and Central US! During the Sciathon I had a chance to interact with other science enthusiasts from different backgrounds and work on something from outside my area of expertise. I would strongly encourage anyone to participate in hackathon-like events to break the daily routine, particularly monotonous during the lockdown, and unleash one’s creative spirit. Such events can also be viewed as an opportunity to communicate science and scientific progress to the public. Lastly, I would like to thank other members of my team — collaborating with you during the Sciathon was a blast!

Marrying a quantum information scientist comes with dangers not advertised in any *Brides* magazine (I assume; I’ve never opened a copy of *Brides* magazine). Never mind the perils of gathering together Auntie So-and-so and Cousin Such-and-such, who’ve quarreled since you were six; or spending tens of thousands of dollars on one day; or assembling two handfuls of humans during a pandemic. Beware the risks of marrying someone who unconsciously types “entropy” when trying to type “entry,” twice in a row.

**1) She’ll introduce you to friends as “a classical computer scientist.”** They’d assume, otherwise, that you do quantum computer science. Of course. Wouldn’t you?

**2) The quantum punning will commence months before the wedding.** One colleague wrote, “Many congratulations! Now you know the true meaning of entanglement.” Quantum particles can share entanglement. If you measure entangled particles, your outcomes can exhibit correlations stronger than any producible by classical particles. As a card from another colleague read, “May you stay forever entangled, with no decoherence.”

I’d rather not dedicate much of a wedding article to decoherence, but suppose that two particles are maximally entangled (can generate the strongest correlations possible). Suppose that particle 2 heats up or suffers bombardment by other particles. The state of particle 2 *decoheres* as the entanglement between 1 and 2 frays. Equivalently, particle 2 entangles with its environment, and particle 2 can entangle only so much: The more entanglement 2 shares with the environment, the less entanglement 2 can share with 1. Physicists call entanglement—ba-duh-*bum*—*monogamous*.

The matron-of-honor toast featured another entanglement joke, as well as five more physics puns.^{1} (She isn’t a scientist, but she did her research.) She’ll be on Zoom till Thursday; try the virtual veal.

**3) When you ask what sort of engagement ring she’d like, she’ll mention black diamonds.** Experimentalists and engineers are building quantum computers from systems of many types, including diamond. Diamond consists of carbon atoms arranged in a lattice. Imagine expelling two neighboring carbon atoms and replacing one with a nitrogen atom. You’ll create a *nitrogen-vacancy center* whose electrons you can control with light. Such centers color the diamond black but let you process quantum information.

If I’d asked my fiancé for a quantum computer, we’d have had to wait 20 years to marry. He gave me an heirloom stone instead.

**4) When a wedding-gown shopkeeper asks which sort of train she’d prefer, she’ll inquire about Maglevs.** I dislike shopping, as the best man knows better than most people. In middle school, while our classmates spent their weekends at the mall, we stayed home and read books. But I filled out gown shops’ questionnaires.

“They want to know what kinds of material I like,” I told the best man over the phone, “and what styles, and what type of train. I had to pick from four types of train. I didn’t even know there were four types of train!”

“Steam?” guessed the best man. “Diesel?”

His suggestions appealed to me as a quantum thermodynamicist. Thermodynamics is the physics of energy, which engines process. Quantum thermodynamicists study how quantum phenomena, such as entanglement, can improve engines.

“Get the Maglev train,” the best man added. “Low emissions.”

“Ooh,” I said, “that’s superconducting.” Superconductors are quantum systems in which charge can flow forever, without dissipating. Labs at Yale, at IBM, and elsewhere are building quantum computers from superconductors. A superconductor consists of electrons that pair up with help from their positively charged surroundings—Cooper pairs. Separating Cooper-paired electrons requires an enormous amount of energy. What other type of train would better suit a wedding?

I set down my phone more at ease. Later, pandemic-era business closures constrained me to wearing a knee-length dress that I’d worn at graduations. I didn’t mind dodging the train.

**5) When you ask what style of wedding dress she’ll wear, she’ll say that she likes her clothing as she likes her equations.** Elegant in their simplicity.

**6) You’ll plan your wedding for wedding season only because the rest of the year conflicts with more seminars, conferences, and colloquia.** The quantum-information-theory conference of the year takes place in January. We wanted to visit Australia in late summer, and Germany in autumn, for conferences. A quantum-thermodynamics conference takes place early in the spring, and the academic year ends in May. Happy is the June bride; happier is the June bride who isn’t preparing a talk.

**7) An MIT chaplain will marry you.** Who else would sanctify the union of a physicist and a computer scientist?

**8) You’ll acquire more in-laws than you bargained for.** Biological parents more than suffice for most spouses. My husband has to contend with academic in-laws, as my PhD supervisor is called my “academic father.”

**9) Your wedding can double as a conference.** Had our wedding taken place in person, collaborations would have flourished during the cocktail hour. Papers would have followed; their acknowledgements sections would have nodded at the wedding; and I’d have requested copies of all manuscripts for our records—which might have included our wedding album.

**10) You’ll have trouble identifying a honeymoon destination where she won’t be tempted to give a seminar.** I thought that my then-fiancé would enjoy Vienna, but it boasts a quantum institute. So do Innsbruck and Delft. A colleague-friend works in Budapest, and I owe Berlin a professional visit. The list grew—or, rather, our options shrank. But he turned out not to mind my giving a seminar. The pandemic then cancelled our trip, so we’ll stay abroad for a week after some postpandemic European conference (hint hint).

**11) Your wedding will feature on the blog of Caltech’s Institute for Quantum Information and Matter.** Never mind *The New York Times*. Where else would you expect to find a quantum information physicist? I feel fortunate to have found someone with whom I wouldn’t rather be anywhere else.

^{1}“I know that if Nicole picked him to stand by her side, he must be a FEYNMAN and not a BOZON.”

Sound familiar?

Spring—crocuses, daffodils, and hyacinths budding; leaves unfurling; and birds warbling—burst upon Cambridge, Massachusetts last month. The city’s shutdown vied with the season’s vivaciousness. I relieved the tension by rereading *The Wind in the Willows*, which I’ve read every spring since 2017.

Project Gutenberg offers free access to Kenneth Grahame’s 1908 novel. He wrote the book for children, but never mind that. Many masterpieces of literature happen to have been written for children.

One line in the novel demanded, last year, that I memorize it. On page one, Mole is cleaning his house beneath the Earth’s surface. He’s been dusting and whitewashing for hours when the spring calls to him. Life is pulsating on the ground and in the air above him, and he can’t resist joining the party. Mole throws down his cleaning supplies and tunnels upward through the soil: “he scraped and scratched and scrabbled and scrooged, and then he scrooged again and scrabbled and scratched and scraped.”

The quotation appealed to me not only because of its alliteration and chiasmus. Mole’s journey reminded me of research.

Take a paper that I published last month with Michael Beverland of Microsoft Research and Amir Kalev of the Joint Center for Quantum Information and Computer Science (now of the Information Sciences Institute at the University of Southern California). We translated a discovery from the abstract, mathematical language of quantum-information-theoretic thermodynamics into an experimental proposal. We had to scrabble, but we kept on scrooging.

Over four years ago, other collaborators and I uncovered a thermodynamics problem, as did two other groups at the same time. Thermodynamicists often consider small systems that interact with large environments, like a magnolia flower releasing its perfume into the air. The two systems—magnolia flower and air—exchange things, such as energy and scent particles. The total amount of energy in the flower and the air remains constant, as does the total number of perfume particles. So we call the energy and the perfume-particle number *conserved* *quantities*.

We represent quantum conserved quantities with matrices $Q_1$ and $Q_2$. We nearly always assume that, in this thermodynamic problem, those matrices commute with each other: $[Q_1, Q_2] = 0$. Almost no one mentions this assumption; we make it without realizing. Eliminating this assumption invalidates a derivation of the state reached by the small system after a long time. But why assume that the matrices commute? Noncommutation typifies quantum physics and underlies quantum error correction and quantum cryptography.

What if the little system exchanges with the large system thermodynamic quantities represented by matrices that don’t commute with each other?

Colleagues and I began answering this question, four years ago. The small system, we argued, thermalizes to near a quantum state that contains noncommuting matrices. We termed that state, $e^{-\sum_a \beta_a Q_a} / Z$, *the non-Abelian thermal state*. The $Q_a$’s represent conserved quantities, and the $\beta_a$’s resemble temperatures. The real number $Z$ ensures that, if you measure any property of the state, you’ll obtain some outcome. Our arguments relied on abstract mathematics, resource theories, and more quantum information theory.

Over the past four years, noncommuting conserved quantities have propagated across quantum-information-theoretic thermodynamics.^{1} Watching the idea take root has been exhilarating, but the quantum information theory didn’t satisfy me. I wanted to see a real physical system thermalize to near the non-Abelian thermal state.

Michael and Amir joined the mission to propose an experiment. We kept nosing toward a solution, then dislodging a rock that would shower dirt on us and block our path. But we scrabbled onward.

Imagine a line of ions trapped by lasers. Each ion contains the physical manifestation of a qubit—a quantum two-level system, the basic unit of quantum information. You can think of a qubit as having a quantum analogue of angular momentum, called *spin*. The spin has three components, one per direction of space. These spin components are represented by matrices $\sigma_x$, $\sigma_y$, and $\sigma_z$ that don’t commute with each other.
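The noncommutation can be checked in a few lines. This throwaway sketch (plain Python, mine rather than the paper’s) multiplies two Pauli matrices in both orders and subtracts:

```python
# Pauli matrices as 2x2 nested lists of complex numbers.
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

def matmul(a, b):
    """Multiply two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(a, b):
    """Return [a, b] = ab - ba."""
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

# The order of multiplication matters: the commutator is nonzero.
print(commutator(sx, sy))
```

The result is $[\sigma_x, \sigma_y] = 2i\sigma_z$, not zero: no measurement basis diagonalizes all three spin components at once.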

A couple of qubits can form the small system, analogous to the magnolia flower. The rest of the qubits form the large system, analogous to the air. I constructed a Hamiltonian—a matrix that dictates how the qubits evolve—that transfers quanta of all the spin’s components between the small system and the large. (Experts: The Heisenberg Hamiltonian transfers quanta of all the spin components between two qubits while conserving $\sigma_x^{\mathrm{tot}}$, $\sigma_y^{\mathrm{tot}}$, and $\sigma_z^{\mathrm{tot}}$, the components of the total spin.)
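To illustrate the experts’ parenthetical, here is a toy numerical check, assuming the standard two-qubit Heisenberg coupling $H = \sigma_x \otimes \sigma_x + \sigma_y \otimes \sigma_y + \sigma_z \otimes \sigma_z$ (my sketch, not code from the paper): $H$ commutes with each total spin component, even though those components don’t commute with each other.

```python
# Pauli matrices and the identity, as nested lists of numbers.
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]
eye = [[1, 0], [0, 1]]

def kron(a, b):
    """Kronecker (tensor) product of two square matrices."""
    n = len(b)
    return [[a[i // n][j // n] * b[i % n][j % n]
             for j in range(len(a) * n)] for i in range(len(a) * n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def commutes(a, b):
    """True if ab - ba vanishes (up to rounding)."""
    ab, ba = matmul(a, b), matmul(b, a)
    return all(abs(ab[i][j] - ba[i][j]) < 1e-12
               for i in range(len(a)) for j in range(len(a)))

# Two-qubit Heisenberg coupling and the three total spin components.
H = add(add(kron(sx, sx), kron(sy, sy)), kron(sz, sz))
totals = [add(kron(s, eye), kron(eye, s)) for s in (sx, sy, sz)]
print([commutes(H, T) for T in totals])  # prints [True, True, True]
```

All three noncommuting quantities are conserved, which is exactly what makes the thermodynamics interesting.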

The Hamiltonian led to our first scrape: I constructed an integrable Hamiltonian, by accident. Integrable Hamiltonians can’t thermalize systems. A system thermalizes by losing information about its initial conditions, evolving to a state with an exponential form, such as $e^{-\sum_a \beta_a Q_a} / Z$. We clawed at the dirt and uncovered a solution: My Hamiltonian coupled together nearest-neighbor qubits. If the Hamiltonian coupled also next-nearest-neighbor qubits, or if the ions formed a 2D or 3D array, the Hamiltonian would be nonintegrable.

We had to scratch at every stage—while formulating the setup, preparation procedure, evolution, measurement, and prediction. But we managed; *Physical Review E* published our paper last month. We showed how a quantum system can evolve to the non-Abelian thermal state. Trapped ions, ultracold atoms, and quantum dots can realize our experimental proposal. We imported noncommuting conserved quantities in thermodynamics from quantum information theory to condensed matter and atomic, molecular, and optical physics.

As Grahame wrote, the Mole kept “working busily with his little paws and muttering to himself, ‘Up we go! Up we go!’ till at last, pop! his snout came out into the sunlight and he found himself rolling in the warm grass of a great meadow.”

^{1}See our latest paper’s introduction for references. *https://journals.aps.org/pre/abstract/10.1103/PhysRevE.101.042117*

A variation on the paragraph above began the article that I sent to *Scientific American* last year. Clara Moskowitz, an editor, asked which novel I’d quoted the paragraph from. I’d made the text up, I confessed.

Most of my publications, which wind up in physics journals, don’t read like novels. But I couldn’t resist when Clara invited me to write a feature about quantum steampunk, the confluence of quantum information and thermodynamics. *Quantum Frontiers* regulars will anticipate paragraphs two and three of the article:

Welcome to steampunk. This genre has expanded across literature, art and film over the past several decades. Its stories tend to take place near nascent factories and in grimy cities, in Industrial Age England and the Wild West—in real-life settings where technologies were burgeoning. Yet steampunk characters extend these inventions into futuristic technologies, including automata and time machines. The juxtaposition of old and new creates an atmosphere of romanticism and adventure. Little wonder that steampunk fans buy top hats and petticoats, adorn themselves in brass and glass, and flock to steampunk conventions.

These fans dream the adventure. But physicists today who work at the intersection of three fields—quantum physics, information theory and thermodynamics—live it. Just as steampunk blends science-fiction technology with Victorian style, a modern field of physics that I call “quantum steampunk” unites 21st-century technology with 19th-century scientific principles.

The *Scientific American* graphics team dazzled me. For years, I’ve been hankering to work with artists on visualizing quantum steampunk. I had an opportunity after describing an example of quantum steampunk in the article. The example consists of a quantum many-body engine that I designed with members Christopher White, Sarang Gopalakrishnan, and Gil Refael of Caltech’s Institute for Quantum Information and Matter. Our engine is a many-particle system ratcheted between two phases accessible to quantum matter, analogous to liquid and solid. The engine can be realized with, e.g., ultracold atoms or trapped ions. Lasers would trap and control the particles. Clara, the artists, and I drew the engine, traded comments, and revised the figure tens of times. In early drafts, the lasers resembled the sketches in atomic physicists’ Powerpoints. Before the final draft, the lasers transformed into brass-and-glass beauties. They evoke the scientific instruments crafted through the early 1900s, before chunky gray aesthetics dulled labs.

*Scientific American* published the feature this month; you can read it in print or, here, online. Many thanks to Clara for the invitation, for shepherding the article into print, and for her enthusiasm. To repurpose the end of the article, “You’re reading about this confluence of old and new on *Quantum Frontiers*. But you might as well be holding a novel by H. G. Wells or Jules Verne.”

*Figures courtesy of the *Scientific American *graphics team.*

Throughout 2019, telecommunication companies began deploying 5^{th} generation (5G) network infrastructure to make our wireless communication faster and more reliable, and to cope with greater capacity. This rollout of 5G technology promises to support up to 10x the number of devices operating at speeds 10x faster than what is possible with 4^{th} generation (4G) networks. If you stop and think about the new opportunities 4G networks unlocked for working, shopping, connecting, and more, it is easy to see why some people are excited about the new world 5G networks might offer.

Classical networks like 5G and fiber optic networks (the backbone of the internet) share classical information: streams of bits (zeros and ones) that encode our conversations, tweets, music, podcasts, videos and anything else we communicate through our digital devices. Every improvement in the network hardware (for example an optical switch with less loss or a faster signal router) contributes to big changes in speed and capacity. The bottom line is that with enough advances, the network evolves to the point where things that were previously impossible (like downloading a movie in the late 90s) become instantaneous.

Alongside the hype and advertising around 5G networks, we are part of the world-wide effort to develop a fundamentally different network (with a little less advertising, but similar amounts of hype). Rather than being a bigger, better version of 5G, this new network is trying to build a quantum internet: a set of technologies that will allow us to connect and share information at the quantum level. For an insight into the quantum internet origin story, read this post about the pioneering experiments that took place at Caltech in Prof. Jeff Kimble’s group.

Quantum technologies operate using the counter-intuitive phenomena of quantum mechanics like superposition and entanglement. Quantum networks need to distribute this superposition and entanglement between different locations. This is a much harder task than distributing bits in a regular network because quantum information is extremely susceptible to loss and noise. If realized, this quantum internet could enable powerful quantum computing clusters, and create networks of quantum sensors that measure infinitesimally small fluctuations in their environment.

At this point it is worth asking the question:

*Does the world really need a quantum internet?*

This is an important question because a quantum internet is unlikely to improve any of the most common uses for the classical internet (internet facts and most popular searches).

We think there are at least three reasons why a quantum network is important:

- To build better quantum computers. The quantum internet will effectively transform small, isolated quantum processors into one much larger, more powerful computer. This could be a big boost in the race to scale-up quantum computing.
- To build quantum-encrypted communication networks. The ability of quantum technology to make or break encryption is one of the earliest reasons why quantum technology was funded. A fully-fledged quantum computer should be very efficient at hacking commonly used encryption protocols, while ideal quantum encryption provides the basis for communications secured by the fundamental properties of physics.
- To push the boundaries of quantum physics and measurement sensitivity by increasing the length scale and complexity of entangled systems. The quantum internet can help turn *thought experiments* into *real experiments*.
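To make the second bullet concrete, here is a toy run of the BB84 key-distribution idea, assuming ideal, noiseless qubits and no eavesdropper (real protocols add error estimation and privacy amplification; the variable names are mine, for illustration only):

```python
import random

rng = random.Random(1)
n = 64

# Alice encodes each random bit in a randomly chosen basis (0 = Z, 1 = X).
alice_bits = [rng.randint(0, 1) for _ in range(n)]
alice_bases = [rng.randint(0, 1) for _ in range(n)]

# Bob measures each incoming qubit in his own random basis. With ideal
# qubits he recovers Alice's bit whenever the bases match; otherwise
# quantum mechanics gives him a coin flip.
bob_bases = [rng.randint(0, 1) for _ in range(n)]
bob_bits = [bit if ab == bb else rng.randint(0, 1)
            for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: Alice and Bob publicly compare bases (not bits!) and keep
# only the rounds where the bases matched.
key_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
             if ab == bb]
key_bob = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases)
           if ab == bb]

assert key_alice == key_bob  # a shared secret key
print(len(key_alice), "key bits from", n, "qubits")
```

Roughly half the rounds survive the basis comparison. An eavesdropper measuring in random bases would disturb the qubits and introduce errors, which Alice and Bob could detect by sacrificing a few key bits — security guaranteed by physics rather than computational hardness.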

The next question is: *How do we build a quantum internet?*

The starting point for most long-distance quantum network strategies is to base them on the state-of-the-art technology for current classical networks: sending information using light. (But that doesn’t rule out microwave networks for local area networks, as recent work from ETH Zurich has shown).

The technology that drives quantum networks is a set of interfaces that connect matter systems (like atoms) to photons at a quantum level. These interfaces need to efficiently exchange quantum information between matter and light, and the matter part needs to be able to store the information for a time that is much longer than the time it takes for the light to get to its destination in the network. We also need to be able to entangle the quantum matter systems to connect network links, and to process quantum information for error correction. This is a significant challenge that requires novel materials and unparalleled control of light to ultimately succeed.

State-of-the-art quantum networks are still elementary links compared to the complexity and scale of modern telecommunication. One of the most advanced platforms that has demonstrated a quantum network link consists of two atomic defects in diamonds separated by 1.3 km. The defects act as the quantum light-matter interface allowing quantum information to be shared between the two remote devices. But these defects in diamond currently have limitations that prohibit the expansion of such a network. The central challenge is finding defects/emitters that are stable and robust to environmental fluctuations, while simultaneously efficiently connecting with light. While these emitters don’t have to be in solids, the allure of a scalable solid-state fabrication process akin to today’s semiconductor industry for integrated circuits is very appealing. This has motivated the research and development of a range of quantum light-matter interfaces in solids (for example, see recent work by Harvard researchers) with the goal of meeting the simultaneous goals of efficiency and stability.

The research group we were a part of at Caltech was Prof. Andrei Faraon’s group, which put forward an appealing alternative to other solid-state technologies. The team uses rare-earth atoms embedded in crystals commonly used for lasers. JK joined as the group’s 3^{rd} graduate student in 2013, while I joined as a postdoc in 2016.

Rare-earth atoms have long been of interest for quantum technologies such as quantum memories for light because they are very stable and are excellent at preserving quantum information. But compared to other emitters, they only interact very weakly with light, which means that one usually needs large crystals with billions of atoms all working in harmony to make useful quantum interfaces. To overcome this problem, research in the Faraon group pioneered coupling these ions to nanoscale optical cavities like these ones:

These microscopic Toblerone-like structures are fabricated directly in the crystal that plays host to the rare-earth atoms. The periodic patterning effectively acts like two mirrors that form an optical cavity to confine light, which enhances the connection between light and the rare-earth atoms. In 2017, our group showed that the improved optical interaction in these cavities can be used to shrink optical quantum memories by orders of magnitude compared to previous demonstrations, and to manufacture them on-chip.

We have used this nanophotonic platform to open up new avenues for quantum networks based on single rare-earth atoms, a task that previously was exceptionally challenging because these atoms have very low brightness. We have worked with both neodymium and ytterbium atoms embedded in a commercially available laser crystal.

Ytterbium looks particularly promising. Working with Prof. Rufus Cone’s group at Montana State University, we showed that these ytterbium atoms absorb and emit light better than most other rare-earth atoms and that they can store quantum information long enough for extended networks (>10 ms) when cooled down to a few Kelvin (-272 degrees Celsius) [Kindem et al., Physical Review B, 98, 024404 (2018) – link to arXiv version].

By using the nanocavity to improve the brightness of these ytterbium atoms, we have now been able to identify and investigate their properties at the single atom level. We can precisely control the quantum state of the single atoms and measure them with high fidelity – both prerequisites for using these atoms in quantum information technologies. When combined with the long quantum information storage times, our work demonstrates important steps to using this system in a quantum network.

The next milestone is forming an optical link between two individual rare-earth atoms to build an elementary quantum network. This goal is in our sights and we are already working on optimizing the light-matter interface stability and efficiency. A more ambitious milestone is to provide interconnects for other types of qubits – such as superconducting qubits – to join the network. This requires a quantum transducer to convert between microwave signals and light. Rare-earth atoms are promising for transducer technologies (see recent work from the Faraon group), as are a number of other hybrid quantum systems (for example, optomechanical devices like the ones developed in the Painter group at Caltech).

It took roughly 50 years from the first message sent over ARPANET to the roll out of 5G technology.

*So, when are we going to see the quantum internet*?

The technology and expertise needed to build quantum links between cities are developing rapidly, with impressive progress made even between 2018 and 2020. Basic quantum network capabilities will likely be up and running in the next decade, which will be an exciting time for breakthroughs in fundamental and applied quantum science. Using single rare-earth atoms is relatively new, but this technology is also advancing quickly (for example, our ytterbium material was largely unstudied just three years ago). The discovery of new materials, in particular, will continue to be important for pushing quantum technologies forward.

You can read more about this work in this summary article and this synopsis written by lead author JK (Caltech PhD 2019), or dive into the full paper published in Nature.

J. M. Kindem, A. Ruskuc, J. G. Bartholomew, J. Rochman, Y.-Q. Huan, and A. Faraon. Control and single-shot readout of an ion embedded in a nanophotonic cavity. *Nature* (2020).

Now is an especially exciting time for our field with the Thompson Lab at Princeton publishing a related paper on single rare-earth atom quantum state detection, in their case using erbium. Check out their article here.

Much has been learned about graphene in the past 15 years through an immense amount of research, most of it in non-mechanical realms (e.g., electron transport measurements, thermal conductivity, pseudo-magnetic fields in strain engineering). However, superlubricity, a mechanical phenomenon, has become a focus for many research groups. Mechanical measurements have famously shown graphene’s tensile strength to be hundreds of times that of the strongest steel, indisputably placing it atop the list of construction materials best for a superhero suit. Superlubricity is a tribological property of graphene and is arguably as impressive as graphene’s tensile strength.

Tribology is the study of interacting surfaces during relative motion including sources of friction and methods for its reduction. It’s not a recent discovery that coating a surface with graphite (many layers of graphene) can lower friction between two sliding surfaces. Current research studies the precise mechanisms and surfaces for which to minimize friction with single or several layers of graphene.

Research published in *Nature Materials* in 2018 measures friction between surfaces under constant load and velocity. The experiment includes two groups: one consisting of two graphene surfaces (a homogeneous junction), and another consisting of graphene and hexagonal boron nitride (a heterogeneous junction). The research group measures friction using atomic force microscopy (AFM). The hexagonal boron nitride (or graphene, for a homogeneous junction) is fixed to the stage of the AFM while the graphene slides atop it. The load is held constant at 20 𝜇N and the sliding velocity at 200 nm/s. Ultra-low friction is observed for homogeneous junctions when the underlying crystalline lattices of the surfaces are at a relative angle of 30 degrees. However, this ultra-low friction state is very unstable: upon sliding, the surfaces rotate toward a locked-in lattice alignment. Friction varies with the relative angle between the two surfaces’ crystalline lattices, reaching its minimum (ultra-low friction) at a relative angle of 30 degrees and its maximum when locked-in lattice alignment is realized upon sliding. While in a state of lattice alignment, shearing is rendered impossible with the experimental setup due to the relatively large friction.

Friction varies with the relative angle of the crystalline lattices and is, therefore, anisotropic. For example, it takes less force to split wood when an axe blade is applied parallel to the grain than when applied perpendicular to it; the force needed to split wood depends on the direction along which it is applied. Frictional anisotropy is greater in homogeneous junctions because the tendency to rotate into a stuck, maximum-friction alignment is greater than in heterogeneous junctions. In fact, heterogeneous junctions experience frictional anisotropy three orders of magnitude smaller than homogeneous junctions do. Heterogeneous junctions display much less frictional anisotropy because the lattices remain mismatched even when the angle between their lattice vectors is at a minimum. In other words, the graphene and hBN crystalline lattices are never truly aligned because the two materials differ, so heterogeneous junctions never experience the impact of lattice alignment as homogeneous junctions do. Hence, heterogeneous junctions do not become stuck in the high-friction state that characterizes homogeneous ones, and they experience ultra-low friction during sliding at all relative lattice angles.
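As a purely illustrative toy (not the paper’s data or model), the sixfold symmetry of the graphene lattice suggests a friction curve that is periodic in the relative angle with a 60-degree period, maximal at lattice alignment and minimal at 30 degrees. The function and its parameters below are invented for illustration:

```python
import math

def toy_friction(theta_deg, f_max=1.0, f_min=1e-3):
    """Toy sixfold-symmetric friction vs. relative lattice angle.

    Illustration only: f_max and f_min are arbitrary units, not measured values.
    cos(6*theta) has a 60-degree period, peaking at alignment (0, 60, ... degrees)
    and bottoming out at the 30-degree misalignment.
    """
    return f_min + (f_max - f_min) * (1 + math.cos(math.radians(6 * theta_deg))) / 2

# Aligned lattices: high friction; 30-degree twist: ultra-low friction.
print(toy_friction(0) > toy_friction(30))  # True
```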

Presumably, to increase applicability, upscaling to much larger loads will be necessary. A large scale cost effective method to dramatically reduce friction would undoubtedly have an enormous impact on a great number of industries. Cost efficiency is a key component to the realization of graphene’s potential impact, not only as it applies to superlubricity, but in all areas of application. As access to large amounts of affordable graphene increases, so will experiments in fabricating devices exploiting the extraordinary characteristics which have placed graphene and graphene based materials on the front lines of material research the past couple decades.

A few hours earlier, I’d cancelled the seminar that I’d been slated to cohost two days later. In a few hours, I’d cancel the rest of the seminars in the series. Undergraduates would begin vacating their dorms within a day. Labs would shut down, and postdocs would receive instructions to work from home.

I memorized “Paul Revere’s Ride” after moving to Cambridge, following tradition: As a research assistant at Lancaster University in the UK, I memorized e. e. cummings’s “anyone lived in a pretty how town.” At Caltech, I memorized “Kubla Khan.” Another home called for another poem. “Paul Revere’s Ride” brooked no competition: Campus’s red bricks run into Boston, where Revere’s story began during the 1700s.

Henry Wadsworth Longfellow, who lived a few blocks from Harvard, composed the poem. It centers on the British assault against the American colonies, at Lexington and Concord, on the eve of the Revolutionary War. A patriot learned of the British troops’ movements one night. He communicated the information to faraway colleagues by hanging lamps in a church’s belfry. His colleagues rode throughout the night, to “spread the alarm / through every Middlesex village and farm.” The riders included Paul Revere, a Boston silversmith.

The Boston-area bricks share their color with Harvard’s crest, crimson. So do the protrusions on the coronavirus’s surface in colored pictures.

The yard that I was crossing was about to “de-densify,” the red-brick buildings were about to empty, and my home was about to lock its doors. I’d watch regulations multiply, emails keep pace, and masks appear. Revere’s messenger friend, too, stood back and observed his home:

he climbed to the tower of the church,

Up the wooden stairs, with stealthy tread,

To the belfry-chamber overhead, [ . . . ]

By the trembling ladder, steep and tall,

To the highest window in the wall,

Where he paused to listen and look down

A moment on the roofs of the town,

And the moonlight flowing over all.

I commiserated also with Revere, waiting on tenterhooks for his message:

Meanwhile, impatient to mount and ride,

Booted and spurred, with a heavy stride,

On the opposite shore walked Paul Revere.

Now he patted his horse’s side,

Now gazed on the landscape far and near,

Then impetuous stamped the earth,

And turned and tightened his saddle-girth…

The lamps ended the wait, and Revere rode off. His mission carried a sense of urgency, yet led him to serenity that I hadn’t expected:

He has left the village and mounted the steep,

And beneath him, tranquil and broad and deep,

Is the Mystic, meeting the ocean tides…

The poem’s final stanza kicks. Its message carries as much relevance to the 21st century as Longfellow, writing about the 1700s during the 1800s, could have dreamed:

So through the night rode Paul Revere;

And so through the night went his cry of alarm

To every Middlesex village and farm,—

A cry of defiance, and not of fear,

A voice in the darkness, a knock at the door,

And a word that shall echo forevermore!

For, borne on the night-wind of the Past,

Through all our history, to the last,

In the hour of darkness and peril and need,

The people will waken and listen to hear

The hurrying hoof-beats of that steed,

And the midnight message of Paul Revere.

Reciting poetry clears my head. I can recite on autopilot, while processing other information or admiring my surroundings. But the poem usually wins my attention at last. The rhythm and rhyme sweep me along, narrowing my focus. Reciting “Paul Revere’s Ride” takes me 5-10 minutes. After finishing that morning, I repeated the poem, and began repeating it again, until arriving at my institute on the edge of Harvard’s campus.

Isolation can benefit theorists. Many of us need quiet to study, capture proofs, and disentangle ideas. Many of us need collaboration; but email, Skype, Google hangouts, and Zoom connect us. Many of us share and gain ideas through travel; but I can forfeit a little car sickness, air turbulence, and waiting in lines. Many of us need results from experimentalist collaborators, but experimental results often take a long time to gather even in the absence of pandemics. Many of us are introverts who enjoy a little self-isolation.

April is National Poetry Month in the United States. I often celebrate by intertwining physics with poetry in my April blog post. Next month, though, I’ll have other news to report. Besides, as my walk demonstrated, we need poetry now.

Paul Revere found tranquility on the eve of a storm. Maybe, when the night clears and doors reopen, science born of the quiet will flood journals. Aren’t we fortunate, as physicists, to lead lives steeped in a kind of poetry?

Last month, Zhengfeng, Anand, Thomas, John and I posted “MIP* = RE” to arXiv. The paper feels very much like the elephant of the fable — and not just because of the number of pages! To a computer scientist, the paper is ostensibly about the complexity of interactive proofs. To a quantum physicist, it is talking about mathematical models of quantum entanglement. To the mathematician, there is a claimed resolution to a long-standing problem in operator algebras. Like the blind men of the parable, each is feeling a small part of a new phenomenon. How do the wall, the tree trunk, and the rope all fit together?

I’ll try to trace the outline of the elephant: it starts with a mystery in quantum complexity theory, curves through the mathematical foundations of quantum mechanics, and arrives at a deep question about operator algebras.

In 2004, computer scientists Cleve, Hoyer, Toner, and Watrous were thinking about a funny thing called nonlocal games. A nonlocal game involves three parties: two cooperating players named Alice and Bob, and someone called the verifier. The verifier samples a pair of random questions (x, y) and sends x to Alice (who responds with answer a), and y to Bob (who responds with answer b). The verifier then uses some function of the questions and answers (x, y, a, b) that tells her whether the players win.

All three parties know the rules of the game before it starts, and Alice and Bob’s goal is to *maximize* their probability of winning the game. The players aren’t allowed to communicate with each other during the game, so it’s a nontrivial task for them to coordinate an optimal strategy (i.e., how they should individually respond to the verifier’s questions) before the game starts.

The most famous example of a nonlocal game is the CHSH game (which has made several appearances on this blog already): in this game, the verifier sends a uniformly random bit x to Alice (who responds with a bit a) and a uniformly random bit y to Bob (who responds with a bit b). The players win if a + b = xy (mod 2) (in other words, if the sum of their answer bits is equal to the product of the input bits modulo 2).

What is Alice’s and Bob’s maximum winning probability? Well, it depends on what type of strategy they use. If they use a strategy that can be modeled by *classical* physics, then their winning probability cannot exceed 75% (we call this the *classical value* of CHSH). On the other hand, if they use a strategy based on *quantum* physics, Alice and Bob can do better by sharing two quantum bits (qubits) that are *entangled*. During the game each player measures their own qubit (where the measurement depends on their received question) to obtain answers that win the CHSH game with probability cos^{2}(π/8) ≈ 85% (we call this the *quantum value* of CHSH). So even though the entangled qubits don’t allow Alice and Bob to communicate with each other, entanglement gives them a way to win with higher probability! In technical terms, their responses are *more correlated* than what is possible classically.
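The ≈85% figure can be checked numerically. The sketch below (the standard textbook construction, not code from the paper under discussion) prepares the maximally entangled state, uses the usual optimal observables for Alice and Bob, and averages the winning probability over the four question pairs:

```python
import numpy as np

# Pauli matrices and the maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2)
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
phi = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

# Textbook-optimal observables: Alice measures Z or X depending on her bit x;
# Bob measures rotated combinations depending on his bit y.
A = [Z, X]
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]

# For +/-1-valued observables, P(a XOR b = x*y) = (1 + (-1)^{xy} <A_x (x) B_y>)/2.
# Average over the four uniformly random question pairs (x, y).
win = 0.0
for x in range(2):
    for y in range(2):
        corr = phi @ np.kron(A[x], B[y]) @ phi
        win += 0.25 * (1 + (-1) ** (x * y) * corr) / 2

print(round(win, 4))  # 0.8536, i.e. cos^2(pi/8)
```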

The CHSH game comes from physics, and was originally formulated not as a game involving Alice and Bob, but rather as an experiment involving two spatially separated devices to test whether stronger-than-classical correlations exist in nature. These experiments are known as *Bell tests*, named after John Bell. In 1964, he proved that correlations from quantum entanglement cannot be explained by any “local hidden variable theory” — in other words, a classical theory of physics.^{1} He then showed that a Bell test, like the CHSH game, gives a simple statistical test for the presence of nonlocal correlations between separated systems. Since the 1960s, numerous Bell tests have been conducted experimentally, and the verdict is clear: nature does not behave classically.

Cleve, Hoyer, Toner and Watrous noticed that nonlocal games/Bell tests can be viewed as a kind of *multiprover interactive proof*. In complexity theory, interactive proofs are protocols where some *provers* are trying to convince a *verifier* of a solution to a long, difficult computation, and the verifier is trying to efficiently determine if the solution is correct. In a Bell test, one can think of the provers as instead trying to convince the verifier of a *physical statement*: that they possess quantum entanglement.

With the computational lens trained firmly on nonlocal games, it then becomes natural to ask about their *complexity*. Specifically, what is the complexity of approximating the optimal winning probability in a given nonlocal game G? In complexity-speak, this is phrased as a question about characterizing the class MIP* (pronounced “M-I-P star”). This is also a well-motivated question for an experimentalist conducting Bell tests: at the very least, they’d want to determine (a) whether quantum players can do better than classical players, and (b) what the best possible quantum strategy can achieve.

Studying this question in the case of classical players led to some of the most important results in complexity theory, such as *MIP = NEXP* and the PCP Theorem. Indeed, the PCP Theorem says that it is *NP*-hard to approximate the classical value of a nonlocal game (i.e., the maximum winning probability of classical players) to within constant additive accuracy. Thus, assuming that *P* is not equal to *NP*, we shouldn’t expect a polynomial-time algorithm for this. However, it is easy to see that there is a “brute force” algorithm for this problem: by taking *exponential time* to enumerate over all possible deterministic player strategies, one can exactly compute the classical value of nonlocal games.
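For CHSH the brute-force enumeration is tiny: a deterministic strategy is just a choice of answer bit for each question bit, giving 2^4 strategies in total (shared randomness is a convex mixture of deterministic strategies, so it can’t improve the maximum). A minimal sketch:

```python
from itertools import product

# A deterministic strategy is a pair of answer tables a[x], b[y].
# The CHSH win condition: a XOR b == x AND y.
best = 0.0
for a0, a1, b0, b1 in product([0, 1], repeat=4):
    a = [a0, a1]
    b = [b0, b1]
    # Average the win indicator over the four uniformly random question pairs.
    wins = sum((a[x] ^ b[y]) == (x & y) for x in range(2) for y in range(2))
    best = max(best, wins / 4)

print(best)  # 0.75: no classical strategy beats 3/4
```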

When considering games with entangled players, however, it’s not even clear if there’s a similar “brute force” algorithm that solves this in *any* amount of time — forget polynomial time; even if we allow ourselves exponential, doubly-exponential, or Ackermann-function amounts of time, we still don’t know how to solve this quantum value approximation problem. The problem is that there is no known upper bound on the *amount* of entanglement that is needed for players to play a nonlocal game. For example, for a given game G, does an optimal quantum strategy require one qubit, ten qubits, or 10^{100} qubits of entanglement? Without any upper bound, a “brute force” algorithm wouldn’t know how big of a quantum strategy to search for — it would keep enumerating over bigger and bigger strategies in hopes of finding a better one.

Thus approximating the quantum value may not even be solvable in principle! But could it *really* be uncomputable? Perhaps we just haven’t found the right mathematical tool to give an upper bound on the dimension — maybe we just need to come up with some clever variant of, say, Johnson-Lindenstrauss or some other dimension reduction technique.^{2}

In 2008, there was promising progress towards an algorithmic solution for this problem. Two papers [DLTW, NPA] (appearing on arXiv on the same day!) showed that an algorithm based on semidefinite programming can produce a sequence of numbers that converge to something called the *commuting operator value* of a nonlocal game.^{3} If one could show that the commuting operator value and the quantum value of a nonlocal game coincide, then this would yield an algorithm for solving this approximation problem!

Asking whether this commuting operator and quantum values are the same, however, immediately brings us to the precipice of some deep mysteries in mathematical physics and operator algebras, far removed from computer science and complexity theory. This takes us to the next part of the elephant.

The mystery about the quantum value versus the commuting operator value of nonlocal games has to do with two different ways of modeling Alice and Bob in quantum mechanics. As I mentioned earlier, quantum physics predicts that the maximum winning probability in, say, the CHSH game when Alice and Bob share entanglement is approximately 85%. As with any physical theory, these predictions are made using some mathematical framework — formal rules for modeling physical experiments like the CHSH game.

In a typical quantum information theory textbook, players in the CHSH game are usually modelled in the following way: Alice’s device is described by a *state space* H_A (all the possible states the device could be in), a particular *state* from H_A, and a set of *measurement operators* (operations that can be performed by the device). It’s not necessary to know what these things are formally; the important feature is that these three things are enough to make any prediction about Alice’s device — when treated in isolation, at least. Similarly, Bob’s device can be described using its own state space H_B, state, and measurement operators.

In the CHSH game, though, one wants to make predictions about Alice’s and Bob’s devices *together*. Here the textbooks say that Alice and Bob are jointly described by the *tensor product* formalism, which is a natural mathematical way of “putting separate spaces together”. Their joint state space is the tensor product H_A ⊗ H_B. The joint state describing the devices comes from this tensor product space. When Alice and Bob independently make their local measurements, this is described by a measurement operator formed from the tensor product of operators on H_A and H_B. The strange correlations of quantum mechanics arise when their joint state is *entangled*, i.e. it cannot be written as a well-defined state on Alice’s side combined with a well-defined state on Bob’s side (even though the state space itself is two independent spaces combined together!)

The tensor product model works well; it satisfies natural properties you’d want from the CHSH experiment, such as the constraint that Alice and Bob can’t instantaneously signal to each other. Furthermore, predictions made in this model match up very accurately with experimental results!

This is not the whole story, though. The tensor product formalism works very well in *non-relativistic quantum mechanics*, where things move slowly and energies are low. To describe more extreme physical scenarios — like when particles are being smashed together at near-light speeds in the Large Hadron Collider — physicists turn to the more powerful *quantum field theory*. However, the notion of spatiotemporal separation in relativistic settings gets especially tricky. In particular, when trying to describe quantum mechanical systems, it is no longer evident how to assign Alice and Bob their own independent state spaces, and thus it’s not clear how to put relativistic Alice and Bob in the tensor product framework!

In quantum field theory, locality is instead described using the *commuting operator model*. Instead of assigning Alice and Bob their own individual state spaces and then tensoring them together to get a combined space, the commuting operator model stipulates that there is just a *single* monolithic space H for both Alice and Bob. Their joint state is described using a vector from H, and Alice’s and Bob’s measurement operators both act on H. The constraint that they can’t communicate is captured by the fact that Alice’s measurement operators *commute* with Bob’s operators. In other words, the order in which the players perform their measurements on the system does not matter: Alice measuring before Bob, or Bob measuring before Alice, both yield the same statistical outcomes. Locality is enforced through commutativity.

The commuting operator framework contains the tensor product framework as a special case^{4}, so it’s more general. Could the commuting operator model allow for correlations that can’t be captured by the tensor product model, even approximately^{5}^{6}? This question is known as *Tsirelson’s problem*, named after the late mathematician Boris Tsirelson.
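The containment in one direction is easy to see concretely: in the tensor product model, Alice’s operators act only on her tensor factor and Bob’s only on his, so they automatically commute. A small numerical check (illustrative only, with random Hermitian "measurement" operators):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 2x2 Hermitian operators standing in for Alice's and Bob's measurements.
A = rng.standard_normal((2, 2)); A = A + A.T
B = rng.standard_normal((2, 2)); B = B + B.T

# In the tensor product model, Alice acts as A (x) I and Bob as I (x) B
# on the joint space H_A (x) H_B.
A_full = np.kron(A, np.eye(2))
B_full = np.kron(np.eye(2), B)

# Their commutator vanishes: tensor product strategies automatically satisfy
# the commuting operator model's locality constraint.
commutator = A_full @ B_full - B_full @ A_full
print(np.allclose(commutator, 0))  # True
```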

There is a simple but useful way to phrase this question using nonlocal games. What we call the “quantum value” of a nonlocal game G (denoted by ω*(G)) really refers to the supremum of success probabilities over tensor product strategies for Alice and Bob. If they use strategies from the more general commuting operator model, then we call their maximum success probability the *commuting operator value* of G (denoted by ω^{co}(G)). Since tensor product strategies are a special case of commuting operator strategies, we have the relation ω*(G) ≤ ω^{co}(G) for all nonlocal games G.

Could there be a nonlocal game whose tensor product value is *different* from its commuting operator value? With tongue-in-cheek: is there a game that Alice and Bob could succeed at better if they were using quantum entanglement at near-light speeds? It is difficult to find even a plausible candidate game for which the quantum and commuting operator values may differ. The CHSH game, for example, has the same quantum and commuting operator value; this was proved by Tsirelson.

If the tensor product and the commuting operator models are the same (i.e., the “positive” resolution of Tsirelson’s problem), then as I mentioned earlier, this has unexpected ramifications: there would be an algorithm for approximating the quantum value of nonlocal games.

How does this algorithm work? It comes in two parts: a procedure to *search from below*, and one to *search from above*. The “search from below” algorithm computes a sequence of numbers α_d, where α_d is (approximately) the best winning probability when Alice and Bob use a d-qubit tensor product strategy. For fixed d, the number α_d can be computed by enumerating over (a discretization of) the space of all possible d-qubit strategies. This takes a *doubly-exponential* amount of time in d — but at least this is still a finite time! This naive “brute force” algorithm will slowly plod along, computing a sequence of better and better winning probabilities. We’re guaranteed that in the limit as d goes to infinity, the sequence converges to the quantum value ω*(G). Of course the issue is that the “search from below” procedure never knows how close it is to the true quantum value.

This is where the “search from above” comes in. This is an algorithm that computes a different sequence of numbers β_k, where each β_k is an *upper bound* on the commuting operator value ω^{co}(G), and as k goes to infinity, β_k converges to ω^{co}(G). Furthermore, each β_k can be computed by a technique known as semidefinite optimization; this was shown by the two papers I mentioned.

Let’s put the pieces together. If the quantum and commuting operator values of a game G coincide (i.e., ω*(G) = ω^{co}(G)), then we can run the “search from below” and “search from above” procedures in parallel, interleaving the computation of the α_d and the β_k. Since both are guaranteed to converge to the quantum value, at some point the upper bound β_k will come within some ε of the lower bound α_d, and thus we would have homed in on (an approximation of) ω*(G). There we have it: an algorithm to approximate the quantum value of games.
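Schematically, the interleaving looks like the sketch below. The two bound functions here are made-up convergent sequences standing in for the real d-qubit brute-force search and the level-k semidefinite-programming hierarchy; only the control flow is the point:

```python
# Toy stand-ins for the two procedures: lower bounds increasing toward a
# (here known) value 0.85, and upper bounds decreasing toward the same value.
def lower_bound(d):
    # stand-in for the d-qubit brute-force "search from below"
    return 0.85 - 1.0 / (d + 2)

def upper_bound(k):
    # stand-in for the level-k semidefinite-programming "search from above"
    return 0.85 + 1.0 / (k + 2)

def approximate_value(eps):
    """Interleave both searches until the bounds pin the value to within eps."""
    step = 0
    while True:
        lo, hi = lower_bound(step), upper_bound(step)
        if hi - lo <= eps:   # bounds have met: value located to within eps
            return (lo + hi) / 2
        step += 1            # otherwise refine both searches and try again

print(round(approximate_value(0.01), 3))  # 0.85
```

The catch described in the surrounding text is exactly the termination condition: if the two sequences converge to *different* limits, the `while` loop never exits.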

All that remains to do, surely, is to solve Tsirelson’s problem in the affirmative (that commuting operator correlations can be approximated by tensor product correlations), and then we could put this pesky question about the quantum value to rest. Right?

At the end of the 1920s, polymath extraordinaire John von Neumann formulated the first rigorous mathematical framework for the recently developed quantum mechanics. This framework, now familiar to physicists and quantum information theorists everywhere, posits that quantum states are vectors in a Hilbert space, and measurements are linear operators acting on those spaces. It didn’t take long for von Neumann to realize that there was a much deeper theory of operators on Hilbert spaces waiting to be discovered. With Francis Murray, in the 1930s he started to develop a theory of “rings of operators” — today these are called von Neumann algebras.

The theory of operator algebras has since flourished into a rich and beautiful area of mathematics. It remains inseparable from mathematical physics, but has established deep connections with subjects such as knot theory and group theory. One of the most important goals in operator algebras has been to provide a classification of von Neumann algebras. In their series of papers on the subject, Murray and von Neumann first showed that classifying von Neumann algebras reduces to understanding their *factors*, the atoms out of which all von Neumann algebras are built. Then, they showed that factors of von Neumann algebras come in one of three species: type I, type II, and type III. Type I factors were completely classified by Murray and von Neumann, and they made much progress on characterizing certain type II factors. However, progress stalled until the 1970s, when Alain Connes provided a classification of type III factors (work for which he would later receive the Fields Medal). In the same 1976 classification paper, Connes makes a casual remark about something called type II_1 factors^{7}:

We now construct an embedding of N into R^ω. Apparently such an embedding ought to exist for all II_1 factors.

This line, written in almost a throwaway manner, eventually came to be called “Connes’ embedding problem”: does every separable II_1 factor embed into an ultrapower of the hyperfinite II_1 factor? It seems that Connes surmises that it does (and thus this is also called “Connes’ embedding *conjecture*“). Since 1976, this problem has grown into a central question of operator algebras, with numerous equivalent formulations and consequences across mathematics.

In 2010, two papers (again appearing on the arXiv on the same day!) showed that the reach of Connes’ embedding conjecture extends back to the foundations of quantum mechanics. If Connes’ embedding problem has a positive answer (i.e., an embedding exists), then Tsirelson’s problem (i.e., whether commuting operator correlations can be approximated by tensor product correlations) *also* has a positive answer! Later it was shown by Ozawa that Connes’ embedding problem is in fact *equivalent* to Tsirelson’s problem.

Remember that our approach to compute the value of nonlocal games hinged on obtaining a positive answer to Tsirelson’s problem. The sequence of papers [NPA, DLTW, Fritz, JNPPSW] together show that resolving — one way or another — whether this search-from-below, search-from-above algorithm works would essentially settle Connes’ embedding conjecture. What started as a funny question at the periphery of computer science and quantum information theory has morphed into an attack on one of the central problems in operator algebras.

We’ve now ended back where we started: the complexity of nonlocal games. Let’s take a step back and try to make sense of the elephant.

Even to a complexity theorist, “MIP* = RE” may appear esoteric. The complexity classes MIP* and RE refer to a bewildering grab bag of concepts: there’s Alice, Bob, Turing machines, verifiers, interactive proofs, quantum entanglement. What is the meaning of the equality of these two classes?

First, it says that the *Halting problem has an interactive proof involving quantum entangled provers*. In the Halting problem, you want to decide whether a Turing machine M, if you started running it, would eventually terminate with a well-defined answer, or whether it would get stuck in an infinite loop. Alan Turing showed that this problem is *undecidable*: there is no algorithm that can solve this problem in general. Loosely speaking, the best thing you can do is to just flick on the power switch to M, and wait to see if it eventually stops. If M gets stuck in an infinite loop — well, you’re going to be waiting forever.

MIP* = RE shows that, with the help of all-powerful Alice and Bob, a time-limited verifier can run an interactive proof to “shortcut” the waiting. Given the Turing machine *M*’s description (its “source code”), the verifier can efficiently compute a description of a nonlocal game G_{M} whose behavior reflects that of *M*. If *M* does eventually halt (which could happen after a million years), then there is a strategy for Alice and Bob that causes the verifier to accept with probability 1. In other words, the quantum value of G_{M} is 1. If *M* gets stuck in an infinite loop, then no matter what strategy Alice and Bob use, the verifier always rejects with high probability, so the quantum value of G_{M} is close to 0.

By playing this nonlocal game, the verifier can obtain *statistical evidence* that *M* is a Turing machine that eventually terminates. If the verifier plays G_{M} and the provers win, then the verifier should believe that it is likely that *M* halts. If they lose, then the verifier concludes there isn’t enough evidence that *M* halts^{8}. The verifier never actually runs *M* in this game; she has offloaded the task to Alice and Bob, who we can assume are computational gods capable of performing million-year-long computations instantly. For them, the challenge is instead to *convince* the verifier that if she *were* to wait millions of years, she would witness the termination of *M*. Incredibly, the amount of work put in by the verifier in the interactive proof is *independent* of the time it takes for *M* to halt!

The fact that the Halting problem has an interactive proof seems borderline absurd: if the Halting problem is unsolvable, why should we expect it to be *verifiable*? Although complexity theory has taught us that there can be a large gap between the complexity of verification versus search, it has always been a difference of *efficiency*: if solutions to a problem can be efficiently verified, then solutions can also be found (albeit at drastically higher computational cost). MIP* = RE shows that, with quantum entanglement, there can be a chasm of *computability* between verifying solutions and finding them.

Now let’s turn to the non-complexity consequences of MIP* = RE. The fact that we can encode the Halting problem into nonlocal games also immediately tells us that there is no algorithm whatsoever to approximate the quantum value of a nonlocal game. Suppose there were such an approximation algorithm. Then, using the transformation from Turing machines to nonlocal games mentioned above, we could use this algorithm to solve the Halting problem, which is impossible.
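The impossibility argument in the previous paragraph is short enough to write down. In this hypothetical sketch, `game_from_machine` stands for the computable transformation from Turing machines to nonlocal games, and `approximate_quantum_value` is the algorithm we are supposing, for contradiction, to exist; both names are placeholders of my own:

```python
def solve_halting(machine, game_from_machine, approximate_quantum_value):
    """If `approximate_quantum_value` existed, this routine would decide
    the Halting problem -- which Turing proved impossible. Hence no such
    approximation algorithm can exist. The transformation guarantees:
    quantum value 1 if `machine` halts, close to 0 if it loops forever."""
    game = game_from_machine(machine)
    # An additive 1/4-approximation already separates 1 from near-0.
    return approximate_quantum_value(game, accuracy=0.25) > 0.5
```

The contradiction lives entirely in the last line: even a coarse approximation of the quantum value would let a classical algorithm tell halting machines from looping ones.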

Now the dominoes start to fall. This means that, in particular, the proposed “search-from-below”/“search-from-above” algorithm *cannot* succeed in approximating the quantum value. There must be a game G, then, for which the quantum value is different from the commuting operator value. But this implies that Tsirelson’s problem has a negative answer, and therefore Connes’ embedding conjecture is false.

We’ve only sketched the barest of outlines of this elephant, and yet it is quite challenging to hold it in the mind’s eye all at once^{9}. This story is intertwined with some of the most fundamental developments of the past century: modern quantum mechanics, operator algebras, and computability theory were birthed in the 1930s. Einstein, Podolsky and Rosen wrote their landmark paper questioning the nature of quantum entanglement in 1935, and John Bell discovered his famous test and inequality in 1964. Connes formulated his conjecture in the ’70s, Tsirelson made his contributions to the foundations of quantum mechanics in the ’80s, and at about the same time computer scientists were inventing the theory of interactive proofs and probabilistically checkable proofs (PCPs).

We haven’t said anything about the proof of MIP* = RE yet (this may be the subject of future blog posts), but it is undeniably a product of complexity theory. The language of interactive proofs and Turing machines is not just convenient but necessary: at its heart, MIP* = RE is the classical PCP Theorem, with the help of quantum entanglement, recursed to infinity.

What is going on in this proof? What parts of it are fundamental, and which parts are unnecessary? What is the core of it that relates to Connes’ embedding conjecture? Are there other consequences of this uncomputability result? These are questions to be explored in the coming days and months, and the answers we find will be fascinating.

**Acknowledgments.** Thanks to William Slofstra and Thomas Vidick for helpful feedback on this post.

- This is why quantum correlations are called “nonlocal”, and why we call the CHSH game a “nonlocal game”: it is a test for nonlocal behavior.
- A reasonable hope would be that, for every nonlocal game G, there is a generic upper bound on the number of qubits needed to approximate the optimal quantum strategy (e.g., a bound depending only on the number of possible questions and answers in G).
- In those papers, they called it the *field theoretic value*.
- The space H can be broken down into the tensor product H_{A} ⊗ H_{B}, where Alice’s measurements only act on the space H_{A} and Bob’s measurements only act on the space H_{B}. In this case, Alice’s measurements clearly commute with Bob’s.
- In a breakthrough work in 2017, Slofstra showed that the tensor product framework is *not* exactly the same as the commuting operator framework; he showed that there is a nonlocal game G where players using commuting operator strategies can win with probability 1, but when they use a tensor-product strategy they can only win with probability strictly less than 1. However, the perfect commuting operator strategy can be approximated by tensor-product strategies arbitrarily well, so the quantum value and the commuting operator value of G are the same.
- The commuting operator model is motivated by attempts to develop a rigorous mathematical framework for quantum field theory from first principles (see, for example, algebraic quantum field theory (AQFT)). In the “vanilla” version of AQFT, tensor product decompositions between causally independent systems do not exist *a priori*, but mathematical physicists often consider AQFTs augmented with an additional “split property”, which *does* imply tensor product decompositions. Thus in such AQFTs, Tsirelson’s problem has an affirmative answer.
- Type II_{1} is pronounced “type two one”.
- This is *not* the same as evidence that the machine loops forever!
- At least, speaking for myself.

“That’s the definition of insanity,” Kater admitted, “but I think experiment needs to involve some of that.”

I watched the exchange via Skype, with more interest than I’d have watched the Oscars with. Someday, I hope, I’ll be able to weigh in on such a debate, despite working as a theorist. Someday, I’ll have partnered with enough experimentalists to develop insight.

I’m partnering with Jonathan and Kater on an experiment that coauthors and I proposed in a paper blogged about here. The experiment centers on an uncertainty relation, an inequality of the sort immortalized by Werner Heisenberg in 1927. Uncertainty relations imply that, if you measure a quantum particle’s position, the particle’s momentum ceases to have a well-defined value. If you measure the momentum, the particle ceases to have a well-defined position. Our uncertainty relation involves *weak measurements*. Weakly measuring a particle’s position doesn’t disturb the momentum much and vice versa. We can interpret the uncertainty in information-processing terms, because we cast the inequality in terms of entropies. *Entropies*, described here, are functions that quantify how efficiently we can process information, such as by compressing data. Jonathan and Kater are checking our inequality, and exploring its implications, with a superconducting qubit.
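Our weak-measurement inequality itself is beyond a blog snippet, but the flavor of an entropic uncertainty relation is easy to illustrate. The sketch below checks the textbook Maassen–Uffink relation for a qubit measured in two complementary bases; this standard relation is my stand-in here, not the inequality from our paper:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A qubit state a|0> + b|1>, measured in the Z basis {|0>, |1>}
# and in the X basis {|+>, |->}. For these complementary bases the
# Maassen-Uffink relation reads H(Z) + H(X) >= -log2 max|<z|x>|^2 = 1:
# certainty about one measurement forces uncertainty about the other.
a, b = 1.0, 0.0                       # the state |0>
p_z = [abs(a)**2, abs(b)**2]          # Z-basis outcome probabilities
plus = (a + b) / math.sqrt(2)         # overlap <+|psi>
minus = (a - b) / math.sqrt(2)        # overlap <-|psi>
p_x = [abs(plus)**2, abs(minus)**2]   # X-basis outcome probabilities

total = shannon_entropy(p_z) + shannon_entropy(p_x)
# Here H(Z) = 0 (the Z outcome is certain) and H(X) = 1, so the
# sum meets the bound of 1 bit exactly.
```

Casting uncertainty in entropies, as in the last line, is what lets us read the inequality in information-processing terms.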

I had too little experience to side with Jonathan or with Kater. So I watched, and I contemplated how their opinions would sound if expressed about theory. Do I try one strategy again and again, hoping to change my results without changing my approach?

At the Perimeter Institute for Theoretical Physics, Masters students had to swallow half-a-year of course material in weeks. I questioned whether I’d ever understand some of the material. But some of that material resurfaced during my PhD. Again, I attended lectures about Einstein’s theory of general relativity. Again, I worked problems about observers in free-fall. Again, I calculated covariant derivatives. The material sank in. I decided never to question, again, whether I could understand a concept. I might not understand a concept today, or tomorrow, or next week. But if I dedicate enough time and effort, I chose to believe, I’ll learn.

My decision rested on experience and on classes, taught by educational psychologists, that I’d taken in college. I’d studied how brains change during learning and how breaks enhance the changes. Sense, I thought, underlay my decision—though expecting outcomes to change, while strategies remain static, sounds insane.

Does sense underlie Kater’s suggestion, likened to insanity, to keep fabricating amplifiers as before? He’s expressed cynicism many times during our collaboration: *Experiment needs to involve some insanity.* *The experiment probably won’t work for a long time. Plenty more things will likely break.*

Jonathan and I agree with him. Experiments have a reputation for breaking, and Kater has a reputation for knowing experiments. Yet Jonathan—with professionalism and politeness—remains optimistic that other methods will prevail, that we’ll meet our goals early. I hope that Jonathan remains optimistic, and I fancy that Kater hopes, too. He prophesies gloom with a quarter of a smile, and his record speaks against him: A few months ago, I met a theorist who’d collaborated with Kater years before. The theorist marveled at the speed with which Kater had operated. A theorist would propose an experiment, and *boom*—the proposal would work.

Perhaps luck smiled upon the implementation. But luck dovetails with the sense that underlies Kater’s opinion: Experiments involve factors that you can’t control. Implement a protocol once, and it might fail because the temperature has risen too high. Implement the protocol again, and it might fail because a truck drove by your building, vibrating the tabletop. Implement the protocol again, and it might fail because you bumped into a knob. Implement the protocol a fourth time, and it might succeed. If you repeat a protocol many times, your environment might change, changing your results.

Sense also underlies Jonathan’s objections to Kater’s opinions. We boost our chances of succeeding if we keep trying. We derive energy to keep trying from creativity and optimism. So rebelling against our PhD supervisors’ sense is sensible. I wondered, watching the Skype conversation, whether Kater the student had objected to prophesies of doom as Jonathan did. Kater exudes the soberness of a tenured professor but the irreverence of a Californian who wears his hair slightly long and who tattooed his wedding band on. Science thrives on the soberness and the irreverence.

Who won Jonathan and Kater’s argument? Both, I think. Last week, they reported having fabricated amplifiers that work. The lab followed a protocol similar to their old one, but with more conscientiousness.

I’m looking forward to watching who wins the debate about how long the rest of the experiment takes. Either way, check out Jonathan’s talk about our experiment if you attend the American Physical Society’s March Meeting. Jonathan will speak on Thursday, March 5, at 12:03, in room 106. Also, keep an eye out for our paper—which will debut once Jonathan coaxes the amplifier into synching with his qubit.
