Quantum Information Meets Quantum Matter: Now Published!

Two things you should know about me are: (1) I have unbounded admiration for scientists who can actually finish writing a book, and (2) I’m a firm believer that exciting progress can be ignited when two fields fuse together. So I’m doubly thrilled that Quantum Information Meets Quantum Matter, by IQIM physicist Xie Chen and her colleagues Bei Zeng, Duan-Lu Zhou, and Xiao-Gang Wen, has now been published by Springer.

The authors kindly invited me to write a foreword for the book, which I was happy to contribute. That foreword is reproduced here, with the permission of the publisher.

Foreword

In 1989 I attended a workshop at the University of Minnesota. The organizers had hoped the workshop would spawn new ideas about the origin of high-temperature superconductivity, which had recently been discovered. But I was especially impressed by a talk about the fractional quantum Hall effect by a young physicist named Xiao-Gang Wen.

From Wen I heard for the first time about a concept called topological order. He explained that for some quantum phases of two-dimensional matter the ground state becomes degenerate when the system resides on a surface of nontrivial topology such as a torus, and that the degree of degeneracy provides a useful signature for distinguishing different phases. I was fascinated.

Up until then, studies of phases of matter and the transitions between them usually built on principles enunciated decades earlier by Lev Landau. Landau had emphasized the crucial role of symmetry, and of local order parameters that distinguish different symmetry realizations. Though much of what Wen said went over my head, I did manage to glean that he was proposing a way to distinguish quantum phases founded on principles much different from Landau’s. As a particle physicist I deeply appreciated the power of Landau theory, but I was also keenly aware that the interface of topology and physics had already yielded many novel and fruitful insights.

Mulling over these ideas on the plane ride home, I scribbled a few lines of verse:

Now we are allowed
To disavow Landau.
Wow …

Without knowing where it might lead, one could sense the opening of a new chapter.

At around that same time, another new research direction was beginning to gather steam, the study of quantum information. Richard Feynman and Yuri Manin had suggested that a computer processing quantum information might perform tasks beyond the reach of ordinary digital computers. David Deutsch formalized the idea, which attracted the attention of computer scientists, and eventually led to Peter Shor’s discovery that a quantum computer can factor large numbers in polynomial time. Meanwhile, Alexander Holevo, Charles Bennett and others seized the opportunity to unify Claude Shannon’s information theory with quantum physics, erecting new schemes for quantifying quantum entanglement and characterizing processes in which quantum information is acquired, transmitted, and processed.

The discovery of Shor’s algorithm caused a burst of excitement and activity, but quantum information science remained outside the mainstream of physics, and few scientists at that time glimpsed the rich connections between quantum information and the study of quantum matter. One notable exception was Alexei Kitaev, who had two remarkable insights in the 1990s. He pointed out that finding the ground state energy of a quantum system defined by a “local” Hamiltonian, when suitably formalized, is as hard as any problem whose solution can be verified with a quantum computer. This idea launched the study of Hamiltonian complexity. Kitaev also discerned the relationship between Wen’s concept of topological order and the quantum error-correcting codes that can protect delicate quantum superpositions from the ravages of environmental decoherence. Kitaev’s notion of a topological quantum computer, a mere theorist’s fantasy when proposed in 1997, is by now pursued in experimental laboratories around the world (though the technology still has far to go before truly scalable quantum computers will be capable of addressing hard problems).

Thereafter progress accelerated, led by a burgeoning community of scientists working at the interface of quantum information and quantum matter. Guifre Vidal realized that many-particle quantum systems that are only slightly entangled can be succinctly described using tensor networks. This new method extended the reach of mean-field theory and provided an illuminating new perspective on the successes of the Density Matrix Renormalization Group (DMRG). By proving that the ground state of a local Hamiltonian with an energy gap has limited entanglement (the area law), Matthew Hastings showed that tensor network tools are widely applicable. These tools eventually led to a complete understanding of gapped quantum phases in one spatial dimension.

The experimental discovery of topological insulators focused attention on the interplay of symmetry and topology. The more general notion of a symmetry-protected topological (SPT) phase arose, in which a quantum system has an energy gap in the bulk but supports gapless excitations confined to its boundary which are protected by specified symmetries. (For topological insulators the symmetries are particle-number conservation and time-reversal invariance.) Again, tensor network methods proved to be well suited for establishing a complete classification of one-dimensional SPT phases, and guided progress toward understanding higher dimensions, though many open questions remain.

We now have a much deeper understanding of topological order than when I first heard about it from Wen nearly 30 years ago. A central new insight is that topologically ordered systems have long-range entanglement, and that the entanglement has universal properties, like topological entanglement entropy, which are insensitive to the microscopic details of the Hamiltonian. Indeed, topological order is an intrinsic property of a quantum state and can be identified without reference to any particular Hamiltonian at all. To understand the meaning of long-range entanglement, imagine a quantum computer which applies a sequence of geometrically local operations to an input quantum state, producing an output product state which is completely disentangled. If the time required to complete this disentangling computation is independent of the size of the system, then we say the input state is short-ranged entangled; otherwise it is long-range entangled. More generally (loosely speaking), two states are in different quantum phases if no constant-time quantum computation can convert one state to the other. This fundamental connection between quantum computation and quantum order has many ramifications which are explored in this book.
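To restate the criterion compactly (a schematic paraphrase of the paragraph above, not a formula quoted from the book): a state | \psi \rangle of an n-site system is short-range entangled if some circuit U of geometrically local gates, whose depth stays bounded as n grows, disentangles it,

U \, | \psi \rangle = | 0 \rangle^{\otimes n}, \qquad \mathrm{depth}(U) = O(1),

and long-range entangled otherwise; loosely speaking, two states belong to different phases when no such constant-depth local circuit maps one into the other.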

When is the right time for a book that summarizes the status of an ongoing research area? It’s a subtle question. The subject should be sufficiently mature that enduring concepts and results can be identified and clearly explained. If the pace of progress is sufficiently rapid, and the topics emphasized are not well chosen, then an ill-timed book might become obsolete quickly. On the other hand, the subject ought not to be too mature; only if there are many exciting open questions to attack will the book be likely to attract a sizable audience eager to master the material.

I feel confident that Quantum Information Meets Quantum Matter is appearing at an opportune time, and that the authors have made wise choices about what to include. They are world-class experts, and are themselves responsible for many of the scientific advances explained here. The student or senior scientist who studies this book closely will be well grounded in the tools and ideas at the forefront of current research at the confluence of quantum information science and quantum condensed matter physics.

Indeed, I expect that in the years ahead a steadily expanding community of scientists, including computer scientists, chemists, and high-energy physicists, will want to be well acquainted with the ideas at the heart of Quantum Information Meets Quantum Matter. In particular, growing evidence suggests that the quantum physics of spacetime itself is an emergent manifestation of long-range quantum entanglement in an underlying more fundamental quantum theory. More broadly, as quantum technology grows ever more sophisticated, I believe that the theoretical and experimental study of highly complex many-particle systems will be an increasingly central theme of 21st century physical science. If that’s true, Quantum Information Meets Quantum Matter is bound to hold an honored place on the bookshelves of many scientists for years to come.

John Preskill
Pasadena, California
September 2018


“A theorist I can actually talk with”

Haunted mansions have ghosts, football teams have mascots, and labs have in-house theorists. I found myself posing as a lab’s theorist at Caltech. The gig began when Oskar Painter, a Caltech experimentalist, emailed that he’d read my first paper about quantum chaos. Would I discuss the paper with the group?

Oskar’s lab was building superconducting qubits, tiny circuits in which charge can flow forever. The lab aimed to control scores of qubits, to develop a quantum many-body system. Entanglement—strong correlations that quantum systems can sustain and everyday systems can’t—would spread throughout the qubits. The system could realize phases of matter—like many-particle quantum chaos—off-limits to most materials.

How could Oskar’s lab characterize the entanglement, the entanglement’s spread, and the phases? Expert readers will suggest measuring an entropy, a gauge of how much information this part of the system holds about that part. But experimentalists have had trouble measuring entropies. Besides, one measurement can’t capture many-body entanglement; such entanglement involves too many intricacies. Oskar was searching for arrows to add to his lab’s measurement quiver.
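As a toy illustration of that gauge (a sketch of my own, not the lab’s analysis): the entanglement entropy of one qubit out of two vanishes for a product state and is maximal for a Bell pair. The state choices below are assumptions made purely for illustration.

```python
# Toy sketch (illustrative only): the von Neumann entropy of one qubit of a
# two-qubit pure state, a simple gauge of how much information that qubit
# holds about its partner. Zero for a product state, one bit for a Bell pair.
import numpy as np

def entanglement_entropy(psi):
    """Entropy (in bits) of qubit A, from the reduced density matrix."""
    m = psi.reshape(2, 2)
    rho_A = np.einsum('ij,kj->ik', m, m.conj())
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return abs(float(np.sum(-evals * np.log2(evals))))

product = np.kron([1, 0], [1, 0]).astype(complex)                                   # |00>
bell = ((np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)).astype(complex)

print("product state |00>:", entanglement_entropy(product), "bits")
print("Bell pair:         ", entanglement_entropy(bell), "bits")
```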

Mascot

In-house theorist?

I’d proposed a protocol for measuring a characterization of many-body entanglement, quantum chaos, and thermalization—a property called “the out-of-time-ordered correlator.” The protocol appealed to Oskar. But practicalities limit quantum many-body experiments: The more qubits your system contains, the more the system can contact its environment, like stray particles. The stronger the interactions, the more the environment entangles with the qubits, and the less the qubits entangle with each other. Quantum information leaks from the qubits into their surroundings; what happens in Vegas doesn’t stay in Vegas. Would imperfections mar my protocol?

I didn’t know. But I knew someone who could help us find out.

Justin Dressel works at Chapman University as a physics professor. He’s received the highest praise that I’ve heard any experimentalist give a theorist: “He’s a theorist I can actually talk to.” With other collaborators, Justin and I simplified my scheme for measuring out-of-time-ordered correlators. Justin knew what superconducting-qubit experimentalists could achieve, and he’d been helping them reach for more.

How about, I asked Justin, we simulate our protocol on a computer? We’d code up virtual superconducting qubits, program in interactions with the environment, run our measurement scheme, and assess the results’ noisiness. Justin had the tools to simulate the qubits, but he lacked the time. 

Know any postdocs or students who’d take an interest? I asked.

Chapman.001

Chapman University’s former science center. Don’t you wish you spent winters in California?

José Raúl González Alonso has a smile like a welcome sign and a coffee cup glued to one hand. He was moving to Chapman University to work as a Grand Challenges Postdoctoral Fellow. José had built simulations, and he jumped at the chance to study quantum chaos.

José confirmed Oskar’s fear and other simulators’ findings: The environment threatens measurements of the out-of-time-ordered correlator. Suppose that you measure this correlator at each of many instants, plot the correlator against time, and see the correlator drop. If you’ve isolated your qubits from their environment, you can expect them to carry many-body entanglement. Golden. But the correlator can drop if, instead, the environment is harassing your qubits. You can misdiagnose leaking as many-body entanglement.
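For concreteness, here is a minimal closed-system toy (my own sketch, not José’s simulation, which also modeled the environment): exact diagonalization of a small mixed-field Ising chain, showing the out-of-time-ordered correlator decay as the “whack” spreads. Every coupling and operator choice below is an illustrative assumption.

```python
# Toy sketch (mine, not the published protocol or the lab's simulation): the
# infinite-temperature out-of-time-ordered correlator
#   F(t) = Tr[ W(t)^dag V^dag W(t) V ] / 2^n
# for a small mixed-field Ising chain, via exact diagonalization. With the
# chain isolated from any environment, F(t) decays as the "whack" W spreads.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, site, n):
    """Embed a single-site operator at position `site` in an n-qubit chain."""
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

n = 5                                    # tiny chain: dense 32 x 32 matrices
J, g, h = 1.0, 1.05, 0.5                 # illustrative nonintegrable couplings
H = sum(-J * site_op(Z, i, n) @ site_op(Z, i + 1, n) for i in range(n - 1))
H = H + sum(-g * site_op(X, i, n) - h * site_op(Z, i, n) for i in range(n))

W = site_op(Z, 0, n)                     # "whack" one end of the chain
V = site_op(Z, n - 1, n)                 # probe the other end
d = 2 ** n

for t in np.linspace(0.0, 4.0, 9):
    U = expm(-1j * H * t)
    Wt = U.conj().T @ W @ U              # Heisenberg-picture W(t)
    F = np.trace(Wt.conj().T @ V.conj().T @ Wt @ V) / d
    print(f"t = {t:3.1f}   Re F(t) = {F.real:+.3f}")
```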


Our triumvirate identified a solution. Justin and I had discovered another characterization of quantum chaos and many-body entanglement: a quasiprobability, a quantum generalization of a probability.  

The quasiprobability contains more information about the entanglement than the out-of-time-ordered-correlator does. José simulated measurements of the quasiprobability. The quasiprobability, he found, behaves one way when the qubits entangle independently of their environment and behaves another way when the qubits leak. You can measure the quasiprobability to decide whether to trust your out-of-time-ordered-correlator measurement or to isolate your qubits better. The quasiprobability enables us to avoid false positives.

Physical Review Letters published our paper last month. Working with Justin and José deepened my appetite for translating between the abstract and the concrete, for proving abstractions as a theorist’s theorist and realizing them experimentally as a lab’s theorist. Maybe, someday, I’ll earn the tag “a theorist I can actually talk with” from an experimentalist. For now, at least I serve better than a football-team mascot.

Symmetries and quantum error correction

It’s always exciting when you can bridge two different physical concepts that seem to have nothing in common—and it’s even more thrilling when the results have as broad a range of possible fields of application as from fault-tolerant quantum computation to quantum gravity.

Physicists love to draw connections between distinct ideas, interconnecting concepts and theories to uncover new structure in the landscape of scientific knowledge. Put together information theory with quantum mechanics and you’ve opened a whole new field of quantum information theory. More recently, machine learning tools have been combined with many-body physics to find new ways to identify phases of matter, and ideas from quantum computing were applied to Pozner molecules to obtain new plausible models of how the brain might work.

In a recent contribution, my collaborators and I took a shot at combining the two physical concepts of quantum error correction and physical symmetries. What can we say about a quantum error-correcting code that conforms to a physical symmetry? Surprisingly, a continuous symmetry prevents the code from doing its job: A code can conform well to the symmetry, or it can correct against errors accurately, but it cannot do both simultaneously.

By a continuous symmetry, we mean a transformation that is characterized by a set of continuous parameters, such as angles. For instance, if I am holding an atom in my hand (more realistically, it’ll be confined in some fancy trap with lots of lasers), then I can rotate it around and about in space:

A rotation like this is fully specified by an axis and an angle, which are continuous parameters. Other transformations that we could think of are, for instance, time evolution, or a continuous family of unitary gates that we might want to apply to the system.

On the other hand, a code is a way of embedding some logical information into physical systems:

By cleverly distributing the information that we care about over several physical systems, an error-correcting code is able to successfully recover the original logical information even if the physical systems are exposed to some noise. Quantum error-correcting codes are particularly promising for quantum computing, since qubits tend to lose their information really fast (current typical ones can hold their information for a few seconds). In this way, instead of storing the actual information we care about on a single qubit, we use extra qubits which we prepare in a complicated state that is designed to protect this information from the noise.
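As the simplest concrete illustration (a sketch of my own, not a code from the paper, and far short of a full quantum code, which must also handle phase errors): the three-qubit bit-flip code spreads one logical qubit over three physical qubits, so that a single flipped qubit can be located and undone without ever reading out the logical amplitudes.

```python
# Minimal sketch (illustration only): the three-qubit bit-flip code. One
# logical qubit alpha|0> + beta|1> is spread over three physical qubits as
# alpha|000> + beta|111>; parity checks locate a single bit flip without
# revealing alpha or beta, so the logical information survives.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def basis(b0, b1, b2):
    """Computational-basis state |b0 b1 b2> as a length-8 vector."""
    e = lambda b: np.array([1 - b, b], dtype=complex)
    return kron3(e(b0), e(b1), e(b2))

alpha, beta = 0.6, 0.8
encoded = alpha * basis(0, 0, 0) + beta * basis(1, 1, 1)

corrupted = kron3(I2, X, I2) @ encoded           # bit flip on the middle qubit

# Syndrome: the parities Z0Z1 and Z1Z2 locate the flip without revealing
# (alpha, beta); here we read them off as expectation values, since the
# corrupted state is an eigenstate of both parity checks.
s01 = round(np.real(corrupted.conj() @ kron3(Z, Z, I2) @ corrupted))
s12 = round(np.real(corrupted.conj() @ kron3(I2, Z, Z) @ corrupted))
flipped = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(s01, s12)]

ops = [I2, I2, I2]
if flipped is not None:
    ops[flipped] = X                             # undo the located flip
recovered = kron3(*ops) @ corrupted

print("flipped qubit:", flipped)
print("recovered the encoded state?", np.allclose(recovered, encoded))
```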

Covariant codes for quantum computation

A code that is compatible with a physical symmetry is called covariant. This property ensures that applying a symmetry transformation to the logical information is equivalent to applying corresponding symmetry transformations to each of the physical systems.

Suppose I would like to flip my qubit from “0” to “1” and from “1” to “0”. If my information is stored in an encoded form, then in principle I first need to decode the information to uncover the original logical information, apply the flip operation, and then re-encode the new logical information back onto the physical qubits. A covariant code allows us to perform the transformation directly on the physical qubits, without having to decode the information first:
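In symbols (my schematic rendering of the statement above, with notation not taken from the paper): writing \mathcal{E} for the encoding map, \theta for the continuous symmetry parameter, U_{\rm logical}(\theta) for the symmetry acting on the logical information, and u_i(\theta) for its action on the i-th physical subsystem, covariance means

\mathcal{E}\big( U_{\rm logical}(\theta) \, \rho \, U_{\rm logical}(\theta)^\dagger \big) = \Big( \bigotimes_{i=1}^{n} u_i(\theta) \Big) \, \mathcal{E}(\rho) \, \Big( \bigotimes_{i=1}^{n} u_i(\theta) \Big)^\dagger ,

so the logical transformation can be carried out by acting on each physical subsystem separately, without decoding.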

The advantage of this scheme is that the logical information is never exposed and remains protected throughout the computation.

But here’s the catch: Eastin and Knill famously proved that an error-correcting code can be covariant with respect to at most a finite set of transformations, ruling out universal computation with transversal gates. In other words, the computations we can perform using this scheme are very limited, because we can’t perform any continuous symmetry transformation.

Interestingly, however, there’s a loophole: If we consider macroscopic systems, such as a particle with a very large value of spin, then it becomes possible again to construct codes that are covariant with respect to continuous transformations.

How is that possible, you ask? How do we transition from the microscopic regime, where covariant codes are ruled out for continuous symmetries, to the macroscopic regime, where they are allowed? We provide an answer by resorting to approximate quantum error correction. Namely, we consider the situation where the code does not have to correct each error exactly, but only has to reconstruct a good approximation of the logical information. As it turns out, there is a quantitative limit to how accurately a code can correct against errors if it is covariant with respect to a continuous symmetry, expressed schematically by the following inequality:

\epsilon \gtrsim \frac{\Delta T_{\rm logical}}{n \, \Delta T_{\rm physical}} ,

where \epsilon specifies how inaccurately the code error-corrects (\epsilon = 0 means the code can correct against errors perfectly), n is the number of physical subsystems, and \Delta T_{\rm logical} and \Delta T_{\rm physical} are measures of “how strongly” the symmetry transformation can act on the logical and physical subsystems.

Let’s try to understand the right-hand side of this equation. In physics, continuous symmetries are generated by what we call physical charges. These are physical quantities that are associated with the symmetry, and that characterize how the symmetry acts on each state of the system. For instance, the charge that corresponds to time evolution is simply energy: High-energy states have rapidly varying phases, whereas the phases of low-energy states change slowly in time. Above, we indicate by \Delta T_{\rm logical} the range of possible charge values on the logical system and by \Delta T_{\rm physical} the corresponding range of charge values on each physical subsystem. In typical settings, this range of charge values is related to the dimension of the system—the more states the system has, intuitively, the greater the range of charges it can accommodate.
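For the time-evolution example, the familiar textbook relation (included here for concreteness, not quoted from the paper) is

e^{-i \hat{H} t / \hbar} \, | E \rangle = e^{-i E t / \hbar} \, | E \rangle ,

so an energy eigenstate only accumulates a phase, which rotates quickly when E is large and slowly when E is small.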

The above equation states that the inaccuracy of the code must be larger than some value given on the right-hand side of the equation, which depends on the number of subsystems n and the ranges of charge values on the logical system and physical subsystems. The right-hand side becomes small in two regimes: if each subsystem can accommodate a large range of charge values, or if there is a large number of physical systems. In these regimes, our limitation vanishes, and we can circumvent the Eastin-Knill theorem and construct good covariant error-correcting codes. This allows us to connect the two regimes that seemed incompatible earlier, the microscopic regime where there cannot be any covariant codes, and the macroscopic regime where they are allowed.

From quantum computation to many-body physics and quantum gravity

Quantum error-correcting codes not only serve to protect information in a quantum computation against noise, but they also provide a conceptual toolbox to understand complex physical systems where a quantum state is delocalized over many physical subsystems. The tight connections between quantum error correction and many-body physics have been brought to light through a long history of pioneering research at Caltech in these fields. And as if that weren’t enough, quantum error-correcting codes were also shown to play a crucial role in understanding quantum gravity.

There is an abundance of natural physical symmetries to consider both in many-body physics and in quantum gravity, and that gives us a good reason to be excited about characterizing covariant codes. For instance, there are natural approximate quantum error correcting codes that appear in some statistical mechanical models by cleverly picking global energy eigenstates. These codes are covariant with respect to time evolution by construction, since the codewords are energy eigenstates. Now, we understand more precisely under which conditions such codes can be constructed.

Perhaps an even more illustrative example is that of time evolution in holographic quantum gravity, that is, in the AdS/CFT correspondence. This model of quantum gravity has the property that it is equivalent to a usual quantum field theory that lives on the boundary of the universe. What’s more, the correspondence which tells us how the bulk quantum gravity theory is mapped to the boundary is, in fact, a quantum error-correcting code. If we add a time axis, then the picture becomes a cylinder whose interior is the theory of quantum gravity, and whose surface represents a traditional quantum field theory:


Since the bulk theory and the boundary theory are equivalent, the action of time evolution must be faithfully represented in both pictures. But this is in apparent contradiction with the Eastin-Knill theorem, from which it follows that a quantum error-correcting code cannot be covariant with respect to a continuous symmetry. We now understand how this is, in fact, not a contradiction: As we’ve seen, codes may be covariant with respect to continuous symmetries in the presence of systems with a large number of degrees of freedom, such as a quantum field theory.

What’s next?

There are some further results in our paper that I have not touched upon in this post, including a precise approximate statement of the Eastin-Knill theorem in terms of system dimensions, and a fun machinery to construct covariant codes for more general systems such as oscillators and rotors.

We have only scratched the surface of the different applications I’ve mentioned, by studying the properties of covariant codes in general. I’m now excited to dive into more detail with our wonderful team to study deeper applications to correlations in many-body systems, global symmetries in quantum gravity, accuracy limits of quantum clocks and precision limits to quantum metrology in the presence of noise.

This has been an incredibly fun project to work on. Such a collaboration illustrates again the benefit of interacting with great scientists with a wide range of areas of expertise including representation theory, continuous variable systems, and quantum gravity. Thanks Sepehr, Victor, Grant, Fernando, Patrick, and John, for this fantastic experience.

Humans can intuit quantum physics.

One evening this January, audience members packed into a lecture hall in MIT’s physics building. Undergraduates, members of the public, faculty members, and other scholars came to watch a film premiere and a panel discussion. NOVA had produced the film, “Einstein’s Quantum Riddle,” which stars entanglement. Entanglement is a relationship between quantum systems such as electrons. Measuring two entangled electrons yields two outcomes, analogous to the numbers that face upward after you roll two dice. The quantum measurements’ outcomes can exhibit correlations stronger than any measurements of any classical, or nonquantum, systems can. Which die faces point upward can share only so much correlation, even if the dice hit each other.


Dice feature in the film’s explanations of entanglement. So does a variation on the shell game, in which one hides a ball under one of three cups, shuffles the cups, and challenges viewers to guess which cup is hiding the ball. The film derives its drama from the Cosmic Bell test. Bell tests are experiments crafted to show that classical physics can’t describe entanglement. Scientists recently enhanced Bell tests using light from quasars—ancient, bright, faraway galaxies. Mix astrophysics with quantum physics, and an edgy, pulsing soundtrack follows.

The Cosmic Bell test grew from a proposal by physicists at MIT and the University of Chicago. The coauthors include David Kaiser, a historian of science and a physicist on MIT’s faculty. Dave co-organized the premiere and the panel discussion that followed. The panel featured Dave; Paola Cappellaro, an MIT quantum experimentalist; Alan Guth, an MIT cosmologist who contributed to the Bell test; Calvin Leung, an MIT PhD student who contributed; Chris Schmidt, the film’s producer; and me. Brindha Muniappan, the Director of Education and Public Programs at the MIT Museum, moderated the discussion.

I think that the other panelists were laughing with me.

Brindha asked what challenges I face when explaining quantum physics, such as on this blog. Quantum theory wears the labels “weird,” “counterintuitive,” and “bizarre” in journalism, interviews, blogs, and films. But the thorn in my communicational side reflects quantum “weirdness” less than it reflects humanity’s self-limitation: Many people believe that we can’t grasp quantum physics. They shut down before asking me to explain.

Examples include a friend and Quantum Frontiers follower who asks, year after year, for books about quantum physics. I suggest literature—much by Dave Kaiser—he reads some, and we discuss his impressions. He’s learning, he harbors enough curiosity to have maintained this routine for years, and he has technical experience as a programmer. But he’s demurred, several times, along the lines of “But…I don’t know. I don’t think I’ll ever understand it. Humans can’t understand quantum physics, can we? It’s too weird.” 

Quantum physics defies many expectations sourced from classical physics. Classical physics governs how basketballs arch, how paint dries, how sunlight slants through your window, and other everyday experiences. Yet we can gain intuition about quantum physics. If we couldn’t, how could we solve problems and accomplish research? Physicists often begin solving problems by trying to guess the answer from intuition. We reason our way toward a guess by stripping away complications, constructing toy models, and telling stories. We tell stories about particles hopping from site to site on lattices, particles trapped in wells, and arrows flipping upward and downward. These stories don’t capture all of quantum physics, but they capture the essentials. After grasping the essentials, we translate them into math, check how far our guesses lie from truth, and correct our understanding. Intuition about quantum physics forms the compass that guides problem solving.

Growing able to construct, use, and mathematize such stories requires work. You won’t come to understand quantum theory by watching NOVA films, though films can prime you for study. You can gain a facility with quantum theory through classes, problem sets, testing, research, seminars, and further processing. You might not have the time or inclination to. Even if you have, you might not come to understand why quantum theory describes our universe: Science can’t necessarily answer all “why” questions. But you can grasp what quantum theory implies about our universe.

People grasp physics arguably more exotic than quantum theory, without exciting the disbelief excited by a grasp of quantum theory. Consider the Voyager spacecraft launched in 1977. Voyager has survived solar winds and -452º F weather, imaged planets, and entered interstellar space. Classical physics—the physics of how basketballs arch—describes much of Voyager’s experience. But even if you’ve shot baskets, how much intuition do you have about interstellar space? I know physicists who claim to have more intuition about quantum physics than about much of classical physics. When astrophysicists discuss Voyager and interstellar space, moreover, listeners don’t fret that comprehension lies beyond them. No one need fret when quantum physicists discuss the electrons in us.

Fretting might not occur to future generations: Outreach teams are introducing kids to quantum physics through games and videos. Caltech’s Institute for Quantum Information and Matter has partnered with Google to produce QCraft, a quantum variation on Minecraft, and with the University of Southern California on quantum chess. In 2017, the American Physical Society’s largest annual conference featured a session called “Gamification and other Novel Approaches in Quantum Physics Outreach.” Such outreach exposes kids to quantum terminology and concepts early. Quantum theory becomes a playground to explore, rather than a source of intimidation. Players will grow up primed to think about quantum-mechanics courses not “Will my grade-point average survive this semester?” but “Ah, so this is the math under the hood of entanglement.”


Sociology, more than physics, confines people to thinking of quantum physics as weird. But quantum theory defies classical expectations less than it could. Measurement outcomes could share correlations stronger than the correlations sourced by entanglement. How strong could the correlations grow? How else could physics depart farther from classical physics than quantum physics does? Imagine the worlds governed by all possible types of physics, called “generalized probabilistic theories” (GPTs). GPTs form a landscape in which quantum theory constitutes an island, on which classical physics constitutes a hill. Compared with the landscape’s outskirts, our quantum world looks tame.
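To make “how strong” quantitative (a toy sketch of my own, not taken from the post or the film): in the CHSH test, local classical strategies score at most 2, entangled qubits reach 2√2 (Tsirelson’s bound), and a hypothetical “PR box” elsewhere in the GPT landscape would score 4. The measurement settings below are the standard optimal ones for a singlet.

```python
# Toy sketch (mine, illustrative only): the CHSH combination of correlations.
# Classical (local hidden variable) strategies give at most 2; the entangled
# singlet with well-chosen measurement settings gives 2*sqrt(2); a PR box in
# the wider GPT landscape would give 4.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Singlet state (|01> - |10>)/sqrt(2)
psi = ((np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)).astype(complex)

def corr(A, B):
    """Correlation <psi| A (x) B |psi> of +/-1-valued observables."""
    return float(np.real(psi.conj() @ np.kron(A, B) @ psi))

A0, A1 = Z, X                           # Alice's two measurement settings
B0 = -(Z + X) / np.sqrt(2)              # Bob's two measurement settings
B1 = (X - Z) / np.sqrt(2)

chsh = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print(f"quantum CHSH score: {chsh:.3f}   (classical limit 2, Tsirelson {2*np.sqrt(2):.3f}, PR box 4)")
```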

GPTs fall under the research category of quantum foundations. Quantum foundations concerns why the math that describes quantum systems describes quantum systems, reformulations of quantum theory, how quantum theory differs from classical mechanics, how quantum theory could deviate but doesn’t, and what happens during measurements of quantum systems. Though questions about quantum foundations remain, they don’t block us from intuiting about quantum theory. A stable owner can sense when a horse has colic despite lacking a veterinary degree.

Moreover, quantum-foundations research has advanced over the past few decades. Collaborations and tools have helped: Theorists have been partnering with experimentalists, such as on the Cosmic Bell test and on studies of measurement. Information theory has engendered mathematical tools for quantifying entanglement and other quantum phenomena. Information theory has also firmed up an approach called “operationalism.” Operationalists emphasize preparation procedures, evolutions, and measurements. Focusing on actions and data concretizes arguments and facilitates comparisons with experiments. As quantum-foundations research has advanced, so have quantum information theory, quantum experiments, quantum technologies, and interdisciplinary cross-pollination. Twentieth-century quantum physicists didn’t imagine the community, perspectives, and knowledge that we’ve accrued. So don’t adopt 20th-century pessimism about understanding quantum theory. Einstein grasped much, but today’s scientific community grasps more. Richard Feynman said, “I think I can safely say that nobody understands quantum mechanics.” Feynman helped spur the quantum-information revolution; he died before its adolescence. Besides, Feynman understood plenty about quantum theory. Intuition jumps off the pages of his lecture notes and speeches.


Landscape beyond quantum theory

I’ve swum in oceans and lakes, studied how the moon generates tides, and canoed. But piloting a steamboat along the Mississippi would baffle me. I could learn, given time, instruction, and practice; so can you learn quantum theory. Don’t let “weirdness,” “bizarreness,” or “counterintuitiveness” intimidate you. Humans can intuit quantum physics.

I get knocked down…

“You’ll have to have a thick skin.”

Marcelo Gleiser, a college mentor of mine, emailed the warning. I’d sent a list of physics PhD programs and requested advice about which to attend. Marcelo’s and my department had fostered encouragement and consideration.

Suit up, Marcelo was saying.

Criticism fuels science, as Oxford physicist David Deutsch has written. We have choices about how we criticize. Some criticism styles reflect consideration for the criticized work’s creator. Tufts University philosopher Daniel Dennett has devised guidelines for “criticizing with kindness”:1

1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).

3. You should mention anything you have learned from your target.

4. Only then are you permitted to say so much as a word of rebuttal or criticism.

Scientists skip to step four often—when refereeing papers submitted to journals, when posing questions during seminars, when emailing collaborators, when colleagues sketch ideas at a blackboard. Why? Listening and criticizing require time, thought, and effort—three of a scientist’s most valuable resources. Should any scientist spend those resources on an idea of mine, s/he deserves my gratitude. Spending empathy atop time, thought, and effort can feel supererogatory. Nor do all scientists prioritize empathy and kindness. Others of us prioritize empathy but—as I have over the past five years—have grown so used to its latency that we forget to demonstrate it.

Doing science requires facing not only criticism, but also “That doesn’t make sense,” “Who cares?” “Of course not,” and other morale boosters.

Doing science requires resilience.


So do measurements of quantum information (QI) scrambling. Scrambling is a subtle, late, quantum stage of equilibration2 in many-body systems. Example systems include chains of spins,3 such as in ultracold atoms, that interact with each other strongly. Exotic examples include black holes in anti-de Sitter space.4

Imagine whacking one side of a chain of interacting spins. Information about the whack will disseminate throughout the chain via entanglement.5 After a long interval (the scrambling time, t_*), spins across the system will share many-body entanglement. No measurement of any few, close-together spins can disclose much about the whack. Information will have scrambled across the system.

QI scrambling has the subtlety of an assassin treading a Persian carpet at midnight. Can we observe scrambling?


A Stanford team proposed a scheme for detecting scrambling using interferometry.6 Justin Dressel, Brian Swingle, and I proposed a scheme based on weak measurements, which refrain from disturbing the measured system much. Other teams have proposed alternatives.

Many schemes rely on effective time reversal: The experimentalist must perform the quantum analog of inverting particles’ momenta. One must negate the Hamiltonian \hat{H}, the observable that governs how the system evolves: \hat{H} \mapsto - \hat{H}.

At least, the experimentalist must try. The experimentalist will likely map \hat{H} to - \hat{H} + \varepsilon. The small error \varepsilon could wreak havoc: QI scrambling relates to chaos, exemplified by the butterfly effect. Tiny perturbations, such as the flap of a butterfly’s wings, can snowball in chaotic systems, as by generating tornadoes. Will the \varepsilon snowball, obscuring observations of scrambling?
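Here is a crude toy model of that worry (my own sketch, not the protocol analyzed in the paper): compute the correlator once with perfect reversal and once with backward evolution generated by -( \hat{H} + \varepsilon \, \delta H ) for a random static \delta H, and compare. All couplings and the error model are illustrative assumptions.

```python
# Crude toy model (mine): how an imperfect "time reversal" distorts an OTOC
# estimate. Forward evolution uses H; the faulty backward evolution uses
# -(H + eps*dH) with a small random static error dH.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, site, n):
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

n, J, g, h = 5, 1.0, 1.05, 0.5
H = sum(-J * site_op(Z, i, n) @ site_op(Z, i + 1, n) for i in range(n - 1))
H = H + sum(-g * site_op(X, i, n) - h * site_op(Z, i, n) for i in range(n))

rng = np.random.default_rng(0)
dH = sum(rng.normal() * site_op(Z, i, n) for i in range(n))   # random static error
eps = 0.05

W, V, d = site_op(Z, 0, n), site_op(Z, n - 1, n), 2 ** n

def otoc(Wt):
    """Infinite-temperature OTOC built from a (possibly faulty) W(t)."""
    return float(np.real(np.trace(Wt.conj().T @ V.conj().T @ Wt @ V) / d))

for t in np.linspace(0.0, 4.0, 9):
    U = expm(-1j * H * t)                       # forward evolution
    back_ideal = U.conj().T                     # perfect reversal, exp(+iHt)
    back_err = expm(+1j * (H + eps * dH) * t)   # faulty reversal
    print(f"t = {t:3.1f}   ideal F = {otoc(back_ideal @ W @ U):+.3f}"
          f"   faulty F = {otoc(back_err @ W @ U):+.3f}")
```

Whether such errors wash out the signal, and how much of them can be divided out, is exactly what the next paragraphs address.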


It needn’t, Brian and I wrote in a recent paper. You can divide out much of the error until t_*.

You can detect scrambling by measuring an out-of-time-ordered correlator (OTOC), an object I’ve effused about elsewhere. Let’s denote the time-t correlator by F(t). You can infer an approximation \tilde{F}(t) to F(t) upon implementing an \varepsilon-ridden interferometry or weak-measurement protocol. Remove some steps from that protocol, Brian and I say. Infer a simpler, easier-to-measure object \tilde{F}_{\rm simple}(t). Divide the two measurement outcomes to approximate the OTOC:

F(t)  \approx \frac{ \tilde{F}(t) }{ \tilde{F}_{\rm simple}(t) }.

OTOC measurements exhibit resilience to error.


Physicists need resilience. Brian criticizes with such grace, he could serve as the poster child for Daniel Dennett’s guidelines. But not every scientist could. How can we withstand kindness-lite criticism?

By drawing confidence from what we’ve achieved, with help from mentors like Marcelo. I couldn’t tell what about me—if anything—could serve as a rock on which to plant a foot, as an undergrad. Mentors identified what I had too little experience to appreciate. You question what you don’t understand, they said. You assimilate perspectives from textbooks, lectures, practice problems, and past experiences. You scrutinize details while keeping an eye on the big picture. So don’t let so-and-so intimidate you.

I still lack my mentors’ experience, but I’ve imbibed a drop of their insight. I savor calculations that I nail, congratulate myself upon nullifying referees’ concerns, and celebrate the theorems I prove.

I’ve also created an email folder entitled “Nice messages.” In go “I loved your new paper; combining those topics was creative,” “Well done on the seminar; I’m now thinking of exploring that field,” and other rarities. The folder affords an umbrella when physics clouds gather.

Finally, I try to express appreciation of others’ work.7 Science thrives on criticism, but scientists do science. And scientists are human—undergrads, postdocs, senior researchers, and everyone else.

Doing science—and attempting to negate Hamiltonians—we get knocked down. But we can get up again.


Around the time Brian and I released “Resilience,” two other groups proposed related renormalizations. Check out their schemes here and here.

1Thanks to Sean Carroll for alerting me to this gem of Dennett’s.

2A system equilibrates as its large-scale properties, like energy, flatline.

3Angular-momentum-like quantum properties

4Certain space-times different from ours

5Correlations, shareable by quantum systems, stronger than any achievable by classical systems

6The cancellation (as by a crest of one wave and a trough of another) of components of a quantum state, or the addition of components (as two waves’ crests)

7Appreciation of specific qualities. “Nice job” can reflect a speaker’s belief but often reflects a desire to buoy a receiver whose work has few merits to elaborate on. I applaud that desire and recommend reinvesting it. “Nice job” carries little content, which evaporates under repetition. Specificity provides content: “Your idea is alluringly simple but could reverberate across multiple fields” has gristle.

The Quantum Wave in Computing

Summer is a great time for academics. Imagine: three full months off! Hit the beach. Tune that golf pitch. Hike the Sierras. Go on a cruise. Watch soccer with the brasileños (there have been better years for that one). Catch the sunset by the Sydney opera house. Take a nap.


A visiting researcher taking full advantage of the Simons Institute’s world-class relaxation facilities. And yes, I bet you he’s proving a theorem at the same time.

Think that’s outrageous? We have it even better. Not only do we get to travel the globe worry-free, but we prove theorems while doing it. For some of us summer is the only time of year when we manage to prove theorems. Ideas accumulate during the year, blossom during the conferences and workshops that mark the start of the summer, and hatch during the few weeks that many of us set aside as “quiet time” to finally “wrap things up”.

I recently had the pleasure of contributing to the general well-being of my academic colleagues by helping to co-organize (with Andrew Childs, Ignacio Cirac, and Umesh Vazirani) a 2-month long program on “Challenges in Quantum Computation” at the Simons Institute in Berkeley. In this post I report on the program and describe one of the highlights discussed during it: Mahadev’s very recent breakthrough on classical verification of quantum computation.

Challenges in Quantum Computation

The Simons Institute has been in place on the UC Berkeley campus since the Fall of 2013, and in fact one of their first programs was on “Quantum Hamiltonian Complexity”, in Spring 2014 (see my account of one of the semester’s workshops here). Since then the institute has been hosting a pair of semester-long programs at a time, in all areas of theoretical computer science and neighboring fields. Our “summer cluster” had a slightly different flavor: shorter, smaller, it doubled up as the prelude to a full semester-long program scheduled for Spring 2020 (provisional title: The Quantum Wave in Computing, a title inspired by Umesh Vazirani’s recent tutorial at STOC’18 in Los Angeles) — (my interpretation of) the idea being that the ongoing surge in experimental capabilities supports a much broader overhaul of some of the central questions of computer science, from the more applied (such as programming languages and compilers) to the most theoretical (such as which complexity classes play the most central role).

This summer’s program hosted a couple dozen participants at a time. Some stayed for the full 2 months, while others visited for shorter times. The Simons Institute is a fantastic place for collaborative research. The three-story building is entirely devoted to us. There are pleasant yet not-too-comfortable shared offices, but the highlight is the two large communal rooms meant for organized and spontaneous discussion. Filled with whiteboards, bright daylight, comfy couches, a constant supply of tea, coffee, and cookies, and eager theorists!

After a couple weeks of settling down the program kicked off with an invigorating workshop. Our goal for the workshop was to frame the theoretical questions raised by the sudden jump in the capabilities of experimental quantum devices that we are all witnessing. There were talks describing progress in experiments (superconducting qubits, ion traps, and cold atoms were represented), suggesting applications for the new devices (from quantum simulation & quantum chemistry to quantum optimization and machine learning through “quantum supremacy” and randomness generation), and laying the theoretical framework for trustworthy interaction with the quantum devices (interactive proofs, testing, and verifiable delegation). We had an outstanding line-up of speakers. All talks (except the panel discussions, unfortunately) were recorded, and you can watch them here.

The workshop was followed by five additional weeks of “residency”, that allowed long-term participants to digest and develop the ideas presented during the workshop. In my experience these few additional weeks, right after the workshop, make all the difference. It is the same difference as between a quick conference call and a leisurely afternoon at the whiteboard: while the former may feel productive and bring the adrenaline levels up, the latter is more suited to in-depth exploration and unexpected discoveries.

There would be much to say about the ideas discussed during the workshop and following weeks. I will describe a single one of these ideas — in my opinion, one of the most outstanding ideas to have emerged at the interface of quantum computing and theoretical computer science in recent years! The result, “Classical Verification of Quantum Computations”, is by Urmila Mahadev, a Ph.D. student at UC Berkeley (I think she just graduated). Urmila gave a wonderful talk on her result at the workshop, and I highly recommend watching the recorded video. In the remainder of this post I’ll provide an overview of the result. I also wrote a slightly more technical introduction that eager readers will find here.

A cryptographic leash on quantum systems

Mahadev’s result is already famous: announced on the blog of Scott Aaronson, it has earned her a long-standing $25 prize, awarded for “solving the problem of proving the results of an arbitrary quantum computation to a classical skeptic”. Or, in complexity-theoretic lingo, for showing that “every language in the class BQP admits an interactive protocol where the prover is in BQP and the verifier is in BPP”. What does this mean?

Verifying quantum computations in the high complexity regime

On his blog Scott Aaronson traces the question back to a talk given by Daniel Gottesman in 2004. An eloquent formulation appears in a subsequent paper by Dorit Aharonov and Umesh Vazirani, aptly titled “Is Quantum Mechanics Falsifiable? A computational perspective on the foundations of Quantum Mechanics”.

Here is the problem. As readers of this blog are well aware, Feynman’s idea of a quantum computer, and the subsequent formalization by Bernstein and Vazirani of the Quantum Turing Machine, laid the theoretical foundation for the construction of computing devices whose inner functioning is based on the laws of quantum physics. Most readers also probably realize that we currently believe that these quantum devices will have the ability to efficiently solve computational problems (the class of which is denoted BQP) that are thought to be beyond the reach of classical computers (represented by the class BPP). A prominent example is factoring, but there are many others. The most elementary example is arguably Feynman’s original proposal: a quantum computer can be used to simulate the evolution of any quantum mechanical system “in real time”. In contrast, the best classical simulations available can take exponential time to converge even on concrete examples of practical interest. This places a computational impediment in the way of scientific progress: the work of many physicists, chemists, and biologists would be greatly sped up if only they could perform simulations at will.

So this hypothetical quantum device claims (or will likely claim) that it has the ability to efficiently solve computational problems for which there is no known efficient classical algorithm. Not only this but, as is widely believed in complexity-theoretic circles (a belief recently strengthened by the proof of an oracle separation between BQP and PH by Tal and Raz), for some of these problems, even given the answer, there does not exist a classical proof that the answer is correct. The quantum device’s claim cannot be verified! This seems to place the future of science at the mercy of an ingenious charlatan with good enough design & marketing skills, who would convince us that it is providing the solution to exponentially complex problems by throwing stardust in our eyes. (Wait, did this happen already?)

Today is the most exciting time in quantum computing since the discovery of Shor’s algorithm for factoring: while we’re not quite ready to run that particular algorithm yet, experimental capabilities have ramped up to the point where we are just about to probe the “high-complexity” regime of quantum mechanics, by making predictions that cannot be emulated, or even verified, using the most powerful classical supercomputers available. What confidence will we have that the predictions have been obtained correctly? Note that this question is different from the question of testing the validity of the theory of quantum mechanics itself. The result presented here assumes the validity of quantum mechanics. What it offers is a method to test, assuming the correctness of quantum mechanics, that a device performs the calculation that it claims to have performed. If the device has supra-quantum powers, all bets are off. Even assuming the correctness of quantum mechanics, however, the device may, intentionally or not (e.g. due to faulty hardware), mislead the experimentalist. This is the scenario that Mahadev’s result aims to counter.

Interactive proofs

The first key idea is to use the power of interaction. The situation can be framed as follows: given a certain computation, such that a device (henceforth called “prover”) has the ability to perform the computation, but another entity, the classical physicist (henceforth called “verifier”) does not, is there a way for the verifier to extract the right answer from the prover with high confidence — given that the prover may not be trusted, and may attempt to use its superior computing power to mislead the verifier instead of performing the required computation?

The simplest scenario would be one where the verifier can execute the computation herself, and check the prover’s outcome. The second simplest scenario is one where the verifier cannot execute the computation, but there is a short proof that the prover can provide that allows her to fully certify the outcome. These two scenarios correspond to problems in BPP and NP respectively; an example of the latter is factoring. As argued earlier, not all quantum computations (BQP) are believed to fall within these two classes. Both direct computation and proof verification are ruled out. What can we do? Use interaction!

The framework of interactive proofs originates in complexity theory in the 1980s. An interactive proof is a protocol through which a verifier (typically a computationally bounded entity, such as the physicist and her classical laptop) interacts with a more powerful, but generally untrusted, prover (such as the experimental quantum device). The goal of the protocol is for the verifier to certify the validity of a certain computational statement.

Here is a classical example (the expert — or impatient — reader may safely skip this). The example is for a problem that lies in co-NP, but is not believed to lie in NP. Suppose that both the verifier and prover have access to two graphs, {G} and {H}, such that the verifier wishes to raise an “ACCEPT” flag if and only if the two graphs are not isomorphic. In general this is a hard decision to make, because the verifier would have to check all possible mappings from one graph to the other, of which there are exponentially many. Here is how the verifier can extract the correct answer by interacting with a powerful, untrusted prover. First, the verifier flips a fair coin. If the coin comes up heads, she selects a random relabeling of the vertices of {G}. If the coin comes up tails, she selects a random relabeling of the vertices of {H}. The verifier then sends the relabeled graph to the prover, and asks the prover to guess which graph the verifier has hidden. If the prover provides the correct answer (easy to check), the verifier concludes that the graphs were not isomorphic. Otherwise, she concludes that they were isomorphic. It is not hard to see that, if the graphs are indeed not isomorphic, the prover always has a means to correctly identify the hidden graph, and convince the verifier to make the right decision. But if the graphs are isomorphic, then there is no way for the prover to distinguish the random relabelings (since the distributions obtained by randomly relabeling each graph are identical), and so the verifier makes the right decision with probability 1/2. Repeating the protocol a few times, with a different choice of relabeling each time, quickly drives the probability of making an error to {0}.
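Below is a small simulation of that exchange (a sketch of my own, not from the post, with brute-force graph handling that only makes sense at toy sizes): the honest prover names the hidden graph whenever the two graphs are not isomorphic, and is reduced to guessing when they are.

```python
# Toy simulation (mine, illustrative only) of the graph non-isomorphism
# interactive proof: the verifier secretly relabels one of the two graphs and
# the prover must say which one it was. Everything is brute force, so keep the
# graphs tiny.
import itertools
import random

def relabel(edges, perm):
    """Apply a vertex permutation to an edge set."""
    return frozenset(frozenset((perm[u], perm[v])) for u, v in edges)

def isomorphic(edges1, edges2, n):
    """Brute-force isomorphism check: the prover's unbounded power, in miniature."""
    target = frozenset(frozenset(e) for e in edges2)
    return any(relabel(edges1, p) == target for p in itertools.permutations(range(n)))

def run_protocol(G, H, n, rounds=20):
    """Return True if the verifier accepts the claim 'G and H are NOT isomorphic'."""
    for _ in range(rounds):
        secret = random.choice([0, 1])              # verifier's coin flip
        perm = list(range(n))
        random.shuffle(perm)
        challenge = relabel([G, H][secret], perm)   # relabeled hidden graph
        looks_like_G = isomorphic(G, challenge, n)
        looks_like_H = isomorphic(H, challenge, n)
        if looks_like_G and looks_like_H:
            guess = random.choice([0, 1])           # isomorphic graphs: prover can only guess
        else:
            guess = 0 if looks_like_G else 1        # honest prover names the hidden graph
        if guess != secret:
            return False                            # a wrong answer: verifier concludes "isomorphic"
    return True

n = 4
path = [(0, 1), (1, 2), (2, 3)]                     # path on 4 vertices
star = [(0, 1), (0, 2), (0, 3)]                     # star: not isomorphic to the path
cycle_a = [(0, 1), (1, 2), (2, 3), (3, 0)]
cycle_b = [(1, 2), (2, 0), (0, 3), (3, 1)]          # the same 4-cycle, relabeled

print("path vs star (not isomorphic):", run_protocol(path, star, n))
print("two 4-cycles (isomorphic):    ", run_protocol(cycle_a, cycle_b, n))
```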

A deep result from the 1990s exactly characterizes the class of computational problems (languages) that a classical polynomial-time verifier can decide in this model: IP = PSPACE. In words, any problem whose solution can be found in polynomial space has an interactive proof in which the verifier only needs polynomial time. Now observe that PSPACE contains NP, and much more: in fact PSPACE contains BQP as well (and even QMA)! (See this nice recent article in Quanta for a gentle introduction to these complexity classes, and more.) Thus any problem that can be decided on a quantum computer can also be decided without a quantum computer, by interacting with a powerful entity, the prover, that can convince the verifier of the right answer without being able to lead her into error (in spite of the prover’s greater power).

Are we not done? We’re not! The problem is that the result PSPACE = IP, even when specialized to BQP, requires (for all we know) a prover whose power matches that of PSPACE (almost: see e.g. this recent result for a slightly more efficient prover). And as much as our experimental quantum device inches towards the power of BQP, we certainly wouldn’t dare ask it to perform a PSPACE-hard computation. So even though in principle there do exist interactive proofs for BQP-complete languages, these interactive proofs require a prover whose computational power goes much beyond what we believe is physically achievable. But that’s useless (for us): back to square zero.

Interactive proofs with quantum provers

Prior to Mahadev’s result, a sequence of beautiful results in the late 2000s introduced a clever extension of the model of interactive proofs by allowing the verifier to make use of a very limited quantum computer. For example, the verifier may have the ability to prepare single qubits in two possible bases of her choice, one qubit at a time, and send them to the prover. Or the verifier may have the ability to receive single qubits from the prover, one at a time, and measure them in one of two bases of her choice. In both cases it was shown that the verifier could combine such limited quantum capacity with the possibility to interact with a quantum polynomial-time prover to verify arbitrary polynomial-time quantum computations. The idea for the protocols crucially relied on the ability of the verifier to prepare qubits in a way that any deviation by the prover from the prescribed honest behavior would be detected (e.g. by encoding information in mutually unbiased bases unknown to the prover). For a decade the question remained open: can a completely classical verifier certify the computation performed by a quantum prover?

Mahadev’s result brings a positive resolution to this question. Mahadev describes a protocol with the following properties. First, as expected, for any quantum computation, there is a quantum prover that will convince the classical verifier of the right outcome for the computation. This property is called completeness of the protocol. Second, no prover can convince the classical verifier to accept a wrong outcome. This property is called soundness of the protocol. In Mahadev’s result the latter property comes with a twist: soundness holds provided the prover cannot break post-quantum cryptography. In contrast, the earlier results mentioned in the previous paragraph obtained protocols that were sound against an arbitrarily powerful prover. The additional cryptographic assumption gives Mahadev’s result a “win-win” flavor: either the protocol is sound, or someone in the quantum cloud has figured out how to break an increasingly standard cryptographic assumption (namely, post-quantum security of the Learning With Errors problem) — in all cases, a verified quantum feat!

In the remainder of the post I will give a high-level overview of Mahadev’s protocol and its analysis. For more detail, see the accompanying blog post.

The protocol is constructed in two steps. The first step builds on insights from works preceding this one. This step reduces the problem of verifying the outcome of an arbitrary quantum computation to a seemingly much simpler problem, that nevertheless encapsulates all the subtlety of the verification task. The problem is the following — in keeping with the terminology employed by Mahadev, I’ll call it the qubit commitment problem. Suppose that a prover claims to have prepared a single-qubit state of its choice; call it {| \psi \rangle} ({| \psi \rangle} is not known to the verifier). Suppose the verifier challenges the prover for the outcome of a measurement performed on {| \psi \rangle}, either in the computational basis (the eigenbasis of the Pauli Z), or in the Hadamard basis (the eigenbasis of the Pauli X). Which basis to use is the verifier’s choice, but of course only one basis can be asked. Does there exist a protocol that guarantees that, at the end of the protocol, the verifier will be able to produce a bit that matches the true outcome of a measurement of {| \psi \rangle} in the chosen basis? (More precisely, it should be that the verifier’s final bit has the same distribution as the outcome of a measurement of {| \psi \rangle} in the chosen basis.)

The reduction that accomplishes this first step combines Kitaev’s circuit-to-Hamiltonian construction with some gadgetry from perturbation theory, and I will not describe it here. An important property of the reduction is that it is ultimately sufficient for the verifier to have the guarantee that the measurement outcomes she obtains in either case, computational or Hadamard, are consistent with the correct measurements performed on some quantum state. In principle the state does not need to be related to anything the prover does (though of course the analysis will eventually define that state from the prover); it only needs to exist. Specifically, we wish to rule out situations where e.g. the prover claims that both outcomes are deterministically “0”, a claim that would violate the uncertainty principle. (For the sake of the argument, let’s ignore that in the case of a single qubit the space of outcomes allowed by quantum mechanics can be explicitly mapped out — in the actual protocol, the prover commits to {n} qubits, not just one.)

Committing to a qubit

The second step of the protocol construction introduces a key idea. In order to accomplish the sought-after commitment, the verifier is going to engage in an initial commitment phase with the prover. In this phase, the prover is required to provide classical information to the verifier that “commits” it to a specific qubit. This committed qubit is the state on which the prover will later perform the measurement asked by the verifier. The classical information obtained in the commitment phase will bind the prover to reporting the correct outcome, for either of the verifier’s basis choices — or risk being caught cheating.

How does this work? Commitments to bits, or even qubits, are an old story in cryptography. The standard method for committing to a bit {b} is based on the use of a one-way permutation {f}, together with a hardcore predicate {h} for {f} (i.e. an efficiently computable function {h: \{0,1\}^n\rightarrow \{0,1\}} such that given {f(x)}, it is hard to predict {h(x)}). The construction goes as follows. The committer selects a uniformly random string {r} and sends {(y,m)=(f(r),h(r)\oplus b)}. To unveil the commitment {b}, it is enough to reveal a string {r} such that {f(r)=y}; the receiver can then compute {h(r)} and decode {b=h(r)\oplus m}. The point is that since {f} is a permutation, the value {y} uniquely “commits” the sender to an {r}, and thus to a {b}; however, given {y=f(r)} for an unknown {r} the hardcore predicate {h(r)} looks uniformly random, thus {(y,m)} reveals no information about {b} to the receiver.
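To make the classical construction concrete, here is a minimal Python sketch of this commitment scheme, with toy stand-ins for {f} and {h}: the placeholder {f} below is a permutation but is trivially invertible, and {h} is just a parity bit, so the sketch illustrates the mechanics only, not the security.

import secrets

N = 16  # toy domain {0,1}^N; far too small (and f far too weak) for real security

def f(x: int) -> int:
    # Toy stand-in for a one-way permutation on N-bit strings
    # (multiplication by an odd constant mod 2^N is a permutation,
    # but a real scheme needs a genuinely hard-to-invert one).
    return (x * 40503 + 12345) % (1 << N)

def h(x: int) -> int:
    # Toy stand-in for a hardcore predicate: the parity of x's bits.
    return bin(x).count("1") % 2

def commit(b: int):
    r = secrets.randbelow(1 << N)      # committer's uniformly random string r
    return (f(r), h(r) ^ b), r         # send (y, m) = (f(r), h(r) XOR b); keep r

def unveil(commitment, r):
    y, m = commitment
    assert f(r) == y                   # receiver checks the opening
    return h(r) ^ m                    # and decodes b = h(r) XOR m

commitment, r = commit(1)
print(unveil(commitment, r))           # prints 1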

What is new in Mahadev’s scheme is not only that the commitment is to a qubit instead of a bit, but even more importantly that the commitment is provided by classical information, which is necessary to obtain a classical protocol. (Commitments to qubits, using qubits, can be obtained by combining the quantum one-time pad with the commitment scheme described above.) To explain how this is achieved we’ll need a slightly more advanced cryptographic primitive: a pair of injective trapdoor one-way functions {f_0,f_1:\{0,1\}^n\rightarrow\{0,1\}^n}. This means that it is easy to evaluate both functions on any input, but that given a value {y} in their common range, it is hard to find a preimage of {y} under either function — unless one is given the trapdoor information. (Note that this is an over-simplification of the actual primitive used by Mahadev, which has additional properties, including that of being “claw-free”.)

The commitment phase of the protocol works as follows. Starting from a state {| \psi \rangle=\alpha| 0 \rangle+\beta| 1 \rangle} of its choice, the prover is supposed to perform the following steps. First, the prover creates a uniform superposition over the common domain of {f_0} and {f_1}. Then it evaluates either {f_0} or {f_1} into an additional register, controlled on the qubit {| \psi \rangle}. Finally, the prover measures the register that contains the image of {f_0} or {f_1}. This achieves the following sequence of transformations:

\displaystyle \begin{array}{rcl} \alpha| 0 \rangle+\beta| 1 \rangle &\mapsto& \big(\alpha| 0 \rangle + \beta| 1 \rangle\big) \otimes \Big(2^{-n/2} \sum_{x\in\{0,1\}^n} | x \rangle\Big) \\ &\mapsto & 2^{-n/2} \sum_x \big(\alpha | 0 \rangle| x \rangle| f_0(x) \rangle + \beta | 1 \rangle| x \rangle| f_1(x) \rangle\big)\\ &\mapsto & \big(\alpha| 0 \rangle| x_0 \rangle+\beta| 1 \rangle| x_1 \rangle\big)| y \rangle\;, \end{array}

where {y\in\{0,1\}^n} is the measured image and {x_0,x_1} are its unique preimages under {f_0} and {f_1} respectively. The string {y} is the prover’s commitment string, which it is required to report to the verifier.
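Here is a toy numpy simulation of this commitment phase for a tiny register (n = 3), with random permutations standing in for {f_0} and {f_1}; real functions would of course have to be one-way and claw-free, and here the “trapdoor” is simply the lookup table itself.

import numpy as np

n = 3                                   # toy register size; real protocols use cryptographic sizes
N = 1 << n
rng = np.random.default_rng(0)

# Toy injective functions f_0, f_1 : {0,1}^n -> {0,1}^n, stored as lookup tables.
# Knowing the tables plays the role of the trapdoor: it lets the verifier invert them.
f0 = rng.permutation(N)
f1 = rng.permutation(N)

alpha, beta = np.sqrt(0.2), np.sqrt(0.8)   # the prover's qubit |psi> = alpha|0> + beta|1>

# Amplitudes over (bit b, preimage x, image register) after the first two steps:
# (alpha|0> + beta|1>) tensor a uniform superposition over x, with f_b(x) written alongside.
amp = np.zeros((2, N, N), dtype=complex)
for x in range(N):
    amp[0, x, f0[x]] = alpha / np.sqrt(N)
    amp[1, x, f1[x]] = beta / np.sqrt(N)

# Measure the image register: each y occurs with the total weight it carries.
p_y = np.sum(np.abs(amp) ** 2, axis=(0, 1))
y = rng.choice(N, p=p_y)

# Post-measurement state on the first two registers: alpha|0>|x_0> + beta|1>|x_1>.
post = amp[:, :, y]
post /= np.linalg.norm(post)

# With the trapdoor (the tables), the verifier recovers both preimages of y.
x0 = int(np.flatnonzero(f0 == y)[0])
x1 = int(np.flatnonzero(f1 == y)[0])
print("commitment string y =", y, "with preimages (x0, x1) =", (x0, x1))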

In what sense is {y} a commitment to the state {| \psi \rangle}? The key point is that, once it has measured {y}, the prover has “lost control” over its qubit — it has effectively handed over that control to the verifier. For example, the prover no longer has the ability to perform an arbitrary rotation on its qubit. Why? The prover knows {y} (it had to report it to the verifier) but not {x_0} and {x_1} (this is the claw-free assumption on the pair {(f_0,f_1)}). What this means — though of course it has to be shown — is that the prover can no longer recover the state {| \psi \rangle}! It does not have the ability to “uncompute” {x_0} and {x_1}. Thus the qubit has been “set in cryptographic stone”. In contrast, the verifier can use the trapdoor information to recover {x_0} and {x_1}. This gives her extra leverage on the prover. This asymmetry, introduced by cryptography, is what eventually allows the verifier to extract a truthful measurement outcome from the prover (or detect lying).

It is such a wonderful idea! It stuns me every time Urmila explains it. Proving it is of course rather delicate. In this post I make an attempt at going into the idea in a little more depth. The best resource remains Urmila’s paper, as well as her talk at the Simons Institute.

Open questions

What is great about this result is not that it closes a decade-old open question, but that by introducing a truly novel idea it opens up a whole new field of investigation. Some of the ideas that led to the result were already fleshed out by Mahadev in her work on homomorphic encryption for quantum circuits, and I expect many more results to continue building on these ideas.

An obvious outstanding question is whether the cryptography is needed at all: could there be a scheme achieving the same result as Mahadev’s, but without computational assumptions on the prover? It is known that if such a scheme exists, it is unlikely to have the property of being blind, meaning that the prover learns nothing about the computation that the verifier wishes it to execute (aside from an upper bound on its length); see this paper for “implausibility” results in this direction. Mahadev’s protocol relies on “post-hoc” verification, and is not blind. Urmila points out that it is likely the protocol could be made blind by composing it with her protocol for homomorphic encryption. Could there be a different protocol, that would not go through post-hoc verification, but instead directly guide the prover through the evaluation of a universal circuit on an encrypted input, gate by gate, as did some previous works?


So long, and thanks for all the Fourier transforms

The air conditioning in Caltech’s Annenberg Center for Information Science and Technology broke this July. Pasadena reached 87°F on the fourth, but my office missed the memo. The thermostat read 62°F.

Hyperactive air conditioning suits a thermodynamicist’s office as jittery wifi suits an electrical-engineering building. Thermodynamicists call air conditioners “heat pumps.” A heat pump funnels heat—the energy of random motion—from cooler bodies to hotter. Heat flows spontaneously only from hot to cold on average, according to the Second Law of Thermodynamics. Pumping heat against its inclination costs work, organized energy drawn from a reliable source.

Reliable sources include batteries, coiled springs, and ACME anvils hoisted into the air. Batteries have chemical energy that powers electric fans. ACME anvils have gravitational potential energy that splats coyotes.

Thermostat

I hoisted binder after binder onto my desk this July. The binders felt like understudies for ACME anvils, bulging with papers. They contained notes I’d written, and articles I’d read, for research throughout the past five years. My Caltech sojourn was switching off its lights and drawing its shutters. A control theorist was inheriting my desk. I had to move my possessions to an office downstairs, where I’d moonlight until quitting town.

Quitting town.

I hadn’t expected to feel at home in southern California, after stints in New and old England. But research and researchers drew me to California and then hooked me. Caltech’s Institute for Quantum Information and Matter (IQIM) has provided an intellectual home, colleagues-cum-friends, and a base from which to branch out to other scholars and institutions.

The IQIM has provided also the liberty to deck out my research program as a college dorm room with posters—according to my tastes, values, and exuberances. My thesis demanded the title “Quantum steampunk: Quantum information, thermodynamics, their intersection, and applications thereof across physics.” I began developing the concept of quantum steampunk on this blog. Writing a manifesto for the concept, in the thesis’s introduction, proved a delight:

The steampunk movement has invaded literature, film, and art over the past three decades. Futuristic technologies mingle, in steampunk works, with Victorian and wild-west settings. Top hats, nascent factories, and grimy cities counterbalance time machines, airships, and automata. The genre arguably originated in 1895, with the H.G. Wells novel The Time Machine. Recent steampunk books include the best-selling The Invention of Hugo Cabret; films include the major motion picture Wild Wild West; and artwork ranges from painting to jewelry to sculpture.

Steampunk captures the romanticism of fusing the old with the cutting-edge. Technologies proliferated during the Victorian era: locomotives, Charles Babbage’s analytical engine, factories, and more. Innovation facilitated exploration. Add time machines, and the spirit of adventure sweeps you away. Little wonder that fans flock to steampunk conventions, decked out in overcoats, cravats, and goggles.

What steampunk fans dream, quantum-information thermodynamicists live.

Thermodynamics budded during the 1800s, when steam engines drove the Industrial Revolution. Sadi Carnot, Ludwig Boltzmann, and other thinkers wondered how efficiently engines could operate. Their practical questions led to fundamental insights—about why time flows; how much one can know about a physical system; and how simple macroscopic properties, like temperature, can capture complex behaviors, like collisions by steam particles. An idealization of steam—the classical ideal gas—exemplifies the conventional thermodynamic system. Such systems contain many particles, behave classically, and are often assumed to remain in equilibrium.

But thermodynamic concepts—such as heat, work, and equilibrium—characterize small scales, quantum systems, and out-of-equilibrium processes. Today’s experimentalists probe these settings, stretching single DNA strands with optical tweezers [4], cooling superconducting qubits to build quantum computers [5, 6], and extracting work from single-electron boxes [7]. These settings demand reconciliation with 19th-century thermodynamics. We need a toolkit for fusing the old with the new.

Quantum information (QI) theory provides such a toolkit. Quantum phenomena serve as resources for processing information in ways impossible with classical systems. Quantum computers can solve certain computationally difficult problems quickly; quantum teleportation transmits information as telephones cannot; quantum cryptography secures messages; and quantum metrology centers on high-precision measurements. These applications rely on entanglement (strong correlations between quantum systems), disturbances by measurements, quantum uncertainty, and discreteness.

Technological promise has driven fundamental insights, as in thermodynamics. QI theory has blossomed into a mathematical toolkit that includes entropies, uncertainty relations, and resource theories. These tools are reshaping fundamental science, in applications across physics, computer science, and chemistry.

QI is being used to update thermodynamics, in the field of quantum thermodynamics (QT) [8, 9]. QT features entropies suited to small scales; quantum engines; the roles of coherence in thermalization and transport; and the transduction of information into work, à la Maxwell’s demon [10].

This thesis (i) contributes to the theory of QI thermodynamics and (ii) applies the theory, as a toolkit, across physics. Spheres touched on include atomic, molecular, and optical (AMO) physics; nonequilibrium statistical mechanics; condensed matter; chemistry; and high-energy physics. I propose the name quantum steampunk for this program…

Never did I anticipate, in college, that a PhD could reflect my identity and style. I feared losing myself and my perspective in a subproblem of a subproblem of a subproblem. But I found myself blessed with the chance to name the aesthetic that’s guided my work, the scent I’ve unconsciously followed from book to class to research project to conversation, to paper, since…middle school, come to think of it. I’m grateful for that opportunity.

Q. steampunk

Whump, went my quantum-engine binder on my desk. I’d stuck an address label, pointing to Annenberg, to the binder. If the binder walked away, whoever found it would know where it belonged. Scratching at the label with a fingernail failed to budge the sticker. I stuck a label addressed to Cambridge, Massachusetts alongside the Pasadena address.

I’m grateful to be joining Harvard as an ITAMP (Institute for Theoretical Atomic, Molecular, and Optical Physics) Postdoctoral Fellow. You’ll be able to catch me in Harvard’s physics department, in ITAMP, or at MIT, starting this September.

While hunting for a Cambridge apartment, I skyped with potential roommates. I’d inquire about locations, about landlords and landladies, about tidiness, and about heating. The heating system’s pretty old, most tenants would admit. We keep the temperature between 60 and 65 degrees, to keep costs down. I’d nod and extol the layering of sweaters, but I shivered inside.

One tenant surprised me. The heating…works too well, she said. It’s pretty warm, to tell the truth. I thought about heat pumps and quantum engines, about picnics in the Pasadena sunshine, about the Julys I’d enjoyed while the world around me had sweated. Within an hour, I’d committed to sharing the apartment.

Boxes

Some of you have asked whether I’ll continue blogging for Quantum Frontiers. Yes: Extricating me from the IQIM requires more than 3,000 miles.

See you in Cambridge.


With apologies to Douglas Adams.