The Curious Behavior of Topological Insulators

IQIM hosts a Summer Research Institute that invites high school Physics teachers to work directly with staff, students, and researchers in the lab.  Last summer I worked with Marcus Teague, a highly intelligent and very patient Caltech Staff Scientist in the Yeh Group, to help set up an experiment for studying exotic material samples under circularly polarized light.  I had researched, ordered, and assembled parts for the optics and vacuum chamber.  As I returned to Caltech this summer, I was eager to learn how the Yeh Group had proceeded with the study.

Yeh group (2017): I am the one on the front-left of the picture, next to Dr. Yeh and in front of Kyle Chen. Benjamin Fackrell, another physics teacher interning at the Yeh lab, is all the way to the right.

The optics equipment I had researched, ordered, and helped to set up last summer is currently being used to study topological insulator (TI) samples that Kyle Chien-Chang Chen, a doctoral candidate, has worked on in the Yeh Lab.  Yes, a high school Physics teacher played a small role in their real research! It is exciting and humbling to have a connection to real-time research.

Quartz quarter-wave plates are important elements in many experiments involving light. They convert linearly polarized light to circularly polarized light.

Kyle receives a variety of TI samples from UCLA; the current sample up for review is Bismuth Antimony Telluride, {(\mathrm{BiSb})_2\mathrm{Te}_3}.  Depending on the particular sample and the type of testing, Kyle has a variety of procedures to prep the samples for study.  And this summer, Kyle has help from visiting Canadian student Adrian Llanos. Below are figures of some of the monolayer and bilayer structures for topological insulators studied in the lab.

Pictures of samples from UCLA

Under normal conditions, a topological insulator (TI) is conductive only on its surface; the interior of a TI sample is an insulator. But when the surface states open an energy gap, the surface of the TI becomes insulating too. The energy gap is the amount of energy needed to promote an electron from the top of the valence band so that it is free to move about.  In a thin film, this gap results from the interaction between the conduction-band and valence-band surface states of the opposing surfaces, and the resistance of the conducting surface actually increases. The Yeh group is hoping that the circularly polarized light can help align the spins of the chromium electrons that form part of the TI bilayer.  At the same time, light has other effects, like photo-doping, which excites more electrons into the conduction band and thus reduces the resistance. The conductivity of the TI's surface changes as the preferentially chosen spin-up or spin-down population is manipulated by the circularly polarized light or by the changing magnetic field.

A physical property measurement system.

This interesting experiment on TI samples is taking place within a device called a Physical Property Measurement System (PPMS).  The PPMS houses the TI sample and the optics equipment that generates circularly polarized light, while allowing the researchers to vary the temperature and magnetic field.  The Yeh Group is able to turn up the magnetic field or the circularly polarized light to control the resistance and current signal within the sample.  The properties of surface conductivity are studied in fields up to 8 Tesla (over one hundred thousand times the Earth's magnetic field), and at temperatures from room temperature (just under 300 Kelvin) down to just below 2 Kelvin (colder than outer space).

Right-Hand-Rule used to determine the direction of the magnetic (Lorentz) force.

In the presence of a magnetic field, when a current is applied to a conductor, the electrons experience a force at right angles to both their motion and the magnetic field, following the right-hand rule (or the Physics gang sign, as we affectionately call it in my classroom).  This causes the electrons to curve perpendicular to their original path and to the magnetic field. The build-up of electrons on one side of the conductor creates a potential difference. This potential difference perpendicular to the original current is known as the ordinary Hall Effect.  The ratio of the induced voltage to the applied current is known as the Hall Resistance.
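
In symbols, with Hall voltage V_H and applied current I,

\displaystyle R_{xy} = \frac{V_H}{I}.

In an ordinary conductor, R_{xy} grows linearly with the applied magnetic field.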

Under very low temperatures, the Quantum Hall Effect is observed. As the magnetic field is changed, the Hall Voltage increases in set quantum amounts, as opposed to gradually. Likewise, the Hall Resistance is quantized.  It is such an interesting phenomenon!
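
The plateau values are universal, fixed by fundamental constants alone:

\displaystyle R_{xy} = \frac{h}{\nu e^2} \approx \frac{25.8\ \mathrm{k\Omega}}{\nu},

where h is Planck's constant, e is the electron charge, and \nu is the filling factor (an integer in the integer Quantum Hall Effect).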

For a transport measurement of the TI samples, Kyle usually uses a Hall Bar Geometry in order to measure the Hall Effect accurately. Since the sample is sufficiently large, he can simply solder leads onto it for measurement.

Transport Measurements of TI Samples follow the same setup as Quantum Hall measurements on graphene: Current runs through electrodes attached to the North/South ends of the sample, while electron flow is measured longitudinally, as well as along the East/West ends (Hall conductance).

What is really curious is that the Bismuth Antimony Telluride samples are exhibiting the Hall Effect even when no external magnetic field is applied!  When the sample is measured, there is a Hall Resistance despite no external magnetic field. Hence the sample itself must be magnetic.  This phenomenon is called the Anomalous Hall Effect.

According to Kyle, there is no fancy way to measure the magnetization directly; it is only a matter of measuring a sample's Hall Resistance. The Hall Resistance should be zero when there is no Anomalous Hall Effect; when there is ferromagnetism (spins want to align in the direction of their neighbors), you see a non-zero value.  What is really interesting is the assumption that ferromagnetism breaks time-reversal symmetry and thus opens a gap at the surface states.  Another strange behavior that is observed is that the longitudinal resistance increases gradually.

Running PPMS

Typically the quantum Hall Resistance increases in quantum increments.  Even if the surface gap is open, the sample is not insulating because the gap is small (<0.3 eV); hence, under these conditions this TI is behaving much more like a semiconductor!

Next, the group will examine these samples using the Scanning Tunneling Microscope (STM).  The STM will be able to provide local topological information by examining 1 micron by 1 micron areas.  In comparison, the PPMS research with these samples is telling the story of the global behavior of the sample.  The combination of information from the PPMS and STM research will provide a more holistic story of the behavior of these unique samples.

I am thrilled to see how the group has used what we started with last summer to find interesting new results.  I am fascinated to see what they learn in the coming months with the different samples and STM testing. And I am quite excited to share these applications with my students in the upcoming new school year.  Another summer packed with learning!

The Quantum Wave in Computing

Summer is a great time for academics. Imagine: three full months off! Hit the beach. Tune that golf pitch. Hike the sierras. Go on a cruise. Watch soccer with the brazileños (there's been better years for that one). Catch the sunset by the Sydney opera house. Take a nap.

A visiting researcher taking full advantage of the Simons Institute’s world-class relaxation facilities. And yes, I bet you he’s proving a theorem at the same time.

Think that’s outrageous? We have it even better. Not only do we get to travel the globe worry-free, but we prove theorems while doing it. For some of us summer is the only time of year when we manage to prove theorems. Ideas accumulate during the year, blossom during the conferences and workshops that mark the start of the summer, and hatch during the few weeks that many of us set aside as “quiet time” to finally “wrap things up”.

I recently had the pleasure of contributing to the general well-being of my academic colleagues by helping to co-organize (with Andrew Childs, Ignacio Cirac, and Umesh Vazirani) a 2-month-long program on "Challenges in Quantum Computation" at the Simons Institute in Berkeley. In this post I report on the program and describe one of the highlights discussed during it: Mahadev's very recent breakthrough on classical verification of quantum computation.

Challenges in Quantum Computation

The Simons Institute has been in place on the UC Berkeley campus since the Fall of 2013; in fact, one of its first programs was on "Quantum Hamiltonian Complexity", in Spring 2014 (see my account of one of that semester's workshops here). Since then the institute has been hosting a pair of semester-long programs at a time, in all areas of theoretical computer science and neighboring fields. Our "summer cluster" had a slightly different flavor: shorter and smaller, it doubled as the prelude to a full semester-long program scheduled for Spring 2020 (provisional title: The Quantum Wave in Computing, a title inspired by Umesh Vazirani's recent tutorial at STOC'18 in Los Angeles). The idea, as I interpret it, is that the ongoing surge in experimental capabilities supports a much broader overhaul of some of the central questions of computer science, from the more applied (such as programming languages and compilers) to the most theoretical (such as which complexity classes play the most central role).

This summer’s program hosted a couple dozen participants at a time. Some stayed for the full 2 months, while others visited for shorter times. The Simons Institute is a fantastic place for collaborative research. The three-story building is entirely devoted to us. There are pleasant yet not-too-comfortable shared offices, but the highlight is the two large communal rooms meant for organized and spontaneous discussion. Filled with whiteboards, bright daylight, comfy couches, a constant supply of tea, coffee, and cookies, and eager theorists!

After a couple weeks of settling down the program kicked off with an invigorating workshop. Our goal for the workshop was to frame the theoretical questions raised by the sudden jump in the capabilities of experimental quantum devices that we are all witnessing. There were talks describing progress in experiments (superconducting qubits, ion traps, and cold atoms were represented), suggesting applications for the new devices (from quantum simulation & quantum chemistry to quantum optimization and machine learning through “quantum supremacy” and randomness generation), and laying the theoretical framework for trustworthy interaction with the quantum devices (interactive proofs, testing, and verifiable delegation). We had an outstanding line-up of speakers. All talks (except the panel discussions, unfortunately) were recorded, and you can watch them here.

The workshop was followed by five additional weeks of "residency", which allowed long-term participants to digest and develop the ideas presented during the workshop. In my experience these few additional weeks, right after the workshop, make all the difference. It is the same difference as between a quick conference call and a leisurely afternoon at the whiteboard: while the former may feel productive and bring the adrenaline levels up, the latter is more suited to in-depth exploration and unexpected discoveries.

There would be much to say about the ideas discussed during the workshop and following weeks. I will describe a single one of these ideas — in my opinion, one of the most outstanding ideas to have emerged at the interface of quantum computing and theoretical computer science in recent years! The result, "Classical Verification of Quantum Computations", is by Urmila Mahadev, a Ph.D. student at UC Berkeley (I think she just graduated). Urmila gave a wonderful talk on her result at the workshop, and I highly recommend watching the recorded video. In the remainder of this post I'll provide an overview of the result. I also wrote a slightly more technical introduction that eager readers will find here.

A cryptographic leash on quantum systems

Mahadev's result is already famous: announced on the blog of Scott Aaronson, it has earned her a long-standing $25 prize, awarded for "solving the problem of proving the results of an arbitrary quantum computation to a classical skeptic". Or, in complexity-theoretic lingo, for showing that "every language in the class BQP admits an interactive protocol where the prover is in BQP and the verifier is in BPP". What does this mean?

Verifying quantum computations in the high complexity regime

On his blog Scott Aaronson traces the question back to a talk given by Daniel Gottesman in 2004. An eloquent formulation appears in a subsequent paper by Dorit Aharonov and Umesh Vazirani, aptly titled “Is Quantum Mechanics Falsifiable? A computational perspective on the foundations of Quantum Mechanics”.

Here is the problem. As readers of this blog are well aware, Feynman's idea of a quantum computer, and the subsequent formalization by Bernstein and Vazirani of the Quantum Turing Machine, laid the theoretical foundation for the construction of computing devices whose inner functioning is based on the laws of quantum physics. Most readers also probably realize that we currently believe that these quantum devices will have the ability to efficiently solve computational problems (the class of which is denoted BQP) that are thought to be beyond the reach of classical computers (represented by the class BPP). A prominent example is factoring, but there are many others. The most elementary example is arguably Feynman's original proposal: a quantum computer can be used to simulate the evolution of any quantum mechanical system "in real time". In contrast, the best classical simulations available can take exponential time to converge even on concrete examples of practical interest. This places a computational impediment to scientific progress: the work of many physicists, chemists, and biologists would be greatly sped up if only they could perform simulations at will.

So this hypothetical quantum device claims (or will likely claim) that it has the ability to efficiently solve computational problems for which there is no known efficient classical algorithm. Not only this but, as is widely believed in complexity-theoretic circles (a belief recently strengthened by the proof of an oracle separation between BQP and PH by Raz and Tal), for some of these problems, even given the answer, there does not exist a classical proof that the answer is correct. The quantum device's claim cannot be verified! This seems to place the future of science at the mercy of an ingenious charlatan with good enough design & marketing skills, who would convince us that they are providing the solution to exponentially complex problems by throwing stardust in our eyes. (Wait, did this happen already?)

Today is the most exciting time in quantum computing since the discovery of Shor’s algorithm for factoring: while we’re not quite ready to run that particular algorithm yet, experimental capabilities have ramped up to the point where we are just about to probe the “high-complexity” regime of quantum mechanics, by making predictions that cannot be emulated, or even verified, using the most powerful classical supercomputers available. What confidence will we have that the predictions have been obtained correctly? Note that this question is different from the question of testing the validity of the theory of quantum mechanics itself. The result presented here assumes the validity of quantum mechanics. What it offers is a method to test, assuming the correctness of quantum mechanics, that a device performs the calculation that it claims to have performed. If the device has supra-quantum powers, all bets are off. Even assuming the correctness of quantum mechanics, however, the device may, intentionally or not (e.g. due to faulty hardware), mislead the experimentalist. This is the scenario that Mahadev’s result aims to counter.

Interactive proofs

The first key idea is to use the power of interaction. The situation can be framed as follows: given a certain computation, such that a device (henceforth called “prover”) has the ability to perform the computation, but another entity, the classical physicist (henceforth called “verifier”) does not, is there a way for the verifier to extract the right answer from the prover with high confidence — given that the prover may not be trusted, and may attempt to use its superior computing power to mislead the verifier instead of performing the required computation?

The simplest scenario would be one where the verifier can execute the computation herself, and check the prover's outcome. The second simplest scenario is one where the verifier cannot execute the computation, but there is a short proof that the prover can provide that allows her to fully certify the outcome. These two scenarios correspond to problems in BPP and NP respectively; an example of the latter is factoring. As argued earlier, not all quantum computations (BQP) are believed to fall within these two classes. Both direct computation and proof verification are ruled out. What can we do? Use interaction!

The framework of interactive proofs originates in complexity theory in the 1980s. An interactive proof is a protocol through which a verifier (typically a computationally bounded entity, such as the physicist and her classical laptop) interacts with a more powerful, but generally untrusted, prover (such as the experimental quantum device). The goal of the protocol is for the verifier to certify the validity of a certain computational statement.

Here is a classical example (the expert — or impatient — reader may safely skip this). The example is for a problem that lies in co-NP, but is not believed to lie in NP. Suppose that both the verifier and prover have access to two graphs, {G} and {H}, such that the verifier wishes to raise an "ACCEPT" flag if and only if the two graphs are not isomorphic. In general this is a hard decision to make, because the verifier would have to check all possible mappings from one graph to the other, of which there are exponentially many. Here is how the verifier can extract the correct answer by interacting with a powerful, untrusted prover. First, the verifier flips a fair coin. If the coin comes up heads, she selects a random relabeling of the vertices of {G}. If the coin comes up tails, she selects a random relabeling of the vertices of {H}. The verifier then sends the relabeled graph to the prover, and asks the prover to guess which graph the verifier has hidden. If the prover provides the correct answer (easy to check), the verifier concludes that the graphs were not isomorphic. Otherwise, she concludes that they were isomorphic. It is not hard to see that, if the graphs are indeed not isomorphic, the prover always has a means to correctly identify the hidden graph, and convince the verifier to make the right decision. But if the graphs are isomorphic, then there is no way for the prover to distinguish the random relabelings (since the distributions obtained by randomly relabeling each graph are identical), and so the verifier makes the right decision with probability 1/2. Repeating the protocol a few times, with a different choice of relabeling each time, quickly drives the probability of making an error to {0}.
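
To make this concrete, here is a minimal Python sketch of one round of the protocol, with the computationally unbounded prover simulated by brute force over all vertex relabelings (the example graphs and helper names are mine, for illustration):

```python
import random
from itertools import permutations

def canon(edges):
    """Canonical form of an undirected edge set."""
    return frozenset(frozenset(e) for e in edges)

def relabel(edges, n):
    """Apply a uniformly random relabeling of the n vertices."""
    perm = list(range(n))
    random.shuffle(perm)
    return canon((perm[u], perm[v]) for u, v in edges)

def is_isomorphic_to(edges, target, n):
    """Brute-force search over all relabelings -- exponential, like the prover's power."""
    return any(canon((p[u], p[v]) for u, v in edges) == target
               for p in permutations(range(n)))

def protocol_round(G, H, n):
    """Verifier hides a random relabeling of G or H; prover names its source."""
    coin = random.choice(["G", "H"])
    hidden = relabel(G if coin == "G" else H, n)
    guess = "G" if is_isomorphic_to(G, hidden, n) else "H"
    return guess == coin  # if G and H were isomorphic, this would be a coin flip

# A 4-cycle vs. a triangle with a pendant vertex: not isomorphic.
G = [(0, 1), (1, 2), (2, 3), (3, 0)]
H = [(0, 1), (1, 2), (2, 0), (0, 3)]
print(all(protocol_round(G, H, 4) for _ in range(25)))  # True: the prover always wins
```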

A deep result from the 1990s exactly characterizes the class of computational problems (languages) that a classical polynomial-time verifier can decide in this model: IP = PSPACE. In words, any problem whose solution can be found in polynomial space has an interactive proof in which the verifier only needs polynomial time. Now observe that PSPACE contains NP, and much more: in fact PSPACE contains BQP as well (and even QMA)! (See this nice recent article in Quanta for a gentle introduction to these complexity classes, and more.) Thus any problem that can be decided on a quantum computer can also be decided without a quantum computer, by interacting with a powerful entity, the prover, that can convince the verifier of the right answer without being able to induce her in error (in spite of the prover's greater power).

Are we not done? We're not! The problem is that the result PSPACE = IP, even when specialized to BQP, requires (for all we know) a prover whose power matches that of PSPACE (almost: see e.g. this recent result for a slightly more efficient prover). And as much as our experimental quantum device inches towards the power of BQP, we certainly wouldn't dare ask it to perform a PSPACE-hard computation. So even though in principle there do exist interactive proofs for BQP-complete languages, these interactive proofs require a prover whose computational power goes much beyond what we believe is physically achievable. But that's useless (for us): back to square zero.

Interactive proofs with quantum provers

Prior to Mahadev's result, a sequence of beautiful results in the late 2000s introduced a clever extension of the model of interactive proofs by allowing the verifier to make use of a very limited quantum computer. For example, the verifier may have the ability to prepare single qubits in two possible bases of her choice, one qubit at a time, and send them to the prover. Or the verifier may have the ability to receive single qubits from the prover, one at a time, and measure them in one of two bases of her choice. In both cases it was shown that the verifier could combine such limited quantum capacity with the possibility to interact with a quantum polynomial-time prover to verify arbitrary polynomial-time quantum computations. The idea for the protocols crucially relied on the ability of the verifier to prepare qubits in a way that any deviation by the prover from the prescribed honest behavior would be detected (e.g. by encoding information in mutually unbiased bases unknown to the prover). For a decade the question remained open: can a completely classical verifier certify the computation performed by a quantum prover?

Mahadev’s result brings a positive resolution to this question. Mahadev describes a protocol with the following properties. First, as expected, for any quantum computation, there is a quantum prover that will convince the classical verifier of the right outcome for the computation. This property is called completeness of the protocol. Second, no prover can convince the classical verifier to accept a wrong outcome. This property is called soundness of the protocol. In Mahadev’s result the latter property comes with a twist: soundness holds provided the prover cannot break post-quantum cryptography. In contrast, the earlier results mentioned in the previous paragraph obtained protocols that were sound against an arbitrarily powerful prover. The additional cryptographic assumption gives Mahadev’s result a “win-win” flavor: either the protocol is sound, or someone in the quantum cloud has figured out how to break an increasingly standard cryptographic assumption (namely, post-quantum security of the Learning With Errors problem) — in all cases, a verified quantum feat!

In the remainder of this post I will give a high-level overview of Mahadev's protocol and its analysis. For more detail, see the accompanying blog post.

The protocol is constructed in two steps. The first step builds on insights from works preceding this one. This step reduces the problem of verifying the outcome of an arbitrary quantum computation to a seemingly much simpler problem, that nevertheless encapsulates all the subtlety of the verification task. The problem is the following — in keeping with the terminology employed by Mahadev, I’ll call it the qubit commitment problem. Suppose that a prover claims to have prepared a single-qubit state of its choice; call it {| \psi \rangle} ({| \psi \rangle} is not known to the verifier). Suppose the verifier challenges the prover for the outcome of a measurement performed on {| \psi \rangle}, either in the computational basis (the eigenbasis of the Pauli Z), or in the Hadamard basis (the eigenbasis of the Pauli X). Which basis to use is the verifier’s choice, but of course only one basis can be asked. Does there exist a protocol that guarantees that, at the end of the protocol, the verifier will be able to produce a bit that matches the true outcome of a measurement of {| \psi \rangle} in the chosen basis? (More precisely, it should be that the verifier’s final bit has the same distribution as the outcome of a measurement of {| \psi \rangle} in the chosen basis.)

The reduction that accomplishes this first step combines Kitaev’s circuit-to-Hamiltonian construction with some gadgetry from perturbation theory, and I will not describe it here. An important property of the reduction is that it is ultimately sufficient that the verifier has the guarantee that the measurement outcomes she obtains in either case, computational or Hadamard, are consistent with measurement outcomes for the correct measurements performed on some quantum state. In principle the state does not need to be related to anything the prover does (though of course the analysis will eventually define that state from the prover), it only needs to exist. Specifically, we wish to rule out situations where e.g. the prover claims that both outcomes are deterministically “0”, a claim that would violate the uncertainty principle. (For the sake of the argument, let’s ignore that in the case of a single qubit the space of outcomes allowed by quantum mechanics can be explicitly mapped out — in the actual protocol, the prover commits to {n} qubits, not just one.)

Committing to a qubit

The second step of the protocol construction introduces a key idea. In order to accomplish the sought-after commitment, the verifier is going to engage in an initial commitment phase with the prover. In this phase, the prover is required to provide classical information to the verifier, that "commits" it to a specific qubit. This committed qubit is the state on which the prover will later perform the measurement asked by the verifier. The classical information obtained in the commitment phase will bind the prover to reporting the correct outcome, for both of the verifier's basis choices — or risk being caught cheating.

How does this work? Commitments to bits, or even qubits, are an old story in cryptography. The standard method for committing to a bit {b} is based on the use of a one-way permutation {f}, together with a hardcore predicate {h} for {f} (i.e. an efficiently computable function {h: \{0,1\}^n\rightarrow \{0,1\}} such that given {f(x)}, it is hard to predict {h(x)}). The construction goes as follows. The committer selects a uniformly random string {r} and sends {(y,m)=(f(r),h(r)\oplus b)}. To unveil the commitment {b}, it is enough to reveal a string {r} such that {f(r)=y}; the receiver can then compute {h(r)} and decode {b=h(r)\oplus m}. The point is that since {f} is a permutation, the value {y} uniquely "commits" the sender to an {r}, and thus to a {b}; however, given {y=f(r)} for an unknown {r} the hardcore predicate {h(r)} looks uniformly random, thus {(y,m)} reveals no information about {b} to the receiver.
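
As a toy illustration in Python — with SHA-256 hashing standing in for both the one-way permutation {f} and the hardcore predicate {h}, a substitution of my own for readability (SHA-256 is not actually a permutation, and a real scheme needs more care):

```python
import hashlib
import secrets

def commit(b: int):
    """Commit to bit b. Returns (commitment, opening)."""
    r = secrets.token_bytes(16)
    y = hashlib.sha256(r).digest()                       # stand-in for f(r)
    h = hashlib.sha256(b"hardcore" + r).digest()[0] & 1  # stand-in for h(r)
    return (y, h ^ b), r                                 # commitment (y, m), opening r

def reveal(commitment, r):
    """Receiver checks the opening and decodes the committed bit."""
    y, m = commitment
    assert hashlib.sha256(r).digest() == y, "opening does not match commitment"
    h = hashlib.sha256(b"hardcore" + r).digest()[0] & 1
    return h ^ m

com, opening = commit(1)
print(reveal(com, opening))  # 1
```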

What is new in Mahadev's scheme is not only that the commitment is to a qubit, instead of a bit, but even more importantly that the commitment is provided by classical information, which is necessary to obtain a classical protocol. (Commitments to qubits, using qubits, can be obtained by combining the quantum one-time pad with the commitment scheme described above.) To explain how this is achieved we'll need a slightly more advanced cryptographic primitive: a pair of injective trapdoor one-way functions {f_0,f_1:\{0,1\}^n\rightarrow\{0,1\}^n}. This means that it is easy to evaluate both functions on any input, but that given a value {y} in their common range, it is hard to find a preimage of {y} under either function — except if one is given the trapdoor information. (Note that this is an over-simplification of the actual primitive used by Mahadev, which has additional properties, including that of being "claw-free".)

The commitment phase of the protocol works as follows. Starting from a state {| \psi \rangle=\alpha| 0 \rangle+\beta| 1 \rangle} of its choice, the prover is supposed to perform the following steps. First, the prover creates a uniform superposition over the common domain of {f_0} and {f_1}. Then it evaluates either function, {f_0} or {f_1}, in an additional register, controlled on the qubit of {| \psi \rangle}. Finally, the prover measures the register that contains the image of {f_0} or {f_1}. This achieves the following sequence of transformations:

\displaystyle \begin{array}{rcl} \alpha| 0 \rangle+\beta| 1 \rangle &\mapsto& (\alpha| 0 \rangle + \beta| 1 \rangle) \otimes \Big(2^{-n/2} \sum_{x\in\{0,1\}^n} | x \rangle\Big) \\ &\mapsto & 2^{-n/2} \sum_x \big(\alpha | 0 \rangle| x \rangle| f_0(x) \rangle + \beta | 1 \rangle| x \rangle| f_1(x) \rangle\big)\\ &\mapsto & \big(\alpha| 0 \rangle| x_0 \rangle+\beta| 1 \rangle| x_1 \rangle\big)| y \rangle\;, \end{array}

where {y\in\{0,1\}^n} is the measured image. The string {y} is called the prover's commitment string, and the prover is required to report it to the verifier.

In what sense is {y} a commitment to the state {| \psi \rangle}? The key point is that, once it has measured {y}, the prover has “lost control” over its qubit — it has effectively handed over that control to the verifier. For example, the prover no longer has the ability to perform an arbitrary rotation on its qubit. Why? The prover knows {y} (it had to report it to the verifier) but not {x_0} and {x_1} (this is the claw-free assumption on the pair {(f_0,f_1)}). What this means — though of course it has to be shown — is that the prover can no longer recover the state {| \psi \rangle}! It does not have the ability to “uncompute” {x_0} and {x_1}. Thus the qubit has been “set in cryptographic stone”. In contrast, the verifier can use the trapdoor information to recover {x_0} and {x_1}. This gives her extra leverage on the prover. This asymmetry, introduced by cryptography, is what eventually allows the verifier to extract a truthful measurement outcome from the prover (or detect lying).

It is such a wonderful idea! It stuns me every time Urmila explains it. Proving it is of course rather delicate. In this post I make an attempt at going into the idea in a little more depth. The best resource remains Urmila’s paper, as well as her talk at the Simons Institute.

Open questions

What is great about this result is not that it closes a decades-old open question, but that by introducing a truly novel idea it opens up a whole new field of investigation. Some of the ideas that led to the result were already fleshed out by Mahadev in her work on homomorphic encryption for quantum circuits, and I expect many more results to continue building on these ideas.

An obvious outstanding question is whether the cryptography is needed at all: could there be a scheme achieving the same result as Mahadev’s, but without computational assumptions on the prover? It is known that if such a scheme exists, it is unlikely to have the property of being blind, meaning that the prover learns nothing about the computation that the verifier wishes it to execute (aside from an upper bound on its length); see this paper for “implausibility” results in this direction. Mahadev’s protocol relies on “post-hoc” verification, and is not blind. Urmila points out that it is likely the protocol could be made blind by composing it with her protocol for homomorphic encryption. Could there be a different protocol, that would not go through post-hoc verification, but instead directly guide the prover through the evaluation of a universal circuit on an encrypted input, gate by gate, as did some previous works?


So long, and thanks for all the Fourier transforms

The air conditioning in Caltech’s Annenberg Center for Information Science and Technology broke this July. Pasadena reached 87°F on the fourth, but my office missed the memo. The thermostat read 62°.

Hyperactive air conditioning suits a thermodynamicist’s office as jittery wifi suits an electrical-engineering building. Thermodynamicists call air conditioners “heat pumps.” A heat pump funnels heat—the energy of random motion—from cooler bodies to hotter. Heat flows spontaneously only from hot to cold on average, according to the Second Law of Thermodynamics. Pumping heat against its inclination costs work, organized energy drawn from a reliable source.
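
For scale: an ideal (Carnot) heat pump cooling a room at temperature T_c amid surroundings at temperature T_h must spend, per amount Q_c of heat removed, at least the work

\displaystyle W_{\min} = Q_c\,\frac{T_h - T_c}{T_c}.

Real pumps, lacking perfect reversibility, cost more.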

Reliable sources include batteries, coiled springs, and ACME anvils hoisted into the air. Batteries have chemical energy that powers electric fans. ACME anvils have gravitational potential energy that splats coyotes.

I hoisted binder after binder onto my desk this July. The binders felt like understudies for ACME anvils, bulging with papers. They contained notes I’d written, and articles I’d read, for research throughout the past five years. My Caltech sojourn was switching off its lights and drawing its shutters. A control theorist was inheriting my desk. I had to move my possessions to an office downstairs, where I’d moonlight until quitting town.

Quitting town.

I hadn’t expected to feel at home in southern California, after stints in New and old England. But research and researchers drew me to California and then hooked me. Caltech’s Institute for Quantum Information and Matter (IQIM) has provided an intellectual home, colleagues-cum-friends, and a base from which to branch out to other scholars and institutions.

The IQIM has provided also the liberty to deck out my research program as a college dorm room with posters—according to my tastes, values, and exuberances. My thesis demanded the title “Quantum steampunk: Quantum information, thermodynamics, their intersection, and applications thereof across physics.” I began developing the concept of quantum steampunk on this blog. Writing a manifesto for the concept, in the thesis’s introduction, proved a delight:

The steampunk movement has invaded literature, film, and art over the past three decades. Futuristic technologies mingle, in steampunk works, with Victorian and wild-west settings. Top hats, nascent factories, and grimy cities counterbalance time machines, airships, and automata. The genre arguably originated in 1895, with the H.G. Wells novel The Time Machine. Recent steampunk books include the best-selling The Invention of Hugo Cabret; films include the major motion picture Wild Wild West; and artwork ranges from painting to jewelry to sculpture.

Steampunk captures the romanticism of fusing the old with the cutting-edge. Technologies proliferated during the Victorian era: locomotives, Charles Babbage’s analytical engine, factories, and more. Innovation facilitated exploration. Add time machines, and the spirit of adventure sweeps you away. Little wonder that fans flock to steampunk conventions, decked out in overcoats, cravats, and goggles.

What steampunk fans dream, quantum-information thermodynamicists live.

Thermodynamics budded during the late 1800s, when steam engines drove the Industrial Revolution. Sadi Carnot, Ludwig Boltzmann, and other thinkers wondered how efficiently engines could operate. Their practical questions led to fundamental insights—about why time flows; how much one can know about a physical system; and how simple macroscopic properties, like temperature, can capture complex behaviors, like collisions by steam particles. An idealization of steam—the classical ideal gas—exemplifies the conventional thermodynamic system. Such systems contain many particles, behave classically, and are often assumed to remain in equilibrium.

But thermodynamic concepts—such as heat, work, and equilibrium—characterize small scales, quantum systems, and out-of-equilibrium processes. Today’s experimentalists probe these settings, stretching single DNA strands with optical tweezers [4], cooling superconducting qubits to build quantum computers [5, 6], and extracting work from single-electron boxes [7]. These settings demand reconciliation with 19th-century thermodynamics. We need a toolkit for fusing the old with the new.

Quantum information (QI) theory provides such a toolkit. Quantum phenomena serve as resources for processing information in ways impossible with classical systems. Quantum computers can solve certain computationally difficult problems quickly; quantum teleportation transmits information as telephones cannot; quantum cryptography secures messages; and quantum metrology centers on high-precision measurements. These applications rely on entanglement (strong correlations between quantum systems), disturbances by measurements, quantum uncertainty, and discreteness.

Technological promise has driven fundamental insights, as in thermodynamics. QI theory has blossomed into a mathematical toolkit that includes entropies, uncertainty relations, and resource theories. These tools are reshaping fundamental science, in applications across physics, computer science, and chemistry.

QI is being used to update thermodynamics, in the field of quantum thermodynamics (QT) [8, 9]. QT features entropies suited to small scales; quantum engines; the roles of coherence in thermalization and transport; and the transduction of information into work, à la Maxwell’s demon [10].

This thesis (i) contributes to the theory of QI thermodynamics and (ii) applies the theory, as a toolkit, across physics. Spheres touched on include atomic, molecular, and optical (AMO) physics; nonequilibrium statistical mechanics; condensed matter; chemistry; and high-energy physics. I propose the name quantum steampunk for this program…

Never did I anticipate, in college, that a PhD could reflect my identity and style. I feared losing myself and my perspective in a subproblem of a subproblem of a subproblem. But I found myself blessed with the chance to name the aesthetic that’s guided my work, the scent I’ve unconsciously followed from book to class to research project to conversation, to paper, since…middle school, come to think of it. I’m grateful for that opportunity.

Whump, went my quantum-engine binder on my desk. I’d stuck an address label, pointing to Annenberg, to the binder. If the binder walked away, whoever found it would know where it belonged. Scratching at the label with a fingernail failed to budge the sticker. I stuck a label addressed to Cambridge, Massachusetts alongside the Pasadena address.

I’m grateful to be joining Harvard as an ITAMP (Institute for Theoretical Atomic, Molecular, and Optical Physics) Postdoctoral Fellow. You’ll be able to catch me in Harvard’s physics department, in ITAMP, or at MIT, starting this September.

While hunting for a Cambridge apartment, I skyped with potential roommates. I’d inquire about locations, about landlords and landladies, about tidiness, and about heating. The heating system’s pretty old, most tenants would admit. We keep the temperature between 60 and 65 degrees, to keep costs down. I’d nod and extol the layering of sweaters, but I shivered inside.

One tenant surprised me. The heating…works too well, she said. It’s pretty warm, to tell the truth. I thought about heat pumps and quantum engines, about picnics in the Pasadena sunshine, about the Julys I’d enjoyed while the world around me had sweated. Within an hour, I’d committed to sharing the apartment.

Some of you have asked whether I’ll continue blogging for Quantum Frontiers. Yes: Extricating me from the IQIM requires more than 3,000 miles.

See you in Cambridge.

 

With apologies to Douglas Adams.

The World Cup from a Quantum Perspective

Two weeks into the Football World Cup and the group stages are over: 16 teams have gone home, leaving the top 16 teams to contest the knock-out stages. Those fans who enjoy a bet will be poring over the odds in search of a bargain—a miscalculation on the part of the bookmakers. Is now the time to back Ronaldo for the golden boot, whilst Harry Kane dominates the headlines and sports betting markets? Will the hosts Russia continue to defy lowly pre-tournament expectations and make the semi-finals? Are France about to emerge from an unconvincing start to the tournament and blossom as clear front-runners?

But, whilst for most the sports betting markets may lead to the enhanced enjoyment of the tournament that a bet can bring (as well as the possibility of making a little money), for others they represent a window into the fascinating world of sports statistics. A fascination that can be captured by the simple question: how do they set the odds?

Suppose that a bookmaker has in their possession a list of outcome probabilities for matches between each pair of national football teams in the world and wants to predict the overall winner. There are 32768 possible ways for the tournament knock-out rounds to pan out—a large, but not insurmountable, number of iterations by modern computational standards.

However, if the bookmaker instead considers the tennis grand-slams, with 128 competitors in the first round, then there are a colossal 1.7 × 10^38 permutations. Indeed, in a knock-out format there are 2^{n-1} permutations, where n is the number of entrants. And for those of a certain mindset, this exponentially growing space immediately raises the question of whether a quantum algorithm can yield a speed-up for the related prediction tasks.

A Tiny Cup

The immediate question which we want to answer here is, perhaps, who will win the World Cup. We will walk through the idea on the blackboard first, and then implement it as a quantum algorithm—which, hopefully, will give some insight into how and where quantum computers can outperform classical ones, for this particular way of answering the question.

Let us take a toy setup with four teams A, B, C and D;
the knockout stage starts with a game A vs. B, and C vs. D.
Whoever wins each game will play against each other, so here we have four possible final games: A vs. C, A vs. D, B vs. C, or B vs. D.
Let’s denote by p(X, Y) the probability that X wins when playing against Y.

The likelihood of A winning the cup is then simply given by

p(A, B) × ( p(C, D) × p(A, C) + p(D, C) × p(A, D) ),

i.e. the probability that A wins against B, times the sum of two branches: the probability that A beats C, weighted by the probability that C won against D, plus the probability that A beats D, weighted by the probability that D won.
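
As a sanity check, this bookkeeping is a short recursion in Python (the win probabilities below are made-up numbers, purely for illustration):

```python
from itertools import product

# p[(X, Y)]: probability that X beats Y -- made-up numbers for illustration.
p = {("A", "B"): 0.7, ("C", "D"): 0.6, ("A", "C"): 0.5,
     ("A", "D"): 0.8, ("B", "C"): 0.4, ("B", "D"): 0.5}
for (x, y), prob in list(p.items()):
    p[(y, x)] = 1 - prob  # Y beats X with the complementary probability

def champion_probs(bracket):
    """Probability of each team winning a knockout (sub)bracket.

    A bracket is either a team name or a pair of sub-brackets whose
    winners meet.
    """
    if isinstance(bracket, str):
        return {bracket: 1.0}
    left, right = map(champion_probs, bracket)
    probs = {}
    for (x, px), (y, py) in product(left.items(), right.items()):
        probs[x] = probs.get(x, 0.0) + px * py * p[(x, y)]
        probs[y] = probs.get(y, 0.0) + px * py * p[(y, x)]
    return probs

print(champion_probs((("A", "B"), ("C", "D")))["A"])
# 0.434 = p(A,B) * ( p(C,D)*p(A,C) + p(D,C)*p(A,D) )
```

Note that the recursion visits each vertex of the knockout tree once, rather than enumerating every possible tournament.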

How can we obtain the same quantity with a quantum algorithm?

First, we set up our Hilbert space so that it can represent all possible Cup scenarios.
Since we have four teams, we need a four-dimensional quantum system as our smallest storage unit. We commonly call such systems qudits, as generalizations of a qubit, which, having dimension 2, would be fit to store only two teams. (We can always "embed" a qudit into a few qubits of at least the same total dimension; remember, k qubits have dimension 2^k, so we could also store our four-dimensional qudit in two qubits.)

If we write |A\rangle, this simply stands for a qudit representing team A; if we write |A\rangle |B\rangle, then we have a state representing two teams.

To represent a full knockout tree, we follow the same logic: Take four qudits for the initial draw; add two qudits for the winners of the first two matches, and one qudit for the final winner.

For instance, one possible knockout scenario would be

|\text{Game 1}\rangle = \underbrace{|A\rangle |B\rangle |C\rangle |D\rangle}_\text{Initial Draw} \ \underbrace{|A\rangle |D\rangle}_\text{Finals} \ |D\rangle.

The probability associated with Game 1 is then precisely p(A, B) × p(D, C) × p(D, A).

Here is where quantum computing comes in.

Starting from an initial state |A\rangle |B\rangle |C\rangle |D\rangle, we create two new slots in a superposition over all possible match outcomes, weighted by the square-roots of their probabilities (we write q(X, Y) for the square-root of p(X, Y)):

\begin{aligned} |\text{Step 1}\rangle = |A\rangle |B\rangle |C\rangle |D\rangle \big(\ \ &\text{q(A, B)q(C, D)} \,|A\rangle\ |C\rangle +\\ &\text{q(A, B)q(D, C)} \,|A\rangle\ |D\rangle +\\ &\text{q(B, A)q(C, D)} \,|B\rangle\ |C\rangle +\\ &\text{q(B, A)q(D, C)} \,|B\rangle\ |D\rangle\ \big). \end{aligned}

For the final round, we perform the same operation on those two last slots; e.g. we would map |A\rangle |C\rangle to a state |A\rangle |C\rangle ( q(A, C) |A\rangle + q(C, A) |C\rangle ). The final state is thus a superposition over eight possible weighted games (as we would expect).

So you can tell me who wins the World Cup?

Yes. Or well, probably. We find out by measuring the rightmost qudit.
As we know, the probability of obtaining a certain measurement outcome, say A, will then be determined by the square of the weights in front of the measured state; since we put in the square-roots initially we recover the original probabilities. Neat!

And since there are two possible game trees that lead to a victory of A, we have to sum them up—and we get precisely the probability we calculated by hand above. This means the team that is most likely to win will be the most likely measurement outcome.
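
Here is a small classical simulation of this little quantum computer, reusing the table p (and the amplitudes q = √p) from the sketch above; numpy plays the role of the Hilbert space:

```python
import numpy as np
from itertools import product

teams = ["A", "B", "C", "D"]
idx = {t: i for i, t in enumerate(teams)}

def q(x, y):
    """Amplitude for x beating y: the square-root of the probability."""
    return np.sqrt(p[(x, y)])

# Amplitudes over the three winner slots (two semifinals, one final);
# the fixed initial draw |A>|B>|C>|D> is left implicit.
psi = np.zeros((4, 4, 4))
for w1, w2 in product(["A", "B"], ["C", "D"]):
    semis = q(w1, "B" if w1 == "A" else "A") * q(w2, "D" if w2 == "C" else "C")
    psi[idx[w1], idx[w2], idx[w1]] = semis * q(w1, w2)  # w1 wins the final
    psi[idx[w1], idx[w2], idx[w2]] = semis * q(w2, w1)  # w2 wins the final

# Born rule: square the amplitudes, then sum over the unmeasured slots.
probs = (psi ** 2).sum(axis=(0, 1))
print({t: round(float(pr), 3) for t, pr in zip(teams, probs)})
# {'A': 0.434, 'B': 0.132, 'C': 0.318, 'D': 0.116}
```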

So what about the World Cup? We have 16 teams; one team can thus be stored in four qubits. The knockout tree has 31 vertices, and a naive implementation can be done on a quantum computer with 124 qubits. Of course we are only a bit naive, so we can simulate this quantum computer on a classical one and obtain the following winning probabilities:

0.194 Brazil
0.168 Spain
0.119 France
0.092 Belgium
0.082 Argentina
0.075 England
0.049 Croatia
0.041 Colombia
0.040 Portugal
0.032 Uruguay
0.031 Russia
0.022 Switzerland
0.019 Denmark
0.018 Sweden
0.012 Mexico
0.006 Japan

It is worth noting that all operations we described can be implemented efficiently with a quantum computer, and the number of required qubits is quite small; for the four teams, we could get away with seven qudits, or fourteen qubits (and we could even save some, by ignoring the first four qudits which are always the same).

So for this particular algorithm there is an exponential speedup over its non-probabilistic classical counterpart: as mentioned, one would have to iterate over all trees; tedious for the World Cup, practically impossible for tennis. However…

Classical vs. Quantum

Does using a quantum algorithm give us a speedup for this task? Here, the answer is no; one could obtain similar results in comparable time using probabilistic methods, for instance, by doing Monte Carlo sampling.
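
For instance, a few more lines of Python (reusing the table p and the toy bracket from above) estimate the same numbers by sampling whole tournaments:

```python
import random

def sample_champion(bracket):
    """Play one random knockout tournament; return its winner."""
    if isinstance(bracket, str):
        return bracket
    x, y = map(sample_champion, bracket)
    return x if random.random() < p[(x, y)] else y

trials = 100_000
wins = sum(sample_champion((("A", "B"), ("C", "D"))) == "A" for _ in range(trials))
print(wins / trials)  # ~0.434, matching the exact recursion above
```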

But there are several interesting related questions that we could ask for which there might be a quantum advantage.

For some team A, we can easily create a state that has all game trees in superposition that lead to a victory of A—even weighting them using their respective probabilities.
Given this state as a resource, we can think of questions like “which game tree is most likely, given that we fix A and B as semifinalists”, or “which team should A play in the knockout stages to maximize the probability that B wins the tournament”.

Or, more controversially: can we optimize the winning chances for some team by rearranging the initial draw?

Some questions like these lend themselves to applying Grover search, for which there is a known speedup over classical computers. To inquire deeper into the utility of quantum algorithms, we need to invent the right kind of question to ask of this state.

Let us think of one more toy example. Being part physicists, we assume cows are spheres—so we might as well also assume that if A is likely to win a match against B, it always wins—even if the probability is only 51%. Let’s call this exciting sport “deterministic football”. For a set of teams playing a tournament of deterministic football, does there exist a winning initial draw for every team?

This becomes an especially interesting question in cases where there is a non-trivial cyclic relation between the teams’ abilities, a simple example being: A always beats B, B always beats C, and C always beats A. For example, if this problem turns out to be NP-hard, then it would be reasonable to expect that the quadratic improvement achieved by quantum search is the best we can hope for in using a quantum algorithm for the task of finding a winning initial draw for a chosen team—at least for deterministic football (phew).

To the finals and beyond

World Cup time is an exciting time: whatever the question, we are essentially dealing with binary trees, and making predictions can be translated into finding partitions or cuts that satisfy certain properties defined through a function of the edge weights (here the pairwise winning probabilities). We hope this quantum take on classical bookmaking might point us in the direction of new and interesting applications for quantum algorithms.

Hopefully a good bargain!

(Written with Steven Herbert and Sathyawageeswar Subramanian)

What’s the worst that could happen?

The archaeologist Howard Carter discovered Tutankhamun’s burial site in 1922. No other Egyptian pharaoh’s tomb had survived mostly intact until the modern era. Gold and glass and faience, statues and pendants and chariots, had evaded looting. The discovery would revolutionize the world’s understanding of, and enthusiasm for, ancient Egypt.

First, the artifacts had to leave the tomb.

Tutankhamun lay in three coffins nested like matryoshka dolls. Carter describes the nesting in his book The Tomb of Tutankhamen. Lifting the middle coffin from the outer coffin raised his blood pressure:

Everything may seem to be going well until suddenly, in the crisis of the process, you hear a crack—little pieces of surface ornament fall. Your nerves are at an almost painful tension. What is happening? All available room in the narrow space is crowded by your men. What action is needed to avert a catastrophe?

In other words, “What’s the worst that could happen?”

Collaborators and I asked that question in a paper published last month. We had in mind less Egyptology than thermodynamics and information theory. But never mind the distinction; you’re reading Quantum Frontiers! Let’s mix the fields like flour and oil in a Biblical grain offering.

Carter’s team had trouble separating the coffins: Ancient Egyptian priests (presumably) had poured fluid atop the innermost, solid-gold coffin. The fluid had congealed into a brown gunk, gluing the gold coffin to the bottom of the middle coffin. Removing the gold coffin required work—thermodynamic work.

Work consists of “well-ordered” energy usable in tasks like levering coffins out of sarcophagi and motoring artifacts from Egypt’s Valley of the Kings toward Cairo. We can model the gunk as a spring, one end of which was fixed to the gold coffin and one end of which was fixed to the middle coffin. The work W required to stretch a spring depends on the spring’s stiffness (the gunk’s viscosity) and on the distance stretched through.
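
For an idealized spring of stiffness k, stretched quasistatically through a distance x, that work is

\displaystyle W = \frac{1}{2} k x^2.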

W depends also on details: How many air molecules struck the gold coffin from above, opposing the team's effort? How quickly did Carter's team pull? Had the gunk above Tutankhamun's nose settled into a hump or spread out? How about the gunk above Tutankhamun's left eye socket? Such details barely impact the work required to open a 6.15-foot-long coffin. But air molecules would strongly impact W if Tutankhamun measured a few nanometers in length. So imagine Egyptian matryoshka dolls as long as stubs of DNA.

Imagine that Carter found one million sets of these matryoshka dolls. Lifting a given set’s innermost coffin would require an amount W of work that would vary from set of coffins to set of coffins. W would satisfy fluctuation relations, equalities I’ve blogged about many times.

Fluctuation relations resemble the Second Law of Thermodynamics, which illuminates why time flows in just one direction. But fluctuation relations imply more-precise predictions about W than the Second Law does.

Some predictions concern dissipated work: Carter’s team could avoid spending much work by opening the coffin infinitesimally slowly. Speeding up would heat the gunk, roil air molecules, and more. The heating and roiling would cost extra work, called dissipated work, denoted by W_{\rm diss}.

Suppose that Carter’s team has chosen a lid-opening speed v. Consider the greatest W_{\rm diss} that the team might have to waste on any nanoscale coffin. W_{\rm diss}^{\rm worst} is proportional to each of three information-theoretic quantities, my coauthors and I proved.

For experts: Each information-theoretic quantity is an order-infinity Rényi divergence D_\infty ( X || Y). The Rényi divergences generalize the relative entropy D ( X || Y ). D quantifies how efficiently one can distinguish between probability distributions, or quantum states, X and Y on average. The average is over many runs of a guessing game.

Imagine the worst possible run, which offers the lowest odds of guessing correctly. D_\infty quantifies your likelihood of winning. We related W_{\rm diss}^{\rm worst} to a D_\infty between two statistical-mechanical phase-space distributions (when we described classical systems), to a D_\infty between two quantum states (when we described quantum systems), and to a D_\infty between two probability distributions over work quantities W (when we described systems quantum and classical).
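
For the classically minded: given two probability distributions P and Q, the order-∞ Rényi divergence has the simple closed form

\displaystyle D_\infty(P \,\|\, Q) = \log \max_x \frac{P(x)}{Q(x)},

set entirely by the single outcome x at which P most overshoots Q — the worst case, where the ordinary relative entropy D captures the average.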

The worst case marks an extreme. How do the extremes consistent with physical law look? As though they’ve escaped from a mythologist’s daydream.

In an archaeologist’s worst case, arriving at home in the evening could lead to the following conversation:

“How was your day, honey?”

“The worst possible.”

“What happened?”

“I accidentally eviscerated a 3.5-thousand-year-old artifact—the most valuable, best-preserved, most information-rich, most lavishly wrought ancient Egyptian coffin that existed yesterday.”

Suppose that the archaeologist lived with a physicist. My group (guided by a high-energy physicist) realized that the conversation could continue as follows:

“And how was your day?”

“Also the worst possible.”

“What happened?”

“I created a black hole.”

General relativity and high-energy physics have begun breeding with quantum information and thermodynamics. The offspring bear extremes like few other systems imaginable. I wonder what our results would have to say about those offspring.

National Geographic reprinted Carter’s The Tomb of Tutankhamen in its “Adventure Classics” series. The series title fits Tomb as a mummy’s bandages fit the mummy. Carter’s narrative stretches from Egypt’s New Kingdom (of 3.5 thousand years ago) through the five-year hunt for the tomb (almost fruitless until the final season), to a water boy’s discovery of steps into the tomb, to the unsealing of the burial chamber, to the confrontation of Tutankhamun’s mummy.

Carter’s book guided me better than any audio guide could have at the California Science Center. The center is hosting the exhibition “King Tut: Treasures of the Golden Pharaoh.” After debuting in Los Angeles, the exhibition will tour the world. The tour showcases 150 artifacts from Tutankhamun’s tomb.

Those artifacts drove me to my desk—to my physics—as soon as I returned home from the museum. Tutankhamun's tomb, Carter argues in his book, ranks amongst the 20th century's most important scientific discoveries. I'd seen a smidgeon of the magnificence that Carter's team—with perseverance, ingenuity, meticulousness, and buckets of sweat shed in Egypt's heat—had discovered. I don't expect to discover anything a tenth as magnificent. But how can a young scientist resist trying?

People say, “Prepare for the worst. Hope for the best.” I prefer “Calculate the worst. Hope and strive for a Tutankhamun.”

Postscript: Carter’s team failed to unglue the gold coffin by just “stretching” the gunky “spring.” The team resorted to heat, a thermodynamic quantity alternative to work: The team flipped the middle coffin upside-down above a heat lamp. The lamp raised the temperature to 932°F, melting the goo. The melting, with more work, caused the gold coffin to plop out of the middle coffin.

A finger painting for John Preskill

I’d completed about half my candidacy exam.

Four Caltech faculty members sat in front of me, in a bare seminar room. I stood beside a projector screen, explaining research I’d undertaken. The candidacy exam functions as a milepost in year three of our PhD program. The committee confirms that the student has accomplished research and should continue.

I was explaining a quantum-thermodynamics problem. I reviewed the problem’s classical doppelgänger and a strategy for solving the doppelgänger. Could you apply the classical strategy in the quantum problem? Up to a point. Beyond it, you’d need

Beatles

“Does anyone here like the Beatles?” I asked the committee. Three professors had never participated in an exam committee before. The question from the examinee appeared to startle them.

One committee member had participated in cartloads of committees. He recovered first, raising a hand.

The committee member—John Preskill—then began singing the Beatles song.

In the middle of my candidacy exam.

The moment remains one of the highlights of my career.

FP3

Throughout my PhD career, I’ve reported to John. I’ve emailed an update every week and requested a meeting about once a month. I sketch the work that’s firing me, relate my plans, and request feedback.

Much of the feedback, I’ve discerned over the years, condenses into aphorisms buried in our conversations. I doubt whether John has noticed his aphorisms. But they’ve etched themselves in me, and I hope they remain there.

“Think big.” What would impact science? Don’t buff a teapot if you could be silversmithing.

Education serves as “money in the bank.” Invest in yourself, and draw on the interest throughout your career.

“Stay broad.” (A stretching outward of both arms accompanies this aphorism.) Embrace connections with diverse fields. Breadth affords opportunities to think big.

“Keep it simple,” but “do something technical.” A teapot cluttered with filigree, spouts, and eighteen layers of gold leaf doesn’t merit a spot at the table. A Paul Revere does.

“Do what’s best for Nicole.” I don’t know how many requests to speak, to participate on committees, to explain portions of his lecture notes, to meet, to contribute to reports, and more John receives per week. The requests I receive must look, in comparison, like a mouse to a mammoth. But John exhorts me to guard my time for research—perhaps, partially, because he gives so much time, including to students.

“Move on.” If you discover an opportunity, study background information for a few months, seize the opportunity, wrap up the project, and seek the next window.

John has never requested my updates, but he’s grown used to them. I’ve grown used to how meetings end. Having brought him questions, I invite him to ask questions of me.

“Are you having fun?” he says.

FP2

I tell the Beatles story when presenting that quantum-thermodynamics problem in seminars.

“I have to digress,” I say when the “Help!” image appears. “I presented this slide at a talk at Caltech, where John Preskill was in the audience. Some of you know John.” People nod. “He’s a…mature gentleman.”

I borrowed the term from the apparel industry. “Mature gentleman” means “at a distinguished stage by which one deserves to have celebrated a birthday of his with a symposium.”

Many physicists lack fluency in apparel-industry lingo. My audience members take “mature” at face value.

Some audience members grin. Some titter. Some tilt their heads from side to side, as though thinking, “Eh…”

John has impact. He’s logged boatloads of technical achievements. He has the scientific muscle of a rhinoceros.

And John has fun. He doesn’t mind my posting an article about audience members giggling about him.

Friends ask me whether professors continue doing science after meriting birthday symposia, winning Nobel Prizes, and joining the National Academy of Sciences. I point to the number of papers with which John has, with coauthors, electrified physics over the past 20 years. Has coauthored because science is fun. It merits singing about during candidacy exams. Satisfying as passing the exam felt two years ago, I feel more honored when John teases me about my enthusiasm for science.

FP5

A year ago, I ate lunch with an alumnus who’d just graduated from our group. Students, he reported, have a tradition of gifting John a piece of art upon graduating. I relayed the report to another recent alumnus.

“Really?” the second alumnus said. “Maybe someone gave John a piece of art and then John invented the tradition.”

Regardless of its origin, the tradition appealed to me. John has encouraged me to blog as he’s encouraged me to do theoretical physics. Writing functions as art. And writing resembles theoretical physics: Each requires little more than a pencil, paper, and thought. Each requires creativity, aesthetics, diligence, and style. Each consists of ideas, of abstractions; each lacks substance but can outlive its creator. Let this article serve as a finger painting for John Preskill.

Thanks for five fun years.

Committee

With my PhD-thesis committee, after my thesis defense. Photo credit to Nick Hutzler, who cracked the joke that accounts for everyone’s laughing. (Left to right: Xie Chen, Fernando Brandão, John Preskill, Nicole Yunger Halpern, Manuel Endres.)

A quantum podcast

A few months ago I sat down with Craig Cannon of Y Combinator for a discussion about quantum technology and other things. A lightly edited version was published this week on the Y Combinator blog. The video is also on YouTube:

If you’re in a hurry, or can’t stand the sound of my voice, you might prefer to read the transcript, which is appended below. Only by watching the video, however, can you follow the waving of my hands.

I grabbed the transcript from the Y Combinator blog post, so you can read it there if you prefer, but I’ve corrected some of the typos. (There are a few references to questions and comments that were edited out, but that shouldn’t cause too much confusion.)

Here we go:

Craig Cannon [00:00:00] – Hey, how’s it going? This is Craig Cannon, and you’re listening to Y Combinator’s Podcast. Today’s episode is with John Preskill. John’s a theoretical physicist and the Richard P. Feynman Professor of Theoretical Physics at Caltech. He once won a bet with Stephen Hawking and he writes that it made him briefly almost famous. Basically, what happened is John and Kip Thorne bet that singularities could exist outside of black holes. After six years, Hawking conceded. He said that they were possible in very special, “non-generic conditions.” I’ll link up some more details to that in the description. In this episode, we cover what John’s been focusing on for years, which is quantum information, quantum computing, and quantum error correction. Alright, here we go. What was the revelation that made scientists and physicists think that a quantum computer could exist?

John Preskill [00:00:54] – It’s not obvious. A lot of people thought it couldn’t. The idea that a quantum computer would be powerful was emphasized over 30 years ago by Richard Feynman, the Caltech physicist. It was interesting how he came to that realization. Feynman was interested in computation his whole life. He had been involved during the war in Los Alamos. He was the head of the computation group. He was the guy who fixed the little mechanical calculators, and he had a whole crew of people who were calculating, and he figured out how to flow the work from one computer to another. All that kind of stuff. As computing technology started to evolve, he followed that. In the 1970s, a particle physicist like Feynman, that’s my background too, got really interested in using computers to study the properties of elementary particles like the quarks inside a nucleus, you know? We know a proton isn’t really a fundamental object. It’s got little beans rattling around inside, but they’re quantum beans. Gell-Mann, who’s good at names, called them quarks.

John Preskill [00:02:17] – Now we’ve had a theory since the 1970s of how quarks behave, and so in principle, you know everything about the theory, you can compute everything, but you can’t because it’s just too hard. People started to simulate that physics with digital computers in the ’70s, and there were some things that they could successfully compute, and some things they couldn’t because it was just too hard. The resources required, the memory, the time were out of reach. Feynman, in the early ’80s said nature is quantum mechanical damn it, so if you want a simulation of nature, it should be quantum mechanical. You should use a quantum system to behave like another quantum system. At the time, he called it a universal quantum simulator.

John Preskill [00:03:02] – Now we call it a quantum computer. The idea caught on about 10 years later when Peter Shor made the suggestion that we could solve problems which don’t seem to have anything to do with physics, which are really things about numbers like finding the prime factors of a big integer. That caused a lot of excitement, in part because the implications for cryptography are a bit disturbing. But then physicists — good physicists — started to consider, can we really build this thing? Some concluded and argued fairly cogently that no, you couldn’t because of this difficulty that it’s so hard to isolate systems from the environment well enough for them to behave quantumly. It took a few years for that to sort out at the theoretical level. In the mid ’90s we developed a theory called quantum error correction. It’s about how to encode the quantum state that you’d like to protect in such a clever way that even if there are some interactions with the environment that you can’t control, it still stays robust.

John Preskill [00:04:17] – At first, that was just kind of a theorist’s fantasy — it was a little too far ahead of the technology. But 20 years later, the technology is catching up, and now this idea of quantum error correction has become something you can do in the lab.

Craig Cannon [00:04:31] – How does quantum error correction work? I’ve seen a bunch of diagrams, so maybe this is difficult to explain, but how would you explain it?

John Preskill [00:04:39] – Well, I would explain it this way. I don’t think I’ve said the word entanglement yet, have I?

Craig Cannon [00:04:43] – Well, I haven’t been checking off all the Bingo words yet.

John Preskill [00:04:45] – Okay, so let’s talk about entanglement because it’s part of the answer to your question, which I’m still not done answering, what is quantum physics? What do we mean by entanglement? It’s really the characteristic way, maybe the most important way that we know in which quantum is different from ordinary stuff, from classical. Now what does it mean, entanglement? It means that you can have a physical system which has many parts, which have interacted with one another, so it’s in kind of a complex correlated state of all those parts, and when you look at the parts one at a time it doesn’t tell you anything about the state of the whole thing. The whole thing’s in some definite state — there’s information stored in it — and now you’d like to access that information … Let me be a little more concrete. Suppose it’s a book.

John Preskill [00:05:40] – Okay? It’s a book, it’s 100 pages long. If it’s an ordinary book, 100 people could each take a page, and read it, they know what’s on that page, and then they could get together and talk, and now they’d know everything that’s in the book, right? But if it’s a quantum book written in qubits where these pages are very highly entangled, there’s still a lot of information in the book, but you can’t read it the way I just described. You can look at the pages one at a time, but a single page when you look at it just gives you random gibberish. It doesn’t reveal anything about the content of the book. Why is that? There’s information in the book, but it’s not stored in the individual pages. It’s encoded almost entirely in how those pages are correlated with one another. That’s what we mean by quantum entanglement: Information stored in those correlations which you can’t see when you look at the parts one at a time. You asked about quantum error correction?
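
An aside for readers who like to poke at such claims numerically: the book metaphor shrinks nicely to three qubits. In the sketch below (my illustration, using nothing but NumPy; the state and the page count are toy choices), the “book” is a GHZ state, and reading one “page” means tracing out the other two qubits. The page you get is maximally mixed, pure gibberish, even though the whole book is in a definite state.

import numpy as np

# A three-page "quantum book": the GHZ state (|000> + |111>)/sqrt(2).
psi = np.zeros(8)
psi[0b000] = psi[0b111] = 1 / np.sqrt(2)

# Read a single page: trace out qubits 1 and 2, keeping qubit 0.
rho = np.outer(psi, psi.conj()).reshape(2, 4, 2, 4)
page = np.einsum('ajbj->ab', rho)      # partial trace over the other pages

print(page)    # [[0.5, 0], [0, 0.5]]: maximally mixed, i.e., random gibberish

The state (|000> − |111>)/√2 is a different book, but it yields exactly the same single page; the difference lives entirely in the correlations.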

John Preskill [00:06:39] – What’s the basic idea? It’s to take advantage of that property of entanglement. Because let’s say you have a system of many particles. The environment is kind of kicking them around, it’s interacting with them. You can’t really completely turn off those interactions no matter how hard you try, but suppose we’ve encoded the information in entanglement. So, say, if you look at one atom, it’s not telling you anything about the information you’re trying to protect. The environment isn’t learning anything when it looks at the atoms one at a time.

John Preskill [00:07:15] – This is kind of the key thing — that what makes quantum information so fragile is that when you look at it, you disturb it. This ordinary water bottle isn’t like that. Let’s say we knew it was either here or here, and we didn’t know. I would look at it, I’d find out it’s here. I was ignorant of where it was to start with, and now I know. With a quantum system, when you look at it, you really change the state. There’s no way to avoid that. So if the environment is looking at it in the sense that information is leaking out to the environment, that’s going to mess it up. We have to encode the information so the environment, so to speak, can’t find out anything about what the information is, and that’s the idea of quantum error correction. If we encode it in entanglement, the environment is looking at the parts one at a time, but it doesn’t find out what the protected information is.
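
The smallest concrete example of this is the three-qubit repetition code, which protects one logical qubit against a bit flip. The toy simulation below (a sketch of the textbook code, not of any particular hardware; the amplitudes 0.6 and 0.8 are arbitrary) encodes a qubit across three physical qubits, lets the “environment” flip one of them, and then diagnoses the flip by measuring two parities. The parities reveal which qubit was hit while saying nothing about the protected amplitudes, which is the whole trick.

import numpy as np

# Encode a logical qubit a|0> + b|1> as a|000> + b|111>.
a, b = 0.6, 0.8
psi = np.zeros(8)
psi[0b000], psi[0b111] = a, b

def flip(state, q):                        # an X (bit-flip) error on qubit q
    out = np.zeros_like(state)
    for i in range(8):
        out[i ^ (1 << q)] = state[i]
    return out

psi = flip(psi, 1)                         # the environment flips qubit 1

def parity(state, q1, q2):                 # expectation of Z_q1 Z_q2
    signs = [(-1) ** (((i >> q1) & 1) ^ ((i >> q2) & 1)) for i in range(8)]
    return np.dot(signs, np.abs(state) ** 2)

# The two parities form the error syndrome: they locate the flip
# without revealing (or disturbing) a and b.
s01, s12 = int(round(parity(psi, 0, 1))), int(round(parity(psi, 1, 2)))
culprit = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(s01, s12)]
if culprit is not None:
    psi = flip(psi, culprit)               # undo the error

print(psi[0b000], psi[0b111])              # 0.6 0.8: the logical qubit survives

(A real device measures those parities with extra helper qubits, and needs more machinery to handle phase errors too; this is just the skeleton of the idea.)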

Craig Cannon [00:08:06] – In other words, it’s kind of measuring probability the whole way along, right?

John Preskill [00:08:12] – I’m not sure what you mean by that.

Craig Cannon [00:08:15] – Is it Grover’s algorithm where, as quantum bits roll through, go through gates, the probability is determined of what information’s being passed through? What’s being computed?

John Preskill [00:08:30] – Grover’s algorithm is a way of sort of doing an exhaustive search through many possibilities. Let’s say I’m trying to solve some problem like a famous one is the traveling salesman problem. I’ve told you what the distances are between all the pairs of cities, and now I want to find the shortest route I can that visits them all. That’s a really hard problem. It’s still hard for a quantum computer, but not quite as hard because there’s a way of solving it, which is to try all the different routes, and measure how long they are, and then find the one that’s shortest, and you’ve solved the problem. The reason it’s so hard to solve is there’s such a vast number of possible routes. Now what Grover’s algorithm does is it speeds up that exhaustive search.

John Preskill [00:09:29] – In practice, it’s not that big a deal. What it means is that if you have the same processing speed, you can handle about twice as many cities before the problem becomes too hard to solve, as you could if you were using a classical processor. As far as what’s quantum about Grover, it takes advantage of the property in quantum physics that probabilities … tell me if I’m getting too inside baseball …

Craig Cannon [00:10:03] – No, no, this is perfect.

John Preskill [00:10:05] – That probabilities are the squares of amplitudes. This is interference. Again, this is another part of the answer. Well, we can spend the whole hour answering the question, what is quantum physics? Another essential part of it is what we call interference, and this is really crucial for understanding how quantum computing works. That is that probabilities add. If you know the probability of one alternative, and you know the probability of another, then you can add those together and find the probability that one or the other occurred. It’s not like that in quantum physics. The famous example is the double slit interference experiment. I’m sending electrons, let’s say — it could be basketballs, but it’s an easier experiment to do with electrons —

John Preskill [00:11:02] – at a screen, and there are two holes in the screen. You can try to detect the electron on the other side of the screen, and when you do that experiment many times, you can plot a graph showing where the electron was detected in each run, or make a histogram of all the different outcomes. And the graph wiggles, okay? If you could say there’s some probability of going through the first hole, and some probability of going through the second, and each time you detected it, it went through either one or the other, there’d be no wiggles in that graph. It’s the interference that makes it wiggle. The essence of the interference is that nobody can tell you whether it went through the first slit or the second slit. The question is sort of inadmissible. This interference then occurs when we can add up these different alternatives in a way which is different from what we’re used to. It’s not right to say that the electron was detected at this point because it had some probability of going through the first hole, and some probability of going through the second

John Preskill [00:12:23] – and we add those probabilities up. That doesn’t give the right answer. The different alternatives can interfere. This is really important for quantum computing because what we’re trying to do is enhance the probability or the time it takes to find the solution to a problem, and this interference can work to our advantage. We want to have, when we’re doing our search, we want to have a higher chance of getting the right answer, and a lower chance of getting the wrong answer. If the different wrong answers can interfere, they can cancel one another out, and that enhances the probability of getting the right answer. Sorry it’s such a long-winded answer, but this is how Grover’s algorithm works.

John Preskill [00:13:17] – It can speed up exhaustive search by taking advantage of that interference phenomenon.
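
Here’s that speedup in miniature: a standard textbook simulation of Grover’s iteration (plain NumPy, nothing tied to real hardware; N and the marked item are arbitrary). With N = 1,024 items, about (π/4)√N ≈ 25 iterations pile nearly all the probability onto the marked item, where a classical exhaustive search would expect to examine around 500.

import numpy as np

N, marked = 1024, 123                     # N items, one of them "marked"
psi = np.ones(N) / np.sqrt(N)             # uniform superposition over all items

for _ in range(int(np.pi / 4 * np.sqrt(N))):    # about 25 iterations
    psi[marked] *= -1                     # oracle: flip the marked item's sign
    psi = 2 * psi.mean() - psi            # diffusion: invert about the mean

print(np.abs(psi[marked]) ** 2)           # ~0.999: measure, and you find it

Run the loop too long, by the way, and the probability falls again; the interference rotates past the target, which is one more way the algorithm differs from a classical search.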

Craig Cannon [00:13:20] – Well this is kind of one of the underlying questions among many of the questions from Twitter. You’ve hit our record for most questions asked. Basically, many people are wondering what quantum computers really will do if and when it becomes a reality that they outperform classical computers. What are they going to be really good at?

John Preskill [00:13:44] – Well, you know what? I’m not really sure. If you look at the history of technology, it would be hubris to expect me to know. It’s a whole different way of dealing with information. Quantum information is not just … a quantum computer is not just a faster way of computing. It deals with information in a completely new way because of this interference phenomenon, because of entanglement that we’ve talked about. We have limited vision when it comes to predicting decades out what the impact will be of an entirely new way of doing things. Information processing, in particular. I mean you know this well. We go back to the 1960s, and people are starting to put a few transistors on a chip. Where is that going to lead? Nobody knew.

Craig Cannon [00:14:44] – Even early days of the internet.

John Preskill [00:14:45] – Yeah, good example.

Craig Cannon [00:14:46] – Even the first browser. No one really knew what anyone was going to do with it. It makes total sense.

John Preskill [00:14:52] – For good or ill. Yeah. But we have some ideas, you know? I think … why are we confident there will be some transformative effect on society? Of the things we know about, and I emphasize again, probably the most important ones are things we haven’t thought of when it comes to applications of quantum computing, the ones which will affect everyday life, I think, are better methods for understanding and inventing new materials, new chemical compounds. Things like that can be really important. If you find a better way of capturing carbon by designing a better catalyst, or you can design pharmaceuticals that have new effects, materials that have unusual properties. These are quantum physics problems because those properties of the molecule or the material really have to do with the underlying quantum behavior of the particles, and we don’t have a good way for solving such problems or predicting that behavior using ordinary digital computers. That’s what a quantum computer is good at. It’s maybe not the only thing it’s good at, but one thing it should certainly be good at is telling us quantitatively how quantum systems behave. In the two contexts I just mentioned, there’s little question that there will be practical impact of that.

Craig Cannon [00:16:37] – It’s not just doing the traveling salesman problem through the table of elements to find these compounds.

John Preskill [00:16:49] – No. If it were, that wouldn’t be very efficient.

Craig Cannon [00:16:52] – Exactly.

John Preskill [00:16:53] – Yeah. No, it’s much trickier than that. Like I said, the exhaustive search, though conceptually it’s really interesting that quantum can speed it up because of interference, from a practical point of view it may not be that big a deal. It means that, well like I said, in the same amount of time you can solve an instance which is twice as big of the problem. What we really get excited about are the so-called exponential speed ups. That was why Shor’s algorithm was exciting in 1994, because factoring large numbers was a problem that had been studied by smart people for a long time, and on that basis, the fact that there weren’t any fast ways of solving it was pretty good evidence it’s a hard problem. Actually, we don’t know how to prove that from first principles. Maybe somebody will come along one day and figure out how to solve factoring very fast on a digital computer. It doesn’t seem very likely because people have been trying for so long to solve problems like that, and it’s just intractable with ordinary computers. You could say the same thing about these quantum physics problems. Maybe some brilliant graduate student is going to drop a paper on the arXiv tomorrow which will say, “Here, I solved quantum chemistry, and I can do it on a digital computer.” But we don’t think that’s very likely because we’ve been working pretty hard on these problems for decades and they seem to be really hard. Those cases, like these number theoretic problems,

John Preskill [00:18:40] – which have cryptological implications, and tasks for simulating the behavior of quantum systems, we’re pretty sure those are hard problems classically, and we’re pretty sure quantum computers … I mean we have algorithms that have been proposed, but which we can’t really run currently because our quantum computers aren’t big enough on the scale that’s needed to solve problems people really care about.

Craig Cannon [00:19:09] – Maybe we should jump to one of the questions from Twitter which is related to that. Travis Scholten (@Travis_Sch) asked, what are the most pressing problems in physics, let’s say specifically around quantum computers, that you think substantial progress ought to be made in to move the field forward?

John Preskill [00:19:27] – I know Travis. He was an undergrad here. How you doing, Travis? The problems that we need to solve to make quantum computing closer to realization at the level that would solve problems people care about? Well, let’s go over where we are now.

Craig Cannon [00:19:50] – Yeah, definitely.

John Preskill [00:19:51] – People have been working on quantum hardware for 20 years, working hard, and there are a number of different approaches to building the hardware, and nobody really knows which is going to be the best. I think we’re far from collapsing to one approach which everybody agrees has the best long-term prospects for scalability. And so it’s important that a lot of different types of hardware are being pursued. We can come back to what some of the different approaches are later. Where are we now? We think in a couple of years we’ll have devices with about 50 to 100 qubits, and we’ll be able to control them pretty well. That’s an interesting range because even though 50 to 100 qubits doesn’t sound like that big a deal, that’s already too many to simulate with a digital computer, even with the most powerful supercomputers today. From that point of view, these relatively small, near-term quantum computers which we’ll be fooling around with over the next five years or so, are doing something that’s kind of super-classical.
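
The arithmetic behind “too many to simulate” is stark: a general state of n qubits takes 2^n complex amplitudes to write down. A quick tally (16 bytes per amplitude is the usual double-precision cost):

# Memory needed to store a full n-qubit state vector.
for n in (30, 40, 50):
    print(n, "qubits:", 16 * 2 ** n / 1e9, "GB")
# 30 qubits: ~17 GB         (a beefy laptop)
# 40 qubits: ~17,600 GB     (a large cluster)
# 50 qubits: ~18,000,000 GB (about 18 petabytes, beyond any machine today)

Clever simulation methods can push past brute force for special circuits, but the exponential wall is real.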

John Preskill [00:21:14] – At least, we don’t know how to do exactly the same things with ordinary computers. Now that doesn’t mean they’ll be able to do anything that’s practically important, but we’re going to try. We’re going to try, and there are ideas about things we’ll try out, including baby versions of these problems in chemistry, and materials, and ways of speeding up optimization problems. Nobody knows how well those things are going to work at these small scales. Part of the reason is not just the number of qubits is small, but they’re also not perfect. We can perform elementary operations on pairs of qubits, which we call quantum gates, like the gates in ordinary logic. But they have an error rate a little under one error per 100 gates. If you have a circuit with 1,000 gates, that’s a lot of noise.
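
To see why, multiply it out: if each gate fails with probability p, a G-gate circuit runs fault-free with probability roughly (1 - p)^G.

p = 0.01                     # per-gate error rate: one error per 100 gates
for G in (100, 1000):
    print(G, "gates:", (1 - p) ** G)
# 100 gates:  ~0.37 (usually at least one error already)
# 1000 gates: ~0.00004 (essentially never clean without error correction)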

Craig Cannon [00:22:18] – Exactly. Does, for instance, a 100-qubit quantum computer really mean a 100-qubit quantum computer, or do you need a certain amount of backup going on?

John Preskill [00:22:29] – In the near term, we’re going to be trying out, and probably we have the best hopes for, kind of hybrid classical-quantum methods with some kind of classical feedback. You try to do something on the quantum computer, you make a measurement that gives you some information, then you change the way you did it a little bit, and try to converge on some better answer. That’s one possible way of addressing optimization that might be faster on a quantum computer. But I just wanted to emphasize that the number of qubits isn’t the only metric. How good they are, and in particular, the reliability of the gates, how well we can perform them … that’s equally important. Anyway, coming back to Travis’ question, there are lots of things that we’d like to be able to do better. But just having much better qubits would be huge, right? If you … more or less, with the technology we have now, you can have a gate error rate of a few parts in 1,000, you know? If you can improve that by orders of magnitude, then obviously, you could run bigger circuits. That would be very enabling.

John Preskill [00:23:58] – Even if you stick with 100 qubits just by having a circuit with more depth, more layers of gates, that increases the range of what you could do. That’s always going to be important. Because, I mean look at how crappy that is. A gate error rate, even if it’s one part in 1,000, that’s pretty lousy compared to if you look at where–

Craig Cannon [00:24:21] – Your phone has a billion transistors in it. Something like that, and 0%–

John Preskill [00:24:27] – You don’t worry about the … it’s gotten to the point where there is some error protection built in at the hardware level in a processor, because I mean, we’re doing these crazy things like going down from the 11 nanometer scale for features on a chip.

Craig Cannon [00:24:45] – How are folks trying to deal with interference right now?

John Preskill [00:24:50] – You mean, what types of devices? Yeah, so that’s interesting too because there are a range of different ways to do it. I mentioned that we could store information, we could make a qubit out of a single atom, for example. That’s one approach. You have to control a whole bunch of atoms and get them to interact with one another. One way of doing that is with what we call trapped ions. That means the atoms have electrical charges. That’s a good thing because then you could control them with electric fields. You could hold them in a trap, and you can isolate them, like I said, in a very high vacuum so they’re not interacting too much with other things in the laboratory, including stray electric and magnetic fields. But that’s not enough because you got to get them to talk to one another. You got to get them to interact. We have this set of desiderata, which are kind of in tension with one another. On the one hand, we want to isolate the qubits very well. On the other hand, we want to control them from the outside and get them to do what we want them to do, and eventually, we want to read them out. You have to be able to read out the result of the computation. But the key thing is the control. You could have two of those qubits in your device interact with one another in a specified way, and to do that very accurately you have to have some kind of bus that gets the two to talk to one another.

John Preskill [00:26:23] – The way they do that in an ion trap is pretty interesting. It’s by using lasers and controlling how the ions vibrate in the trap, and with a laser kind of excite wiggles of the ion, and then by determining whether the ions are wiggling or not, you can go address another ion, and that way you can do a two-qubit interaction. You can do that pretty well. Another way is really completely different. What I just described was encoding information at the one atom level. But another way is to use superconductivity — circuits in which electric current flows without any dissipation. In that case, you have a lot of freedom to sort of engineer the circuits to behave in a quantum way. There are many nuances there, but the key thing is that you can encode information now in a system that might involve the collective motion of billions of electrons, and yet you can control it as though it were a single atom. I mean, here’s one oversimplified way of thinking about it.

John Preskill [00:27:42] – Suppose you have a little loop of wire, and there’s current flowing in the loop. It’s a superconducting wire so it just keeps flowing. Normally, there’d be resistance, which would dissipate that as heat, but not for the superconducting circuit, which of course, has to be kept very cold so it stays superconducting. But you can imagine in this little loop that the current is either circulating clockwise or counterclockwise. That’s a way of encoding information. It could also be both at once, and that’s what makes it a qubit.

Craig Cannon [00:28:14] – Right.

John Preskill [00:28:15] – And so in that case, even though it involves lots of particles, the magic is that you can control that system extremely well. I mentioned individual electrons. That’s another approach. Put the qubit in the spin of a single electron.

Craig Cannon [00:28:32] – You also mentioned better qubits. What did you mean by that?

John Preskill [00:28:35] – Well, what I really care about is how well I can do the gates. There’s a whole other approach, which is motivated by the desire to have much, much better control over the quantum information than we do in those systems that I mentioned so far, superconducting circuits and trapped ions. That’s actually what Microsoft is pushing very hard. We call it topological quantum computing. Topological is a word physicists and mathematicians love. It means, well, we’ll come back to what it means. Anyway, let me just tell you what they’re trying to do. They’re trying to make a much, much better qubit, which they can control much, much better using a completely different hardware approach.

Craig Cannon [00:29:30] – Okay.

John Preskill [00:29:32] – It’s very ambitious because at this point, it’s not even clear they have a single qubit, but if that approach is successful, and it’s making progress, we will see a validated qubit of this type soon. Maybe next year. Nobody really knows where it goes from there, but suppose it’s the case that you could do a two-qubit gate with an error rate of one in a million instead of one in 1,000. That would be huge. Now, scaling all these technologies up, is really challenging from a number of perspectives, including just the control engineering.

Craig Cannon [00:30:17] – How are they doing it or attempting to do it?

John Preskill [00:30:21] – You know, you could ask, where did all this progress come from over 20 years, or so? For example, with the superconducting circuits, a sort of crucial measure is what we call the coherence time of the qubit, which roughly speaking, means how much it interacts with the outside world. The longer the coherence time, the better. The rate of what we call decoherence is essentially how much it’s getting buffeted around by outside influences. For the superconducting circuits, those coherence times have increased about a factor of 10 every three years, going back 15 years or so.

Craig Cannon [00:31:06] – Wow.

John Preskill [00:31:07] – Now, it won’t necessarily go on like that indefinitely, but in order to achieve that type of progress, you need better materials, better fabrication, better control. The way you control these things is with microwave circuitry. Not that different from the kind of things that are going on in communication devices. All those things are important, but going forward, the control is really the critical thing. Coherence times are already getting pretty long, I mean having them longer is certainly good. But the key thing is to get two qubits to interact just the way you want them to. Now, I keep saying the key thing is the environment, but it’s not the only key thing, right? Because you have some qubit, like if you think about that electron spin, one way of saying it is I said it can be both up and down at the same time. Well, there’s a simpler way of saying that. It might not point either up or down. It might point some other way. But there really are a continuum of ways it could point. That’s not like a bit. See, it’s much easier to stabilize a bit because it’s got two states.

John Preskill [00:32:31] – But if it can kind of wander around in the space of possible configurations for a qubit, that makes it much harder to control. People have gotten better at that, a lot better at that in the last few years.

Craig Cannon [00:32:44] – Interesting. Joshua Harmon asked, what engineering strategy for quantum computers do you think has the most promise?

John Preskill [00:32:53] – Yeah, so I mentioned some of these different approaches, and I guess I’ll interpret the question as, which one is the winning horse? I know better than to answer that question! They’re all interesting. For the near term, the most advanced are superconducting circuits and trapped ions, which is why I mentioned those first. I think that will remain true over the next five to 10 years. Other technologies have the potential — like these topologically protected qubits — to surpass those, but it’s not going to happen real soon. I kind of like superconducting circuits because there’s so much phase space of things you can do with them. Of ways you can engineer and configure them, and imagine scaling them up.

John Preskill [00:33:54] – They have the advantage of being faster. The cycle time, time to do a gate, is faster than with the trapped ions. Just the basic physics of the interactions is different. In the long term, those electron spins could catapult ahead of these other things. That’s something that you can naturally do in silicon, and it’s potentially easy to integrate with silicon technology. Right now, the qubits and gates aren’t as good as the other technologies, but that can change. I mean, from a theorist’s perspective, this topological approach is very appealing. We can imagine it takes off maybe 10 years from now and it becomes the leader. I think it’s important to emphasize we don’t really know what’s going to scale the best.

Craig Cannon [00:34:50] – Right. And are there multiple attempts being made around programming quantum computers?

John Preskill [00:34:55] – Yeah. I mean, some of these companies are working on quantum technology now, which includes well-known big players like IBM, and Google, and Microsoft and Intel, but also a lot of startups. They are trying to encompass the full stack, so they’re interested in the hardware, and the fabrication, and the control technology. But also, the software, the applications, the user interface. All those things are certainly going to be important eventually.

Craig Cannon [00:35:38] – Yeah, they’re pushing it almost to like an AWS layer. Where you interact with your quantum computer in a server farm and you don’t even touch it.

John Preskill [00:35:49] – That’s how it will be in the near term. You’re not going to have, most of us won’t, have a quantum computer sitting on your desktop, or in your pocket. Maybe someday. In the near term, it’ll be in the Cloud, and you’ll be able to run applications on it by some kind of web interface. Ideally, that should be designed so the user doesn’t have to know anything about quantum physics in order to program or use it, and I think that’s part of what some of these companies are moving toward.

Craig Cannon [00:36:24] – Do you think it will get to the level where it’s in your pocket? How do you deal with that when you’re below one kelvin?

John Preskill [00:36:32] – Well, if it’s in your pocket, it probably won’t be one kelvin.

Craig Cannon [00:36:35] – Yeah, probably not.

John Preskill [00:36:38] – What do you do? Well, there’s one approach, as an example, which I guess I mentioned in passing before, where maybe it doesn’t have to be at such low temperature, and that’s nuclear spins. Because they’re very weakly interacting with the outside world, you can have quantum information in a nuclear spin, which — I’m not saying that it would be undisturbed for years, but seconds, which is pretty good. And you can imagine that getting significantly longer. Someday you might have a little quantum smart card in your pocket. The nice thing about that particular technology is you could do it at room temperature. Still have long coherence times. If you go to the ATM and you’re worried that there’s a rogue bank that’s going to steal your information, one solution to that problem — I’m not saying there aren’t other solutions — is to have a quantum card where the bank will be able to authenticate it without being able to forge it.

Craig Cannon [00:37:54] – We should talk about the security element. Kevin Su asked what risk would quantum computers pose to current encryption schemes? So public key, and what changes should people be thinking about if quantum computers come in the next five years, 10 years?

John Preskill [00:38:12] – Yeah. Quantum computers threaten those systems that are in widespread use. Whenever you’re using a web browser and you see that little padlock and you’re at an HTTPS site, you’re using a public key cryptosystem to protect your privacy. Those cryptosystems rely for their security on the presumed hardness of computational problems. That is, it’s possible to crack them, but it’s just too hard. RSA, which is one of the ones that’s widely used … as typically practiced today, to break it you’d have to do something like factor a number which is over 2000 bits long, 2048. That’s too hard to do now. But that’s what quantum computers will be good at. Another one that’s widely used is called elliptic curve cryptography. Doesn’t really matter exactly what it is.

John Preskill [00:39:24] – But the point is that it’s also vulnerable to quantum attack, so we’re going to have to protect our privacy in different ways when quantum computers are prevalent.

Craig Cannon [00:39:37] – What are the attempts being made right now?

John Preskill [00:39:39] – There are two main classes of attempts. One is just to come up with a cryptographic protocol not so different conceptually from what’s done now, but based on a problem that’s hard for quantum computers.

Craig Cannon [00:39:59] – There you go.

John Preskill [00:40:02] – It turns out that what has sort of become the standard way doesn’t have that feature, and there are alternatives that people are working on. We speak of post-quantum cryptography, meaning the protocols that we’ll have to use when we’re worried that our adversaries have quantum computers. I don’t think there’s any proposed cryptosystem — although there’s a long list of them by now which people think are candidates for being quantum resistant, for being unbreakable or hard to break by quantum computers — that the world has sufficient confidence in now, confidence that it’s really hard for a quantum adversary, enough that we’re all going to switch over. But it’s certainly time to be thinking about it. When people worry about their privacy, of course different users have different standards, but the US Government sometimes says they would like a system to stay secure for 50 years. They’d like to be able to use it for 20, roughly speaking, and then have the intercepted traffic be protected for another 30 after that. I don’t think, though I could be wrong, that we’re likely to have quantum computers that can break those public key cryptosystems in 10 years, but in 50 years seems not unlikely,

John Preskill [00:41:33] – and so we should really be worrying about it. The other one is actually using quantum communication for privacy. In other words, if you and I could send qubits to one another instead of bits, it opens up new possibilities. The way to think about these public key schemes — or one way — that we’re using now, is I want you to send me a private message, and I can send you a lockbox. It has a padlock on it, but I keep the key, okay? But you can close up the box and send it to me. But I’m the only one with the key. The key thing is that if you have the padlock you can’t reverse engineer the key. Of course, it’s a digital box and key, but that’s the idea of public key. The idea of what we call quantum key distribution, which is a particular type of quantum cryptography, is that I can actually send you the key, or you can send me your key, but why can’t any eavesdropper then listen in and know the key? Well it’s because it’s quantum, and remember, it has that property that if you look at it, you disturb it.

John Preskill [00:42:59] – So if you collect information about my key, or if the adversary does, that will cause some change in the key, and there are ways in which we can check whether what you received is really what I sent. And if it turns out it’s not, or it has too many errors in it, then we’ll be suspicious that there was an adversary who tampered with it, and then we won’t use that key. Because we haven’t used it yet — we’re just trying to establish the key. We do the test to see whether an adversary interfered. If it passes the test, then we can use the key. And if it fails the test, we throw that key away and we try again. That’s how quantum cryptography works, but it requires a much different infrastructure than what we’re using now. We have to be able to send qubits … well, it’s not completely different because you can do it with photons. Of course, that’s how we communicate through optical fiber now — we’re sending photons. It’s a little trickier sending quantum information through an optical fiber, because of that issue that interactions with the environment can disturb it. But nowadays, you can send quantum information through an optical fiber over tens of kilometers with a low enough error rate so it’s useful for communication.
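
That “looking disturbs it” story can be simulated with coin flips. The sketch below is a bare-bones BB84-style toy (idealized: a noiseless channel, and the simplest possible eavesdropper, who intercepts each photon, measures in a randomly guessed basis, and resends what she saw). Eve’s guessing corrupts about a quarter of the bits Alice and Bob keep, so the sample they publicly compare exposes her:

import numpy as np
rng = np.random.default_rng(0)

n = 4000
bits = rng.integers(2, size=n)            # Alice's raw key bits
a_base = rng.integers(2, size=n)          # Alice's basis choices (0: Z, 1: X)

# Eve intercepts, measures in a guessed basis, and resends what she saw.
# On a wrong basis guess, her outcome (and the resent photon) is random.
e_base = rng.integers(2, size=n)
e_bits = np.where(e_base == a_base, bits, rng.integers(2, size=n))

# Bob measures the resent photons in his own random bases.
b_base = rng.integers(2, size=n)
b_bits = np.where(b_base == e_base, e_bits, rng.integers(2, size=n))

# Sift: keep rounds where Alice's and Bob's bases agree, then compare
# a public sample to estimate the error rate.
keep = a_base == b_base
print(np.mean(bits[keep] != b_bits[keep]))   # ~0.25 with Eve; ~0.0 without

Seeing an error rate like that, Alice and Bob throw the key away and try again, exactly as described above.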

Craig Cannon [00:44:22] – Wow.

John Preskill [00:44:23] – Of course, we’d like to be able to scale that up to global distances.

Craig Cannon [00:44:26] – Sure.

John Preskill [00:44:27] – And there are big challenges in that. But anyway, so that’s another approach to the future of privacy that people are interested in.

Craig Cannon [00:44:35] – Does that necessitate quantum computers on both ends?

John Preskill [00:44:38] – Yes, but not huge ones. The reason … well, yes and no. At the scale of tens of kilometers, no. You can do that now. There are prototype systems that are in existence. But if you really want to scale it up —  in other words, to send things longer distance — then you have to bring this quantum error correction idea into the game.

John Preskill [00:45:10] – Because at least with our current photonics technology, there’s no way I can send a single photon from here to China without there being a very high probability that it gets lost in the fiber somewhere. We have to have what we call quantum repeaters, which can boost the signal. But it’s not like the usual type of repeater that we have in communication networks now. The usual type is you measure the signal, and then you resend it. That won’t work for quantum because as soon as you measure it you’re going to mess it up. You have to find a way of boosting it without knowing what it is. Of course, it’s important that it works that way because otherwise, the adversary could just intercept it and resend it. And so it will require some quantum processing to get that quantum error correction in the quantum repeater to work. But it’s a much more modest scale quantum processor than we would need to solve hard problems.

Craig Cannon [00:46:14] – Okay. Gotcha. What are the other things you’re both excited about, and worried about for potential business opportunities? Snehan, I’m mispronouncing names all the time, Snehan Kekre asks, budding entrepreneurs, what should they be thinking about in the context of quantum computing?

John Preskill [00:46:37] – There’s more to quantum technology than computing. Something which has good potential to have an impact in the relatively near future is improved sensing. Quantum systems, partly because of that property that I keep emphasizing that they can’t be perfectly isolated from the outside, they’re good at sensing things. Sometimes, you want to detect it when something in the outside world messes around with your qubit. Again, using this technology of nuclear spins, which I mentioned you can do at room temperature potentially, you can make a pretty good sensor, and it can potentially achieve higher sensitivity and spatial resolution, look at things on shorter distance scales than other existing sensing technology. One of the things people are excited about are the biological and medical implications of that.

John Preskill [00:47:53] – If you can monitor the behavior of molecular machines, probe biological systems at the molecular level using very powerful sensors, that would surely have a lot of applications. One interesting question you can ask is, can you use these quantum error correction ideas to make those sensors even more powerful? That’s another area of current basic research, where you could see significant potential economic impact.

Craig Cannon [00:48:29] – Interesting. In terms of your research right now, what are you working on that you find both interesting and incredibly difficult?

John Preskill [00:48:40] – Everything I work on–

Craig Cannon [00:48:41] – 100%.

John Preskill [00:48:42] – Is both interesting and incredibly difficult. Well, let me change direction a little from what we’ve been talking about so far. Well, I’m going to tell you a little bit about me.

Craig Cannon [00:48:58] – Sure.

John Preskill [00:49:00] – I didn’t start out interested in information in my career. I’m a physicist. I was trained as an elementary particle theorist, studying the fundamental interactions and the elementary particles. That drew me into an interest in gravitation because one thing that we still have a very poor understanding of is how gravity fits together with the other fundamental interactions. The way physicists usually say it is we don’t have a quantum theory of gravity, at least not one that we think is complete and satisfactory. I’ve been interested in that question for many decades, and then got sidetracked because I got excited about quantum computing. But you know what? I’ve always looked at quantum information not just as a technology. I’m a physicist, I’m not an engineer. I’m not trying to build a better computer, necessarily, though that’s very exciting, and worth doing, and if my work can contribute to that, that’s very pleasing. I see quantum information as a new frontier in the exploration of the physical sciences. Sometimes I call it the entanglement frontier. Physicists, we like to talk about frontiers, and stuff. Short distance frontier. That’s what we’re doing at CERN in the Large Hadron Collider, trying to discern new properties of matter at distances which are shorter than we’ve ever been able to explore before.

John Preskill [00:50:57] – There’s a long distance frontier in cosmology. We’re trying to look deeper into the universe and understand its structure and behavior at earlier times. Those are both very exciting frontiers. This entanglement frontier is increasingly going to be at the forefront of basic physics research in the 21st century. By entanglement frontier, I just mean scaling up quantum systems to larger and larger complexity where it becomes harder and harder to simulate those systems with our existing digital tools. That means we can’t very well anticipate the types of behavior that we’re going to see. That’s a great opportunity for new discovery, and that’s part of what’s going to be exciting even in the relatively near term. When we have 100 qubits … there are some things that we can do to understand the behavior of the dynamics of a highly complex system of 100 qubits that we’ve never been able to experimentally probe before. That’s going to be very interesting. But what we’re starting to see now is that these quantum information ideas are connecting to these fundamental questions about gravitation, and how to think about it quantumly. And it turns out, as is true for most of the broader implications of quantum physics, the key thing is entanglement.

John Preskill [00:52:36] – We can think of the microscopic structure of spacetime, the geometry of where we live. Geometry just means who’s close to who else. If we’re in the auditorium, and I’m in the first row and you’re in the fourth row, the geometry is how close we are to one another. Of course, that’s very fundamental in both space and time. How far apart are we in space? How far apart are we in time? Is geometry really a fundamental thing, or is it something that’s kind of emergent from some even more fundamental concept? It seems increasingly likely that it’s really an emergent property.

John Preskill [00:53:29] – That there’s something deeper than geometry. What is it? We think it’s quantum entanglement. That you can think of the geometry as arising from quantum correlations among parts of a system. That’s really what defines who’s close to who. We’re trying to explore that idea more deeply, and one of the things that comes in is the idea of quantum error correction. Remember the whole idea of quantum error correction was that we could make a quantum system behave the way we want it to because it’s well-protected against the damaging effects of noise. It seems like quantum error correction is part of the deep secret of how spacetime geometry works. It has a kind of intrinsic robustness coming from these ideas of quantum error correction that makes space meaningful, so that it doesn’t just evaporate when you tap on it. If you wanted to, you could think of the spacetime, the space that you’re in and the space that I’m in, as parts of a system that are entangled with one another.

John Preskill [00:54:45] – What would happen if we broke that entanglement and your part of space became disentangled from my part? Well what we think that would mean is that there’d be no way to connect us anymore. There wouldn’t be any path through space that starts over here with me and ends with you. It’d become broken apart into two pieces. It’s really the entanglement which holds space together, which keeps it from falling apart into little pieces. We’re trying to get a deeper grasp of what that means.

Craig Cannon [00:55:19] – How do you make any progress on that? That seems like the most unbelievably difficult problem to work on.

John Preskill [00:55:26] – It’s difficult because, well for a number of reasons, but in particular, because it’s hard to get guidance from experiment, which is how physics historically–

Craig Cannon [00:55:38] – All science.

John Preskill [00:55:38] – Has advanced.

Craig Cannon [00:55:39] – Yeah.

John Preskill [00:55:41] – Although it was fun a moment ago to talk about what would happen if we disentangled your part of space from mine, I don’t know how to do that in the lab right now. Of course, part of the reason is we have the audacity to think we can figure these things out just by thinking about them. Maybe that’s not true. Nobody knows, right? We should try. Solving these problems is a great challenge, and it may be that the apes that evolved on Earth don’t have the capacity to understand things like the quantum structure of spacetime. But maybe we do, so we should try. Now in the longer term, and maybe not such a long term, maybe we can get some guidance from experiment. In particular, what we’re going to be doing with quantum computers and the other quantum technologies that are becoming increasingly sophisticated in the next couple of decades, is we’ll be able to control very well highly entangled complex quantum systems. That should mean that in a laboratory, on a tabletop, I can sort of make my own little toy space time …

John Preskill [00:57:02] – with an emergent geometry arising from the properties of that entanglement, and I think that’ll teach us lessons because systems like that are the types of system that, because they’re so highly entangled, digital computers can’t simulate them. It seems like only quantum computers are potentially up to the task. So that won’t be quite the same as disentangling your side of the room from mine, in real life. But we’d be able to do it in a laboratory setting using model systems, which I think would help us to understand the basic principles better.

Craig Cannon [00:57:39] – Wild. Yeah, desktop space time seems pretty cool, if you could figure it out.

John Preskill [00:57:43] – Yeah, it’s pretty fundamental. We didn’t really talk about, well, we did implicitly, but not in so many words, what people sometimes call quantum non-locality. It’s another way of describing quantum entanglement, actually. There’s this notion of Bell’s theorem, that when you look at the correlations among the parts of a quantum system, they’re different from any possible classical correlations. Some things that you read give you the impression that you can use that to instantaneously send information over long distances. It is true that if we have two qubits, electron spins, say, and they’re entangled with one another, then what’s kind of remarkable is that I can measure my qubit to see along some axis whether it’s up or down, and you can measure yours, and we will get perfectly correlated results. When I see up, you’ll see up, say, and when I see down, you’ll see down. And sometimes, people make it sound like that’s remarkable. That’s not remarkable in itself. Somebody could’ve flipped a pair of coins, you know,

John Preskill [00:59:17] – so that they came up both heads or both tails, and given one to you and one –

Craig Cannon [00:59:20] – Split them apart.

John Preskill [00:59:20] – to me.

Craig Cannon [00:59:21] – Yeah.

John Preskill [00:59:22] – And gone a light year apart, and then we both …  hey, mine’s heads. Mine’s heads too!

Craig Cannon [00:59:24] – And then they call it quantum teleportation on YouTube.

John Preskill [00:59:28] – Yeah. Of course, what’s really important about entanglement that makes it different from just those coins is that there’s more than one way of looking at a qubit. We have what we call complementary ways of measuring it, so you can ask whether it’s up or down along this axis or along that axis. There’s nothing like that for the coins. There’s just one way to look at it. What’s cool about entanglement is that we’ll get perfectly correlated results if we both measure in the same way, but there’s more than one possible way that we could measure. What sometimes gets said, or the impression people get, is that that means that when I do something to my qubit, it instantaneously affects your qubit, even if we’re on different sides of the galaxy. But that’s not what entanglement does. It just means they’re correlated in a certain way.

John Preskill [01:00:30] – When you look at yours, if we have maximally entangled qubits, you just see a random bit. It could be a zero or a one, each occurring with probability 1/2. That’s going to be true no matter what I did to my qubit, and so you can’t tell what I did by just looking at it. It’s only that if we compared notes later we can see how they’re correlated, and that correlation holds for either one of these two complementary ways in which we could both measure. It’s that fact that we have these complementary ways to measure that makes it impossible for a classical system to reproduce those same correlations. So that’s one misconception that’s pretty widespread. Another one is this about quantum computing, which is in trying to explain why quantum computers are powerful, people will sometimes say, well, it’s because you can superpose –I used that word before, you can add together many different possibilities. That means that, whereas an ordinary computer would just do a computation once, acting on a superposition a quantum computer can do a vast number of computations all at once.

John Preskill [01:01:54] – There’s a certain sense in which that’s mathematically true if you interpret it right, but it’s very misleading. Because in the end, you’re going to have to make some measurement to read out the result. When you read it out, there’s a limited amount of information you can get. You’re not going to be able to read out the results of some huge number of computations in a single-shot measurement. Really the key thing that makes it work is this idea of interference, which we discussed briefly when you asked about Grover’s algorithm. The art of a quantum algorithm is to make sure that the wrong answers interfere and cancel one another out, so the right answer is enhanced. That’s not automatic. It requires that the quantum algorithm be designed in just the right way.
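
A minimal numerical sketch of that interference idea, in the spirit of Grover’s algorithm (again an illustration, not Preskill’s own example): an oracle flips the sign of the right answer, and an “inversion about the mean” makes the wrong answers partially cancel while the right one grows.

```python
import numpy as np

# Toy Grover iteration on N = 16 items with one marked "right answer".
N, marked = 16, 7
amps = np.full(N, 1 / np.sqrt(N))          # start in the uniform superposition

iterations = int(np.pi / 4 * np.sqrt(N))   # ~optimal count, 3 for N = 16
for _ in range(iterations):
    amps[marked] *= -1                     # oracle: flip the sign of the answer
    amps = 2 * amps.mean() - amps          # diffusion: inversion about the mean

probs = amps ** 2
print(f"P(marked) after {iterations} iterations: {probs[marked]:.3f}")
print(f"P(one particular wrong item): {probs[(marked + 1) % N]:.4f}")
```

For 16 items, three iterations push the marked item’s probability above 95 percent, compared with the 1/16 you would get by guessing.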

Craig Cannon [01:02:50] – Right. The diagrams I’ve seen online, at least, usually involve squaring the output as it goes along, and then essentially that flips the correct answer to the positive, and the others are in the negative position. Is that accurate?

John Preskill [01:03:08] – I wouldn’t have said it the way you did, because you can’t really measure it as you go along. Once you measure it, the magic of superposition is going to be lost.

John Preskill [01:03:19] – It means that now there’s some definite outcome or state. To take advantage of this interference phenomenon, you need to delay the measurement. Remember when we were talking about the double slit and I said, if you actually see these wiggles in the probability of detection, which is the signal of interference, that means that there’s no way anybody could know whether the electron went through hole one or hole two? It’s the same way with quantum computing. If you think of the computation as being a superposition of different possible computations, it wouldn’t work — there wouldn’t be a speed up — if you could know which of those paths the computation followed. It’s important that you don’t know. And so you have to sum up all the different computations, and that’s how the interference phenomenon comes into play.
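
The double-slit point fits in a few lines of Python, too (a toy model with made-up phases): when no one can tell which path was taken, you add the two amplitudes and then square, and the wiggles appear; when the path is known, you add the two probabilities instead, and the pattern goes flat.

```python
import numpy as np

# Two paths (slits) to each detector position x, with opposite accumulated
# phases (a toy stand-in for the path-length difference).
x = np.linspace(-5, 5, 11)
amp1 = np.exp(+2j * x) / np.sqrt(2)
amp2 = np.exp(-2j * x) / np.sqrt(2)

# No which-path information: add amplitudes, then square -> interference.
coherent = np.abs(amp1 + amp2) ** 2
# Which-path known: add probabilities -> flat, no wiggles.
which_path = np.abs(amp1) ** 2 + np.abs(amp2) ** 2

for xi, c, w in zip(x, coherent, which_path):
    print(f"x = {xi:+5.1f}   paths summed: {c:5.2f}   path known: {w:5.2f}")
```

The first column oscillates between 0 and 2 as x varies; the second sits at 1 everywhere, which is the “no interference” signature Preskill describes.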

Craig Cannon [01:04:17] – To take a little sidetrack, you mentioned Feynman before. And before we started recording you mentioned working with him. I know I’m in the Feynman fan club, for sure. What was that experience like?

John Preskill [01:04:32] – We never really collaborated. I mean, we didn’t write a paper together, or anything like that. We overlapped for five years at Caltech. I arrived here in 1983. He died in 1988. We had offices on the same corridor, and we talked pretty often because we were both interested in the fundamental interactions, and in particular, what we call quantum chromodynamics. It’s our theory of how nuclear matter behaves, how quarks interact, what holds the proton together, those kinds of things. One big question is what does hold the proton together? Why don’t the quarks just fall apart? That was an example of a problem that both he and I were very interested in, and which we talked about sometimes. Now, this was pretty late in his career. When I think about it now, when I arrived at Caltech, that was 1983, Feynman was born in 1918, so he was 65. I’m 64 now, so maybe he wasn’t so old, right? But at the time, he seemed pretty ancient to me. Since I was 30.

John Preskill [01:05:58] – Those who interacted with Dick Feynman when he was really at his intellectual peak in the ’40s, ’50s, and ’60s probably saw even more extraordinary intellectual feats than I witnessed interacting with the 65-year-old Feynman. He just loved physics, you know? He just thought everything was so much fun. He loved talking about it. He wasn’t as good a listener as a talker, but actually, well, that’s a little unfair, isn’t it? It was kind of funny, because Feynman always wanted to think things through for himself, sort of from first principles, rather than rely on guidance from the experts who had thought about these things before. Well, that’s fine. You should try to understand things as deeply as you can on your own, and sort of reconstruct the knowledge from the ground up. That’s very enabling, and it gives you new insights. But he was a little too dismissive, in my view, of what the other guys knew. So I could slip things in, because I didn’t tell him, “Dick, you should read this paper by Polyakov, because he solved that problem you’re talking about.” Well, maybe I did, but he wouldn’t have listened.

John Preskill [01:07:39] – But I knew what Polyakov had said about it, so I would say, “Oh well, look, why don’t we look at it this way?” And so he thought I was having all these insights, but the truth was, the big difference between Feynman and me in the mid 1980s was that I was reading the literature, and he wasn’t.

Craig Cannon [01:08:00] – That’s funny.

John Preskill [01:08:01] – Probably, if he had been, he would’ve been well served, but that wasn’t the way he liked to work on things. He wanted to find his own approach. Of course, that had worked out pretty well for him throughout his career.

Craig Cannon [01:08:15] – What other qualities did you notice about him when he was roaming the corridors?

John Preskill [01:08:21] – He’d always be drumming. So you would know he was around because he’d actually be walking down the hallway drumming on the wall.

Craig Cannon [01:08:27] – Wait, with his hands, or with sticks, or–

John Preskill [01:08:29] – No, hands. He’d just be tapping.

Craig Cannon [01:08:32] – Just a bongo thing.

John Preskill [01:08:33] – Yeah. That was one thing. He loved to tell stories. You’ve probably read the books that Ralph Leighton put together based on the stories Feynman told. Ralph did an amazing job of capturing Feynman’s personality in writing those stories down, because I’d heard a lot of them. I’m sure he told the same stories to many people many times, because he loved telling stories. But the book really captures his voice pretty well.

John Preskill [01:09:12] – If you had heard him tell some of these stories, and then you read the way Ralph Leighton transcribed them, you can hear Feynman talking. At the time that I knew him, one of the experiences that he went through was he was on the Challenger commission after the space shuttle blew up. He was in Washington a lot of the time, but he’d come back from time to time, and he would sort of sit back and relax in our seminar room and start bringing us up to date on all the weird things that were happening on the Challenger commission. That was pretty fun.

Craig Cannon [01:09:56] – That’s really cool.

John Preskill [01:09:56] – A lot of that got captured in the second volume. I guess it’s the one called What Do You Care What Other People Think? There’s a chapter of him telling stories about the Challenger commission. He was interested in everything. It wasn’t just physics. He was very interested in biology. He was interested in computation. I remember how excited he was when he got his first IBM PC, probably not long after I got to Caltech. Yeah, it was what they called the AT. We thought it was a pretty sexy machine. I had one, too. He couldn’t wait to start programming it in BASIC.

Craig Cannon [01:10:50] – Very cool.

John Preskill [01:10:51] – Because that was so much fun.

Craig Cannon [01:10:52] – There was a question I was kind of curious to hear your answer to. Tika asks about, essentially, teaching people about quantum computers. They say, many kids in grade 10 can code. Some can play with machine learning tools without knowing the math. Can quantum computing become as simple and/or accessible?

John Preskill [01:11:17] – Maybe so. At some level, when people say quantum mechanics is counterintuitive, that it’s hard for us to grasp, that it’s so foreign to our experience, that’s true. The way things behave at the microscopic scale is, like we discussed earlier, really different from the way ordinary stuff behaves. But it’s a question of familiarity. What I wouldn’t be surprised by is that if you go out a few decades, kids who are 10 years old are going to be playing quantum games. That’s an application area that doesn’t get discussed very much, but there could be a real market there, because people love games. Quantum games are different, and the strategies are different, and what you have to do to win is different. If you play the game enough, you start to get the hang of it.
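
The standard example of such a game is the CHSH game, which comes straight out of the Bell correlations discussed earlier. Here is a sketch (my illustration, using the textbook optimal measurement angles): any classical strategy wins at most 75 percent of rounds, while players sharing an entangled pair win about 85 percent.

```python
import numpy as np

# CHSH game: a referee sends random bits x (to Alice) and y (to Bob), who
# answer with bits a and b, winning iff a XOR b == x AND y. The best
# classical strategy wins 75% of rounds; sharing a Bell pair and measuring
# in tilted bases wins cos^2(pi/8) ~ 85.4%.
rng = np.random.default_rng(42)
shots = 100_000

alice_bases = [0.0, np.pi / 4]        # Alice's measurement angle for x = 0, 1
bob_bases = [np.pi / 8, -np.pi / 8]   # Bob's measurement angle for y = 0, 1

wins = 0
for _ in range(shots):
    x, y = rng.integers(0, 2), rng.integers(0, 2)
    # For the Bell state, P(a == b) = cos^2 of the angle between the bases.
    p_agree = np.cos(alice_bases[x] - bob_bases[y]) ** 2
    agree = rng.random() < p_agree
    a_xor_b = 0 if agree else 1
    wins += a_xor_b == (x & y)

print(f"quantum strategy win rate: {wins / shots:.3f}   (classical limit: 0.75)")
```

Playing a game like this enough times is one way to “get the hang of” entanglement without first studying the formalism.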

John Preskill [01:12:26] – I don’t see any reason why kids who have not necessarily deeply studied physics can’t get a pretty good feel for how quantum mechanics works. You know, even the way ordinary physics works, maybe it’s not so intuitive. Newton’s laws … Aristotle couldn’t get it right. He thought you had to keep pushing on something to get it to keep moving. That wasn’t right. Galileo was able to roll balls down a ramp, and things like that, and see that he didn’t have to keep pushing to keep them moving. He could see that a ball was uniformly accelerated in a gravitational field. Newton took that to a much more general and powerful level. You fool around with stuff, and you get the hang of it. And I think quantum stuff can be like that. We’ll experience it in a different way, but when we have quantum computers, in a way, that opens the opportunity for trying things out and seeing what happens.

John Preskill [01:13:50] – After you’ve played the game enough, you start to anticipate. And actually, it’s an important point about the applications. One of the questions you asked me at the beginning was what are we able to do with quantum computers? And I said, I don’t know. So how are we going to discover new applications? It might just be, at least in part, by fooling around. A lot of the classical algorithms that people use on today’s computers were discovered, or were discovered to be powerful, by experimenting. By trying them. I don’t know … what’s an example of that? Well, the simplex method that we use in linear programming. I don’t think there was a mathematical proof that it was fast at first, but people did experiments, and they said, hey, this is pretty fast.
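
For readers who want to fool around in that spirit, here is a tiny linear program solved in Python, assuming SciPy is available; its HiGHS backend includes a modern descendant of the simplex method, whose practical speed was indeed established empirically long before the theory caught up.

```python
from scipy.optimize import linprog

# Tiny linear program: maximize x + 2y subject to x + y <= 4, x >= 0, y >= 0.
# linprog minimizes, so we negate the objective and the reported optimum.
result = linprog(
    c=[-1, -2],
    A_ub=[[1, 1]], b_ub=[4],
    bounds=[(0, None), (0, None)],
    method="highs",   # HiGHS backend, which includes a dual-simplex solver
)
print(result.x, -result.fun)   # expected: [0. 4.] and 8.0
```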

Craig Cannon [01:14:53] – Well, you’re seeing it a lot now in machine learning.

John Preskill [01:14:57] – Yeah, well that’s a good example.

Craig Cannon [01:14:58] – You test it out a million times over when you’re running simulations, and it turns out that’s what works. Following the thread of education, and maybe your political interests, given the year that it is, do you have thoughts on how you would adjust or change STEM education?

John Preskill [01:15:23] – Well, no particularly original thoughts. But I do think that with STEM education, we shouldn’t think of it as: we’re going to need this technical workforce, so we’d better train them. The key thing is we want the general population to be able to reason effectively, and to recognize when an argument is phony and when it’s authentic. To think about, how can I check whether what I just read on Facebook is really true? And I see that as part of the goal of STEM education. When you’re teaching kids in school how to understand the world by doing experiments, by looking at the evidence, by reasoning from the evidence, this is something that we apply in everyday life, too. I don’t know exactly how to implement this–

John Preskill [01:16:36] – But I think we should have that perspective: that we’re trying to educate a public which is eventually going to make critical decisions about our democracy, and they should understand how to tell when something is true or not. That’s a hard thing to do in general, but you know what I mean. If you’re a person who’s used to evaluating evidence, and making a judgment based on that evidence about whether an argument is good or not (it doesn’t necessarily have to be technical evidence), you can apply that to all the things you hear and read, and make better judgments.

Craig Cannon [01:17:23] – What about on the policy side? Let’s see, JJ Francis asked whether you or any of your colleagues would ever consider running for office. Curious about science policy in the US.

John Preskill [01:17:38] – Well, it would be good if we had more scientifically trained people in government. Very few members of Congress have that kind of background. I know of one: Bill Foster, a physicist from Illinois. He was a particle physicist, and he worked at Fermilab, and now he’s in Congress, very interested in the science and educational policy aspects of government. Rush Holt was a congressman from New Jersey who had a background in physics. He retired from the House a couple of years ago, but he was in Congress for something like 18 years, and he had a positive influence, because he had a voice that people respected when it came to science policy. Having more people like that would help. Now, another thing: it doesn’t have to be elective office.

Craig Cannon [01:18:39] – Right.

John Preskill [01:18:42] – There are a lot of technically trained people in government, many of them making their careers in agencies that deal with technical issues. At the Department of Defense, of course, there are a lot of technical issues. In the Obama Administration we had two successive secretaries of energy who were very, very good physicists. Steve Chu is a Nobel Prize-winning physicist. Then Ernie Moniz, who’s a real authority on nuclear energy and weapons. That kind of expertise makes a difference in government.

John Preskill [01:19:24] – Now the Secretary of Energy is Rick Perry. It’s a different background.

Craig Cannon [01:19:28] – Yeah, you could say that. Just as a kind of historical reference, what policies did they put in place where you really felt their hand as physicists moving things forward?

John Preskill [01:19:44] – You mean in particular–

Craig Cannon [01:19:45] – I’m talking the Obama Administration.

John Preskill [01:19:49] – Well, I think the Department of Energy, the DOE, tried to facilitate technical innovation by seeding new technologies, by supporting startup companies that were trying to do things that would improve battery technology, and solar power, and things like that, which could benefit future generations. They had an impact by doing that. You don’t have to be a Nobel Prize-winning physicist to think that’s a good idea. That the administration felt that was a priority made a difference, and appointing a physicist at the Department of Energy was, if nothing else, highly symbolic of how important those things are.

Craig Cannon [01:20:52] – On the quantum side, a reader, Vikas Karad, asked where the Quantum Valley might be, as in a Silicon Valley for quantum computing. Do you have thoughts?

John Preskill [01:21:06] – Well… I don’t know, but if you look at what’s happened over the last couple of years, there have been a number of quantum startups, and a notable number of them are in the Bay Area. Why so? Well, that’s where the tech industry is concentrated and where the people who are interested in financing innovative technical startups are concentrated. If you are an entrepreneur interested in starting a company, and you’re concerned about how to fundraise for it, it kind of makes sense to locate in that area. Now, that’s what’s sort of happening now, and it may not continue, of course. It might not be like that indefinitely. Nothing lasts forever, but I would say that’s the place. Silicon Valley is likely to be Quantum Valley, the way things are right now.

Craig Cannon [01:22:10] – Well then what about the physicists who might be listening to this? If they’re thinking about starting a company, do you have advice for them?

John Preskill [01:22:22] – Just speaking very generally: if you’re putting a team together, different people have different expertise. Take quantum computing as an example. Like we were saying earlier, some of the big players and the startups want to do everything. They want to build the hardware, figure out better ways to fabricate it, better control, better software, better applications. Nobody can be an expert on all those things. Of course, you’ll hire a software person to write your software, and a microwave engineer to figure out your control, and of course that’s the right thing to do. But I think in that arena, and it probably applies to other entrepreneurial activity relating to physics, being able to communicate across those boundaries is very valuable, and you can see it in quantum computing now. If the man or woman who’s involved in the software has that background, but there’s not a big communication barrier in talking to the people who are doing the control engineering, that can be very helpful. It makes sense to give some preference to people who are comfortable doing that, or who have a background that stretches across more than one of those areas of expertise. That can be very enabling in a technology arena like quantum computing today, where we’re trying to do really, really hard stuff, and you don’t know whether you’ll succeed, and you want to give it your best go by seeing the connections between those different things.

Craig Cannon [01:24:28] – Would you advise someone then to maybe teach, or try to explain it to, I don’t know, their young cousins? Because Feynman is maybe recognized as the king of communicating physics, at least for a certain period of time. How would you advise someone to get better at it so they can be more effective?

John Preskill [01:24:50] – Practice. There are different aspects of that. This isn’t what you meant at all, but I’ll say it anyway, because what you asked brought it to mind. If you teach, you learn. We have this odd model in the research university that a professor like me is supposed to do research and teach. Why don’t we hire teachers and researchers separately? Why do we have the same people doing both? Well, part of the reason, for me, is that most of what I know, what I’ve learned since my own school education ended, is knowledge I acquired by trying to teach it. To keep our intellect rejuvenated, we have to have that experience of trying to teach new things, things we didn’t know that well before, to other people. That deepens your knowledge. Just thinking about how you convey it makes you ask questions that you might not think to ask otherwise, and you say, “Hey, I don’t know the answer to that.” Then you have to try to figure it out. So I think that applies, at varying levels, to any situation in which a scientist, or somebody with a technical background, is trying to communicate.

John Preskill [01:26:21] – By thinking about how to get it across to other people, we can get new insights, you know? We can look at it in a different way. It’s not a waste of time. Aside from the benefits of actually successfully communicating, we benefit from it in this other way. But other than that… Have fun with it, you know? Don’t look at it as a burden, or some kind of task you have to do along with all the other things you’re doing. It should be a pleasure. When it’s successful, it’s very gratifying. If you put a lot of thought into how to communicate something and you think people are getting it, that’s one of the ways that somebody in my line of work can get a lot of satisfaction.

Craig Cannon [01:27:23] – If now were your opportunity to teach a lot of people about physics, and you could just point someone to things, who would you advise them to follow? Say they want to learn more about quantum computing, they want to learn about physics. What should they be reading? What YouTube channel should they follow? What should they pay attention to?

John Preskill [01:27:44] – Well, one communicator who I have great admiration for is Leonard Susskind, who’s at Stanford. You mentioned Feynman as the great communicator, and that’s fair, but in terms of style and personality, among physicists who are currently active, I think Lenny Susskind is the most similar to Feynman of anyone I can think of. He’s a no-bullshit kind of guy. He wants to give you the straight stuff. He doesn’t want to water it down for you. But he’s very gifted when it comes to making analogies and creating the illusion that you’re understanding what he’s saying. If you just go to YouTube and search Leonard Susskind, you’ll see lectures that he’s given at Stanford, where they have some kind of extension school for people who are not Stanford students, people in the community, a lot of them in the tech community because it’s Stanford. He’s giving courses on quite sophisticated topics, but also on more basic topics, and he’s in the process of turning those into books. I’m not sure how many of those have appeared, but he has a series called The Theoretical Minimum,

John Preskill [01:29:19] – which is supposed to be the gentle introduction to different topics like classical physics, quantum physics, and so on. He’s pretty special I think in his ability to do that.

Craig Cannon [01:29:32] – I need to subscribe. Actually, here’s a question then. Of the things you’ve relearned while teaching over the past, I guess it’s 35 years now.

John Preskill [01:29:46] – Shit, is that right?

Craig Cannon [01:29:47] – Something like that.

John Preskill [01:29:48] – That’s true. Yeah.

Craig Cannon [01:29:51] – What were the big things, what were the revelations?

John Preskill [01:29:55] – That’s how I learned quantum computing, for one thing. I was not at all knowledgeable about information science. That wasn’t my training. Back when I was in school, physicists didn’t learn much about things like information theory, computer science, complexity theory. One of the great things about quantum computing is its interdisciplinary character, that it brings these different things into contact, which traditionally had not been part of the common curriculum of any community of scholars. I decided 20 years ago that I should teach a quantum information class at Caltech, and I worked very hard on it that year. Not that I’m an expert, or anything, but I learned a lot about information theory, and things like channel capacity, and computational complexity — how we classify the hardness of problems — and algorithms. Things like that, which I didn’t really know very well. I had sort of a passing familiarity with some of those things from reading some of the quantum computing literature, but that’s no substitute for teaching a class, because then you really have to synthesize it and figure out your own way of presenting it. Most of the notes are typed up, and you can still get to them on my website. That was pretty transformative for me … and it was easier then, 20 years ago, I guess, than it is now, because it was such a new topic.

John Preskill [01:31:49] – But I really felt I was kind of close enough to the cutting edge on most of those topics by the time I’d finished the class that I wasn’t intimidated by another paper I’d read or a new thing I’d hear about those things. That was probably the one case where it really made a difference in my foundation of knowledge, in a way that enabled me to do things. But I had the same experience in particle physics. When I was a student, I read a lot. I was very broadly interested in physics. But the first time (I was still at Harvard then; later I taught a similar course here), I’m in my late 20s, just a year or two out of graduate school, and I decide to teach a very comprehensive class on elementary particles … in particular, quantum chromodynamics, the theory of nuclear forces like we talked about before. It just really expanded my knowledge to have that experience of teaching that class. I still draw on that. I can still remember that experience, and I think I get ideas that I might not otherwise have because I went through that.

Craig Cannon [01:33:23] – I want to get involved now. I want to go back to school, or maybe teach a class. I don’t know.

John Preskill [01:33:27] – Well, what’s stopping you?

Craig Cannon [01:33:29] – Nothing. Alright, thanks John.

John Preskill [01:33:32] – Okay, thank you Craig.