# The World Cup from a Quantum Perspective

Two weeks into the Football World Cup and the group stages are over: 16 teams have gone home, leaving the top 16 to contest the knock-out stages. Those fans who enjoy a bet will be poring over the odds in search of a bargain—a miscalculation on the part of the bookmakers. Is now the time to back Ronaldo for the golden boot, whilst Harry Kane dominates the headlines and sports betting markets? Will the hosts Russia continue to defy lowly pre-tournament expectations and make the semi-finals? Are France about to emerge from an unconvincing start to the tournament and blossom as clear front-runners?

But whilst for most fans the sports betting markets simply offer the enhanced enjoyment that a bet can bring (as well as the possibility of making a little money), for others they represent a window into the fascinating world of sports statistics—a fascination captured by the simple question: how do they set the odds?

Suppose that a bookmaker has in their possession a list of outcome probabilities for matches between each pair of national football teams in the world and wants to predict the overall winner. There are $2^{15} = 32768$ possible ways for the tournament knock-out rounds to pan out—a large, but not insurmountable, number of iterations by modern computational standards.

However, if the bookmaker instead considers the tennis grand-slams, with 128 competitors in the first round, then there are a colossal $1.7 \times 10^{38}$ permutations. Indeed, in a knock-out format there are $2^{n-1}$ permutations, where $n$ is the number of entrants. And for those of a certain mindset, this exponentially growing space immediately raises the question of whether a quantum algorithm can yield a speed-up for the related prediction tasks.
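This counting is easy to sanity-check in code (a minimal sketch; the entrant counts are the ones quoted above):

```python
# Number of distinct ways a single-elimination bracket can play out:
# each of the n - 1 matches has two possible winners, giving 2^(n - 1).
def knockout_permutations(n_entrants: int) -> int:
    return 2 ** (n_entrants - 1)

print(knockout_permutations(16))   # 32768, the World Cup knockout rounds
print(knockout_permutations(128))  # about 1.7e38, a tennis grand slam
```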

## A Tiny Cup

The immediate question which we want to answer here is, perhaps, who will win the World Cup. We will walk through the idea on the blackboard first, and then implement it as a quantum algorithm—which, hopefully, will give some insight into how and where quantum computers can outperform classical ones, for this particular way of answering the question.

Let us take a toy setup with four teams A, B, C and D;
the knockout stage starts with a game A vs. B, and C vs. D.
The winners of those two games then play each other, so here we have four possible final games: A vs. C, A vs. D, B vs. C, or B vs. D.
Let’s denote by p(X, Y) the probability that X wins when playing against Y.

The likelihood of A winning the cup is then simply given by

p(A, B) × ( p(C, D) × p(A, C) + p(D, C) × p(A, D) ),

i.e. the probability that A wins against B, times the probabilities of A winning against C in case C won against D, plus the probability of A winning against D in case D won.
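As a sanity check, this calculation can be carried out directly in code. The match probabilities below are made up purely for illustration:

```python
# Toy four-team cup. p[(x, y)] is the (invented) probability that x
# beats y in a single match; note p[(x, y)] + p[(y, x)] = 1.
p = {
    ("A", "B"): 0.6, ("B", "A"): 0.4,
    ("C", "D"): 0.7, ("D", "C"): 0.3,
    ("A", "C"): 0.5, ("C", "A"): 0.5,
    ("A", "D"): 0.8, ("D", "A"): 0.2,
}

# The formula above: A beats B, then beats whoever survives C vs. D.
prob_A_wins = p[("A", "B")] * (p[("C", "D")] * p[("A", "C")]
                               + p[("D", "C")] * p[("A", "D")])
print(prob_A_wins)  # 0.6 * (0.7*0.5 + 0.3*0.8) = 0.354, up to float rounding
```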

## How can we obtain the same quantity with a quantum algorithm?

First, we set up our Hilbert space so that it can represent all possible Cup scenarios.
Since we have four teams, we need a four-dimensional quantum system as our smallest storage unit. Such systems are commonly called qudits, generalizations of the qubit, which, having dimension 2, could store only two teams. (We can always “embed” a qudit into a few qubits of matching total dimension—remember, $k$ qubits have dimension $2^k$, so our four-dimensional qudit can also be stored as two qubits.)
If we write $|A\rangle$, this simply stands for a qudit representing team A; if we write $|A\rangle |B\rangle$, then we have a state representing two teams.

To represent a full knockout tree, we follow the same logic: Take four qudits for the initial draw; add two qudits for the winners of the first two matches, and one qudit for the final winner.

For instance, one possible knockout scenario would be

$|\text{Game 1}\rangle = \underbrace{|A\rangle |B\rangle |C\rangle |D\rangle}_\text{Initial Draw} \ \underbrace{|A\rangle |D\rangle}_\text{Finalists} \ |D\rangle.$

The probability associated with Game 1 is then precisely p(A, B) × p(D, C) × p(D, A).

Here is where quantum computing comes in.

Starting from an initial state $|A\rangle |B\rangle |C\rangle |D\rangle$, we create two new slots in a superposition over all possible match outcomes, weighted by the square-root of their probabilities (which we call q instead of p):

\begin{aligned} |\text{Step 1}\rangle = |A\rangle |B\rangle |C\rangle |D\rangle \big(\ \ & q(A, B)\, q(C, D) \,|A\rangle\ |C\rangle +\\ & q(A, B)\, q(D, C) \,|A\rangle\ |D\rangle +\\ & q(B, A)\, q(C, D) \,|B\rangle\ |C\rangle +\\ & q(B, A)\, q(D, C) \,|B\rangle\ |D\rangle\ \big). \end{aligned}

For the final round, we perform the same operation on those two last slots; e.g. we would map $|A\rangle |C\rangle$ to a state $|A\rangle |C\rangle ( q(A, C) |A\rangle + q(C, A) |C\rangle )$. The final state is thus a superposition over eight possible weighted games (as we would expect).
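The amplitude bookkeeping can be sketched classically (no quantum hardware involved): enumerate the eight game trees and attach to each the product of square-root match probabilities. The probabilities below are invented for illustration:

```python
import math

# q(x, y) = sqrt(p(x, y)), the square-root weights from the text.
p = {("A", "B"): 0.6, ("B", "A"): 0.4, ("C", "D"): 0.7, ("D", "C"): 0.3,
     ("A", "C"): 0.5, ("C", "A"): 0.5, ("A", "D"): 0.8, ("D", "A"): 0.2,
     ("B", "C"): 0.45, ("C", "B"): 0.55, ("B", "D"): 0.35, ("D", "B"): 0.65}
q = {k: math.sqrt(v) for k, v in p.items()}

amplitudes = {}
for w1 in ("A", "B"):                       # winner of A vs. B
    l1 = "B" if w1 == "A" else "A"
    for w2 in ("C", "D"):                   # winner of C vs. D
        l2 = "D" if w2 == "C" else "C"
        for champ in (w1, w2):              # winner of the final
            runner = w2 if champ == w1 else w1
            amplitudes[(w1, w2, champ)] = (
                q[(w1, l1)] * q[(w2, l2)] * q[(champ, runner)])

# The state is normalized: the eight squared amplitudes sum to 1.
print(sum(a ** 2 for a in amplitudes.values()))
```

Squaring an individual amplitude recovers the classical probability of that game tree; for the Game 1 scenario above, $|q(A,B)\,q(D,C)\,q(D,A)|^2 = p(A,B)\,p(D,C)\,p(D,A)$.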

## So you can tell me who wins the World Cup?

Yes. Or well, probably. We find out by measuring the rightmost qudit.
As we know, the probability of obtaining a certain measurement outcome, say A, will then be determined by the square of the weights in front of the measured state; since we put in the square-roots initially we recover the original probabilities. Neat!

And since there are two possible game trees that lead to a victory of A, we have to sum them up—and we get precisely the probability we calculated by hand above. This means the team that is most likely to win will be the most likely measurement outcome.

So what about the World Cup? We have 16 teams; one team can thus be stored in four qubits. The knockout tree has 31 vertices, and a naive implementation can be done on a quantum computer with 124 qubits. Of course we are only a bit naive, so we can simulate this quantum computer on a classical one and obtain the following winning probabilities:
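The qubit count follows from simple bookkeeping (a sketch of the arithmetic, not of the circuit itself):

```python
import math

# One slot per vertex of the knockout tree; each slot needs
# ceil(log2(n_teams)) qubits to name a team.
def naive_qubit_count(n_teams: int) -> int:
    vertices = 2 * n_teams - 1            # e.g. 16 + 8 + 4 + 2 + 1 = 31
    qubits_per_slot = math.ceil(math.log2(n_teams))
    return vertices * qubits_per_slot

print(naive_qubit_count(16))  # 31 * 4 = 124, as quoted above
print(naive_qubit_count(4))   # 7 * 2 = 14, the toy cup
```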

| Team | Winning probability |
| --- | --- |
| Brazil | 0.194 |
| Spain | 0.168 |
| France | 0.119 |
| Belgium | 0.092 |
| Argentina | 0.082 |
| England | 0.075 |
| Croatia | 0.049 |
| Colombia | 0.041 |
| Portugal | 0.040 |
| Uruguay | 0.032 |
| Russia | 0.031 |
| Switzerland | 0.022 |
| Denmark | 0.019 |
| Sweden | 0.018 |
| Mexico | 0.012 |
| Japan | 0.006 |

It is worth noting that all operations we described can be implemented efficiently with a quantum computer, and the number of required qubits is quite small; for the four teams, we could get away with seven qudits, or fourteen qubits (and we could even save some, by ignoring the first four qudits which are always the same).

So for this particular algorithm there is an exponential speedup over its non-probabilistic classical counterpart: as mentioned, one would have to iterate over all trees; tedious for the World Cup, practically impossible for tennis. However…

## Classical vs. Quantum

Does using a quantum algorithm give us a speedup for this task? Here, the answer is no; one could obtain similar results in comparable time using probabilistic methods, for instance, by doing Monte Carlo sampling.
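For concreteness, here is what such a Monte Carlo estimate looks like for the four-team toy cup, with the same kind of invented match probabilities as before:

```python
import random

# p[(x, y)] is the (invented) probability that x beats y; the reverse
# fixture is inferred as 1 - p[(y, x)].
p = {("A", "B"): 0.6, ("C", "D"): 0.7, ("A", "C"): 0.5,
     ("A", "D"): 0.8, ("B", "C"): 0.45, ("B", "D"): 0.35}

def play(x, y):
    prob = p[(x, y)] if (x, y) in p else 1 - p[(y, x)]
    return x if random.random() < prob else y

def sample_champion():
    w1 = play("A", "B")      # first semifinal
    w2 = play("C", "D")      # second semifinal
    return play(w1, w2)      # final

random.seed(0)
n = 100_000
wins = {team: 0 for team in "ABCD"}
for _ in range(n):
    wins[sample_champion()] += 1

# A's estimated winning probability converges on the exact value
# (0.354 for these numbers) as n grows.
print({team: wins[team] / n for team in "ABCD"})
```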

But there are several interesting related questions that we could ask for which there might be a quantum advantage.

For some team A, we can easily create a state that has all game trees in superposition that lead to a victory of A—even weighting them using their respective probabilities.
Given this state as a resource, we can think of questions like “which game tree is most likely, given that we fix A and B as semifinalists”, or “which team should A play in the knockout stages to maximize the probability that B wins the tournament”.

Or, more controversially: can we optimize the winning chances for some team by rearranging the initial draw?

Some questions like these lend themselves to applying Grover search, for which there is a known speedup over classical computers. To inquire deeper into the utility of quantum algorithms, we need to invent the right kind of question to ask of this state.

Let us think of one more toy example. Being part physicists, we assume cows are spheres—so we might as well also assume that if A is likely to win a match against B, it always wins—even if the probability is only 51%. Let’s call this exciting sport “deterministic football”. For a set of teams playing a tournament of deterministic football, does there exist a winning initial draw for every team?

This becomes an especially interesting question in cases where there is a non-trivial cyclic relation between the teams’ abilities, a simple example being: A always beats B, B always beats C, and C always beats A. For example, if this problem turns out to be NP-hard, then it would be reasonable to expect that the quadratic improvement achieved by quantum search is the best we can hope for in using a quantum algorithm for the task of finding a winning initial draw for a chosen team—at least for deterministic football (phew).
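For four teams, the brute-force version of this question is tiny. A hypothetical “beats” relation containing the cycle A beats B, B beats C, C beats A (plus a hapless team D) already shows that some teams have a winning draw and others do not:

```python
from itertools import permutations

# beats[x] is the set of teams x always defeats in deterministic
# football (an invented example containing the cycle A > B > C > A).
beats = {"A": {"B", "D"}, "B": {"C", "D"}, "C": {"A", "D"}, "D": set()}

def match(x, y):
    return x if y in beats[x] else y

def champion(draw):
    w1 = match(draw[0], draw[1])
    w2 = match(draw[2], draw[3])
    return match(w1, w2)

# Does each team have at least one initial draw under which it wins?
winnable = {t: any(champion(d) == t for d in permutations("ABCD"))
            for t in "ABCD"}
print(winnable)  # A, B and C each have a winning draw; D never wins
```

Exhaustive search over draws scales factorially, which is why the complexity-theoretic status of the problem matters for large tournaments.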

## To the finals and beyond

World Cup time is an exciting time: whatever the question, we are essentially dealing with binary trees, and making predictions can be translated into finding partitions or cuts that satisfy certain properties defined through a function of the edge weights (here the pairwise winning probabilities). We hope this quantum take on classical bookmaking might point us in the direction of new and interesting applications for quantum algorithms.

Hopefully a good bargain!

(Written with Steven Herbert and Sathyawageeswar Subramanian)

# What’s the worst that could happen?

The archaeologist Howard Carter discovered Tutankhamun’s burial site in 1922. No other Egyptian pharaoh’s tomb had survived mostly intact until the modern era. Gold and glass and faience, statues and pendants and chariots, had evaded looting. The discovery would revolutionize the world’s understanding of, and enthusiasm for, ancient Egypt.

First, the artifacts had to leave the tomb.

Tutankhamun lay in three coffins nested like matryoshka dolls. Carter describes the nesting in his book The Tomb of Tutankhamen. Lifting the middle coffin from the outer coffin raised his blood pressure:

> Everything may seem to be going well until suddenly, in the crisis of the process, you hear a crack—little pieces of surface ornament fall. Your nerves are at an almost painful tension. What is happening? All available room in the narrow space is crowded by your men. What action is needed to avert a catastrophe?

In other words, “What’s the worst that could happen?”

Collaborators and I asked that question in a paper published last month. We had in mind less Egyptology than thermodynamics and information theory. But never mind the distinction; you’re reading Quantum Frontiers! Let’s mix the fields like flour and oil in a Biblical grain offering.

Carter’s team had trouble separating the coffins: Ancient Egyptian priests (presumably) had poured fluid atop the innermost, solid-gold coffin. The fluid had congealed into a brown gunk, gluing the gold coffin to the bottom of the middle coffin. Removing the gold coffin required work—thermodynamic work.

Work consists of “well-ordered” energy usable in tasks like levering coffins out of sarcophagi and motoring artifacts from Egypt’s Valley of the Kings toward Cairo. We can model the gunk as a spring, one end of which was fixed to the gold coffin and one end of which was fixed to the middle coffin. The work $W$ required to stretch a spring depends on the spring’s stiffness (the gunk’s viscosity) and on the distance stretched through.

$W$ depends also on details: How many air molecules struck the gold coffin from above, opposing the team’s effort? How quickly did Carter’s team pull? Had the gunk above Tutankhamun’s nose settled into a hump or spread out? How about the gunk above Tutankhamun’s left eye socket? Such details barely impact the work required to open a 6.15-foot-long coffin. But air molecules would strongly impact $W$ if Tutankhamun measured a few nanometers in length. So imagine Egyptian matryoshka dolls as long as stubs of DNA.

Imagine that Carter found one million sets of these matryoshka dolls. Lifting a given set’s innermost coffin would require an amount $W$ of work that would vary from set of coffins to set of coffins. $W$ would satisfy fluctuation relations, equalities I’ve blogged about many times.

Fluctuation relations resemble the Second Law of Thermodynamics, which illuminates why time flows in just one direction. But fluctuation relations imply more-precise predictions about $W$ than the Second Law does.

Some predictions concern dissipated work: Carter’s team could avoid spending much work by opening the coffin infinitesimally slowly. Speeding up would heat the gunk, roil air molecules, and more. The heating and roiling would cost extra work, called dissipated work, denoted by $W_{\rm diss}$.

Suppose that Carter’s team has chosen a lid-opening speed $v$. Consider the greatest $W_{\rm diss}$ that the team might have to waste on any nanoscale coffin. $W_{\rm diss}^{\rm worst}$ is proportional to each of three information-theoretic quantities, my coauthors and I proved.

For experts: Each information-theoretic quantity is an order-infinity Rényi divergence $D_\infty ( X || Y)$. The Rényi divergences generalize the relative entropy $D ( X || Y )$. $D$ quantifies how efficiently one can distinguish between probability distributions, or quantum states, $X$ and $Y$ on average. The average is over many runs of a guessing game.

Imagine the worst possible run, which offers the lowest odds of guessing correctly. $D_\infty$ quantifies your likelihood of winning. We related $W_{\rm diss}^{\rm worst}$ to a $D_\infty$ between two statistical-mechanical phase-space distributions (when we described classical systems), to a $D_\infty$ between two quantum states (when we described quantum systems), and to a $D_\infty$ between two probability distributions over work quantities $W$ (when we described systems quantum and classical).
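For a classical toy example, the order-infinity Rényi divergence between two probability distributions reduces to the logarithm of the largest likelihood ratio, $D_\infty(X \| Y) = \log \max_i X_i / Y_i$. A quick numerical sketch (distributions invented for illustration) also shows it upper-bounding the ordinary relative entropy:

```python
import math

# Two invented probability distributions over three outcomes.
X = [0.5, 0.3, 0.2]
Y = [0.4, 0.4, 0.2]

# Order-infinity Renyi divergence: the worst-case likelihood ratio.
D_inf = math.log(max(x / y for x, y in zip(X, Y)))

# Ordinary relative entropy (KL divergence), an average-case measure.
D_kl = sum(x * math.log(x / y) for x, y in zip(X, Y))

print(D_inf, D_kl)  # D_inf >= D_kl always holds
```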

The worst case marks an extreme. How do the extremes consistent with physical law look? As though they’ve escaped from a mythologist’s daydream.

In an archaeologist’s worst case, arriving at home in the evening could lead to the following conversation:

“The worst possible.”

“What happened?”

“I accidentally eviscerated a 3.5-thousand-year-old artifact—the most valuable, best-preserved, most information-rich, most lavishly wrought ancient Egyptian coffin that existed yesterday.”

Suppose that the archaeologist lived with a physicist. My group (guided by a high-energy physicist) realized that the conversation could continue as follows:

“Also the worst possible.”

“What happened?”

“I created a black hole.”

General relativity and high-energy physics have begun breeding with quantum information and thermodynamics. The offspring bear extremes like few other systems imaginable. I wonder what our results would have to say about those offspring.

National Geographic reprinted Carter’s The Tomb of Tutankhamen in its “Adventure Classics” series. The series title fits Tomb as a mummy’s bandages fit the mummy. Carter’s narrative stretches from Egypt’s New Kingdom (of 3.5 thousand years ago) through the five-year hunt for the tomb (almost fruitless until the final season), to a water boy’s discovery of steps into the tomb, to the unsealing of the burial chamber, to the confrontation of Tutankhamun’s mummy.

Carter’s book guided me better than any audio guide could have at the California Science Center. The center is hosting the exhibition “King Tut: Treasures of the Golden Pharaoh.” After debuting in Los Angeles, the exhibition will tour the world. The tour showcases 150 artifacts from Tutankhamun’s tomb.

Those artifacts drove me to my desk—to my physics—as soon as I returned home from the museum. Tutankhamun’s tomb, Carter argues in his book, ranks amongst the 20th century’s most important scientific discoveries. I’d seen a smidgeon of the magnificence that Carter’s team—with perseverance, ingenuity, meticulousness, and buckets of sweat shed in Egypt’s heat—had discovered. I don’t expect to discover anything a tenth as magnificent. But how can a young scientist resist trying?

People say, “Prepare for the worst. Hope for the best.” I prefer “Calculate the worst. Hope and strive for a Tutankhamun.”

Postscript: Carter’s team failed to unglue the gold coffin by just “stretching” the gunky “spring.” The team resorted to heat, a thermodynamic quantity alternative to work: The team flipped the middle coffin upside-down above a heat lamp. The lamp raised the temperature to 932°F, melting the goo. The melting, with more work, caused the gold coffin to plop out of the middle coffin.

# A finger painting for John Preskill

I’d completed about half my candidacy exam.

Four Caltech faculty members sat in front of me, in a bare seminar room. I stood beside a projector screen, explaining research I’d undertaken. The candidacy exam functions as a milepost in year three of our PhD program. The committee confirms that the student has accomplished research and should continue.

I was explaining a quantum-thermodynamics problem. I reviewed the problem’s classical doppelgänger and a strategy for solving the doppelgänger. Could you apply the classical strategy in the quantum problem? Up to a point. Beyond it, you’d need

“Does anyone here like the Beatles?” I asked the committee. Three professors had never participated in an exam committee before. The question from the examinee appeared to startle them.

One committee member had participated in cartloads of committees. He recovered first, raising a hand.

The committee member—John Preskill—then began singing the Beatles song “Help!”

In the middle of my candidacy exam.

The moment remains one of the highlights of my career.

Throughout my PhD career, I’ve reported to John. I’ve emailed an update every week and requested a meeting about once a month. I sketch the work that’s firing me, relate my plans, and request feedback.

Much of the feedback, I’ve discerned over the years, condenses into aphorisms buried in our conversations. I doubt whether John has noticed his aphorisms. But they’ve etched themselves in me, and I hope they remain there.

“Think big.” What would impact science? Don’t buff a teapot if you could be silversmithing.

Education serves as “money in the bank.” Invest in yourself, and draw on the interest throughout your career.

“Stay broad.” (A stretching outward of both arms accompanies this aphorism.) Embrace connections with diverse fields. Breadth affords opportunities to think big.

“Keep it simple,” but “do something technical.” A teapot cluttered with filigree, spouts, and eighteen layers of gold leaf doesn’t merit a spot at the table. A Paul Revere does.

“Do what’s best for Nicole.” I don’t know how many requests to speak, to participate on committees, to explain portions of his lecture notes, to meet, to contribute to reports, and more John receives per week. The requests I receive must look, in comparison, like a mouse to a mammoth. But John exhorts me to guard my time for research—perhaps, partially, because he gives so much time, including to students.

“Move on.” If you discover an opportunity, study background information for a few months, seize the opportunity, wrap up the project, and seek the next window.

John has never requested my updates, but he’s grown used to them. I’ve grown used to how meetings end. Having brought him questions, I invite him to ask questions of me.

“Are you having fun?” he says.

I tell the Beatles story when presenting that quantum-thermodynamics problem in seminars.

“I have to digress,” I say when the “Help!” image appears. “I presented this slide at a talk at Caltech, where John Preskill was in the audience. Some of you know John.” People nod. “He’s a…mature gentleman.”

I borrowed the term from the apparel industry. “Mature gentleman” means “at a distinguished stage by which one deserves to have celebrated a birthday of his with a symposium.”

Many physicists lack fluency in apparel-industry lingo. My audience members take “mature” at face value.

Some audience members grin. Some titter. Some tilt their heads from side to side, as though thinking, “Eh…”

John has impact. He’s logged boatloads of technical achievements. He has the scientific muscle of a scientific rhinoceros.

And John has fun. He doesn’t mind my posting an article about audience members giggling about him.

Friends ask me whether professors continue doing science after meriting birthday symposia, winning Nobel Prizes, and joining the National Academy of Sciences. I point to the number of papers with which John has, with coauthors, electrified physics over the past 20 years. He’s coauthored them because science is fun. It merits singing about during candidacy exams. Satisfying as passing the exam felt two years ago, I feel more honored when John teases me about my enthusiasm for science.

A year ago, I ate lunch with an alumnus who’d just graduated from our group. Students, he reported, have a tradition of gifting John a piece of art upon graduating. I relayed the report to another recent alumnus.

“Really?” the second alumnus said. “Maybe someone gave John a piece of art and then John invented the tradition.”

Regardless of its origin, the tradition appealed to me. John has encouraged me to blog as he’s encouraged me to do theoretical physics. Writing functions as art. And writing resembles theoretical physics: Each requires little more than a pencil, paper, and thought. Each requires creativity, aesthetics, diligence, and style. Each consists of ideas, of abstractions; each lacks substance but can outlive its creator. Let this article serve as a finger painting for John Preskill.

Thanks for five fun years.

With my PhD-thesis committee, after my thesis defense. Photo credit to Nick Hutzler, who cracked the joke that accounts for everyone’s laughing. (Left to right: Xie Chen, Fernando Brandão, John Preskill, Nicole Yunger Halpern, Manuel Endres.)

# A quantum podcast

A few months ago I sat down with Craig Cannon of Y Combinator for a discussion about quantum technology and other things. A lightly edited version was published this week on the Y Combinator blog. The video is also on YouTube:

If you’re in a hurry, or can’t stand the sound of my voice, you might prefer to read the transcript, which is appended below. Only by watching the video, however, can you follow the waving of my hands.

I grabbed the transcript from the Y Combinator blog post, so you can read it there if you prefer, but I’ve corrected some of the typos. (There are a few references to questions and comments that were edited out, but that shouldn’t cause too much confusion.)

Here we go:

Craig Cannon [00:00:00] – Hey, how’s it going? This is Craig Cannon, and you’re listening to Y Combinator’s Podcast. Today’s episode is with John Preskill. John’s a theoretical physicist and the Richard P. Feynman Professor of Theoretical Physics at Caltech. He once won a bet with Stephen Hawking and he writes that it made him briefly almost famous. Basically, what happened is John and Kip Thorne bet that singularities could exist outside of black holes. After six years, Hawking conceded. He said that they were possible in very special, “non-generic conditions.” I’ll link up some more details to that in the description. In this episode, we cover what John’s been focusing on for years, which is quantum information, quantum computing, and quantum error correction. Alright, here we go. What was the revelation that made scientists and physicists think that a quantum computer could exist?

John Preskill [00:00:54] – It’s not obvious. A lot of people thought it couldn’t. The idea that a quantum computer would be powerful was emphasized over 30 years ago by Richard Feynman, the Caltech physicist. It was interesting how he came to that realization. Feynman was interested in computation his whole life. He had been involved during the war in Los Alamos. He was the head of the computation group. He was the guy who fixed the little mechanical calculators, and he had a whole crew of people who were calculating, and he figured out how to flow the work from one computer to another. All that kind of stuff. As computing technology started to evolve, he followed that. In the 1970s, a particle physicist like Feynman, that’s my background too, got really interested in using computers to study the properties of elementary particles like the quarks inside a nucleus, you know? We know a proton isn’t really a fundamental object. It’s got little beans rattling around inside, but they’re quantum beans. Gell-Mann, who’s good at names, called them quarks.

John Preskill [00:02:17] – Now we’ve had a theory since the 1970s of how quarks behave, and so in principle, you know everything about the theory, you can compute everything, but you can’t because it’s just too hard. People started to simulate that physics with digital computers in the ’70s, and there were some things that they could successfully compute, and some things they couldn’t because it was just too hard. The resources required, the memory, the time were out of reach. Feynman, in the early ’80s said nature is quantum mechanical damn it, so if you want a simulation of nature, it should be quantum mechanical. You should use a quantum system to behave like another quantum system. At the time, he called it a universal quantum simulator.

John Preskill [00:03:02] – Now we call it a quantum computer. The idea caught on about 10 years later when Peter Shor made the suggestion that we could solve problems which don’t seem to have anything to do with physics, which are really things about numbers like finding the prime factors of a big integer. That caused a lot of excitement, in part because the implications for cryptography are a bit disturbing. But then physicists — good physicists — started to consider, can we really build this thing? Some concluded and argued fairly cogently that no, you couldn’t because of this difficulty that it’s so hard to isolate systems from the environment well enough for them to behave quantumly. It took a few years for that to sort out at the theoretical level. In the mid ’90s we developed a theory called quantum error correction. It’s about how to encode the quantum state that you’d like to protect in such a clever way that even if there are some interactions with the environment that you can’t control, it still stays robust.

John Preskill [00:04:17] – At first, that was just kind of a theorist’s fantasy — it was a little too far ahead of the technology. But 20 years later, the technology is catching up, and now this idea of quantum error correction has become something you can do in the lab.

Craig Cannon [00:04:31] – How does quantum error correction work? I’ve seen a bunch of diagrams, so maybe this is difficult to explain, but how would you explain it?

John Preskill [00:04:39] – Well, I would explain it this way. I don’t think I’ve said the word entanglement yet, have I?

Craig Cannon [00:04:43] – Well, I have been checking off all the Bingo words.

John Preskill [00:04:45] – Okay, so let’s talk about entanglement because it’s part of the answer to your question, which I’m still not done answering, what is quantum physics? What do we mean by entanglement? It’s really the characteristic way, maybe the most important way that we know in which quantum is different from ordinary stuff, from classical. Now what does it mean, entanglement? It means that you can have a physical system which has many parts, which have interacted with one another, so it’s in kind of a complex correlated state of all those parts, and when you look at the parts one at a time it doesn’t tell you anything about the state of the whole thing. The whole thing’s in some definite state — there’s information stored in it — and now you’d like to access that information … Let me be a little more concrete. Suppose it’s a book.

John Preskill [00:05:40] – Okay? It’s a book, it’s 100 pages long. If it’s an ordinary book, 100 people could each take a page, and read it, they know what’s on that page, and then they could get together and talk, and now they’d know everything that’s in the book, right? But if it’s a quantum book written in qubits where these pages are very highly entangled, there’s still a lot of information in the book, but you can’t read it the way I just described. You can look at the pages one at a time, but a single page when you look at it just gives you random gibberish. It doesn’t reveal anything about the content of the book. Why is that? There’s information in the book, but it’s not stored in the individual pages. It’s encoded almost entirely in how those pages are correlated with one another. That’s what we mean by quantum entanglement: Information stored in those correlations which you can’t see when you look at the parts one at a time. You asked about quantum error correction?

John Preskill [00:06:39] – What’s the basic idea? It’s to take advantage of that property of entanglement. Because let’s say you have a system of many particles. The environment is kind of kicking them around, it’s interacting with them. You can’t really completely turn off those interactions no matter how hard you try, but suppose we’ve encoded the information in entanglement. So, say, if you look at one atom, it’s not telling you anything about the information you’re trying to protect. The environment isn’t learning anything when it looks at the atoms one at a time.

John Preskill [00:07:15] – This is kind of the key thing — that what makes quantum information so fragile is that when you look at it, you disturb it. This ordinary water bottle isn’t like that. Let’s say we knew it was either here or here, and we didn’t know. I would look at it, I’d find out it’s here. I was ignorant of where it was to start with, and now I know. With a quantum system, when you look at it, you really change the state. There’s no way to avoid that. So if the environment is looking at it in the sense that information is leaking out to the environment, that’s going to mess it up. We have to encode the information so the environment, so to speak, can’t find out anything about what the information is, and that’s the idea of quantum error correction. If we encode it in entanglement, the environment is looking at the parts one at a time, but it doesn’t find out what the protected information is.

Craig Cannon [00:08:06] – In other words, it’s kind of measuring probability the whole way along, right?

John Preskill [00:08:12] – I’m not sure what you mean by that.

Craig Cannon [00:08:15] – Is it Grover’s algorithm where, as quantum bits roll through and go through gates, the probability is determined of what information’s being passed through? What’s being computed?

John Preskill [00:08:30] – Grover’s algorithm is a way of sort of doing an exhaustive search through many possibilities. Let’s say I’m trying to solve some problem like a famous one is the traveling salesman problem. I’ve told you what the distances are between all the pairs of cities, and now I want to find the shortest route I can that visits them all. That’s a really hard problem. It’s still hard for a quantum computer, but not quite as hard because there’s a way of solving it, which is to try all the different routes, and measure how long they are, and then find the one that’s shortest, and you’ve solved the problem. The reason it’s so hard to solve is there’s such a vast number of possible routes. Now what Grover’s algorithm does is it speeds up that exhaustive search.

John Preskill [00:09:29] – In practice, it’s not that big a deal. What it means is that if you had the same processing speed, you can handle about twice as many cities before the problem becomes too hard to solve, as you could if you were using a classical processor. As far as what’s quantum about Grover, it takes advantage of the property in quantum physics that probabilities … tell me if I’m getting too inside baseball …

Craig Cannon [00:10:03] – No, no, this is perfect.

John Preskill [00:10:05] – That probabilities are the squares of amplitudes. This is interference. Again, this is another part of the answer. Well, we can spend the whole hour answering the question, what is quantum physics? Another essential part of it is what we call interference, and this is really crucial for understanding how quantum computing works. That is that probabilities add. If you know the probability of one alternative, and you know the probability of another, then you can add those together and find the probability that one or the other occurred. It’s not like that in quantum physics. The famous example is the double slit interference experiment. I’m sending electrons, let’s say — it could be basketballs, but it’s an easier experiment to do with electrons —

John Preskill [00:11:02] – at a screen, and there are two holes in the screen. You can try to detect the electron on the other side of the screen, and when you do that experiment many times, you can plot a graph showing where the electron was detected in each run, or make a histogram of all the different outcomes. And the graph wiggles, okay? If you could say there’s some probability of going through the first hole, and some probability of going through the second, and each time you detected it, it went through either one or the other, there’d be no wiggles in that graph. It’s the interference that makes it wiggle. The essence of the interference is that nobody can tell you whether it went through the first slit or the second slit. The question is sort of inadmissible. This interference then occurs when we can add up these different alternatives in a way which is different from what we’re used to. It’s not right to say that the electron was detected at this point because it had some probability of going through the first hole, and some probability of going through the second

John Preskill [00:12:23] – and we add those probabilities up. That doesn’t give the right answer. The different alternatives can interfere. This is really important for quantum computing because what we’re trying to do is enhance the probability or the time it takes to find the solution to a problem, and this interference can work to our advantage. We want to have, when we’re doing our search, we want to have a higher chance of getting the right answer, and a lower chance of getting the wrong answer. If the different wrong answers can interfere, they can cancel one another out, and that enhances the probability of getting the right answer. Sorry it’s such a long-winded answer, but this is how Grover’s algorithm works.

John Preskill [00:13:17] – It can speed up exhaustive search by taking advantage of that interference phenomenon.
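For toy sizes, that interference-powered search can be simulated with an ordinary statevector. A hedged sketch of Grover’s algorithm over 16 items with one marked entry (`grover_search` is an illustrative name, not a library routine):

```python
import math

def grover_search(n_items, marked, iterations=None):
    """Simulate Grover's algorithm on a classical statevector."""
    if iterations is None:
        iterations = int(math.floor(math.pi / 4 * math.sqrt(n_items)))
    amp = [1 / math.sqrt(n_items)] * n_items  # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]            # oracle: flip the marked sign
        mean = sum(amp) / n_items             # inversion about the mean
        amp = [2 * mean - a for a in amp]
    return [a * a for a in amp]               # measurement probabilities

probs = grover_search(16, marked=5)
print(max(range(16), key=lambda i: probs[i]), probs[5])
```

The marked item’s probability climbs from 1/16 to about 0.96 after just 3 iterations, roughly (π/4)·√16, which is where the square-root speedup comes from.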

Craig Cannon [00:13:20] – Well this is kind of one of the underlying questions among many of the questions from Twitter. You’ve hit our record for most questions asked. Basically, many people are wondering what quantum computers really will do if and when it becomes a reality that they outperform classical computers. What are they going to be really good at?

John Preskill [00:13:44] – Well, you know what? I’m not really sure. If you look at the history of technology, it would be hubris to expect me to know. It’s a whole different way of dealing with information. Quantum information is not just … a quantum computer is not just a faster way of computing. It deals with information in a completely new way because of this interference phenomenon, because of entanglement that we’ve talked about. We have limited vision when it comes to predicting decades out what the impact will be of an entirely new way of doing things. Information processing, in particular. I mean you know this well. We go back to the 1960s, and people are starting to put a few transistors on a chip. Where is that going to lead? Nobody knew.

Craig Cannon [00:14:44] – Even early days of the internet.

John Preskill [00:14:45] – Yeah, good example.

Craig Cannon [00:14:46] – Even the first browser. No one really knew what anyone was going to do with it. It makes total sense.

John Preskill [00:14:52] – For good or ill. Yeah. But we have some ideas, you know? I think … why are we confident there will be some transformative effect on society? Of the things we know about, and I emphasize again, probably the most important ones are things we haven’t thought of when it comes to applications of quantum computing, the ones which will affect everyday life, I think, are better methods for understanding and inventing new materials, new chemical compounds. Things like that can be really important. If you find a better way of capturing carbon by designing a better catalyst, or you can design pharmaceuticals that have new effects, materials that have unusual properties. These are quantum physics problems because those properties of the molecule or the material really have to do with the underlying quantum behavior of the particles, and we don’t have a good way for solving such problems or predicting that behavior using ordinary digital computers. That’s what a quantum computer is good at. It’s good — but maybe not the only thing it’s good at — one thing it should certainly be good at is telling us quantitatively how quantum systems behave. In the two contexts I just mentioned, there’s little question that there will be practical impact of that.

Craig Cannon [00:16:37] – It’s not just doing the traveling salesman problem through the table of elements for why it can find these compounds.

John Preskill [00:16:49] – No. If it were, that wouldn’t be very efficient.

Craig Cannon [00:16:52] – Exactly.

John Preskill [00:16:53] – Yeah. No, it’s much trickier than that. Like I said, the exhaustive search, though conceptually it’s really interesting that quantum can speed it up because of interference, from a practical point of view it may not be that big a deal. It means that, well like I said, in the same amount of time you can solve an instance which is twice as big of the problem. What we really get excited about are the so-called exponential speed ups. That was why Shor’s algorithm was exciting in 1994, because factoring large numbers was a problem that had been studied by smart people for a long time, and on that basis, the fact that there weren’t any fast ways of solving it was pretty good evidence it’s a hard problem. Actually, we don’t know how to prove that from first principles. Maybe somebody will come along one day and figure out how to solve factoring very fast on a digital computer. It doesn’t seem very likely because people have been trying for so long to solve problems like that, and it’s just intractable with ordinary computers. You could say the same thing about these quantum physics problems. Maybe some brilliant graduate student is going to drop a paper on the arXiv tomorrow which will say, “Here, I solved quantum chemistry, and I can do it on a digital computer.” But we don’t think that’s very likely because we’ve been working pretty hard on these problems for decades and they seem to be really hard. Those cases, like these number theoretic problems,

John Preskill [00:18:40] – which have cryptological implications, and tasks for simulating the behavior of quantum systems, we’re pretty sure those are hard problems classically, and we’re pretty sure quantum computers … I mean we have algorithms that have been proposed, but which we can’t really run currently because our quantum computers aren’t big enough on the scale that’s needed to solve problems people really care about.
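To see where the quantum part of Shor’s algorithm sits, here is a sketch of its classical scaffolding with the period-finding step done by brute force instead of by a quantum computer. The function name and the tiny inputs are illustrative only:

```python
import math

def factor_via_period(N, a):
    """Factor N using the period r of f(x) = a^x mod N.

    A quantum computer finds r exponentially faster; here we find it
    by brute force and keep only the classical post-processing."""
    g = math.gcd(a, N)
    if g != 1:
        return g                  # lucky guess: a shares a factor with N
    r, x = 1, a % N
    while x != 1:                 # brute-force period finding (the slow part)
        x = (x * a) % N
        r += 1
    if r % 2 == 1:
        return None               # odd period: try a different a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # trivial square root of 1: try again
    return math.gcd(y - 1, N)     # a nontrivial factor of N

print(factor_via_period(15, 7))   # prints 3
```

On a quantum computer the while loop is replaced by phase estimation on the modular-multiplication step; everything else stays classical.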

Craig Cannon [00:19:09] – Maybe we should jump to one of the questions from Twitter which is related to that. Travis Scholten (@Travis_Sch) asked, what are the most pressing problems in physics, let’s say specifically around quantum computers, that you think substantial progress ought to be made on to move the field forward?

John Preskill [00:19:27] – I know Travis. He was an undergrad here. How you doing, Travis? The problems that we need to solve to make quantum computing closer to realization at the level that would solve problems people care about? Well, let’s go over where we are now.

Craig Cannon [00:19:50] – Yeah, definitely.

John Preskill [00:19:51] – People have been working on quantum hardware for 20 years, working hard, and there are a number of different approaches to building the hardware, and nobody really knows which is going to be the best. I think we’re far from collapsing to one approach which everybody agrees has the best long-term prospects for scalability. And so it’s important that a lot of different types of hardware are being pursued. We can come back to what some of the different approaches are later. Where are we now? We think in a couple of years we’ll have devices with about 50 qubits to 100, and we’ll be able to control them pretty well. That’s an interesting range because even though it’s only 50 to 100 qubits, doesn’t sound like that big a deal, but that’s already too many to simulate with a digital computer, even with the most powerful supercomputers today. From that point of view, these relatively small, near-term quantum computers which we’ll be fooling around with over the next five years or so, are doing something that’s kind of super-classical.

John Preskill [00:21:14] – At least, we don’t know how to do exactly the same things with ordinary computers. Now that doesn’t mean they’ll be able to do anything that’s practically important, but we’re going to try. We’re going to try, and there are ideas about things we’ll try out, including baby versions of these problems in chemistry, and materials, and ways of speeding up optimization problems. Nobody knows how well those things are going to work at these small scales. Part of the reason is not just the number of qubits is small, but they’re also not perfect. We can perform elementary operations on pairs of qubits, which we call quantum gates like the gates in ordinary logic. But they have an error rate a little bit below an error every 100 gates. If you have a circuit with 1000 qubits, that’s a lot of noise.
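The arithmetic behind “that’s a lot of noise” is worth making explicit. Under the simplifying assumption that gate errors are independent, the probability a circuit runs error-free decays exponentially in the gate count:

```python
def circuit_success(gate_error, n_gates):
    """Probability that every gate succeeds, assuming independent errors."""
    return (1 - gate_error) ** n_gates

# An error rate "a little bit below an error every 100 gates": p around 0.01.
print(circuit_success(0.01, 100))     # ~0.37: a 100-gate circuit barely survives
print(circuit_success(0.01, 1000))    # ~4e-5: 1,000 gates is hopeless at this rate
print(circuit_success(0.0001, 1000))  # ~0.90: why much better qubits are enabling
```

This back-of-envelope model ignores error correction, which is exactly what is needed to make larger circuits viable at realistic gate fidelities.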

Craig Cannon [00:22:18] – Exactly. Does, for instance, a 100-qubit quantum computer really mean a 100-qubit quantum computer, or do you need a certain amount of backup going on?

John Preskill [00:22:29] – In the near term, we’re going to be trying out, and probably we have the best hopes for, kind of hybrid classical-quantum methods with some kind of classical feedback. You try to do something on the quantum computer, you make a measurement that gives you some information, then you change the way you did it a little bit, and try to converge on some better answer. That’s one possible way of addressing optimization that might be faster on a quantum computer. But I just wanted to emphasize that the number of qubits isn’t the only metric. How good they are, and in particular, the reliability of the gates, how well we can perform them … that’s equally important. Anyway, coming back to Travis’ question, there are lots of things that we’d like to be able to do better. But just having much better qubits would be huge, right? If you … more or less, with the technology we have now, you can have a gate error rate of a few parts in 1,000, you know? If you can improve that by orders of magnitude, then obviously, you could run bigger circuits. That would be very enabling.
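That measure-adjust-repeat loop is the heart of today’s variational hybrid algorithms, and it can be mocked up entirely classically. In this toy sketch `measured_expectation` stands in for the quantum device (it just samples a known one-qubit distribution), and the single-parameter cost is chosen so the true optimum, theta = pi, is known in advance:

```python
import math
import random

def measured_expectation(theta, shots=2000):
    """Stand-in for hardware: estimate <Z> of cos(t/2)|0> + sin(t/2)|1>
    from a finite number of measurement shots (so the estimate is noisy)."""
    p0 = math.cos(theta / 2) ** 2
    return sum(1 if random.random() < p0 else -1 for _ in range(shots)) / shots

def variational_minimize(theta=0.3, lr=0.4, steps=60):
    """Classical feedback loop: measure, nudge the parameter, repeat."""
    for _ in range(steps):
        # Parameter-shift rule: an unbiased gradient from two shifted runs.
        grad = (measured_expectation(theta + math.pi / 2)
                - measured_expectation(theta - math.pi / 2)) / 2
        theta -= lr * grad
    return theta

random.seed(0)
theta_opt = variational_minimize()
print(theta_opt)  # close to pi, where <Z> = cos(theta) is minimized
```

The noisy, shot-limited estimates are the point: the classical optimizer has to converge despite exactly the kind of imperfect measurements near-term devices produce.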

John Preskill [00:23:58] – Even if you stick with 100 qubits just by having a circuit with more depth, more layers of gates, that increases the range of what you could do. That’s always going to be important. Because, I mean look at how crappy that is. A gate error rate, even if it’s one part in 1,000, that’s pretty lousy compared to if you look at where–

Craig Cannon [00:24:21] – Your phone has a billion transistors in it. Something like that, and 0%–

John Preskill [00:24:27] – You don’t worry about the … it’s gotten to the point where there is some error protection built in at the hardware level in a processor, because I mean, we’re doing these crazy things like going down from the 11 nanometer scale for features on a chip.

Craig Cannon [00:24:45] – How are folks trying to deal with interference right now?

John Preskill [00:24:50] – You mean, what types of devices? Yeah, so that’s interesting too because there are a range of different ways to do it. I mentioned that we could store information, we could make a qubit out of a single atom, for example. That’s one approach. You have to control a whole bunch of atoms and get them to interact with one another. One way of doing that is with what we call trapped ions. That means the atoms have electrical charges. That’s a good thing because then you could control them with electric fields. You could hold them in a trap, and you can isolate them, like I said, in a very high vacuum so they’re not interacting too much with other things in the laboratory, including stray electric and magnetic fields. But that’s not enough because you got to get them to talk to one another. You got to get them to interact. We have this set of desiderata, which are kind of in tension with one another. On the one hand, we want to isolate the qubits very well. On the other hand, we want to control them from the outside and get them to do what we want them to do, and eventually, we want to read them out. You have to be able to read out the result of the computation. But the key thing is the control. You could have two of those qubits in your device interact with one another in a specified way, and to do that very accurately you have to have some kind of bus that gets the two to talk to one another.

John Preskill [00:26:23] – The way they do that in an ion trap is pretty interesting. It’s by using lasers and controlling how the ions vibrate in the trap, and with a laser, kind of excite, wiggles of the ion, and then by determining whether the ions are wiggling or not, you can go address another ion, and that way you can do a two-qubit interaction. You can do that pretty well. Another way is really completely different. What I just described was encoding information at the one atom level. But another way is to use superconductivity — circuits in which electric current flows without any dissipation. In that case, you have a lot of freedom to sort of engineer the circuits to behave in a quantum way. There are many nuances there, but the key thing is that you can encode information now in a system that might involve the collective motion of billions of electrons, and yet you can control it as though it were a single atom. I mean, here’s one oversimplified way of thinking about it.

John Preskill [00:27:42] – Suppose you have a little loop of wire, and there’s current flowing in the loop. It’s a superconducting wire so it just keeps flowing. Normally, there’d be resistance, which would dissipate that as heat, but not for the superconducting circuit, which of course, has to be kept very cold so it stays superconducting. But you can imagine in this little loop that the current is either circulating clockwise or counterclockwise. That’s a way of encoding information. It could also be both at once, and that’s what makes it a qubit.

Craig Cannon [00:28:14] – Right.

John Preskill [00:28:15] – And so in that case, even though it involves lots of particles, the magic is that you can control that system extremely well. I mentioned individual electrons. That’s another approach. Put the qubit in the spin of a single electron.

Craig Cannon [00:28:32] – You also mentioned better qubits. What did you mean by that?

John Preskill [00:28:35] – Well, what I really care about is how well I can do the gates. There’s a whole other approach, which is motivated by the desire to have much, much better control over the quantum information than we do in those systems that I mentioned so far, superconducting circuits and trapped ions. That’s actually what Microsoft is pushing very hard. We call it topological quantum computing. Topological is a word physicists and mathematicians love. It means, well, we’ll come back to what it means. Anyway, let me just tell you what they’re trying to do. They’re trying to make a much, much better qubit, which they can control much, much better using a completely different hardware approach.

Craig Cannon [00:29:30] – Okay.

John Preskill [00:29:32] – It’s very ambitious because at this point, it’s not even clear they have a single qubit, but if that approach is successful, and it’s making progress, we will see a validated qubit of this type soon. Maybe next year. Nobody really knows where it goes from there, but suppose it’s the case that you could do a two-qubit gate with an error rate of one in a million instead of one in 1,000. That would be huge. Now, scaling all these technologies up, is really challenging from a number of perspectives, including just the control engineering.

Craig Cannon [00:30:17] – How are they doing it or attempting to do it?

John Preskill [00:30:21] – You know, you could ask, where did all this progress come from over 20 years, or so? For example, with the superconducting circuits, a sort of crucial measure is what we call the coherence time of the qubit, which roughly speaking, means how much it interacts with the outside world. The longer the coherence time, the better. The rate of what we call decoherence is essentially how much it’s getting buffeted around by outside influences. For the superconducting circuits, those coherence times have increased about a factor of 10 every three years, going back 15 years or so.
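A factor of 10 every three years compounds fast; a one-line extrapolation makes the point (the 1-microsecond baseline is notional, purely for illustration):

```python
def coherence_after(years, t0_us=1.0, factor=10.0, period_years=3.0):
    """Extrapolate 'a factor of 10 every three years' from a baseline t0_us."""
    return t0_us * factor ** (years / period_years)

# A notional 1 microsecond of coherence 15 years ago becomes:
print(coherence_after(15))  # 100000.0 microseconds, i.e. 0.1 seconds
```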

Craig Cannon [00:31:06] – Wow.

John Preskill [00:31:07] – Now, it won’t necessarily go on like that indefinitely, but in order to achieve that type of progress, better materials, better fabrication, better control. The way you control these things is with microwave circuitry. Not that different from the kind of things that are going on in communication devices. All those things are important, but going forward, the control is really the critical thing. Coherence times are already getting pretty long, I mean having them longer is certainly good. But the key thing is to get two qubits to interact just the way you want them to. Even if there is, now I keep saying the key thing is the environment, it’s not the only key thing, right? Because you have some qubit, like if you think about that electron spin, one way of saying it is I said it can be both up and down at the same time. Well, there’s a simpler way of saying that. It might not point either up or down. It might point some other way. But there really are a continuum of ways it could point. That’s not like a bit. See, it’s much easier to stabilize a bit because it’s got two states.

John Preskill [00:32:31] – But if it can kind of wander around in the space of possible configurations for a qubit, that makes it much harder to control. People have gotten better at that, a lot better at that in the last few years.

Craig Cannon [00:32:44] – Interesting. Joshua Harmon asked, what engineering strategy for quantum computers do you think has the most promise?

John Preskill [00:32:53] – Yeah, so I mentioned some of these different approaches, and I guess I’ll interpret the question as, which one is the winning horse? I know better than to answer that question! They’re all interesting. For the near term, the most advanced are superconducting circuits and trapped ions, which is why I mentioned those first. I think that will remain true over the next five to 10 years. Other technologies have the potential — like these topologically protected qubits — to surpass those, but it’s not going to happen real soon. I kind of like superconducting circuits because there’s so much phase space of things you can do with them. Of ways you can engineer and configure them, and imagine scaling them up.

John Preskill [00:33:54] – They have the advantage of being faster. The cycle time, time to do a gate, is faster than with the trapped ions. Just the basic physics of the interactions is different. In the long term, those electron spins could catapult ahead of these other things. That’s something that you can naturally do in silicon, and it’s potentially easy to integrate with silicon technology. Right now, the qubits and gates aren’t as good as the other technologies, but that can change. I mean, from a theorist’s perspective, this topological approach is very appealing. We can imagine it takes off maybe 10 years from now and it becomes the leader. I think it’s important to emphasize we don’t really know what’s going to scale the best.

Craig Cannon [00:34:50] – Right. And are there multiple attempts being made around programming quantum computers?

John Preskill [00:34:55] – Yeah. I mean, some of these companies that are working on quantum technology now, which includes well-known big players like IBM, and Google, and Microsoft and Intel, but also a lot of startups, are trying to encompass the full stack: they’re interested in the hardware, and the fabrication, and the control technology, but also the software, the applications, the user interface. All those things are certainly going to be important eventually.

Craig Cannon [00:35:38] – Yeah, they’re pushing it almost to like an AWS layer. Where you interact with your quantum computer in a server farm and you don’t even touch it.

John Preskill [00:35:49] – That’s how it will be in the near term. You’re not going to have, most of us won’t, have a quantum computer sitting on your desktop, or in your pocket. Maybe someday. In the near term, it’ll be in the Cloud, and you’ll be able to run applications on it by some kind of web interface. Ideally, that should be designed so the user doesn’t have to know anything about quantum physics in order to program or use it, and I think that’s part of what some of these companies are moving toward.

Craig Cannon [00:36:24] – Do you think it will get to the level where it’s in your pocket? How do you deal with that when you’re below one kelvin?

John Preskill [00:36:32] – Well, if it’s in your pocket, it probably won’t be one kelvin.

Craig Cannon [00:36:35] – Yeah, probably not.

John Preskill [00:36:38] – What do you do? Well, there’s one approach, as an example, which I guess I mentioned in passing before, where maybe it doesn’t have to be at such low temperature, and that’s nuclear spins. Because they’re very weakly interacting with the outside world, you can have quantum information in a nuclear spin, which — I’m not saying that it would be undisturbed for years, but seconds, which is pretty good. And you can imagine that getting significantly longer. Someday you might have a little quantum smart card in your pocket. The nice thing about that particular technology is you could do it at room temperature. Still have long coherence times. If you go to the ATM and you’re worried that there’s a rogue bank that’s going to steal your information, one solution to that problem — I’m not saying there aren’t other solutions — is to have a quantum card where the bank will be able to authenticate it without being able to forge it.

Craig Cannon [00:37:54] – We should talk about the security element. Kevin Su asked what risk would quantum computers pose to current encryption schemes? So public key, and what changes should people be thinking about if quantum computers come in the next five years, 10 years?

John Preskill [00:38:12] – Yeah. Quantum computers threaten those systems that are in widespread use. Whenever you’re using a web browser and you see that little padlock and you’re at an HTTPS site, you’re using a public key cryptosystem to protect your privacy. Those cryptosystems rely for their security on the presumed hardness of computational problems. That is, it’s possible to crack them, but it’s just too hard. RSA, which is one of the ones that’s widely used … as typically practiced today, to break it you’d have to do something like factor a number which is over 2000 bits long, 2048. That’s too hard to do now. But that’s what quantum computers will be good at. Another one that’s widely used is called elliptic curve cryptography. Doesn’t really matter exactly what it is.

John Preskill [00:39:24] – But the point is that it’s also vulnerable to quantum attack, so we’re going to have to protect our privacy in different ways when quantum computers are prevalent.
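To make the RSA threat concrete, here is a toy key with deliberately tiny primes (deployed keys use moduli of 2048 bits or more); it shows that factoring the public modulus hands over the private key outright:

```python
# Toy RSA: tiny illustrative primes; real keys use ~2048-bit moduli.
p, q = 61, 53
n = p * q                       # public modulus, 3233
phi = (p - 1) * (q - 1)
e = 17                          # public exponent
d = pow(e, -1, phi)             # private exponent (Python 3.8+ modular inverse)

msg = 1234
cipher = pow(msg, e, n)         # anyone can encrypt with the public (n, e)
decrypted = pow(cipher, d, n)   # only the private-key holder can decrypt

def break_rsa(n, e, p_found):
    """Given one factor of n (what Shor's algorithm supplies), rebuild d."""
    q_found = n // p_found
    return pow(e, -1, (p_found - 1) * (q_found - 1))

print(decrypted == msg, break_rsa(n, e, p) == d)  # True True
```

Everything public here stays public; the only secret is the factorization of n, which is precisely what a large quantum computer would recover.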

Craig Cannon [00:39:37] – What are the attempts being made right now?

John Preskill [00:39:39] – There are two main classes of attempts. One is just to come up with a cryptographic protocol not so different conceptually from what’s done now, but based on a problem that’s hard for quantum computers.

Craig Cannon [00:39:59] – There you go.

John Preskill [00:40:02] – It turns out that what has sort of become the standard way doesn’t have that feature, and there are alternatives that people are working on. We speak of post-quantum cryptography, meaning the protocols that we’ll have to use when we’re worried that our adversaries have quantum computers. I don’t think there’s any proposed cryptosystem — although there’s a long list of them by now which people think are candidates for being quantum resistant, for being unbreakable, or hard to break by quantum computers. I don’t think there’s any one that the world has sufficient confidence in now that’s really hard for a quantum adversary that we’re all going to switch over. But it’s certainly time to be thinking about it. When people worry about their privacy, of course different users have different standards, but the US Government sometimes says they would like a system to stay secure for 50 years. They’d like to be able to use it for 20, roughly speaking, and then have the intercepted traffic be protected for another 30 after that. I don’t think, though I could be wrong, that we’re likely to have quantum computers that can break those public key cryptosystems in 10 years, but in 50 years seems not unlikely,

John Preskill [00:41:33] – and so we should really be worrying about it. The other one is actually using quantum communication for privacy. In other words, if you and I could send qubits to one another instead of bits, it opens up new possibilities. The way to think about these public key schemes — or one way — that we’re using now, is I want you to send me a private message, and I can send you a lockbox. It has a padlock on it, but I keep the key, okay? But you can close up the box and send it to me. But I’m the only one with the key. The key thing is that if you have the padlock you can’t reverse engineer the key. Of course, it’s a digital box and key, but that’s the idea of public key. The idea of what we call quantum key distribution, which is a particular type of quantum cryptography, is that I can actually send you the key, or you can send me your key, but why can’t any eavesdropper then listen in and know the key? Well it’s because it’s quantum, and remember, it has that property that if you look at it, you disturb it.

John Preskill [00:42:59] – So if you collect information about my key, or if the adversary does, that will cause some change in the key, and there are ways in which we can check whether what you received is really what I sent. And if it turns out it’s not, or it has too many errors in it, then we’ll be suspicious that there was an adversary who tampered with it, and then we won’t use that key. Because we haven’t used it yet — we’re just trying to establish the key. We do the test to see whether an adversary interfered. If it passes the test, then we can use the key. And if it fails the test, we throw that key away and we try again. That’s how quantum cryptography works, but it requires a much different infrastructure than what we’re using now. We have to be able to send qubits … well, it’s not completely different because you can do it with photons. Of course, that’s how we communicate through optical fiber now — we’re sending photons. It’s a little trickier sending quantum information through an optical fiber, because of that issue that interactions with the environment can disturb it. But nowadays, you can send quantum information through an optical fiber over tens of kilometers with a low enough error rate so it’s useful for communication.
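The tamper-detection scheme described here is essentially BB84 quantum key distribution, and its statistics can be simulated classically. A hedged sketch, assuming ideal devices, a naive intercept-and-resend eavesdropper, and none of the later privacy-amplification steps (all names are illustrative):

```python
import random

def bb84_error_rate(n_bits, eavesdrop, rng):
    """Toy BB84: Alice encodes bits in random bases, Bob measures in random
    bases, and an intercept-resend eavesdropper leaves detectable errors."""
    alice_bits = [rng.randrange(2) for _ in range(n_bits)]
    alice_bases = [rng.randrange(2) for _ in range(n_bits)]
    photons = list(zip(alice_bits, alice_bases))

    if eavesdrop:
        resent = []
        for bit, basis in photons:
            eve_basis = rng.randrange(2)   # Eve guesses a basis, measures,
            eve_bit = bit if eve_basis == basis else rng.randrange(2)
            resent.append((eve_bit, eve_basis))  # and resends what she saw
        photons = resent

    bob_bases = [rng.randrange(2) for _ in range(n_bits)]
    bob_bits = [bit if basis == bb else rng.randrange(2)
                for (bit, basis), bb in zip(photons, bob_bases)]

    # Keep positions where Alice's and Bob's bases matched, and publicly
    # compare them: any eavesdropping shows up as a raised error rate.
    matched = [(a, b) for a, ab, b, bb in
               zip(alice_bits, alice_bases, bob_bits, bob_bases) if ab == bb]
    return sum(1 for a, b in matched if a != b) / len(matched)

rng = random.Random(1)
clean = bb84_error_rate(4000, eavesdrop=False, rng=rng)
tapped = bb84_error_rate(4000, eavesdrop=True, rng=rng)
print(clean, tapped)  # 0.0 on a clean channel, ~0.25 with Eve listening
```

The roughly 25% error rate comes from Eve guessing the wrong basis half the time and then handing Bob a random result half of those times; if the compared sample shows an error rate near zero, the remaining bits can safely become the key.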

Craig Cannon [00:44:22] – Wow.

John Preskill [00:44:23] – Of course, we’d like to be able to scale that up to global distances.

Craig Cannon [00:44:26] – Sure.

John Preskill [00:44:27] – And there are big challenges in that. But anyway, so that’s another approach to the future of privacy that people are interested in.

Craig Cannon [00:44:35] – Does that necessitate quantum computers on both ends?

John Preskill [00:44:38] – Yes, but not huge ones. The reason … well, yes and no. At the scale of tens of kilometers, no. You can do that now. There are prototype systems that are in existence. But if you really want to scale it up — in other words, to send things longer distance — then you have to bring this quantum error correction idea into the game.

John Preskill [00:45:10] – Because at least with our current photonics technology, there’s no way I can send a single photon from here to China without there being a very high probability that it gets lost in the fiber somewhere. We have to have what we call quantum repeaters, which can boost the signal. But it’s not like the usual type of repeater that we have in communication networks now. The usual type is you measure the signal, and then you resend it. That won’t work for quantum because as soon as you measure it you’re going to mess it up. You have to find a way of boosting it without knowing what it is. Of course, it’s important that it works that way because otherwise, the adversary could just intercept it and resend it. And so it will require some quantum processing to get that quantum error correction in the quantum repeater to work. But it’s a much more modest scale quantum processor than we would need to solve hard problems.

Craig Cannon [00:46:14] – Okay. Gotcha. What are the other things you’re both excited about, and worried about for potential business opportunities? Snehan, I’m mispronouncing names all the time, Snehan Kekre asks: budding entrepreneurs, what should they be thinking about in the context of quantum computing?

John Preskill [00:46:37] – There’s more to quantum technology than computing. Something which has good potential to have an impact in the relatively near future is improved sensing. Quantum systems, partly because of that property that I keep emphasizing that they can’t be perfectly isolated from the outside, they’re good at sensing things. Sometimes, you want to detect it when something in the outside world messes around with your qubit. Again, using this technology of nuclear spins, which I mentioned you can do at room temperature potentially, you can make a pretty good sensor, and it can potentially achieve higher sensitivity and spatial resolution, look at things on shorter distance scales than other existing sensing technology. One of the things people are excited about are the biological and medical implications of that.

John Preskill [00:47:53] – If you can monitor the behavior of molecular machines, probe biological systems at the molecular level using very powerful sensors, that would surely have a lot of applications. One interesting question you can ask is, can you use these quantum error correction ideas to make those sensors even more powerful? That’s another area of current basic research, where you could see significant potential economic impact.

Craig Cannon [00:48:29] – Interesting. In terms of your research right now, what are you working on that you find both interesting and incredibly difficult?

John Preskill [00:48:40] – Everything I work on–

Craig Cannon [00:48:41] – 100%.

John Preskill [00:48:42] – Is both interesting and incredibly difficult. Well, let me change direction a little from what we’ve been talking about so far. Well, I’m going to tell you a little bit about me.

Craig Cannon [00:48:58] – Sure.

John Preskill [00:49:00] – I didn’t start out interested in information in my career. I’m a physicist. I was trained as an elementary particle theorist, studying the fundamental interactions and the elementary particles. That drew me into an interest in gravitation because one thing that we still have a very poor understanding of is how gravity fits together with the other fundamental interactions. The way physicists usually say it is we don’t have a quantum theory of gravity, at least not one that we think is complete and satisfactory. I’ve been interested in that question for many decades, and then got sidetracked because I got excited about quantum computing. But you know what? I’ve always looked at quantum information not just as a technology. I’m a physicist, I’m not an engineer. I’m not trying to build a better computer, necessarily, though that’s very exciting, and worth doing, and if my work can contribute to that, that’s very pleasing. I see quantum information as a new frontier in the exploration of the physical sciences. Sometimes I call it the entanglement frontier. Physicists, we like to talk about frontiers, and stuff. Short distance frontier. That’s what we’re doing at CERN in the Large Hadron Collider, trying to discern new properties of matter at distances which are shorter than we’ve ever been able to explore before.

John Preskill [00:50:57] – There’s a long distance frontier in cosmology. We’re trying to look deeper into the universe and understand its structure and behavior at earlier times. Those are both very exciting frontiers. This entanglement frontier is increasingly going to be at the forefront of basic physics research in the 21st century. By entanglement frontier, I just mean scaling up quantum systems to larger and larger complexity where it becomes harder and harder to simulate those systems with our existing digital tools. That means we can’t very well anticipate the types of behavior that we’re going to see. That’s a great opportunity for new discovery, and that’s part of what’s going to be exciting even in the relatively near term. When we have 100 qubits … there are some things that we can do to understand the behavior of the dynamics of a highly complex system of 100 qubits that we’ve never been able to experimentally probe before. That’s going to be very interesting. But what we’re starting to see now is that these quantum information ideas are connecting to these fundamental questions about gravitation, and how to think about it quantumly. And it turns out, as is true for most of the broader implications of quantum physics, the key thing is entanglement.

John Preskill [00:52:36] – We can think of the microscopic structure of spacetime, the geometry of where we live. Geometry just means who’s close to who else. If we’re in the auditorium, and I’m in the first row and you’re in the fourth row, the geometry is how close we are to one another. Of course, that’s very fundamental in both space and time. How far apart are we in space? How far apart are we in time? Is geometry really a fundamental thing, or is it something that’s kind of emergent from some even more fundamental concept? It seems increasingly likely that it’s really an emergent property.

John Preskill [00:53:29] – That there’s something deeper than geometry. What is it? We think it’s quantum entanglement. That you can think of the geometry as arising from quantum correlations among parts of a system. That’s really what defines who’s close to who. We’re trying to explore that idea more deeply, and one of the things that comes in is the idea of quantum error correction. Remember the whole idea of quantum error correction was that we could make a quantum system behave the way we want it to because it’s well-protected against the damaging effects of noise. It seems like quantum error correction is part of the deep secret of how spacetime geometry works. It has a kind of intrinsic robustness coming from these ideas of quantum error correction that makes space meaningful, so that it doesn’t just evaporate when you tap on it. If you wanted to, you could think of the spacetime, the space that you’re in and the space that I’m in, as parts of a system that are entangled with one another.

John Preskill [00:54:45] – What would happen if we broke that entanglement and your part of space became disentangled from my part? Well what we think that would mean is that there’d be no way to connect us anymore. There wouldn’t be any path through space that starts over here with me and ends with you. It’d become broken apart into two pieces. It’s really the entanglement which holds space together, which keeps it from falling apart into little pieces. We’re trying to get a deeper grasp of what that means.

Craig Cannon [00:55:19] – How do you make any progress on that? That seems like the most unbelievably difficult problem to work on.

John Preskill [00:55:26] – It’s difficult because, well for a number of reasons, but in particular, because it’s hard to get guidance from experiment, which is how physics historically–

Craig Cannon [00:55:38] – All science.

John Preskill [00:55:38] – Has advanced.

Craig Cannon [00:55:39] – Yeah.

John Preskill [00:55:41] – Although it was fun a moment ago to talk about what would happen if we disentangled your part of space from mine, I don’t know how to do that in the lab right now. Of course, part of the reason is we have the audacity to think we can figure these things out just by thinking about them. Maybe that’s not true. Nobody knows, right? We should try. Solving these problems is a great challenge, and it may be that the apes that evolved on Earth don’t have the capacity to understand things like the quantum structure of spacetime. But maybe we do, so we should try. Now in the longer term, and maybe not such a long term, maybe we can get some guidance from experiment. In particular, what we’re going to be doing with quantum computers and the other quantum technologies that are becoming increasingly sophisticated in the next couple of decades, is we’ll be able to control very well highly entangled complex quantum systems. That should mean that in a laboratory, on a tabletop, I can sort of make my own little toy space time …

John Preskill [00:57:02] – with an emergent geometry arising from the properties of that entanglement, and I think that’ll teach us lessons because systems like that are the types of system that, because they’re so highly entangled, digital computers can’t simulate them. It seems like only quantum computers are potentially up to the task. So that won’t be quite the same as disentangling your side of the room from mine, in real life. But we’d be able to do it in a laboratory setting using model systems, which I think would help us to understand the basic principles better.

Craig Cannon [00:57:39] – Wild. Yeah, desktop space time seems pretty cool, if you could figure it out.

John Preskill [00:57:43] – Yeah, it’s pretty fundamental. We didn’t really talk about (well, we did implicitly, but not in so many words) what people sometimes call quantum non-locality. It’s another way of describing quantum entanglement, actually. There’s this notion of Bell’s theorem: when you look at the correlations among the parts of a quantum system, they’re different from any possible classical correlations. Some things that you read give you the impression that you can use that to instantaneously send information over long distances. It is true that if we have two qubits, electron spins, say, and they’re entangled with one another, then what’s kind of remarkable is that I can measure my qubit to see along some axis whether it’s up or down, and you can measure yours, and we will get perfectly correlated results. When I see up, you’ll see up, say, and when I see down, you’ll see down. And sometimes, people make it sound like that’s remarkable. That’s not remarkable in itself. Somebody could’ve flipped a pair of coins, you know,

John Preskill [00:59:17] – so that they came up both heads or both tails, and given one to you and one –

Craig Cannon [00:59:20] – Split them apart.

John Preskill [00:59:20] – to me.

Craig Cannon [00:59:21] – Yeah.

John Preskill [00:59:22] – And gone a light-year apart, and then we both … hey, mine’s heads. Mine’s heads too!

Craig Cannon [00:59:24] – And then they call it quantum teleportation on YouTube.

John Preskill [00:59:28] – Yeah. Of course, what’s really important about entanglement that makes it different from just those coins is that there’s more than one way of looking at a qubit. We have what we call complementary ways of measuring it, so you can ask whether it’s up or down along this axis or along that axis. There’s nothing like that for the coins. There’s just one way to look at it. What’s cool about entanglement is that we’ll get perfectly correlated results if we both measure in the same way, but there’s more than one possible way that we could measure. What sometimes gets said, or the impression people get, is that that means that when I do something to my qubit, it instantaneously affects your qubit, even if we’re on different sides of the galaxy. But that’s not what entanglement does. It just means they’re correlated in a certain way.

John Preskill [01:00:30] – When you look at yours, if we have maximally entangled qubits, you just see a random bit. It could be a zero or a one, each occurring with probability 1/2. That’s going to be true no matter what I did to my qubit, and so you can’t tell what I did by just looking at it. It’s only when we compare notes later that we can see how they’re correlated, and that correlation holds for either one of these two complementary ways in which we could both measure. It’s that fact that we have these complementary ways to measure that makes it impossible for a classical system to reproduce those same correlations. So that’s one misconception that’s pretty widespread. Another one is about quantum computing: in trying to explain why quantum computers are powerful, people will sometimes say, well, it’s because you can superpose (I used that word before), you can add together many different possibilities. That means that, whereas an ordinary computer would just do a computation once, acting on a superposition a quantum computer can do a vast number of computations all at once.
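[An aside added in editing: the point above, perfectly correlated joint outcomes whose individual results are nonetheless just fair coin flips, can be checked with a small statevector simulation. This is an illustrative sketch, not anything from the interview; the helper names and the measurement angle are our own choices.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Maximally entangled Bell state (|00> + |11>)/sqrt(2),
# as a length-4 vector over the basis |00>, |01>, |10>, |11>.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def measure_both(state, theta, n):
    """Both parties measure along the same axis, tilted by angle
    theta from z in the x-z plane; sample n joint outcomes."""
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    down = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    probs = []
    for a in (up, down):            # outcome of the first qubit
        for b in (up, down):        # outcome of the second qubit
            amp = np.kron(a, b) @ state   # Born-rule amplitude
            probs.append(abs(amp) ** 2)
    probs = np.array(probs) / sum(probs)
    # Outcome codes: 0=(up,up), 1=(up,down), 2=(down,up), 3=(down,down)
    return rng.choice(4, size=n, p=probs)

outcomes = measure_both(bell, theta=0.7, n=2000)
# Perfectly correlated: only (up,up) and (down,down) ever occur ...
assert set(outcomes) <= {0, 3}
# ... yet each side alone just sees a fair coin.
frac_up = np.mean(outcomes == 0)
assert 0.4 < frac_up < 0.6
```

The same script run with two different tilt angles shows the correlation holding in either basis, which is exactly what a pre-arranged pair of coins cannot reproduce.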

John Preskill [01:01:54] – There’s a certain sense in which that’s mathematically true if you interpret it right, but it’s very misleading. Because in the end, you’re going to have to make some measurement to read out the result. When you read it out, there’s a limited amount of information you can get. You’re not going to be able to read out the results of some huge number of computations in a single shot measurement. Really the key thing that makes it work is this idea of interference, which we discussed briefly when you asked about Grover’s algorithm. The art of a quantum algorithm is to make sure that the wrong answers interfere and cancel one another out, so the right answer is enhanced. That’s not automatic. It requires that the quantum algorithm be designed in just the right way.

Craig Cannon [01:02:50] – Right. The diagrams I’ve seen online, at least, usually involve squaring the output as it goes along, and then essentially that flips the correct answer to the positive, and the others are in the negative position. Is that accurate?

John Preskill [01:03:08] – I wouldn’t have said it the way you did, because you can’t really measure it as you go along. Once you measure it, the magic of superposition is going to be lost.

John Preskill [01:03:19] – It means that now there’s some definite outcome or state. To take advantage of this interference phenomenon, you need to delay the measurement. Remember when we were talking about the double slit and I said, if you actually see these wiggles in the probability of detection, which is the signal of interference, that means that there’s no way anybody could know whether the electron went through hole one or hole two? It’s the same way with quantum computing. If you think of the computation as being a superposition of different possible computations, it wouldn’t work — there wouldn’t be a speed up — if you could know which of those paths the computation followed. It’s important that you don’t know. And so you have to sum up all the different computations, and that’s how the interference phenomenon comes into play.
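[Another aside added in editing: the “sum over computational paths” idea has a smallest possible example, two Hadamard gates applied to one qubit, where the two paths into |1⟩ carry opposite amplitudes and cancel. This is an illustrative sketch, not Preskill’s example.]

```python
import numpy as np

# Single-qubit Hadamard gate. Two Hadamards return |0> to |0>,
# but only because the two computational "paths" into |1>
# interfere destructively.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

psi = np.array([1.0, 0.0])   # start in |0>
psi = H @ psi                # superposition (1/sqrt(2), 1/sqrt(2))
psi = H @ psi                # second H: interfere the paths

# Amplitude for ending in |1>, written as a sum over the two paths:
#   |0> -> |0> -> |1>  contributes H[1,0] * H[0,0] = +1/2
#   |0> -> |1> -> |1>  contributes H[1,1] * H[1,0] = -1/2
path_sum = H[1, 0] * H[0, 0] + H[1, 1] * H[1, 0]
assert abs(path_sum) < 1e-12        # the wrong answer cancels out
assert abs(psi[0]) ** 2 > 0.999     # the right answer has probability ~1
```

Measuring after the first Hadamard would collapse the state and destroy the cancellation, which is the point Preskill makes about delaying the measurement.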

Craig Cannon [01:04:17] – To take a little sidetrack, you mentioned Feynman before. And before we started recording you mentioned working with him. I know I’m in the Feynman fan club, for sure. What was that experience like?

John Preskill [01:04:32] – We never really collaborated. I mean, we didn’t write a paper together, or anything like that. We overlapped for five years at Caltech. I arrived here in 1983. He died in 1988. We had offices on the same corridor, and we talked pretty often because we were both interested in the fundamental interactions, and in particular, what we call quantum chromodynamics. It’s our theory of how nuclear matter behaves, how quarks interact, what holds the proton together, those kinds of things. One big question is what does hold the proton together? Why don’t the quarks just fall apart? That was an example of a problem that both he and I were very interested in, and which we talked about sometimes. Now, this was pretty late in his career. When I think about it now, when I arrived at Caltech, that was 1983, Feynman was born in 1918, so he was 65. I’m 64 now, so maybe he wasn’t so old, right? But at the time, he seemed pretty ancient to me. Since I was 30.

John Preskill [01:05:58] – Those who interacted with Dick Feynman when he was really at his intellectual peak in the ’40s, and ’50s, and ’60s, probably saw even more extraordinary intellectual feats than I witnessed interacting with the 65-year-old Feynman. He just loved physics, you know? He just thought everything was so much fun. He loved talking about it. He wasn’t as good a listener as a talker, but actually – well that’s a little unfair, isn’t it? It was kind of funny because Feynman, he always wanted to think things through for himself, sort of from first principles, rather than rely on the guidance from experts who have thought about these things before. Well that’s fine. You should try to understand things as deeply as you can on your own, and sort of reconstruct the knowledge from the ground up. That’s very enabling, and gives you new insights. But he was a little too dismissive, in my view, of what the other guys knew. But I could slip it in, because I didn’t tell him, “Dick, you should read this paper by Polyakov, because he solved that problem that you’re talking about.” Well, maybe I did, but he wouldn’t have even heard it.

John Preskill [01:07:39] – But I knew what Polyakov had said about it, so I would say, “Oh well, look, why don’t we look at it this way?” And so he thought I was, that I was having all these insights, but the truth was the big difference between Feynman and me in the mid 1980s was I was reading literature, and he wasn’t.

Craig Cannon [01:08:00] – That’s funny.

John Preskill [01:08:01] – Probably, if he had been, he would’ve been well served, but that wasn’t the way he liked to work on things. He wanted to find his own approach. Of course, that had worked out pretty well for him throughout his career.

Craig Cannon [01:08:15] – What other qualities did you notice about him when he was roaming the corridors?

John Preskill [01:08:21] – He’d always be drumming. So you would know he was around because he’d actually be walking down the hallway drumming on the wall.

Craig Cannon [01:08:27] – Wait, with his hands, or with sticks, or–

John Preskill [01:08:29] – No, hands. He’d just be tapping.

Craig Cannon [01:08:32] – Just a bongo thing.

John Preskill [01:08:33] – Yeah. That was one thing. He loved to tell stories. You’ve probably read the books that Ralph Leighton put together based on the stories Feynman told. Ralph did an amazing job of capturing Feynman’s personality in writing those stories down, because I’d heard a lot of them. I’m sure he told the same stories to many people many times, because he loved telling stories. But the book really captures his voice pretty well.

John Preskill [01:09:12] – If you had heard him tell some of these stories, and then you read the way Ralph Leighton transcribed them, you can hear Feynman talking. At the time that I knew him, one of the experiences that he went through was he was on the Challenger commission after the space shuttle blew up. He was in Washington a lot of the time, but he’d come back from time to time, and he would sort of sit back and relax in our seminar room and start bringing us up to date on all the weird things that were happening on the Challenger commission. That was pretty fun.

Craig Cannon [01:09:56] – That’s really cool.

John Preskill [01:09:56] – A lot of that got captured in the second volume. I guess it’s the one called, What Do You Care What Other People Think? There’s a chapter about him telling stories about the Challenger commission. He was interested in everything. It wasn’t just physics. He was very interested in biology. He was interested in computation. I remember how excited he was when he got his first IBM PC. Probably not long after I got to Caltech. Yeah, it was what they called the AT. We thought it was a pretty sexy machine. I had one, too. He couldn’t wait to start programming it in BASIC.

Craig Cannon [01:10:50] – Very cool.

John Preskill [01:10:51] – Because that was so much fun.

Craig Cannon [01:10:52] – There was a question that I was kind of curious about your answer to. Tika asks about, essentially, teaching about quantum computers. They say: many kids in grade 10 can code. Some can play with machine learning tools without knowing the math. Can quantum computing become as simple and/or accessible?

John Preskill [01:11:17] – Maybe so. At some level, when people say quantum mechanics is counterintuitive, it’s hard for us to grasp, it’s so foreign to our experience, that’s true. The way things behave at the microscopic scale is, like we discussed earlier, really different from the way ordinary stuff behaves. But it’s a question of familiarity. What I wouldn’t be surprised by is that if you go out a few decades, kids who are 10 years old are going to be playing quantum games. That’s an application area that doesn’t get discussed very much, but there could be a real market there because people love games. Quantum games are different, and the strategies are different, and what you have to do to win is different. If you play the game enough, you start to get the hang of it.

John Preskill [01:12:26] – I don’t see any reason why kids who have not necessarily deeply studied physics can’t get a pretty good feel for how quantum mechanics works. You know, the way ordinary physics works, maybe it’s not so intuitive. Newton’s laws … Aristotle couldn’t get it right. He thought you had to keep pushing on something to get it to keep moving. That wasn’t right. Galileo was able to roll balls down a ramp, and things like that, and see he didn’t have to keep pushing to keep it moving. He could see that it was uniformly accelerated in a gravitational field. Newton took that to a much more general and powerful level. You fool around with stuff, and you get the hang of it. And I think quantum stuff can be like that. We’ll experience it in a different way, but when we have quantum computers, in a way, that opens the opportunity for trying things out and seeing what happens.

John Preskill [01:13:50] – After you’ve played the game enough, you start to anticipate. And actually, it’s an important point about the applications. One of the questions you asked me at the beginning was what are we able to do with quantum computers? And I said, I don’t know. So how are we going to discover new applications? It might just be, at least in part, by fooling around. A lot of classical algorithms that people use on today’s computers were discovered (or the fact that they were powerful was discovered) by experimenting. By trying it. I don’t know … what’s an example of that? Well, the simplex method that we use in linear programming. I don’t think there was a mathematical proof that it was fast at first, but people did experiments, and they said, hey, this is pretty fast.

Craig Cannon [01:14:53] – Well, you’re seeing it a lot now in machine learning.

John Preskill [01:14:57] – Yeah, well that’s a good example.

Craig Cannon [01:14:58] – You test it out a million times over when you’re running simulations, and it turns out, that’s what works. Following the thread of education, and maybe your political interest, given it’s the year that it is, do you have thoughts on how you would adjust or change STEM education?

John Preskill [01:15:23] – Well, no particularly original thoughts. But I do think that STEM education … we shouldn’t think of it as we’re going to need this technical workforce, and so we better train them. The key thing is we want the general population to be able to reason effectively, and to recognize when an argument is phony and when it’s authentic. To think about, well how can I check whether what I just read on Facebook is really true? And I see that as part of the goal of STEM education. When you’re teaching kids in school how to understand the world by doing experiments, by looking at the evidence, by reasoning from the evidence, this is something that we apply in everyday life, too. I don’t know exactly how to implement this–

John Preskill [01:16:36] – But I think we should have that perspective that we’re trying to educate a public, which is going to eventually make critical decisions about our democracy, and they should understand how to tell when something is true or not. That’s a hard thing to do in general, but you know what I mean. That there are some things that, if you’re a person with some — I mean it doesn’t necessarily have to be technical — but if you’re used to evaluating evidence and making a judgment based on that evidence about whether it’s a good argument or not, you can apply that to all the things you hear and read, and make better judgments.

Craig Cannon [01:17:23] – What about on the policy side? Let’s see, JJ Francis asked that, if you or any of your colleagues would ever consider running for office. Curious about science policy in the US.

John Preskill [01:17:38] – Well, it would be good if we had more scientifically trained people in government. Very few members of Congress. I know of one, Bill Foster’s a physicist in Illinois. He was a particle physicist, and he worked at Fermilab, and now he’s in Congress, and very interested in the science and educational policy aspects of government. Rush Holt was a congressman from New Jersey who had a background in physics. He retired from the House a couple of years ago, but he was in Congress for something like 18 years, and he had a positive influence, because he had a voice that people respected when it came to science policy. Having more people like that would help. Now, another thing, it doesn’t have to be elective office.

Craig Cannon [01:18:39] – Right.

John Preskill [01:18:42] – There are a lot of technically trained people in government, many of them making their careers in agencies that deal with technical issues. Department of Defense, of course, there are a lot of technical issues. In the Obama Administration we had two successive secretaries of energy who were very, very good physicists. Steve Chu was a Nobel Prize-winning physicist. Then Ernie Moniz, who’s a real authority on nuclear energy and weapons. That kind of expertise makes a difference in government.

John Preskill [01:19:24] – Now the Secretary of Energy is Rick Perry. It’s a different background.

Craig Cannon [01:19:28] – Yeah, you could say that. Just as a kind of historical reference, what policies did they put in place where you really felt their hand, as physicists, moving things forward?

John Preskill [01:19:44] – You mean in particular–

Craig Cannon [01:19:45] – I’m talking the Obama Administration.

John Preskill [01:19:49] – Well, I think the Department of Energy, DOE, tried to facilitate technical innovation by seeding new technologies, by supporting startup companies that were trying to do things that would improve battery technology, and solar power, and things like that, which could benefit future generations. They had an impact by doing that. You don’t have to be a Nobel Prize winning physicist to think that’s a good idea. That the administration felt that was a priority made a difference, and appointing a physicist at Department of Energy was, if nothing else, highly symbolic of how important those things are.

Craig Cannon [01:20:52] – On the quantum side, someone asked (Vikas Karad) where the Quantum Valley might be. Do you have thoughts, as in a Silicon Valley for quantum computing?

John Preskill [01:21:06] – Well… I don’t know, but you look at what’s happening the last couple of years, there have been a number of quantum startups. A notable number of them are in the Bay Area. Why so? Well, that’s where the tech industry is concentrated and where the people who are interested in financing innovative technical startups are concentrated. If you are an entrepreneur interested in starting a company, and you’re concerned about how to fundraise for it, it kind of makes sense to locate in that area. Now, that’s what’s sort of happening now, and may not continue, of course. It might not be like that indefinitely. Nothing lasts forever, but I would say… That’s the place, Silicon Valley is likely to be Quantum Valley, the way things are right now.

Craig Cannon [01:22:10] – Well then what about the physicists who might be listening to this? If they’re thinking about starting a company, do you have advice for them?

John Preskill [01:22:22] – Just speaking very generally, if you’re putting a team together… Different people have different expertise. Take quantum computing as an example, like we were saying earlier, some of the big players and the startups, they want to do everything. They want to build the hardware, figure out better ways to fabricate it. Better control, better software, better applications. Nobody can be an expert on all those things. Of course, you’ll hire a software person to write your software, and a microwave engineer to figure out your control, and of course that’s the right thing to do. But I think in that arena, and it probably applies to other entrepreneurial activity relating to physics, being able to communicate across those boundaries is very valuable, and you can see it in quantum computing now. If the man or woman who’s involved in the software has that background, so there’s not a big communication barrier talking to the people who are doing the control engineering, that can be very helpful. It makes sense to give some preference to the people who maybe are comfortable doing so, or have the background that stretches across more than one of those areas of expertise. That can be very enabling in a technology arena like quantum computing today, where we’re trying to do really, really hard stuff, and you don’t know whether you’ll succeed, and you want to give it your best go by seeing the connections between those different things.

Craig Cannon [01:24:28] – Would you advise someone then to maybe teach, or try and explain it to, I don’t know, their young cousins? Because Feynman is maybe recognized as the king of communicating physics, at least for a certain period of time. How would you advise someone to get better at it so they can be more effective?

John Preskill [01:24:50] – Practice. There are different aspects of that. This isn’t what you meant at all, but I’ll say it anyway, because what you asked brought it to mind. If you teach, you learn. We have this odd model in the research university that a professor like me is supposed to do research and teach. Why don’t we hire teachers and researchers? Why do we have the same people doing both? Well, part of the reason for me is most of what I know, what I’ve learned since my own school education ended, is knowledge I acquired by trying to teach it. To keep our intellect rejuvenated, we have to have that experience of trying to teach new things that we didn’t know that well before to other people. That deepens your knowledge. Just thinking about how you convey it makes you ask questions that you might not think to ask otherwise, and you say “Hey, I don’t know the answer to that.” Then you have to try to figure it out. So I think that applies at varying levels to any situation in which a scientist, or somebody with a technical background, is trying to communicate.

John Preskill [01:26:21] – By thinking about how to get it across to other people, we can get new insights, you know? We can look at it in a different way. It’s not a waste of time. Aside from the benefits of actually successfully communicating, we benefit from it in this other way. But other than that… Have fun with it, you know? Don’t look at it as a burden, or some kind of task you have to do along with all the other things you’re doing. It should be a pleasure. When it’s successful, it’s very gratifying. If you put a lot of thought into how to communicate something and you think people are getting it, that’s one of the ways that somebody in my line of work can get a lot of satisfaction.

Craig Cannon [01:27:23] – If now were to be your opportunity to teach a lot of people about physics, and you could just point someone to things, who would you advise someone to be? They want to learn more about quantum computing, they want to learn about physics. What should they be reading? What YouTube channel should they follow? What should they pay attention to?

John Preskill [01:27:44] – Well one communicator who I have great admiration for is Leonard Susskind, who’s at Stanford. You mentioned Feynman as the great communicator, and that’s fair, but in terms of style and personality of physicists who are currently active, I think Lenny Susskind is the most similar to Feynman of anyone I can think of. He’s a no bullshit kind of guy. He wants to give you the straight stuff. He doesn’t want to water it down for you. But he’s very gifted when it comes to making analogies and creating the illusion that you’re understanding what he’s saying. He has … if you just go to YouTube and search Leonard Susskind you’ll see lectures that he’s given at Stanford where they have some kind of extension school for people who are not Stanford students, people in the community. A lot of them in the tech community because it’s Stanford, and he’s giving courses. Yeah, and on quite sophisticated topics, but also on more basic topics, and he’s in the process of turning those into books. I’m not sure how many of those have appeared, but he has a series called The Theoretical Minimum

John Preskill [01:29:19] – which is supposed to be the gentle introduction to different topics like classical physics, quantum physics, and so on. He’s pretty special I think in his ability to do that.

Craig Cannon [01:29:32] – I need to subscribe. Actually, here’s a question then. In the things you’ve relearned while teaching over the past, I guess it’s 35 years now.

John Preskill [01:29:46] – Shit, is that right?

Craig Cannon [01:29:47] – Something like that.

John Preskill [01:29:48] – That’s true. Yeah.

Craig Cannon [01:29:51] – What were the big things, what were the revelations?

John Preskill [01:29:55] – That’s how I learned quantum computing, for one thing. I was not at all knowledgeable about information science. That wasn’t my training. Back when I was in school, physicists didn’t learn much about things like information theory, computer science, complexity theory. One of the great things about quantum computing is its interdisciplinary character, that it brings these different things into contact, which traditionally had not been part of the common curriculum of any community of scholars. I decided 20 years ago that I should teach a quantum information class at Caltech, and I worked very hard on it that year. Not that I’m an expert, or anything, but I learned a lot about information theory, and things like channel capacity, and computational complexity — how we classify the hardness of problems — and algorithms. Things like that, which I didn’t really know very well. I had sort of a passing familiarity with some of those things from reading some of the quantum computing literature. That’s no substitute for teaching a class because then you really have to synthesize it and figure out your way of presenting it. Most of the notes are typed up and you can still get to them on my website. That was pretty transformative for me … and it was easier then, 20 years ago, I guess, than it is now because it was such a new topic.

John Preskill [01:31:49] – But I really felt I was kind of close enough to the cutting edge on most of those topics by the time I’d finished the class that I wasn’t intimidated by another paper I’d read or a new thing I’d hear about those things. That was probably the one case where it really made a difference in my foundation of knowledge which enabled me to do things. But I had the same experience in particle physics. When I was a student, I read a lot. I was very broadly interested in physics. But there was a first time: I was still at Harvard then (later I taught a similar course here), I’m in my late 20s, just a year or two out of graduate school, and I decide to teach a very comprehensive class on elementary particles … in particular, quantum chromodynamics, the theory of nuclear forces like we talked about before. It just really expanded my knowledge to have that experience of teaching that class. I still draw on that. I can still remember that experience and I think I get ideas that I might not otherwise have because I went through that.

Craig Cannon [01:33:23] – I want to get involved now. I want to go back to school, or maybe teach a class. I don’t know.

John Preskill [01:33:27] – Well, what’s stopping you?

Craig Cannon [01:33:29] – Nothing. Alright, thanks John.

John Preskill [01:33:32] – Okay, thank you Craig.

# To Feynman on his 100th birthday

The Feynman legend, pundits say
Began in Queens – Far Rockaway.
It’s there a boy would stop and think

He grew up as a curious guy
Who showed his sister the night sky.
He wondered why, and wondered why
He wondered why he wondered why.

New Jersey followed MIT.
The cream and lemon in his tea
Taught Mr. Feynman when to joke
And how to act like normal folk.

Cracking safes, though loads of fun,
Could not conceal from everyone,
The mind behind that grinning brow:
A new Dirac, but human now.

In New York state he spun a plate
Which led, in nineteen forty eight
To diagrams that let us see
The processes of QED.

He left the east and made a trek
Until he landed at Caltech.
His genius brought us great acclaim.
This place would never be the same.

Dick’s teaching skills were next to none
When reinventing Physics 1.
His wisdom’s there for all to see
In red books numbered 1, 2, 3.

Always up and never glum
He loved to paint and play the drum.
His mind engaged with everything
For all the world is int’resting.

Dick proved that charm befits a nerd.
For papers read, and stories heard
We’ll always be in Feynman’s debt.
A giant we cannot forget.

# Catching up with the quantum-thermo crowd

You have four hours to tour Oxford University.

What will you visit? The Ashmolean Museum, home to da Vinci drawings, samurai armor, and Egyptian mummies? The Bodleian, one of Europe’s oldest libraries? Turf Tavern, where former president Bill Clinton reportedly “didn’t inhale” marijuana?

Felix Binder showed us a cemetery.

Of course he showed us a cemetery. We were at a thermodynamics conference.

The Fifth Quantum Thermodynamics Conference took place in the City of Dreaming Spires.1 Participants enthused about energy, information, engines, and the flow of time. About 160 scientists attended—roughly 60 more than attended the first conference, co-organizer Janet Anders estimated.

Weak measurements and quasiprobability distributions were trending. The news delighted me, Quantum Frontiers regulars won’t be surprised to hear.

Measurements disturb quantum systems, as early-20th-century physicist Werner Heisenberg intuited. Measure a system’s position strongly, and you forfeit your ability to predict the outcomes of future momentum measurements. Weak measurements don’t disturb the system much. In exchange, weak measurements provide little information about the system. But you can recoup information by performing a weak measurement in each of many trials, then processing the outcomes.

Strong measurements lead to probability distributions: Imagine preparing a particle in some quantum state, then measuring its position strongly, in each of many trials. From the outcomes, you can infer a probability distribution $\{ p(x) \}$, wherein $p(x)$ denotes the probability that the next trial will yield position $x$.
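As a toy illustration of inferring $\{ p(x) \}$ from repeated strong measurements (the Gaussian wave packet standing in for the prepared state is an assumed example, not anything from the conference):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many strong position measurements on identically prepared
# particles. For a Gaussian wave packet, the Born rule makes the outcomes
# Gaussian-distributed (an illustrative choice of state).
outcomes = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Infer the probability distribution {p(x)} from the outcome frequencies.
counts, edges = np.histogram(outcomes, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# The empirical density is normalized, like any probability distribution.
total = np.sum(counts * np.diff(edges))   # equals 1
```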

Weak measurements lead analogously to quasiprobability distributions. Quasiprobabilities resemble probabilities but can misbehave: Probabilities are real numbers no less than zero. Quasiprobabilities can dip below zero and can assume nonreal values.
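One concrete quasiprobability that arises in weak-measurement schemes is the Kirkwood–Dirac distribution, $q(a, b) = \langle b | a \rangle \langle a | \rho | b \rangle$. A small sketch, with an arbitrary qubit state and the $Z$ and $X$ bases chosen purely for illustration:

```python
import numpy as np

# Two measurement bases for a qubit: the Z eigenbasis and the X eigenbasis.
z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
x_basis = [plus, minus]

# An arbitrary pure state (illustrative choice).
psi = np.array([np.cos(0.3), np.sin(0.3)])
rho = np.outer(psi, psi.conj())

# Kirkwood-Dirac quasiprobability q(a, b) = <b|a><a|rho|b>.
q = np.array([[(b.conj() @ a) * (a.conj() @ rho @ b)
               for b in x_basis] for a in z_basis])

# Like probabilities, the entries sum to 1 -- but some entries are negative.
total = q.sum()
smallest = q.min()
```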

What relevance have weak measurements and quasiprobabilities to quantum thermodynamics? Thermodynamics involves work and heat. Work is energy harnessed to perform useful tasks, like propelling a train from London to Oxford. Heat is energy that jiggles systems randomly.

Quantum properties obscure the line between work and heat. (Here’s an illustration for experts: Consider an isolated quantum system, such as a spin chain. Let $H(t)$ denote the Hamiltonian that evolves with the time $t \in [0, t_f]$. Consider preparing the system in an energy eigenstate $| E_i(0) \rangle$. This state has zero diagonal entropy: Measuring the energy yields $E_i(0)$ deterministically. Consider tuning $H(t)$, as by changing a magnetic field. This change constitutes work, we learn in electrodynamics class. But if $H(t)$ changes quickly, the state can acquire weight on multiple energy eigenstates. The diagonal entropy rises. The system’s energetics have gained an unreliability characteristic of heat absorption. But the system has remained isolated from any heat bath. Work mimics heat.)
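To make the parenthetical concrete, here is a minimal numerical sketch with a single spin in place of a spin chain; the Hamiltonians and quench strength are illustrative choices:

```python
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

H0 = sz                      # initial Hamiltonian
H1 = sz + 0.8 * sx           # Hamiltonian after a fast change of "field"

# Prepare the ground state of H0: an energy measurement is deterministic,
# so the diagonal entropy starts at zero.
_, v0 = np.linalg.eigh(H0)
psi = v0[:, 0]

# After the sudden quench, the state has weight on both eigenstates of H1.
_, v1 = np.linalg.eigh(H1)
p = np.abs(v1.conj().T @ psi) ** 2

# Diagonal entropy: Shannon entropy of the energy-measurement outcomes.
diag_entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))   # > 0 after the quench
```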

Quantum thermodynamicists have defined work in terms of a two-point measurement scheme: Initialize the quantum system, such as by letting heat flow between the system and a giant, fixed-temperature heat reservoir until the system equilibrates. Measure the system’s energy strongly, and call the outcome $E_i$. Isolate the system from the reservoir. Tune the Hamiltonian, performing the quantum equivalent of propelling the London train up a hill. Measure the energy, and call the outcome $E_f$.

Any change $\Delta E$ in a system’s energy comes from heat $Q$ and/or from work $W$, by the First Law of Thermodynamics, $\Delta E = Q + W.$  Our system hasn’t exchanged energy with any heat reservoir between the measurements. So the energy change consists of work: $E_f - E_i =: W$.

Imagine performing this protocol in each of many trials. Different trials will require different amounts $W$ of work. Upon recording the amounts, you can infer a distribution $\{ p(W) \}$. $p(W)$ denotes the probability that the next trial will require an amount $W$ of work.
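The trial-by-trial protocol above can be sketched numerically. Everything concrete here (the qubit Hamiltonians, the inverse temperature, a sudden quench standing in for “tuning”) is an illustrative assumption, not taken from the post:

```python
import numpy as np

rng = np.random.default_rng(1)
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

E0, V0 = np.linalg.eigh(sz)              # pre-tuning energies and eigenstates
E1, V1 = np.linalg.eigh(sz + 0.8 * sx)   # post-tuning energies and eigenstates

beta = 1.0                               # reservoir's inverse temperature
boltzmann = np.exp(-beta * E0)
p_i = boltzmann / boltzmann.sum()        # equilibration fixes the first outcome

works = []
for _ in range(50_000):
    i = rng.choice(2, p=p_i)             # first strong energy measurement: E_i
    overlaps = np.abs(V1.conj().T @ V0[:, i]) ** 2
    f = rng.choice(2, p=overlaps)        # second strong energy measurement: E_f
    works.append(E1[f] - E0[i])          # W = E_f - E_i for this trial

values, counts = np.unique(np.round(works, 10), return_counts=True)
p_W = counts / counts.sum()              # the inferred distribution {p(W)}
```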

Measuring the system’s energy disturbs the system, squashing some of its quantum properties. (The measurement eliminates coherences, relative to the energy eigenbasis, from the state.) Quantum properties star in quantum thermodynamics. So the two-point measurement scheme doesn’t satisfy everyone.

Enter weak measurements. They can provide information about the system’s energy without disturbing the system much. Work probability distributions $\{ p(W) \}$ give way to quasiprobability distributions $\{ \tilde{p}(W) \}$.

So propose Solinas and Gasparinetti, in these papers. Other quantum thermodynamicists apply weak measurements and quasiprobabilities differently.2 I proposed applying them to characterize chaos, and the scrambling of quantum information in many-body systems, at the conference.3 Feel free to add your favorite applications to the “comments” section.

All the quantum ladies: The conference’s female participants gathered for dinner one conference night.

Wednesday afforded an afternoon for touring. Participants congregated at the college of conference co-organizer Felix Binder.4 His tour evoked, for me, the ghosts of thermo conferences past: One conference, at the University of Cambridge, had brought me to the grave of thermodynamicist Arthur Eddington. Another conference, about entropies in information theory, had convened near Canada’s Banff Cemetery. Felix’s tour began with St. Edmund Hall’s cemetery. Thermodynamics highlights equilibrium, a state in which large-scale properties—like temperature and pressure—remain constant. Some things never change.

With thanks to Felix, Janet, and the other coordinators for organizing the conference.

1Oxford derives its nickname from an elegy by Matthew Arnold. Happy National Poetry Month!

3Michele Campisi joined me in introducing out-of-time-ordered correlators (OTOCs) into the quantum-thermo conference: He, with coauthor John Goold, combined OTOCs with the two-point measurement scheme.

4Oxford University contains 38 colleges, the epicenters of undergraduates’ social, dining, and housing experiences. Graduate students and postdoctoral scholars affiliate with colleges, and senior fellows—faculty members—govern the colleges.

# The math of multiboundary wormholes

Xi Dong, Alex Maloney, Henry Maxfield and I recently posted a paper to the arXiv with the title: Phase Transitions in 3D Gravity and Fractal Dimension. In other words, we’ll get about ten readers per year for the next few decades. Despite the heady title, there’s deep geometrical beauty underlying this work. In this post I want to outline the origin story and motivation behind this paper.

There are two different branches to the origin story. The first was my personal motivation and the second is related to how I came into contact with my collaborators (who began working on the same project but with different motivation, namely to explain a phase transition described in this paper by Belin, Keller and Zadeh.)

During the first year of my PhD at Caltech I was working in the mathematics department and I had a few brief but highly influential interactions with Nikolai Makarov while I was trying to find a PhD advisor. His previous student, Stanislav Smirnov, had recently won a Fields Medal for his work studying Schramm-Loewner evolution (SLE) and I was captivated by the beauty of these objects.

SLE example from Scott Sheffield’s webpage. SLEs are the fractal curves that form at the interface of many models undergoing phase transitions in 2D, such as the boundary between up and down spins in a 2D magnet (Ising model.)

One afternoon, I went to Professor Makarov’s office for a meeting and while he took a brief phone call I noticed a book on his shelf called Indra’s Pearls, which had a mesmerizing image on its cover. I asked Professor Makarov about it and he spent 30 minutes explaining some of the key results (which I didn’t understand at the time.) When we finished that part of our conversation Professor Makarov described this area of math as “math for the future, ahead of the tools we have right now” and he offered for me to borrow his copy. With a description like that I was hooked. I spent the next six months devouring this book which provided a small toehold as I tried to grok the relevant mathematics literature. This year or so of being obsessed with Kleinian groups (the underlying objects in Indra’s Pearls) comes back into the story soon. I also want to mention that during that meeting with Professor Makarov I was exposed to two other ideas that have driven my research as I moved from mathematics to physics: quasiconformal mappings and the simultaneous uniformization theorem, both of which will play heavy roles in the next paper I release.  In other words, it was a pretty important 90 minutes of my life.

Google image search for “Indra’s Pearls”. The math underlying Indra’s Pearls sits at the intersection of hyperbolic geometry, complex analysis and dynamical systems. Mathematicians oftentimes call this field the study of “Kleinian groups”. Most of these figures were obtained by starting with a small number of Mobius transformations (usually two or three) and then finding the fixed points for all possible combinations of the initial transformations and their inverses. Indra’s Pearls was written by David Mumford, Caroline Series and David Wright. I couldn’t recommend it more highly.

My life path then hit a discontinuity when I was recruited to work on a DARPA project, which led to taking an 18 month leave of absence from Caltech. It’s an understatement to say that being deployed in Afghanistan led to extreme introspection. While “down range” I had moments of clarity where I knew life was too short to work on anything other than one’s deepest passions. Before math, the thing that got me into science was a childhood obsession with space and black holes. I knew that when I returned to Caltech I wanted to work on quantum gravity with John Preskill. I sent him an e-mail from Afghanistan and luckily he was willing to take me on as a student. But as a student in the mathematics department, I knew it would be tricky to find a project that involved all of: black holes (my interest), quantum information (John’s primary interest at the time) and mathematics (so I could get the degree.)

I returned to Caltech in May of 2012 which was only two months before the Firewall Paradox was introduced by Almheiri, Marolf, Polchinski and Sully. It was obvious that this was where most of the action would be for the next few years so I spent a great deal of time (years) trying to get sharp enough in the underlying concepts to be able to make comments of my own on the matter. Black holes are probably the closest things we have in Nature to the proverbial bottomless pit, which is an apt metaphor for thinking about the Firewall Paradox. After two years I was stuck. I still wasn’t close to confident enough with AdS/CFT to understand a majority of the promising developments. And then at exactly the right moment, in the summer of 2014, Preskill tipped me off to a paper titled Multiboundary Wormholes and Holographic Entanglement by Balasubramanian, Hayden, Maloney, Marolf and Ross. It was immediately obvious to me that the tools of Indra’s Pearls (Kleinian groups) provided exactly the right language to study these “multiboundary wormholes.” But despite knowing a bridge could be built between these fields, I still didn’t have the requisite physics mastery (AdS/CFT) to build it confidently.

Before mentioning how I met my collaborators and describing the work we did together, let me first describe the worlds that we bridged together.

3D Gravity and Universality

As the media has sensationalized to death, one of the most outstanding questions in modern physics is to discover and then understand a theory of quantum gravity.  As a quick aside, *quantum gravity* is just a placeholder name for such a theory. I used italics because physicists have already discovered candidate theories, such as string theory and loop quantum gravity (I’m not trying to get into politics, just trying to demonstrate that there are multiple candidate theories). But understanding these theories — carrying out all of the relevant computations to confirm that they are consistent with Nature and then doing experiments to verify their novel predictions — is still beyond our ability. Surprisingly, without knowing the specific theory of quantum gravity that guides Nature’s hand, we’re still able to say a number of universal things that must be true for any theory of quantum gravity. The most prominent example is the holographic principle, which comes from the entropy of black holes being proportional to the surface area encapsulated by the black hole’s horizon (a naive guess says the entropy should be proportional to the volume of the black hole, as with the entropy of a glass of water.) Universal statements such as this serve as guideposts and consistency checks as we try to understand quantum gravity.

It’s exceedingly rare to find universal statements that are true in physically realistic models of quantum gravity. The holographic principle is one such example but it pretty much stands alone in its power and applicability. By physically realistic I mean: 3+1-dimensional and with the curvature of the universe being either flat or very mildly positively curved.  However, we can make additional simplifying assumptions where it’s easier to find universal properties. For example, we can reduce the number of spatial dimensions so that we’re considering 2+1-dimensional quantum gravity (3D gravity). Or we can investigate spacetimes that are negatively curved (anti-de Sitter space) as in the AdS/CFT correspondence. Or we can do BOTH! As in the paper that we just posted. The hope is that what’s learned in these limited situations will back-propagate insights towards reality.

The motivation for going to 2+1-dimensions is that gravity (general relativity) is much simpler here. This is explained eloquently in section II of Steve Carlip’s notes here. In 2+1-dimensions, there are no “local”/”gauge” degrees of freedom. This makes thinking about quantum aspects of these spacetimes much simpler.

The standard motivation for considering negatively curved spacetimes is that it puts us in the domain of AdS/CFT, which is the best understood model of quantum gravity. However, it’s worth pointing out that our results don’t rely on AdS/CFT. We consider negatively curved spacetimes (negatively curved Lorentzian manifolds) because they’re related to what mathematicians call hyperbolic manifolds (negatively curved Euclidean manifolds), and mathematicians know a great deal about these objects. It’s just a helpful coincidence that because we’re working with negatively curved manifolds we then get to unpack our statements in AdS/CFT.

Multiboundary wormholes

Finding solutions to Einstein’s equations of general relativity is a notoriously hard problem. Some of the more famous examples include: Minkowski space, de-Sitter space, anti-de Sitter space and Schwarzschild’s solution (which describes perfectly symmetrical and static black holes.) However, there’s a trick! Einstein’s equations only depend on the local curvature of spacetime while being insensitive to global topology (the number of boundaries and holes and such.) If $M$ is a solution of Einstein’s equations and $\Gamma$ is a discrete subgroup of the isometry group of $M$, then the quotient space $M/\Gamma$ will also be a spacetime that solves Einstein’s equations! Here’s an example for intuition. Start with 2+1-dimensional Minkowski space, which is just a stack of flat planes indexed by time. One example of a “discrete subgroup of the isometry group” is the cyclic group generated by a single translation, say the translation along the x-axis by ten meters. Minkowski space quotiented by this group will also be a solution of Einstein’s equations, given as a stack of cylinders of circumference 10 m, indexed by time.

Start with 2+1-dimensional Minkowski space which is just a stack of flat planes indexed by time. Think of the planes on the left hand side as being infinite. To “quotient” by a translation means to glue the green lines together which leaves a cylinder for every time slice. The figure on the right shows this cylinder universe which is also a solution to Einstein’s equations.

(d+1)-dimensional anti-de Sitter space ($AdS_{d+1}$) is the maximally symmetric (d+1)-dimensional Lorentzian manifold with negative curvature. Our paper is about 3D gravity in negatively curved spacetimes so our starting point is $AdS_3$ which can be thought of as a stack of Poincare disks (or hyperbolic sheets) with the time dimension telling you which disk (sheet) you’re on. The isometry group of $AdS_3$ is a group called $SO(2,2)$ which in turn is isomorphic to the group $SL(2, R) \times SL(2, R)$. The group $SL(2,R) \times SL(2,R)$ isn’t a very common group but a single copy of $SL(2,R)$ is a very well-studied group. Discrete subgroups of it are called Fuchsian groups. Every element in the group should be thought of as a 2×2 matrix which corresponds to a Mobius transformation of the complex plane. The quotients that we obtain from these Fuchsian groups, or from the larger isometry group, yield a rich infinite family of new spacetimes, which are called multiboundary wormholes. Multiboundary wormholes have risen in importance over the last few years as powerful toy models when trying to understand how entanglement is dispersed near black holes (Ryu-Takayanagi conjecture) and for how the holographic dictionary works in terms of mapping operators in the boundary CFT to fields in the bulk (entanglement wedge reconstruction.)

Three dimensional AdS can be thought of as a stack of hyperboloids indexed by time. It’s convenient to use the Poincare disk model for the hyperboloids so that the entire spacetime can be pictured in a compact way. Despite how things appear, all of the triangles have the same “area”.

I now want to work through a few examples.

BTZ black hole: this is the simplest possible example. It’s obtained by quotienting $AdS_3$ by a cyclic group $\langle A \rangle$, generated by a single matrix $A \in SL(2,R)$ which for example could take the form $A = \begin{pmatrix} e^{\lambda} & 0 \\ 0 & e^{-\lambda} \end{pmatrix}$. The matrix $A$ acts by fractional linear transformation on the complex plane, so in this case the point $z \in \mathbb{C}$ gets mapped to $z\mapsto (e^{\lambda}z + 0)/(0z + e^{-\lambda}) = e^{2\lambda} z$.
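A few lines of code can confirm this action (the value of $\lambda$ and the test point are arbitrary):

```python
import numpy as np

def mobius(M, z):
    """Apply the fractional linear transformation of a 2x2 matrix M to z."""
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

lam = 0.7
A = np.array([[np.exp(lam), 0.0],
              [0.0, np.exp(-lam)]])

z = 0.3 + 1.1j

# The hyperbolic matrix A acts on the plane as z -> exp(2*lam) * z ...
assert np.isclose(mobius(A, z), np.exp(2 * lam) * z)

# ... and A together with its inverse generates the cyclic group <A>,
# whose action on each time slice produces the BTZ quotient.
assert np.isclose(mobius(np.linalg.inv(A), mobius(A, z)), z)
```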

Start with $AdS_3$ as a stack of hyperbolic half planes indexed by time. A quotient by A means that each hyperbolic half plane gets quotiented. Quotienting a constant time slice by the map $z \mapsto e^{2\lambda}z$ gives a surface that’s topologically a cylinder. Using the picture above this means you glue together the solid black curves. The green and red segments become two boundary regions. We call it the BTZ black hole because when you add “time” it becomes impossible to send a signal from the green boundary to the red boundary, or vice versa. The dotted line acts as an event horizon.

Three boundary wormhole:

There are many parameterizations that we can choose to obtain the three boundary wormhole. I’ll only show schematically how the gluings go. A nice reference with the details is this paper by Henry Maxfield.

This is a picture of a constant time slice of $AdS_3$ quotiented by two generators A and B. Each time slice is given as a copy of the hyperbolic half plane with the black arcs and green arcs glued together (by the maps A and B). These gluings yield a pair of pants surface. Each of the boundary regions is causally disconnected from the others. The dotted lines are black hole horizons that illustrate where the causal disconnection happens.

Torus wormhole:

It’s simpler to write down generators for the torus wormhole; but following along with the gluings is more complicated. To obtain the torus wormhole we quotient $AdS_3$ by the free group $\langle A, B \rangle$ where $A = \begin{pmatrix} e^{\lambda} & 0 \\ 0 & e^{-\lambda} \end{pmatrix}$ and $B = \begin{pmatrix} \cosh \lambda & \sinh \lambda \\ \sinh \lambda & \cosh \lambda \end{pmatrix}$. (Note that this is only one choice of generators, and a highly symmetrical one at that.)

This is a picture of a constant time slice of $AdS_3$ quotiented by the A and B above. Each time slice is given as a copy of the hyperbolic half plane with the black arcs and green arcs glued together (by the maps A and B). These gluings yield what’s called the “torus wormhole”. Topologically it’s just a donut with a hole cut out. However, there’s a causal structure when you add time to the mix where the dotted lines act as a black hole horizon, so that a message sent from behind the horizon will never reach the boundary.

Lorentzian to Euclidean spacetimes

So far we have been talking about negatively curved Lorentzian manifolds. These are manifolds that have a notion of both “time” and “space.” The technical definition involves differential geometry and it is related to the signature of the metric. On the other hand, mathematicians know a great deal about negatively curved Euclidean manifolds. Euclidean manifolds only have a notion of “space” (so no time-like directions.) Given a multiboundary wormhole, which, by definition, is a quotient $AdS_3/\Gamma$ where $\Gamma$ is a discrete subgroup of Isom($AdS_3$), there’s a procedure to analytically continue this to a Euclidean hyperbolic manifold of the form $H^3/ \Gamma_E$ where $H^3$ is three dimensional hyperbolic space and $\Gamma_E$ is a discrete subgroup of the isometry group of $H^3$, which is $PSL(2, \mathbb{C})$. This analytic continuation procedure is well understood for time-symmetric spacetimes but it’s subtle for spacetimes that don’t have time-reversal symmetry. A discussion of this subtlety will be the topic of my next paper. To keep this blog post at a reasonable level of technical detail I’m going to need you to take it on faith that to every Lorentzian 3-manifold multiboundary wormhole there’s an associated Euclidean hyperbolic 3-manifold. Basically you need to believe that given a discrete subgroup $\Gamma$ of $SL(2, R) \times SL(2, R)$ there’s a procedure to obtain a discrete subgroup $\Gamma_E$ of $PSL(2, \mathbb{C})$. Discrete subgroups of $PSL(2, \mathbb{C})$ are called Kleinian groups and quotients of $H^3$ by groups of this form yield hyperbolic 3-manifolds. These Euclidean manifolds obtained by analytic continuation arise when studying the thermodynamics of these spacetimes or also when studying correlation functions; there’s a sense in which they’re physical.

TLDR: you start with a 2+1-d Lorentzian 3-manifold obtained as a quotient $AdS_3/\Gamma$ and analytic continuation gives a Euclidean 3-manifold obtained as a quotient $H^3/\Gamma_E$ where $H^3$ is 3-dimensional hyperbolic space and $\Gamma_E$ is a discrete subgroup of $PSL(2,\mathbb{C})$ (Kleinian group.)

Limit sets:

Every Kleinian group $\Gamma_E = \langle A_1, \dots, A_g \rangle \subset PSL(2, \mathbb{C})$ has a fractal that’s naturally associated with it. The fractal is obtained by finding the fixed points of every possible combination of generators and their inverses. Moreover, there’s a beautiful theorem of Patterson, Sullivan, Bishop and Jones that says the smallest eigenvalue $\lambda_0$ of the spectrum of the Laplacian on the quotient Euclidean spacetime $H^3 / \Gamma_E$ is related to the Hausdorff dimension of this fractal (call it $D$) by the formula $\lambda_0 = D(2-D)$. This smallest eigenvalue controls a number of the quantities of interest for this spacetime but calculating it directly is usually intractable. However, McMullen proposed an algorithm to calculate the Hausdorff dimension of the relevant fractals so we can get at the spectrum efficiently, albeit indirectly.
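The fixed-point recipe is easy to sketch. The two generators below are arbitrary invertible matrices chosen for illustration, not the generators from the paper’s section 2.3, and the word length is kept short so the code runs in a blink:

```python
import numpy as np
from itertools import product

def fixed_points(M):
    """Fixed points of z -> (az+b)/(cz+d), i.e. roots of cz^2 + (d-a)z - b = 0."""
    (a, b), (c, d) = M
    if abs(c) < 1e-12:                     # affine map: at most one finite fixed point
        return [b / (d - a)] if abs(d - a) > 1e-12 else []
    disc = np.sqrt(complex((a - d) ** 2 + 4 * b * c))
    return [((a - d) + disc) / (2 * c), ((a - d) - disc) / (2 * c)]

# Two illustrative generators; scaling a matrix doesn't change its Mobius map,
# so exact unit determinant is not needed for this sketch.
A = np.array([[1.9, 0.0], [0.0, 1 / 1.9]], dtype=complex)
B = np.array([[1.8, 1.0], [1.0, 1.2]], dtype=complex)
gens = [A, np.linalg.inv(A), B, np.linalg.inv(B)]

points = []
for word in product(range(4), repeat=6):   # every word of length 6 in the generators
    M = np.eye(2, dtype=complex)
    for g in word:
        M = M @ gens[g]
    points.extend(fixed_points(M))

# 'points' approximates the limit set; longer words sharpen the fractal.
```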

This is a screen grab of Figure 2 from our paper. These are two examples of fractals that emerge when studying these spacetimes. Both of these particular fractals have a 3-fold symmetry. They have this symmetry because these particular spacetimes came from looking at something called “n=3 Renyi entropies”. The number q indexes a one complex dimensional family of spacetimes that have this 3-fold symmetry. These Kleinian groups each have two generators that are described in section 2.3 of our paper.

What we did

Our primary result is a generalization of the Hawking-Page phase transition for multiboundary wormholes. To understand the thermodynamics (from a 3d quantum gravity perspective) one starts with a fixed boundary Riemann surface and then looks at the contributions to the partition function from each of the ways to fill in the boundary (each of which is a hyperbolic 3-manifold). We showed that the expected dominant contributions, which are given by handlebodies, are unstable when the kinetic operator $(\nabla^2 - m^2)$ is negative, which happens whenever the Hausdorff dimension of the limit set of $\Gamma_E$ is greater than the dimension of the lightest scalar field living in the bulk. One has to go pretty far down the quantum gravity rabbit hole (black hole) to understand why this is an interesting research direction to pursue, but at least anyone can appreciate the pretty pictures!