Four Caltech faculty members sat in front of me, in a bare seminar room. I stood beside a projector screen, explaining research I’d undertaken. The candidacy exam functions as a milepost in year three of our PhD program. The committee confirms that the student has accomplished research and should continue.

I was explaining a quantum-thermodynamics problem. I reviewed the problem’s classical doppelgänger and a strategy for solving the doppelgänger. Could you apply the classical strategy in the quantum problem? Up to a point. Beyond it, you’d need…

“Does anyone here like the Beatles?” I asked the committee. Three professors had never participated in an exam committee before. The question from the examinee appeared to startle them.

One committee member had participated in cartloads of committees. He recovered first, raising a hand.

The committee member—John Preskill—then began singing “Help!”

In the middle of my candidacy exam.

The moment remains one of the highlights of my career.

Throughout my PhD career, I’ve reported to John. I’ve emailed an update every week and requested a meeting about once a month. I sketch the work that’s firing me, relate my plans, and request feedback.

Much of the feedback, I’ve discerned over the years, condenses into aphorisms buried in our conversations. I doubt whether John has noticed his aphorisms. But they’ve etched themselves in me, and I hope they remain there.

“Think big.” What would impact science? Don’t buff a teapot if you could be silversmithing.

Education serves as “money in the bank.” Invest in yourself, and draw on the interest throughout your career.

“Stay broad.” (A stretching outward of both arms accompanies this aphorism.) Embrace connections with diverse fields. Breadth affords opportunities to think big.

“Keep it simple,” but “do something technical.” A teapot cluttered with filigree, spouts, and eighteen layers of gold leaf doesn’t merit a spot at the table. A Paul Revere does.

“Do what’s best for Nicole.” I don’t know how many requests to speak, to participate on committees, to explain portions of his lecture notes, to meet, to contribute to reports, and more John receives per week. The requests I receive must look, in comparison, like a mouse to a mammoth. But John exhorts me to guard my time for research—perhaps, partially, because he gives so much time, including to students.

“Move on.” If you discover an opportunity, study background information for a few months, seize the opportunity, wrap up the project, and seek the next window.

John has never requested my updates, but he’s grown used to them. I’ve grown used to how meetings end. Having brought him questions, I invite him to ask questions of me.

“Are you having fun?” he says.

I tell the Beatles story when presenting that quantum-thermodynamics problem in seminars.

“I have to digress,” I say when the “Help!” image appears. “I presented this slide at a talk at Caltech, where John Preskill was in the audience. Some of you know John.” People nod. “He’s a…mature gentleman.”

I borrowed the term from the apparel industry. “Mature gentleman” means “at a distinguished stage by which one deserves to have celebrated a birthday of his with a symposium.”

Many physicists lack fluency in apparel-industry lingo. My audience members take “mature” at face value.

Some audience members grin. Some titter. Some tilt their heads from side to side, as though thinking, “Eh…”

John has impact. He’s logged boatloads of technical achievements. He has the muscle of a scientific rhinoceros.

And John has fun. He doesn’t mind my posting an article about audience members giggling about him.

Friends ask me whether professors continue doing science after meriting birthday symposia, winning Nobel Prizes, and joining the National Academy of Sciences. I point to the number of papers with which John has, with coauthors, electrified physics over the past 20 years. He’s coauthored them because science is fun. It merits singing about during candidacy exams. Satisfying as passing the exam felt two years ago, I feel more honored when John teases me about my enthusiasm for science.

A year ago, I ate lunch with an alumnus who’d just graduated from our group. Students, he reported, have a tradition of gifting John a piece of art upon graduating. I relayed the report to another recent alumnus.

“Really?” the second alumnus said. “Maybe someone gave John a piece of art and then John invented the tradition.”

Regardless of its origin, the tradition appealed to me. John has encouraged me to blog as he’s encouraged me to do theoretical physics. Writing functions as art. And writing resembles theoretical physics: Each requires little more than a pencil, paper, and thought. Each requires creativity, aesthetics, diligence, and style. Each consists of ideas, of abstractions; each lacks substance but can outlive its creator. Let this article serve as a finger painting for John Preskill.

Thanks for five fun years.

*With my PhD-thesis committee, after my thesis defense. Photo credit to Nick Hutzler, who cracked the joke that accounts for everyone’s laughing. (Left to right: Xie Chen, Fernando Brandão, John Preskill, Nicole Yunger Halpern, Manuel Endres.)*

*If you’re in a hurry, or can’t stand the sound of my voice, you might prefer to read the transcript, which is appended below. Only by watching the video, however, can you follow the waving of my hands.*

*I grabbed the transcript from the Y Combinator blog post, so you can read it there if you prefer, but I’ve corrected some of the typos. (There are a few references to questions and comments that were edited out, but that shouldn’t cause too much confusion.)*

*Here we go:*

Craig Cannon [00:00:00] – Hey, how’s it going? This is Craig Cannon, and you’re listening to Y Combinator’s Podcast. Today’s episode is with John Preskill. John’s a theoretical physicist and the Richard P. Feynman Professor of Theoretical Physics at Caltech. He once won a bet with Stephen Hawking and he writes that it made him briefly almost famous. Basically, what happened is John and Kip Thorne bet that singularities could exist outside of black holes. After six years, Hawking conceded. He said that they were possible in very special, “non-generic conditions.” I’ll link up some more details to that in the description. In this episode, we cover what John’s been focusing on for years, which is quantum information, quantum computing, and quantum error correction. Alright, here we go. What was the revelation that made scientists and physicists think that a quantum computer could exist?

John Preskill [00:00:54] – It’s not obvious. A lot of people thought it couldn’t. The idea that a quantum computer would be powerful was emphasized over 30 years ago by Richard Feynman, the Caltech physicist. It was interesting how he came to that realization. Feynman was interested in computation his whole life. He had been involved during the war in Los Alamos. He was the head of the computation group. He was the guy who fixed the little mechanical calculators, and he had a whole crew of people who were calculating, and he figured out how to flow the work from one computer to another. All that kind of stuff. As computing technology started to evolve, he followed that. In the 1970s, a particle physicist like Feynman, that’s my background too, got really interested in using computers to study the properties of elementary particles like the quarks inside a nucleus, you know? We know a proton isn’t really a fundamental object. It’s got little beans rattling around inside, but they’re quantum beans. Gell-Mann, who’s good at names, called them quarks.

John Preskill [00:02:17] – Now we’ve had a theory since the 1970s of how quarks behave, and so in principle, you know everything about the theory, you can compute everything, but you can’t because it’s just too hard. People started to simulate that physics with digital computers in the ’70s, and there were some things that they could successfully compute, and some things they couldn’t because it was just too hard. The resources required, the memory, the time were out of reach. Feynman, in the early ’80s said nature is quantum mechanical damn it, so if you want a simulation of nature, it should be quantum mechanical. You should use a quantum system to behave like another quantum system. At the time, he called it a universal quantum simulator.

John Preskill [00:03:02] – Now we call it a quantum computer. The idea caught on about 10 years later when Peter Shor made the suggestion that we could solve problems which don’t seem to have anything to do with physics, which are really things about numbers like finding the prime factors of a big integer. That caused a lot of excitement, in part because the implications for cryptography are a bit disturbing. But then physicists — good physicists — started to consider, can we really build this thing? Some concluded and argued fairly cogently that no, you couldn’t because of this difficulty that it’s so hard to isolate systems from the environment well enough for them to behave quantumly. It took a few years for that to sort out at the theoretical level. In the mid ’90s we developed a theory called quantum error correction. It’s about how to encode the quantum state that you’d like to protect in such a clever way that even if there are some interactions with the environment that you can’t control, it still stays robust.

John Preskill [00:04:17] – At first, that was just kind of a theorist’s fantasy — it was a little too far ahead of the technology. But 20 years later, the technology is catching up, and now this idea of quantum error correction has become something you can do in the lab.

Craig Cannon [00:04:31] – How does quantum error correction work? I’ve seen a bunch of diagrams, so maybe this is difficult to explain, but how would you explain it?

John Preskill [00:04:39] – Well, I would explain it this way. I don’t think I’ve said the word entanglement yet, have I?

Craig Cannon [00:04:43] – Well, I have been checking off all the Bingo words.

John Preskill [00:04:45] – Okay, so let’s talk about entanglement because it’s part of the answer to your question, which I’m still not done answering, what is quantum physics? What do we mean by entanglement? It’s really the characteristic way, maybe the most important way that we know in which quantum is different from ordinary stuff, from classical. Now what does it mean, entanglement? It means that you can have a physical system which has many parts, which have interacted with one another, so it’s in kind of a complex correlated state of all those parts, and when you look at the parts one at a time it doesn’t tell you anything about the state of the whole thing. The whole thing’s in some definite state — there’s information stored in it — and now you’d like to access that information … Let me be a little more concrete. Suppose it’s a book.

John Preskill [00:05:40] – Okay? It’s a book, it’s 100 pages long. If it’s an ordinary book, 100 people could each take a page, and read it, they know what’s on that page, and then they could get together and talk, and now they’d know everything that’s in the book, right? But if it’s a quantum book written in qubits where these pages are very highly entangled, there’s still a lot of information in the book, but you can’t read it the way I just described. You can look at the pages one at a time, but a single page when you look at it just gives you random gibberish. It doesn’t reveal anything about the content of the book. Why is that? There’s information in the book, but it’s not stored in the individual pages. It’s encoded almost entirely in how those pages are correlated with one another. That’s what we mean by quantum entanglement: Information stored in those correlations which you can’t see when you look at the parts one at a time. You asked about quantum error correction?
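*John’s “quantum book” is easy to poke at numerically. In this little numpy sketch (my illustration, not anything from the episode), a two-qubit Bell state plays the role of a two-page book: the pair is jointly in a definite state, yet either “page” on its own is maximally mixed — random gibberish.*

```python
import numpy as np

# Bell state |Φ+> = (|00> + |11>) / sqrt(2): a definite two-qubit state.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi.conj())          # density matrix of the whole "book"

# Reduced state of qubit A: trace out qubit B.
rho = rho.reshape(2, 2, 2, 2)            # indices: (a, b, a', b')
rho_A = np.trace(rho, axis1=1, axis2=3)  # sum over b = b'

print(rho_A)  # -> [[0.5, 0], [0, 0.5]]: the identity/2, i.e. pure noise
```

*All the information sits in the correlations between the two pages; each marginal alone reveals none of it.*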

John Preskill [00:06:39] – What’s the basic idea? It’s to take advantage of that property of entanglement. Because let’s say you have a system of many particles. The environment is kind of kicking them around, it’s interacting with them. You can’t really completely turn off those interactions no matter how hard you try, but suppose we’ve encoded the information in entanglement. So, say, if you look at one atom, it’s not telling you anything about the information you’re trying to protect. The environment isn’t learning anything when it looks at the atoms one at a time.

John Preskill [00:07:15] – This is kind of the key thing — that what makes quantum information so fragile is that when you look at it, you disturb it. This ordinary water bottle isn’t like that. Let’s say we knew it was either here or here, and we didn’t know. I would look at it, I’d find out it’s here. I was ignorant of where it was to start with, and now I know. With a quantum system, when you look at it, you really change the state. There’s no way to avoid that. So if the environment is looking at it in the sense that information is leaking out to the environment, that’s going to mess it up. We have to encode the information so the environment, so to speak, can’t find out anything about what the information is, and that’s the idea of quantum error correction. If we encode it in entanglement, the environment is looking at the parts one at a time, but it doesn’t find out what the protected information is.
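*The simplest concrete instance of this idea is the three-qubit repetition code, which protects against bit flips. The sketch below is mine, and only a caricature of full quantum error correction, but it shows the key point: the decoder learns which qubit the environment flipped from parity checks alone, without ever learning the encoded amplitudes.*

```python
import numpy as np

def x_on(state, q):
    """Apply a bit flip (Pauli X) on qubit q of a 3-qubit state vector."""
    return np.flip(state.reshape(2, 2, 2), axis=q).reshape(8)

# Encode a|0> + b|1> as a|000> + b|111>: the info lives in correlations.
a, b = 0.6, 0.8
encoded = np.zeros(8)
encoded[0b000], encoded[0b111] = a, b

noisy = x_on(encoded, 1)  # the environment flips the middle qubit

# Syndrome: do qubits 0,1 agree? Do qubits 1,2 agree? (parity checks)
i = np.flatnonzero(noisy)[0]             # any basis state in the support
bits = (i >> 2 & 1, i >> 1 & 1, i & 1)
s01, s12 = bits[0] ^ bits[1], bits[1] ^ bits[2]

# The syndrome points at the flipped qubit without revealing (a, b).
flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]
corrected = x_on(noisy, flipped) if flipped is not None else noisy
print(np.allclose(corrected, encoded))  # -> True
```

*Both branches of the superposition give the same syndrome, which is exactly why measuring it doesn’t disturb the protected information.*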

Craig Cannon [00:08:06] – In other words, it’s kind of measuring probability the whole way along, right?

John Preskill [00:08:12] – I’m not sure what you mean by that.

Craig Cannon [00:08:15] – Is it Grover’s algorithm where, as quantum bits roll through the gates, the probability is determined of what information’s being passed through? Of what’s being computed?

John Preskill [00:08:30] – Grover’s algorithm is a way of sort of doing an exhaustive search through many possibilities. Let’s say I’m trying to solve some problem like a famous one is the traveling salesman problem. I’ve told you what the distances are between all the pairs of cities, and now I want to find the shortest route I can that visits them all. That’s a really hard problem. It’s still hard for a quantum computer, but not quite as hard because there’s a way of solving it, which is to try all the different routes, and measure how long they are, and then find the one that’s shortest, and you’ve solved the problem. The reason it’s so hard to solve is there’s such a vast number of possible routes. Now what Grover’s algorithm does is it speeds up that exhaustive search.

John Preskill [00:09:29] – In practice, it’s not that big a deal. What it means is that if you had the same processing speed, you can handle about twice as many cities before the problem becomes too hard to solve, as you could if you were using a classical processor. As far as what’s quantum about Grover, it takes advantage of the property in quantum physics that probabilities … tell me if I’m getting too inside baseball …

Craig Cannon [00:10:03] – No, no, this is perfect.

John Preskill [00:10:05] – That probabilities are the squares of amplitudes. This is interference. Again, this is another part of the answer. Well, we can spend the whole hour answering the question, what is quantum physics? Another essential part of it is what we call interference, and this is really crucial for understanding how quantum computing works. That is that probabilities add. If you know the probability of one alternative, and you know the probability of another, then you can add those together and find the probability that one or the other occurred. It’s not like that in quantum physics. The famous example is the double slit interference experiment. I’m sending electrons, let’s say — it could be basketballs, but it’s an easier experiment to do with electrons —

John Preskill [00:11:02] – at a screen, and there are two holes in the screen. You can try to detect the electron on the other side of the screen, and when you do that experiment many times, you can plot a graph showing where the electron was detected in each run, or make a histogram of all the different outcomes. And the graph wiggles, okay? If you could say there’s some probability of going through the first hole, and some probability of going through the second, and each time you detected it, it went through either one or the other, there’d be no wiggles in that graph. It’s the interference that makes it wiggle. The essence of the interference is that nobody can tell you whether it went through the first slit or the second slit. The question is sort of inadmissible. This interference then occurs when we can add up these different alternatives in a way which is different from what we’re used to. It’s not right to say that the electron was detected at this point because it had some probability of going through the first hole, and some probability of going through the second

John Preskill [00:12:23] – and we add those probabilities up. That doesn’t give the right answer. The different alternatives can interfere. This is really important for quantum computing because what we’re trying to do is enhance the probability or the time it takes to find the solution to a problem, and this interference can work to our advantage. We want to have, when we’re doing our search, we want to have a higher chance of getting the right answer, and a lower chance of getting the wrong answer. If the different wrong answers can interfere, they can cancel one another out, and that enhances the probability of getting the right answer. Sorry it’s such a long-winded answer, but this is how Grover’s algorithm works.
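*The amplitude arithmetic John is describing fits in a few lines. With toy numbers of my own choosing, two alternatives of equal probability can cancel completely when you add amplitudes before squaring — a dark fringe in the two-slit pattern:*

```python
# Amplitudes for "through slit 1" and "through slit 2" at one detector spot.
a1 = 0.5        # toy amplitude for slit 1
a2 = -0.5       # slit 2 with opposite phase: the alternatives cancel

p_classical = abs(a1) ** 2 + abs(a2) ** 2  # add probabilities: 0.5
p_quantum = abs(a1 + a2) ** 2              # add amplitudes, then square: 0.0

print(p_classical, p_quantum)  # -> 0.5 0.0
```

*Flipping the sign of `a2` to match `a1` instead gives 1.0 — a bright fringe. Those sign flips are the “wiggles,” and arranging wrong answers to cancel this way is what quantum algorithms exploit.*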

John Preskill [00:13:17] – It can speed up exhaustive search by taking advantage of that interference phenomenon.
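*Grover’s iteration itself is short enough to simulate with a plain state vector — this is my sketch, not code from the episode. The oracle flips the sign of the marked item’s amplitude, and the “diffusion” step reflects every amplitude about the mean; after about (π/4)√N rounds, the marked item soaks up nearly all the probability:*

```python
import numpy as np

def grover_search(n_items, marked, n_iter):
    """Simulate Grover search over n_items with an explicit state vector."""
    amp = np.full(n_items, 1 / np.sqrt(n_items))  # uniform superposition
    for _ in range(n_iter):
        amp[marked] *= -1              # oracle: flip the marked amplitude
        amp = 2 * amp.mean() - amp     # diffusion: reflect about the mean
    return np.abs(amp) ** 2            # probabilities = squared amplitudes

N = 1024
probs = grover_search(N, marked=123, n_iter=int(np.pi / 4 * np.sqrt(N)))
print(probs[123])  # close to 1 after only ~25 iterations, versus
                   # ~N/2 tries for a classical exhaustive search
```

*That √N iteration count is the quadratic speedup: with the same processing speed, roughly twice as many cities in the traveling-salesman search become reachable, just as John says.*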

Craig Cannon [00:13:20] – Well this is kind of one of the underlying questions among many of the questions from Twitter. You’ve hit our record for most questions asked. Basically, many people are wondering what quantum computers really will do if and when it becomes a reality that they outperform classical computers. What are they going to be really good at?

John Preskill [00:13:44] – Well, you know what? I’m not really sure. If you look at the history of technology, it would be hubris to expect me to know. It’s a whole different way of dealing with information. Quantum information is not just … a quantum computer is not just a faster way of computing. It deals with information in a completely new way because of this interference phenomenon, because of entanglement that we’ve talked about. We have limited vision when it comes to predicting decades out what the impact will be of an entirely new way of doing things. Information processing, in particular. I mean you know this well. We go back to the 1960s, and people are starting to put a few transistors on a chip. Where is that going to lead? Nobody knew.

Craig Cannon [00:14:44] – Even early days of the internet.

John Preskill [00:14:45] – Yeah, good example.

Craig Cannon [00:14:46] – Even the first browser. No one really knew what anyone was going to do with it. It makes total sense.

John Preskill [00:14:52] – For good or ill. Yeah. But we have some ideas, you know? I think … why are we confident there will be some transformative effect on society? Of the things we know about, and I emphasize again, probably the most important ones are things we haven’t thought of when it comes to applications of quantum computing, the ones which will affect everyday life, I think, are better methods for understanding and inventing new materials, new chemical compounds. Things like that can be really important. If you find a better way of capturing carbon by designing a better catalyst, or you can design pharmaceuticals that have new effects, materials that have unusual properties. These are quantum physics problems because those properties of the molecule or the material really have to do with the underlying quantum behavior of the particles, and we don’t have a good way for solving such problems or predicting that behavior using ordinary digital computers. That’s what a quantum computer is good at. It’s good — but maybe not the only thing it’s good at — one thing it should certainly be good at is telling us quantitatively how quantum systems behave. In the two contexts I just mentioned, there’s little question that there will be practical impact of that.

Craig Cannon [00:16:37] – It’s not just doing the traveling salesman problem through the table of elements for why it can find these compounds.

John Preskill [00:16:49] – No. If it were, that wouldn’t be very efficient.

Craig Cannon [00:16:52] – Exactly.

John Preskill [00:16:53] – Yeah. No, it’s much trickier than that. Like I said, the exhaustive search, though conceptually it’s really interesting that quantum can speed it up because of interference, from a practical point of view it may not be that big a deal. It means that, well like I said, in the same amount of time you can solve an instance which is twice as big of the problem. What we really get excited about are the so-called exponential speed ups. That was why Shor’s algorithm was exciting in 1994, because factoring large numbers was a problem that had been studied by smart people for a long time, and on that basis, the fact that there weren’t any fast ways of solving it was pretty good evidence it’s a hard problem. Actually, we don’t know how to prove that from first principles. Maybe somebody will come along one day and figure out how to solve factoring very fast on a digital computer. It doesn’t seem very likely because people have been trying for so long to solve problems like that, and it’s just intractable with ordinary computers. You could say the same thing about these quantum physics problems. Maybe some brilliant graduate student is going to drop a paper on the arXiv tomorrow which will say, “Here, I solved quantum chemistry, and I can do it on a digital computer.” But we don’t think that’s very likely because we’ve been working pretty hard on these problems for decades and they seem to be really hard. Those cases, like these number theoretic problems,

John Preskill [00:18:40] – which have cryptological implications, and tasks for simulating the behavior of quantum systems, we’re pretty sure those are hard problems classically, and we’re pretty sure quantum computers … I mean we have algorithms that have been proposed, but which we can’t really run currently because our quantum computers aren’t big enough on the scale that’s needed to solve problems people really care about.

Craig Cannon [00:19:09] – Maybe we should jump to one of the questions from Twitter which is related to that. Travis Scholten (@Travis_Sch) asked, what are the most pressing problems in physics, let’s say specifically around quantum computers, that you think substantial progress ought to be made in to move the field forward?

John Preskill [00:19:27] – I know Travis. He was an undergrad here. How you doing, Travis? The problems that we need to solve to make quantum computing closer to realization at the level that would solve problems people care about? Well, let’s go over where we are now.

Craig Cannon [00:19:50] – Yeah, definitely.

John Preskill [00:19:51] – People have been working on quantum hardware for 20 years, working hard, and there are a number of different approaches to building the hardware, and nobody really knows which is going to be the best. I think we’re far from collapsing to one approach which everybody agrees has the best long-term prospects for scalability. And so it’s important that a lot of different types of hardware are being pursued. We can come back to what some of the different approaches are later. Where are we now? We think in a couple of years we’ll have devices with about 50 qubits to 100, and we’ll be able to control them pretty well. That’s an interesting range because even though it’s only 50 to 100 qubits, doesn’t sound like that big a deal, but that’s already too many to simulate with a digital computer, even with the most powerful supercomputers today. From that point of view, these relatively small, near-term quantum computers which we’ll be fooling around with over the next five years or so, are doing something that’s kind of super-classical.

John Preskill [00:21:14] – At least, we don’t know how to do exactly the same things with ordinary computers. Now that doesn’t mean they’ll be able to do anything that’s practically important, but we’re going to try. We’re going to try, and there are ideas about things we’ll try out, including baby versions of these problems in chemistry, and materials, and ways of speeding up optimization problems. Nobody knows how well those things are going to work at these small scales. Part of the reason is not just the number of qubits is small, but they’re also not perfect. We can perform elementary operations on pairs of qubits, which we call quantum gates like the gates in ordinary logic. But they have an error rate a little bit below one error every 100 gates. If you have a circuit with 1,000 gates, that’s a lot of noise.

Craig Cannon [00:22:18] – Exactly. Does, for instance, a 100-qubit quantum computer really mean a 100-qubit quantum computer, or do you need a certain amount of backup going on?

John Preskill [00:22:29] – In the near term, we’re going to be trying out, and probably we have the best hopes for, kind of hybrid classical-quantum methods with some kind of classical feedback. You try to do something on the quantum computer, you make a measurement that gives you some information, then you change the way you did it a little bit, and try to converge on some better answer. That’s one possible way of addressing optimization that might be faster on a quantum computer. But I just wanted to emphasize that the number of qubits isn’t the only metric. How good they are, and in particular, the reliability of the gates, how well we can perform them … that’s equally important. Anyway, coming back to Travis’ question, there are lots of things that we’d like to be able to do better. But just having much better qubits would be huge, right? If you … more or less, with the technology we have now, you can have a gate error rate of a few parts in 1,000, you know? If you can improve that by orders of magnitude, then obviously, you could run bigger circuits. That would be very enabling.

John Preskill [00:23:58] – Even if you stick with 100 qubits just by having a circuit with more depth, more layers of gates, that increases the range of what you could do. That’s always going to be important. Because, I mean look at how crappy that is. A gate error rate, even if it’s one part in 1,000, that’s pretty lousy compared to if you look at where–

Craig Cannon [00:24:21] – Your phone has a billion transistors in it. Something like that, and 0%–

John Preskill [00:24:27] – You don’t worry about the … it’s gotten to the point where there is some error protection built in at the hardware level in a processor, because I mean, we’re doing these crazy things like going down from the 11 nanometer scale for features on a chip.

Craig Cannon [00:24:45] – How are folks trying to deal with interference right now?

John Preskill [00:24:50] – You mean, what types of devices? Yeah, so that’s interesting too because there are a range of different ways to do it. I mentioned that we could store information, we could make a qubit out of a single atom, for example. That’s one approach. You have to control a whole bunch of atoms and get them to interact with one another. One way of doing that is with what we call trapped ions. That means the atoms have electrical charges. That’s a good thing because then you could control them with electric fields. You could hold them in a trap, and you can isolate them, like I said, in a very high vacuum so they’re not interacting too much with other things in the laboratory, including stray electric and magnetic fields. But that’s not enough because you got to get them to talk to one another. You got to get them to interact. We have this set of desiderata, which are kind of in tension with one another. On the one hand, we want to isolate the qubits very well. On the other hand, we want to control them from the outside and get them to do what we want them to do, and eventually, we want to read them out. You have to be able to read out the result of the computation. But the key thing is the control. You could have two of those qubits in your device interact with one another in a specified way, and to do that very accurately you have to have some kind of bus that gets the two to talk to one another.

John Preskill [00:26:23] – The way they do that in an ion trap is pretty interesting. It’s by using lasers and controlling how the ions vibrate in the trap, and with a laser kind of excite wiggles of the ion, and then by determining whether the ions are wiggling or not, you can go address another ion, and that way you can do a two-qubit interaction. You can do that pretty well. Another way is really completely different. What I just described was encoding information at the one atom level. But another way is to use superconductivity — circuits in which electric current flows without any dissipation. In that case, you have a lot of freedom to sort of engineer the circuits to behave in a quantum way. There are many nuances there, but the key thing is that you can encode information now in a system that might involve the collective motion of billions of electrons, and yet you can control it as though it were a single atom. I mean, here’s one oversimplified way of thinking about it.

John Preskill [00:27:42] – Suppose you have a little loop of wire, and there’s current flowing in the loop. It’s a superconducting wire so it just keeps flowing. Normally, there’d be resistance, which would dissipate that as heat, but not for the superconducting circuit, which of course, has to be kept very cold so it stays superconducting. But you can imagine in this little loop that the current is either circulating clockwise or counterclockwise. That’s a way of encoding information. It could also be both at once, and that’s what makes it a qubit.

Craig Cannon [00:28:14] – Right.

John Preskill [00:28:15] – And so in that case, even though it involves lots of particles, the magic is that you can control that system extremely well. I mentioned individual electrons. That’s another approach. Put the qubit in the spin of a single electron.

Craig Cannon [00:28:32] – You also mentioned better qubits. What did you mean by that?

John Preskill [00:28:35] – Well, what I really care about is how well I can do the gates. There’s a whole other approach, which is motivated by the desire to have much, much better control over the quantum information than we do in those systems that I mentioned so far, superconducting circuits and trapped ions. That’s actually what Microsoft is pushing very hard. We call it topological quantum computing. Topological is a word physicists and mathematicians love. It means, well, we’ll come back to what it means. Anyway, let me just tell you what they’re trying to do. They’re trying to make a much, much better qubit, which they can control much, much better using a completely different hardware approach.

Craig Cannon [00:29:30] – Okay.

John Preskill [00:29:32] – It’s very ambitious because at this point, it’s not even clear they have a single qubit, but if that approach is successful, and it’s making progress, we will see a validated qubit of this type soon. Maybe next year. Nobody really knows where it goes from there, but suppose it’s the case that you could do a two-qubit gate with an error rate of one in a million instead of one in 1,000. That would be huge. Now, scaling all these technologies up, is really challenging from a number of perspectives, including just the control engineering.

Craig Cannon [00:30:17] – How are they doing it or attempting to do it?

John Preskill [00:30:21] – You know, you could ask, where did all this progress come from over 20 years or so? For example, with the superconducting circuits, a sort of crucial measure is what we call the coherence time of the qubit, which, roughly speaking, means how much it interacts with the outside world. The longer the coherence time, the better. The rate of what we call decoherence is essentially how much it’s getting buffeted around by outside influences. For the superconducting circuits, those coherence times have increased by about a factor of 10 every three years, going back 15 years or so.

Craig Cannon [00:31:06] – Wow.

John Preskill [00:31:07] – Now, it won’t necessarily go on like that indefinitely, but achieving that type of progress has required better materials, better fabrication, better control. The way you control these things is with microwave circuitry, not that different from the kind of things that are going on in communication devices. All those things are important, but going forward, the control is really the critical thing. Coherence times are already getting pretty long; having them longer is certainly good, but the key thing is to get two qubits to interact just the way you want them to. Now, I keep saying the key thing is the environment, but it’s not the only key thing, right? Because you have some qubit, like that electron spin, and one way I said it is that it can be both up and down at the same time. Well, there’s a simpler way of saying that: it might not point either up or down. It might point some other way. There really is a continuum of ways it could point. That’s not like a bit. See, it’s much easier to stabilize a bit because it’s only got two states.

John Preskill [00:32:31] – But if it can kind of wander around in the space of possible configurations for a qubit, that makes it much harder to control. People have gotten better at that, a lot better at that in the last few years.

Craig Cannon [00:32:44] – Interesting. Joshua Harmon asked, what engineering strategy for quantum computers do you think has the most promise?

John Preskill [00:32:53] – Yeah, so I mentioned some of these different approaches, and I guess I’ll interpret the question as, which one is the winning horse? I know better than to answer that question! They’re all interesting. For the near term, the most advanced are superconducting circuits and trapped ions, which is why I mentioned those first. I think that will remain true over the next five to 10 years. Other technologies have the potential — like these topologically protected qubits — to surpass those, but it’s not going to happen real soon. I kind of like superconducting circuits because there’s so much phase space of things you can do with them: ways you can engineer and configure them, and imagine scaling them up.

John Preskill [00:33:54] – They have the advantage of being faster. The cycle time, time to do a gate, is faster than with the trapped ions. Just the basic physics of the interactions is different. In the long term, those electron spins could catapult ahead of these other things. That’s something that you can naturally do in silicon, and it’s potentially easy to integrate with silicon technology. Right now, the qubits and gates aren’t as good as the other technologies, but that can change. I mean, from a theorist’s perspective, this topological approach is very appealing. We can imagine it takes off maybe 10 years from now and it becomes the leader. I think it’s important to emphasize we don’t really know what’s going to scale the best.

Craig Cannon [00:34:50] – Right. And are there multiple attempts being made around programming quantum computers?

John Preskill [00:34:55] – Yeah. I mean, some of these companies that are working on quantum technology now (which includes well-known big players like IBM, and Google, and Microsoft, and Intel, but also a lot of startups) are trying to encompass the full stack. So they’re interested in the hardware, and the fabrication, and the control technology, but also the software, the applications, the user interface. All those things are certainly going to be important eventually.

Craig Cannon [00:35:38] – Yeah, they’re pushing it almost to like an AWS layer. Where you interact with your quantum computer in a server farm and you don’t even touch it.

John Preskill [00:35:49] – That’s how it will be in the near term. Most of us won’t have a quantum computer sitting on our desktop, or in our pocket. Maybe someday. In the near term, it’ll be in the Cloud, and you’ll be able to run applications on it by some kind of web interface. Ideally, that should be designed so the user doesn’t have to know anything about quantum physics in order to program or use it, and I think that’s part of what some of these companies are moving toward.

Craig Cannon [00:36:24] – Do you think it will get to the level where it’s in your pocket? How do you deal with that when you’re below one kelvin?

John Preskill [00:36:32] – Well, if it’s in your pocket, it probably won’t be one kelvin.

Craig Cannon [00:36:35] – Yeah, probably not.

John Preskill [00:36:38] – What do you do? Well, there’s one approach, as an example, which I guess I mentioned in passing before, where maybe it doesn’t have to be at such low temperature, and that’s nuclear spins. Because they’re very weakly interacting with the outside world, you can have quantum information in a nuclear spin, which — I’m not saying that it would be undisturbed for years, but seconds, which is pretty good. And you can imagine that getting significantly longer. Someday you might have a little quantum smart card in your pocket. The nice thing about that particular technology is you could do it at room temperature. Still have long coherence times. If you go to the ATM and you’re worried that there’s a rogue bank that’s going to steal your information, one solution to that problem — I’m not saying there aren’t other solutions — is to have a quantum card where the bank will be able to authenticate it without being able to forge it.

Craig Cannon [00:37:54] – We should talk about the security element. Kevin Su asked what risk would quantum computers pose to current encryption schemes? So public key, and what changes should people be thinking about if quantum computers come in the next five years, 10 years?

John Preskill [00:38:12] – Yeah. Quantum computers threaten those systems that are in widespread use. Whenever you’re using a web browser and you see that little padlock and you’re at an HTTPS site, you’re using a public key cryptosystem to protect your privacy. Those cryptosystems rely for their security on the presumed hardness of computational problems. That is, it’s possible to crack them, but it’s just too hard. RSA, which is one of the ones that’s widely used … as typically practiced today, to break it you’d have to do something like factor a number which is over 2000 bits long, 2048. That’s too hard to do now. But that’s what quantum computers will be good at. Another one that’s widely used is called elliptic curve cryptography. Doesn’t really matter exactly what it is.
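
The reason factoring is "what quantum computers will be good at": factoring reduces to finding the order of a number modulo N, and that order-finding step is what Shor's algorithm speeds up exponentially. A toy classical sketch of the reduction, with illustrative small numbers (a real quantum computer would replace the brute-force `order` loop):

```python
# Sketch: the number-theoretic core behind quantum attacks on RSA.
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r = 1 (mod n); the step a quantum computer speeds up."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor(n, a):
    """Try to split n using the order of a modulo n (a must be coprime to n)."""
    assert gcd(a, n) == 1
    r = order(a, n)
    if r % 2:
        return None                 # odd order: retry with a different a
    y = pow(a, r // 2, n)           # a**(r/2) mod n
    f = gcd(y - 1, n)
    return f if 1 < f < n else None

print(factor(15, 7))                # -> 3, a nontrivial factor of 15
```

For a 2048-bit RSA modulus the brute-force loop is hopeless classically, which is exactly why the presumed hardness holds today and why Shor's speedup breaks it.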

John Preskill [00:39:24] – But the point is that it’s also vulnerable to quantum attack, so we’re going to have to protect our privacy in different ways when quantum computers are prevalent.

Craig Cannon [00:39:37] – What are the attempts being made right now?

John Preskill [00:39:39] – There are two main classes of attempts. One is just to come up with a cryptographic protocol not so different conceptually from what’s done now, but based on a problem that’s hard for quantum computers.

Craig Cannon [00:39:59] – There you go.

John Preskill [00:40:02] – It turns out that what has sort of become the standard way doesn’t have that feature, and there are alternatives that people are working on. We speak of post-quantum cryptography, meaning the protocols that we’ll have to use when we’re worried that our adversaries have quantum computers. There’s a long list of proposed cryptosystems by now which people think are candidates for being quantum resistant, for being unbreakable or hard to break by quantum computers, but I don’t think there’s any one that the world has sufficient confidence in yet that we’re all going to switch over. But it’s certainly time to be thinking about it. When people worry about their privacy, of course different users have different standards, but the US Government sometimes says they would like a system to stay secure for 50 years. They’d like to be able to use it for 20, roughly speaking, and then have the intercepted traffic be protected for another 30 after that. I don’t think, though I could be wrong, that we’re likely to have quantum computers that can break those public key cryptosystems in 10 years, but in 50 years it seems not unlikely,

John Preskill [00:41:33] – and so we should really be worrying about it. The other one is actually using quantum communication for privacy. In other words, if you and I could send qubits to one another instead of bits, it opens up new possibilities. The way to think about these public key schemes — or one way — that we’re using now, is I want you to send me a private message, and I can send you a lockbox. It has a padlock on it, but I keep the key, okay? But you can close up the box and send it to me. But I’m the only one with the key. The key thing is that if you have the padlock you can’t reverse engineer the key. Of course, it’s a digital box and key, but that’s the idea of public key. The idea of what we call quantum key distribution, which is a particular type of quantum cryptography, is that I can actually send you the key, or you can send me your key, but why can’t any eavesdropper then listen in and know the key? Well it’s because it’s quantum, and remember, it has that property that if you look at it, you disturb it.

John Preskill [00:42:59] – So if you collect information about my key, or if the adversary does, that will cause some change in the key, and there are ways in which we can check whether what you received is really what I sent. And if it turns out it’s not, or it has too many errors in it, then we’ll be suspicious that there was an adversary who tampered with it, and then we won’t use that key. Because we haven’t used it yet — we’re just trying to establish the key. We do the test to see whether an adversary interfered. If it passes the test, then we can use the key. And if it fails the test, we throw that key away and we try again. That’s how quantum cryptography works, but it requires a much different infrastructure than what we’re using now. We have to be able to send qubits … well, it’s not completely different because you can do it with photons. Of course, that’s how we communicate through optical fiber now — we’re sending photons. It’s a little trickier sending quantum information through an optical fiber, because of that issue that interactions with the environment can disturb it. But nowadays, you can send quantum information through an optical fiber over tens of kilometers with a low enough error rate so it’s useful for communication.
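
The check-for-disturbance logic described here can be sketched as a classical simulation in the style of BB84, the standard quantum key distribution protocol. The simulation only tracks the bookkeeping (bits and basis choices), not real photons:

```python
# Sketch: BB84-style key exchange. An eavesdropper who measures in a
# random basis disturbs about a quarter of the kept bits, which is how
# the parties detect her before using the key.
import random

def bb84(n, eavesdrop=False, seed=0):
    """Simulate n rounds; return the error rate on the kept (basis-matched) bits."""
    rng = random.Random(seed)
    errors = matched = 0
    for _ in range(n):
        bit = rng.randint(0, 1)              # Alice's random key bit
        alice_basis = rng.randint(0, 1)      # 0 = Z basis, 1 = X basis
        send_bit, send_basis = bit, alice_basis
        if eavesdrop:                        # Eve measures in a random basis...
            eve_basis = rng.randint(0, 1)
            if eve_basis != alice_basis:
                send_bit = rng.randint(0, 1) # ...wrong basis gives a random result
            send_basis = eve_basis           # ...and the disturbed photon is re-sent
        bob_basis = rng.randint(0, 1)
        bob_bit = send_bit if bob_basis == send_basis else rng.randint(0, 1)
        if bob_basis == alice_basis:         # bases are compared publicly afterwards
            matched += 1
            errors += (bob_bit != bit)
    return errors / matched

print(bb84(2000))                  # no eavesdropper: error rate 0.0
print(bb84(2000, eavesdrop=True))  # eavesdropper: roughly 25% errors reveal her
```

If the test sample shows errors, the key is thrown away and the parties try again, exactly as described above; no secret has been exposed because the key was never used.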

Craig Cannon [00:44:22] – Wow.

John Preskill [00:44:23] – Of course, we’d like to be able to scale that up to global distances.

Craig Cannon [00:44:26] – Sure.

John Preskill [00:44:27] – And there are big challenges in that. But anyway, so that’s another approach to the future of privacy that people are interested in.

Craig Cannon [00:44:35] – Does that necessitate quantum computers on both ends?

John Preskill [00:44:38] – Yes, but not huge ones. The reason … well, yes and no. At the scale of tens of kilometers, no. You can do that now. There are prototype systems that are in existence. But if you really want to scale it up — in other words, to send things longer distance — then you have to bring this quantum error correction idea into the game.

John Preskill [00:45:10] – Because at least with our current photonics technology, there’s no way I can send a single photon from here to China without there being a very high probability that it gets lost in the fiber somewhere. We have to have what we call quantum repeaters, which can boost the signal. But it’s not like the usual type of repeater that we have in communication networks now. The usual type is you measure the signal, and then you resend it. That won’t work for quantum because as soon as you measure it you’re going to mess it up. You have to find a way of boosting it without knowing what it is. Of course, it’s important that it works that way because otherwise, the adversary could just intercept it and resend it. And so it will require some quantum processing to get that quantum error correction in the quantum repeater to work. But it’s a much more modest scale quantum processor than we would need to solve hard problems.

Craig Cannon [00:46:14] – Okay. Gotcha. What are the other things you’re both excited about, and worried about for potential business opportunities? Snehan, I’m mispronouncing names all the time, Snehan Kekre asks: budding entrepreneurs, what should they be thinking about in the context of quantum computing?

John Preskill [00:46:37] – There’s more to quantum technology than computing. Something which has good potential to have an impact in the relatively near future is improved sensing. Quantum systems, partly because of that property I keep emphasizing, that they can’t be perfectly isolated from the outside, are good at sensing things. Sometimes, you want to detect it when something in the outside world messes around with your qubit. Again, using this technology of nuclear spins, which I mentioned you can potentially do at room temperature, you can make a pretty good sensor, and it can potentially achieve higher sensitivity and spatial resolution, looking at things on shorter distance scales than other existing sensing technology can. One of the things people are excited about is the biological and medical implications of that.

John Preskill [00:47:53] – If you can monitor the behavior of molecular machines, probe biological systems at the molecular level using very powerful sensors, that would surely have a lot of applications. One interesting question you can ask is, can you use these quantum error correction ideas to make those sensors even more powerful? That’s another area of current basic research, where you could see significant potential economic impact.

Craig Cannon [00:48:29] – Interesting. In terms of your research right now, what are you working on that you find both interesting and incredibly difficult?

John Preskill [00:48:40] – Everything I work on–

Craig Cannon [00:48:41] – 100%.

John Preskill [00:48:42] – Is both interesting and incredibly difficult. Well, let me change direction a little from what we’ve been talking about so far. I’m going to tell you a little bit about me.

Craig Cannon [00:48:58] – Sure.

John Preskill [00:49:00] – I didn’t start out interested in information in my career. I’m a physicist. I was trained as an elementary particle theorist, studying the fundamental interactions and the elementary particles. That drew me into an interest in gravitation because one thing that we still have a very poor understanding of is how gravity fits together with the other fundamental interactions. The way physicists usually say it is we don’t have a quantum theory of gravity, at least not one that we think is complete and satisfactory. I’ve been interested in that question for many decades, and then got sidetracked because I got excited about quantum computing. But you know what? I’ve always looked at quantum information not just as a technology. I’m a physicist, I’m not an engineer. I’m not trying to build a better computer, necessarily, though that’s very exciting, and worth doing, and if my work can contribute to that, that’s very pleasing. I see quantum information as a new frontier in the exploration of the physical sciences. Sometimes I call it the entanglement frontier. Physicists, we like to talk about frontiers, and stuff. Short distance frontier. That’s what we’re doing at CERN in the Large Hadron Collider, trying to discern new properties of matter at distances which are shorter than we’ve ever been able to explore before.

John Preskill [00:50:57] – There’s a long distance frontier in cosmology. We’re trying to look deeper into the universe and understand its structure and behavior at earlier times. Those are both very exciting frontiers. This entanglement frontier is increasingly going to be at the forefront of basic physics research in the 21st century. By entanglement frontier, I just mean scaling up quantum systems to larger and larger complexity where it becomes harder and harder to simulate those systems with our existing digital tools. That means we can’t very well anticipate the types of behavior that we’re going to see. That’s a great opportunity for new discovery, and that’s part of what’s going to be exciting even in the relatively near term. When we have 100 qubits … there are some things that we can do to understand the behavior of the dynamics of a highly complex system of 100 qubits that we’ve never been able to experimentally probe before. That’s going to be very interesting. But what we’re starting to see now is that these quantum information ideas are connecting to these fundamental questions about gravitation, and how to think about it quantumly. And it turns out, as is true for most of the broader implications of quantum physics, the key thing is entanglement.

John Preskill [00:52:36] – We can think of the microscopic structure of spacetime, the geometry of where we live. Geometry just means who’s close to who else. If we’re in the auditorium, and I’m in the first row and you’re in the fourth row, the geometry is how close we are to one another. Of course, that’s very fundamental in both space and time. How far apart are we in space? How far apart are we in time? Is geometry really a fundamental thing, or is it something that’s kind of emergent from some even more fundamental concept? It seems increasingly likely that it’s really an emergent property.

John Preskill [00:53:29] – That there’s something deeper than geometry. What is it? We think it’s quantum entanglement. That you can think of the geometry as arising from quantum correlations among parts of a system. That’s really what defines who’s close to who. We’re trying to explore that idea more deeply, and one of the things that comes in is the idea of quantum error correction. Remember the whole idea of quantum error correction was that we could make a quantum system behave the way we want it to because it’s well-protected against the damaging effects of noise. It seems like quantum error correction is part of the deep secret of how spacetime geometry works. It has a kind of intrinsic robustness coming from these ideas of quantum error correction that makes space meaningful, so that it doesn’t just evaporate when you tap on it. If you wanted to, you could think of the spacetime, the space that you’re in and the space that I’m in, as parts of a system that are entangled with one another.

John Preskill [00:54:45] – What would happen if we broke that entanglement and your part of space became disentangled from my part? Well what we think that would mean is that there’d be no way to connect us anymore. There wouldn’t be any path through space that starts over here with me and ends with you. It’d become broken apart into two pieces. It’s really the entanglement which holds space together, which keeps it from falling apart into little pieces. We’re trying to get a deeper grasp of what that means.

Craig Cannon [00:55:19] – How do you make any progress on that? That seems like the most unbelievably difficult problem to work on.

John Preskill [00:55:26] – It’s difficult because, well for a number of reasons, but in particular, because it’s hard to get guidance from experiment, which is how physics historically–

Craig Cannon [00:55:38] – All science.

John Preskill [00:55:38] – Has advanced.

Craig Cannon [00:55:39] – Yeah.

John Preskill [00:55:41] – Although it was fun a moment ago to talk about what would happen if we disentangled your part of space from mine, I don’t know how to do that in the lab right now. Of course, part of the reason is we have the audacity to think we can figure these things out just by thinking about them. Maybe that’s not true. Nobody knows, right? We should try. Solving these problems is a great challenge, and it may be that the apes that evolved on Earth don’t have the capacity to understand things like the quantum structure of spacetime. But maybe we do, so we should try. Now in the longer term, and maybe not such a long term, maybe we can get some guidance from experiment. In particular, what we’re going to be doing with quantum computers and the other quantum technologies that are becoming increasingly sophisticated in the next couple of decades, is we’ll be able to control very well highly entangled complex quantum systems. That should mean that in a laboratory, on a tabletop, I can sort of make my own little toy space time …

John Preskill [00:57:02] – with an emergent geometry arising from the properties of that entanglement, and I think that’ll teach us lessons, because systems like that, being so highly entangled, are the types of system that digital computers can’t simulate. It seems like only quantum computers are potentially up to the task. So that won’t be quite the same as disentangling your side of the room from mine, in real life. But we’d be able to do it in a laboratory setting using model systems, which I think would help us to understand the basic principles better.

Craig Cannon [00:57:39] – Wild. Yeah, desktop space time seems pretty cool, if you could figure it out.

John Preskill [00:57:43] – Yeah, it’s pretty fundamental. We didn’t really talk about what people sometimes call quantum non-locality. Well, we did implicitly, but not in so many words. It’s another way of describing quantum entanglement, actually. There’s this notion of Bell’s theorem, that when you look at the correlations among the parts of a quantum system, they’re different from any possible classical correlations. Some things that you read give you the impression that you can use that to instantaneously send information over long distances. It is true that if we have two qubits, electron spins, say, and they’re entangled with one another, then what’s kind of remarkable is that I can measure my qubit to see along some axis whether it’s up or down, and you can measure yours, and we will get perfectly correlated results. When I see up, you’ll see up, say, and when I see down, you’ll see down. And sometimes, people make it sound like that’s remarkable. That’s not remarkable in itself. Somebody could’ve flipped a pair of coins, you know,

John Preskill [00:59:17] – so that they came up both heads or both tails, and given one to you and one –

Craig Cannon [00:59:20] – Split them apart.

John Preskill [00:59:20] – to me.

Craig Cannon [00:59:21] – Yeah.

John Preskill [00:59:22] – And gone a light year apart, and then we both … hey, mine’s heads. Mine’s heads too!

Craig Cannon [00:59:24] – And then they call it quantum teleportation on YouTube.

John Preskill [00:59:28] – Yeah. Of course, what’s really important about entanglement that makes it different from just those coins is that there’s more than one way of looking at a qubit. We have what we call complementary ways of measuring it, so you can ask whether it’s up or down along this axis or along that axis. There’s nothing like that for the coins. There’s just one way to look at it. What’s cool about entanglement is that we’ll get perfectly correlated results if we both measure in the same way, but there’s more than one possible way that we could measure. What sometimes gets said, or the impression people get, is that that means that when I do something to my qubit, it instantaneously affects your qubit, even if we’re on different sides of the galaxy. But that’s not what entanglement does. It just means they’re correlated in a certain way.

John Preskill [01:00:30] – When you look at yours, if we have maximally entangled qubits, you just see a random bit. It could be a zero or a one, each occurring with probability 1/2. That’s going to be true no matter what I did to my qubit, and so you can’t tell what I did by just looking at it. It’s only if we compare notes later that we can see how they’re correlated, and that correlation holds for either one of these two complementary ways in which we could both measure. It’s that fact that we have these complementary ways to measure that makes it impossible for a classical system to reproduce those same correlations. So that’s one misconception that’s pretty widespread. Another one is about quantum computing: in trying to explain why quantum computers are powerful, people will sometimes say, well, it’s because you can superpose (I used that word before), you can add together many different possibilities. That means that, whereas an ordinary computer would just do a computation once, a quantum computer acting on a superposition can do a vast number of computations all at once.
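
The correlations being described, perfect agreement in either of two complementary bases plus a random-looking marginal, can be checked directly from the Bell state with a few lines of linear algebra. A sketch:

```python
# Sketch: measurement statistics of the Bell state (|00> + |11>)/sqrt(2).
import numpy as np

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # amplitudes for 00, 01, 10, 11

z0, z1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])                             # up/down along Z
x0, x1 = np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)  # up/down along X

def prob(a, b):
    """Probability that Alice sees outcome a and Bob sees outcome b."""
    return abs(np.kron(a, b) @ phi) ** 2

print(prob(z0, z0), prob(z0, z1))   # same Z axis: 0.5, 0.0 (perfectly correlated)
print(prob(x0, x0), prob(x0, x1))   # same X axis: 0.5, 0.0 (still perfectly correlated)
print(prob(z0, x0), prob(z0, x1))   # mixed axes: 0.25, 0.25 (no correlation)
```

Each party's marginal is a fair coin no matter what the other does, which is exactly why entanglement cannot be used to signal; the classical coin-pair story reproduces the Z-axis row but has nothing to say about the X-axis row.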

John Preskill [01:01:54] – There’s a certain sense in which that’s mathematically true if you interpret it right, but it’s very misleading. Because in the end, you’re going to have to make some measurement to read out the result. When you read it out, there’s a limited amount of information you can get. You’re not going to be able to read out the results of some huge number of computations in a single shot measurement. Really the key thing that makes it work is this idea of interference, which we discussed briefly when you asked about Grover’s algorithm. The art of a quantum algorithm is to make sure that the wrong answers interfere and cancel one another out, so the right answer is enhanced. That’s not automatic. It requires that the quantum algorithm be designed in just the right way.
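
The interference-not-parallelism point can be made concrete with a tiny Grover simulation: wrong answers cancel and the marked one is enhanced. The 8-item search space and the marked index are illustrative choices:

```python
# Sketch: Grover's algorithm on 8 items, showing destructive interference
# suppressing the wrong answers and enhancing the right one.
import numpy as np

n = 8
marked = 5                              # illustrative marked item
state = np.ones(n) / np.sqrt(n)         # uniform superposition over 8 items

for _ in range(2):                      # about (pi/4)*sqrt(n) iterations
    state[marked] *= -1                 # oracle: flip the marked amplitude's sign
    state = 2 * state.mean() - state    # "inversion about the mean" (diffusion)

probs = state ** 2
print(probs[marked])                    # probability of the right answer
```

After two iterations the marked item carries probability 121/128, about 0.945, versus 1/8 for a blind guess; the gain comes entirely from the other seven amplitudes interfering toward zero, which is why the algorithm has to be designed "in just the right way."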

Craig Cannon [01:02:50] – Right. The diagrams I’ve seen online, at least, usually involve squaring the output as it goes along, and then essentially, that flips the correct answer to the positive, and the others are in the negative position. Is that accurate?

John Preskill [01:03:08] – I wouldn’t have said it the way you did, because you can’t really measure it as you go along. Once you measure it, the magic of superposition is going to be lost.

John Preskill [01:03:19] – It means that now there’s some definite outcome or state. To take advantage of this interference phenomenon, you need to delay the measurement. Remember when we were talking about the double slit and I said, if you actually see these wiggles in the probability of detection, which is the signal of interference, that means that there’s no way anybody could know whether the electron went through hole one or hole two? It’s the same way with quantum computing. If you think of the computation as being a superposition of different possible computations, it wouldn’t work — there wouldn’t be a speed up — if you could know which of those paths the computation followed. It’s important that you don’t know. And so you have to sum up all the different computations, and that’s how the interference phenomenon comes into play.

Craig Cannon [01:04:17] – To take a little sidetrack, you mentioned Feynman before. And before we started recording you mentioned working with him. I know I’m in the Feynman fan club, for sure. What was that experience like?

John Preskill [01:04:32] – We never really collaborated. I mean, we didn’t write a paper together, or anything like that. We overlapped for five years at Caltech. I arrived here in 1983. He died in 1988. We had offices on the same corridor, and we talked pretty often because we were both interested in the fundamental interactions, and in particular, what we call quantum chromodynamics. It’s our theory of how nuclear matter behaves, how quarks interact, what holds the proton together, those kinds of things. One big question is what does hold the proton together? Why don’t the quarks just fall apart? That was an example of a problem that both he and I were very interested in, and which we talked about sometimes. Now, this was pretty late in his career. When I think about it now, when I arrived at Caltech, that was 1983, Feynman was born in 1918, so he was 65. I’m 64 now, so maybe he wasn’t so old, right? But at the time, he seemed pretty ancient to me. Since I was 30.

John Preskill [01:05:58] – Those who interacted with Dick Feynman when he was really at his intellectual peak in the ’40s, and ’50s, and ’60s, probably saw even more extraordinary intellectual feats than I witnessed interacting with the 65 year old Feynman. He just loved physics, you know? He just thought everything was so much fun. He loved talking about it. He wasn’t as good a listener as a talker, but actually – well that’s a little unfair, isn’t it? It was kind of funny because Feynman, he always wanted to think things through for himself, sort of from first principles, rather than rely on the guidance from experts who have thought about these things before. Well that’s fine. You should try to understand things as deeply as you can on your own, and sort of reconstruct the knowledge from the ground up. That’s very enabling, and gives you new insights. But he was a little too dismissive, in my view, of what the other guys knew. But I could slip it in because I didn’t tell him, “Dick, you should read this paper by Polyakov” — well maybe I did, but he wouldn’t have even heard that — because he solved that problem that you’re talking about.

John Preskill [01:07:39] – But I knew what Polyakov had said about it, so I would say, “Oh well, look, why don’t we look at it this way?” And so he thought I was having all these insights, but the truth was, the big difference between Feynman and me in the mid 1980s was that I was reading the literature, and he wasn’t.

Craig Cannon [01:08:00] – That’s funny.

John Preskill [01:08:01] – Probably, if he had been, he would’ve been well served, but that wasn’t the way he liked to work on things. He wanted to find his own approach. Of course, that had worked out pretty well for him throughout his career.

Craig Cannon [01:08:15] – What other qualities did you notice about him when he was roaming the corridors?

John Preskill [01:08:21] – He’d always be drumming. So you would know he was around because he’d actually be walking down the hallway drumming on the wall.

Craig Cannon [01:08:27] – Wait, with his hands, or with sticks, or–

John Preskill [01:08:29] – No, hands. He’d just be tapping.

Craig Cannon [01:08:32] – Just a bongo thing.

John Preskill [01:08:33] – Yeah. That was one thing. He loved to tell stories. You’ve probably read the books that Ralph Leighton put together based on the stories Feynman told. Ralph did an amazing job of capturing Feynman’s personality in writing those stories down, because I’d heard a lot of them. I’m sure he told the same stories to many people many times, because he loved telling stories. But the book really captures his voice pretty well.

John Preskill [01:09:12] – If you had heard him tell some of these stories, and then you read the way Ralph Leighton transcribed them, you can hear Feynman talking. At the time that I knew him, one of the experiences that he went through was he was on the Challenger commission after the space shuttle blew up. He was in Washington a lot of the time, but he’d come back from time to time, and he would sort of sit back and relax in our seminar room and start bringing us up to date on all the weird things that were happening on the Challenger commission. That was pretty fun.

Craig Cannon [01:09:56] – That’s really cool.

John Preskill [01:09:56] – A lot of that got captured in the second volume. I guess it’s the one called What Do You Care What Other People Think? There’s a chapter about him telling stories about the Challenger commission. He was interested in everything. It wasn’t just physics. He was very interested in biology. He was interested in computation. I remember how excited he was when he got his first IBM PC. Probably not long after I got to Caltech. Yeah, it was what they called the AT. We thought it was a pretty sexy machine. I had one, too. He couldn’t wait to start programming it in BASIC.

Craig Cannon [01:10:50] – Very cool.

John Preskill [01:10:51] – Because that was so much fun.

Craig Cannon [01:10:52] – There was a question I was kind of curious about your answer to. Tika asks about, essentially, teaching about quantum computers. They say: many kids in grade 10 can code. Some can play with machine learning tools without knowing the math. Can quantum computing become as simple and/or accessible?

John Preskill [01:11:17] – Maybe so. At some level, when people say quantum mechanics is counterintuitive, it’s hard for us to grasp, it’s so foreign to our experience, that’s true. The way things behave at the microscopic scale is, like we discussed earlier, really different from the way ordinary stuff behaves. But it’s a question of familiarity. What I wouldn’t be surprised by is that if you go out a few decades, kids who are 10 years old are going to be playing quantum games. That’s an application area that doesn’t get discussed very much, but there could be a real market there because people love games. Quantum games are different, and the strategies are different, and what you have to do to win is different. If you play the game enough, you start to get the hang of it.

John Preskill [01:12:26] – I don’t see any reason why kids who have not necessarily deeply studied physics can’t get a pretty good feel for how quantum mechanics works. You know, the way ordinary physics works, maybe it’s not so intuitive. Newton’s laws … Aristotle couldn’t get it right. He thought you had to keep pushing on something to get it to keep moving. That wasn’t right. Galileo was able to roll balls down a ramp, and things like that, and see he didn’t have to keep pushing to keep it moving. He could see that it was uniformly accelerated in a gravitational field. Newton took that to a much more general and powerful level. You fool around with stuff, and you get the hang of it. And I think quantum stuff can be like that. We’ll experience it in a different way, but when we have quantum computers, in a way, that opens the opportunity for trying things out and seeing what happens.

John Preskill [01:13:50] – After you’ve played the game enough, you start to anticipate. And actually, it’s an important point about the applications. One of the questions you asked me at the beginning was what are we able to do with quantum computers? And I said, I don’t know. So how are we going to discover new applications? It might just be, at least in part, by fooling around. A lot of classical algorithms that people use on today’s computers were discovered, or that they were powerful was discovered, by experimenting. By trying it. I don’t know … what’s an example of that? Well, the simplex method that we use in linear programming. I don’t think there was a mathematical proof that it was fast at first, but people did experiments, and they said, hey, this is pretty fast.

Craig Cannon [01:14:53] – Well, you’re seeing it a lot now in machine learning.

John Preskill [01:14:57] – Yeah, well that’s a good example.

Craig Cannon [01:14:58] – You test it out a million times over when you’re running simulations, and it turns out, that’s what works. Following the thread of education, and maybe your political interest, given it’s the year that it is, do you have thoughts on how you would adjust or change STEM education?

John Preskill [01:15:23] – Well, no particularly original thoughts. But I do think that STEM education … we shouldn’t think of it as we’re going to need this technical workforce, and so we better train them. The key thing is we want the general population to be able to reason effectively, and to recognize when an argument is phony and when it’s authentic. To think about, well how can I check whether what I just read on Facebook is really true? And I see that as part of the goal of STEM education. When you’re teaching kids in school how to understand the world by doing experiments, by looking at the evidence, by reasoning from the evidence, this is something that we apply in everyday life, too. I don’t know exactly how to implement this–

John Preskill [01:16:36] – But I think we should have that perspective that we’re trying to educate a public, which is going to eventually make critical decisions about our democracy, and they should understand how to tell when something is true or not. That’s a hard thing to do in general, but you know what I mean. That there are some things that, if you’re a person with some — I mean it doesn’t necessarily have to be technical — but if you’re used to evaluating evidence and making a judgment based on that evidence about whether it’s a good argument or not, you can apply that to all the things you hear and read, and make better judgments.

Craig Cannon [01:17:23] – What about on the policy side? Let’s see, JJ Francis asked that, if you or any of your colleagues would ever consider running for office. Curious about science policy in the US.

John Preskill [01:17:38] – Well, it would be good if we had more scientifically trained people in government. Very few members of Congress. I know of one, Bill Foster’s a physicist in Illinois. He was a particle physicist, and he worked at Fermilab, and now he’s in Congress, and very interested in the science and educational policy aspects of government. Rush Holt was a congressman from New Jersey who had a background in physics. He retired from the House a couple of years ago, but he was in Congress for something like 18 years, and he had a positive influence, because he had a voice that people respected when it came to science policy. Having more people like that would help. Now, another thing, it doesn’t have to be elective office.

Craig Cannon [01:18:39] – Right.

John Preskill [01:18:42] – There are a lot of technically trained people in government, many of them making their careers in agencies that deal with technical issues. Department of Defense, of course, there are a lot of technical issues. In the Obama Administration we had two successive secretaries of energy who were very, very good physicists. Steve Chu was Nobel Prize winning physicist. Then Ernie Moniz, who’s a real authority on nuclear energy and weapons. That kind of expertise makes a difference in government.

John Preskill [01:19:24] – Now the Secretary of Energy is Rick Perry. It’s a different background.

Craig Cannon [01:19:28] – Yeah, you could say that. Just for kind of historical reference, what policies did they put in place where you really felt their hand as physicists moving things forward?

John Preskill [01:19:44] – You mean in particular–

Craig Cannon [01:19:45] – I’m talking the Obama Administration.

John Preskill [01:19:49] – Well, I think the Department of Energy, DOE, tried to facilitate technical innovation by seeding new technologies, by supporting startup companies that were trying to do things that would improve battery technology, and solar power, and things like that, which could benefit future generations. They had an impact by doing that. You don’t have to be a Nobel Prize winning physicist to think that’s a good idea. That the administration felt that was a priority made a difference, and appointing a physicist at Department of Energy was, if nothing else, highly symbolic of how important those things are.

Craig Cannon [01:20:52] – On the quantum side, Vikas Karad asked where the Quantum Valley might be. Do you have thoughts? As in, a Silicon Valley for quantum computing?

John Preskill [01:21:06] – Well… I don’t know, but you look at what’s happening the last couple of years, there have been a number of quantum startups. A notable number of them are in the Bay Area. Why so? Well, that’s where the tech industry is concentrated and where the people who are interested in financing innovative technical startups are concentrated. If you are an entrepreneur interested in starting a company, and you’re concerned about how to fundraise for it, it kind of makes sense to locate in that area. Now, that’s what’s sort of happening now, and may not continue, of course. It might not be like that indefinitely. Nothing lasts forever, but I would say… That’s the place, Silicon Valley is likely to be Quantum Valley, the way things are right now.

Craig Cannon [01:22:10] – Well then what about the physicists who might be listening to this? If they’re thinking about starting a company, do you have advice for them?

John Preskill [01:22:22] – Just speaking very generally, if you’re putting a team together… Different people have different expertise. Take quantum computing as an example, like we were saying earlier, some of the big players and the startups, they want to do everything. They want to build the hardware, figure out better ways to fabricate it. Better control, better software, better applications. Nobody can be an expert on all those things. Of course, you’ll hire a software person to write your software, and microwave engineer to figure out your control, and of course that’s the right thing to do. But I think in that arena, and it probably applies to other entrepreneurial activity relating to physics, being able to communicate across those boundaries is very valuable, and you can see it in quantum computing now. That if the man or woman who’s involved in the software has that background, but there’s not a big communication barrier talking to the people who are doing the control engineering, that can be very helpful. It makes sense to give some preference to the people who maybe are comfortable doing so, or have the background that stretches across more than one of those areas of expertise. That can be very enabling in a technology arena like quantum computing today, where we’re trying to do really, really hard stuff, and you don’t know whether you’ll succeed, and you want to give it your best go by seeing the connections between those different things.

Craig Cannon [01:24:28] – Would you advise someone then to maybe teach or try and explain it to, I don’t know, their young cousins? Because Feynman is maybe recognized as the king of communicating physics, at least for a certain period of time. How would you advise someone to get better at it so they can be more effective?

John Preskill [01:24:50] – Practice. There are different aspects of that. This isn’t what you meant at all, but I’ll say it anyway, because what you asked brought it to mind. If you teach, you learn. We have this odd model in the research university that a professor like me is supposed to do research and teach. Why don’t we hire teachers and researchers? Why do we have the same people doing both? Well, part of the reason for me is most of what I know, what I’ve learned since my own school education ended, is knowledge I acquired by trying to teach it. To keep our intellect rejuvenated, we have to have that experience of trying to teach new things that we didn’t know that well before to other people. That deepens your knowledge. Just thinking about how you convey it makes you ask questions that you might not think to ask otherwise, and you say “Hey, I don’t know the answer to that.” Then you have to try to figure it out. So I think that applies at varying levels to any situation in which a scientist, or somebody with a technical background, is trying to communicate.

John Preskill [01:26:21] – By thinking about how to get it across to other people, we can get new insights, you know? We can look at it in a different way. It’s not a waste of time. Aside from the benefits of actually successfully communicating, we benefit from it in this other way. But other than that… Have fun with it, you know? Don’t look at it as a burden, or some kind of task you have to do along with all the other things you’re doing. It should be a pleasure. When it’s successful, it’s very gratifying. If you put a lot of thought into how to communicate something and you think people are getting it, that’s one of the ways that somebody in my line of work can get a lot of satisfaction.

Craig Cannon [01:27:23] – If now were to be your opportunity to teach a lot of people about physics, and you could just point someone to things, who would you advise them to follow? They want to learn more about quantum computing, they want to learn about physics. What should they be reading? What YouTube channel should they follow? What should they pay attention to?

John Preskill [01:27:44] – Well one communicator who I have great admiration for is Leonard Susskind, who’s at Stanford. You mentioned Feynman as the great communicator, and that’s fair, but in terms of style and personality of physicists who are currently active, I think Lenny Susskind is the most similar to Feynman of anyone I can think of. He’s a no bullshit kind of guy. He wants to give you the straight stuff. He doesn’t want to water it down for you. But he’s very gifted when it comes to making analogies and creating the illusion that you’re understanding what he’s saying. He has … if you just go to YouTube and search Leonard Susskind you’ll see lectures that he’s given at Stanford where they have some kind of extension school for people who are not Stanford students, people in the community. A lot of them in the tech community because it’s Stanford, and he’s giving courses. Yeah, and on quite sophisticated topics, but also on more basic topics, and he’s in the process of turning those into books. I’m not sure how many of those have appeared, but he has a series called The Theoretical Minimum

John Preskill [01:29:19] – which is supposed to be the gentle introduction to different topics like classical physics, quantum physics, and so on. He’s pretty special I think in his ability to do that.

Craig Cannon [01:29:32] – I need to subscribe. Actually, here’s a question then. On the things you’ve relearned while teaching over the past, I guess it’s 35 years now.

John Preskill [01:29:46] – Shit, is that right?

Craig Cannon [01:29:47] – Something like that.

John Preskill [01:29:48] – That’s true. Yeah.

Craig Cannon [01:29:51] – What were the big things, the revelations?

John Preskill [01:29:55] – That’s how I learned quantum computing, for one thing. I was not at all knowledgeable about information science. That wasn’t my training. Back when I was in school, physicists didn’t learn much about things like information theory, computer science, complexity theory. One of the great things about quantum computing is its interdisciplinary character, that it brings these different things into contact, which traditionally had not been part of the common curriculum of any community of scholars. I decided 20 years ago that I should teach a quantum information class at Caltech, and I worked very hard on it that year. Not that I’m an expert, or anything, but I learned a lot about information theory, and things like channel capacity, and computational complexity — how we classify the hardness of problems — and algorithms. Things like that, which I didn’t really know very well. I had sort of a passing familiarity with some of those things from reading some of the quantum computing literature. That’s no substitute for teaching a class because then you really have to synthesize it and figure out your way of presenting it. Most of the notes are typed up and you can still get to them on my website. That was pretty transformative for me … and it was easier then, 20 years ago, I guess than it is now because it was such a new topic.

John Preskill [01:31:49] – But I really felt I was kind of close enough to the cutting edge on most of those topics by the time I’d finished the class that I wasn’t intimidated by another paper I’d read or a new thing I’d hear about those things. That was probably the one case where it really made a difference in my foundation of knowledge, which enabled me to do things. But I had the same experience in particle physics. When I was a student, I read a lot. I was very broadly interested in physics. But the first time (I was still at Harvard at the time; later I taught a similar course here), I was in my late 20s, just a year or two out of graduate school, and I decided to teach a very comprehensive class on elementary particles … in particular, quantum chromodynamics, the theory of nuclear forces like we talked about before. It just really expanded my knowledge to have that experience of teaching that class. I still draw on that. I can still remember that experience, and I think I get ideas that I might not otherwise have because I went through that.

Craig Cannon [01:33:23] – I want to get involved now. I want to go back to school, or maybe teach a class. I don’t know.

John Preskill [01:33:27] – Well, what’s stopping you?

Craig Cannon [01:33:29] – Nothing. Alright, thanks John.

John Preskill [01:33:32] – Okay, thank you Craig.

Began in Queens – Far Rockaway.

It’s there a boy would stop and think

To fix a radio on the blink.

He grew up as a curious guy

Who showed his sister the night sky.

He wondered why, and wondered *why*

He wondered why he wondered why.

New Jersey followed MIT.

The cream *and* lemon in his tea

Taught Mr. Feynman when to joke

And how to act like normal folk.

Cracking safes, though loads of fun,

Could not conceal from everyone,

The mind behind that grinning brow:

A new Dirac, but human now.

In New York state he spun a plate

Which led, in nineteen forty eight

To diagrams that let us see

The processes of QED.

He left the east and made a trek

Until he landed at Caltech.

His genius brought us great acclaim.

This place would never be the same.

Dick’s teaching skills were next to none

When reinventing Physics 1.

His wisdom’s there for all to see

In red books numbered 1, 2, 3.

Always up and never glum

He loved to paint and play the drum.

His mind engaged with everything

For all the world is int’resting.

Dick proved that charm befits a nerd.

For papers read, and stories heard

We’ll always be in Feynman’s debt.

A giant we cannot forget.

What will you visit? The Ashmolean Museum, home to da Vinci drawings, samurai armor, and Egyptian mummies? The Bodleian, one of Europe’s oldest libraries? Turf Tavern, where former president Bill Clinton reportedly “didn’t inhale” marijuana?

Felix Binder showed us a cemetery.

Of course he showed us a cemetery. We were at a thermodynamics conference.

The Fifth Quantum Thermodynamics Conference took place in the City of Dreaming Spires.^{1} Participants enthused about energy, information, engines, and the flow of time. About 160 scientists attended—roughly 60 more than attended the first conference, co-organizer Janet Anders estimated.

Weak measurements and quasiprobability distributions were trending. The news delighted me, *Quantum Frontiers* regulars won’t be surprised to hear.

Measurements disturb quantum systems, as early-20th-century physicist Werner Heisenberg intuited. Measure a system’s position strongly, and you forfeit your ability to predict the outcomes of future momentum measurements. Weak measurements don’t disturb the system much. In exchange, weak measurements provide little information about the system. But you can recoup information by performing a weak measurement in each of many trials, then processing the outcomes.
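To make the trade-off concrete, here is a minimal numerical sketch (a toy model added for illustration; the qubit state, the Gaussian-pointer noise model, and the trial count are arbitrary choices, and measurement back-action is ignored): each weak-measurement readout is an eigenvalue buried in broad noise, so a single readout reveals almost nothing, yet averaging many readouts recovers the expectation value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Qubit state cos(theta/2)|0> + sin(theta/2)|1>
theta = np.pi / 3
p0 = np.cos(theta / 2) ** 2          # Born probability of outcome Z = +1
true_mean_z = 2 * p0 - 1             # exact <Z> = cos(theta) = 0.5

def weak_measure(n_trials, sigma=10.0):
    """One Gaussian-pointer readout per trial: eigenvalue plus broad noise.

    Large sigma means weak coupling: each readout barely distinguishes the
    eigenvalues, but the noise averages away over many trials.
    """
    eigenvalues = rng.choice([1.0, -1.0], size=n_trials, p=[p0, 1 - p0])
    readouts = eigenvalues + sigma * rng.standard_normal(n_trials)
    return readouts.mean()

estimate = weak_measure(n_trials=2_000_000)
print(f"true <Z> = {true_mean_z:.3f}, weak-measurement estimate = {estimate:.3f}")
```

The parameter sigma plays the role of the (inverse) measurement strength: cranking it up makes each trial less informative, which you pay for with more trials.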

Strong measurements lead to probability distributions: Imagine preparing a particle in some quantum state, then measuring its position strongly, in each of many trials. From the outcomes, you can infer a probability distribution p(x), wherein p(x) denotes the probability that the next trial will yield position x.

Weak measurements lead analogously to quasiprobability distributions. Quasiprobabilities resemble probabilities but can misbehave: Probabilities are real numbers no less than zero. Quasiprobabilities can dip below zero and can assume nonreal values.
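As a small illustration of that misbehavior (a standard textbook example, not one of the distributions discussed at the conference): the discrete Wigner function of a single qubit is a quasiprobability distribution, and for a "magic" state one of its four entries dips below zero even though the entries sum to one.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# "Magic" state |T> = (|0> + e^{i pi/4}|1>)/sqrt(2)
psi = np.array([1, np.exp(1j * np.pi / 4)], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Discrete Wigner function W(a,b) = (1/2) Tr[rho A_ab], with phase-point
# operators A_ab = (1/2)[I + (-1)^a Z + (-1)^b X + (-1)^(a+b) Y]
W = np.zeros((2, 2))
for a in range(2):
    for b in range(2):
        A = 0.5 * (I + (-1) ** a * Z + (-1) ** b * X + (-1) ** (a + b) * Y)
        W[a, b] = np.real(np.trace(rho @ A)) / 2

print(W)        # entries sum to 1, like a probability distribution...
print(W.min())  # ...but one entry is negative: a quasiprobability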

What relevance have weak measurements and quasiprobabilities to quantum thermodynamics? Thermodynamics involves work and heat. Work is energy harnessed to perform useful tasks, like propelling a train from London to Oxford. Heat is energy that jiggles systems randomly.

Quantum properties obscure the line between work and heat. (Here’s an illustration for experts: Consider an isolated quantum system, such as a spin chain. Let H(t) denote the Hamiltonian, which evolves with the time t. Consider preparing the system in an energy eigenstate |E⟩. This state has zero diagonal entropy: Measuring the energy yields E deterministically. Consider tuning H(t), as by changing a magnetic field. This change constitutes work, we learn in electrodynamics class. But if H(t) changes quickly, the state can acquire weight on multiple energy eigenstates. The diagonal entropy rises. The system’s energetics have gained an unreliability characteristic of heat absorption. But the system has remained isolated from any heat bath. Work mimics heat.)
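The parenthetical above can be checked numerically. Here is a sketch with a single qubit standing in for the spin chain (my simplification; the Hamiltonian ramp and the timescales are arbitrary choices): a slow ramp keeps the diagonal entropy near zero, while a sudden ramp raises it toward ln 2.

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def H(theta):
    # Hamiltonian ramped from Z (theta = 0) to X (theta = pi/2); gap stays 2
    return np.cos(theta) * Z + np.sin(theta) * X

def diagonal_entropy(psi, H_final):
    # Shannon entropy of the state's weights on H_final's eigenstates
    _, V = np.linalg.eigh(H_final)
    p = np.abs(V.conj().T @ psi) ** 2
    p = p[p > 1e-15]
    return float(-(p * np.log(p)).sum())

def ramp(total_time, steps=2000):
    # Start in the ground state of H(0) = Z, namely |1>, with energy -1
    psi = np.array([0., 1.], dtype=complex)
    dt = total_time / steps
    for k in range(steps):
        w, V = np.linalg.eigh(H((k + 0.5) / steps * np.pi / 2))
        U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T
        psi = U @ psi
    return diagonal_entropy(psi, H(np.pi / 2))

print(f"slow ramp entropy: {ramp(total_time=50.0):.4f}")  # small: adiabatic
print(f"fast ramp entropy: {ramp(total_time=0.01):.4f}")  # near ln 2 = 0.693
```

The slow ramp tracks the instantaneous eigenstate (adiabatic theorem); the sudden ramp leaves the state stranded with equal weight on both final energy eigenstates.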

Quantum thermodynamicists have defined work in terms of a *two-point measurement scheme*: Initialize the quantum system, such as by letting heat flow between the system and a giant, fixed-temperature heat reservoir until the system equilibrates. Measure the system’s energy strongly, and call the outcome E_1. Isolate the system from the reservoir. Tune the Hamiltonian, performing the quantum equivalent of propelling the London train up a hill. Measure the energy, and call the outcome E_2.

Any change ΔE in a system’s energy comes from heat Q and/or from work W, by the First Law of Thermodynamics: ΔE = Q + W. Our system hasn’t exchanged energy with any heat reservoir between the measurements. So the energy change consists of work: W = E_2 − E_1.

Imagine performing this protocol in each of many trials. Different trials will require different amounts of work. Upon recording the amounts, you can infer a distribution P(W). P(W) denotes the probability that the next trial will require an amount W of work.
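A minimal simulation of the two-point measurement scheme, for a toy system (a single qubit suddenly quenched from Hamiltonian Z to X, at an arbitrary inverse temperature; these choices are mine, for illustration): sampling many trials builds up the work distribution P(W).

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0                      # inverse temperature of the reservoir

# Energy eigenvalues of H1 = Z: +1 for |0>, -1 for |1>
E1_vals = np.array([1.0, -1.0])
p_init = np.exp(-beta * E1_vals)
p_init /= p_init.sum()          # Gibbs state: the equilibration step

# Sudden quench Z -> X: each Z eigenstate has overlap-squared 1/2 with
# each X eigenstate, so the second energy outcome is a fair coin flip
n_trials = 200_000
E1 = rng.choice(E1_vals, size=n_trials, p=p_init)
E2 = rng.choice([1.0, -1.0], size=n_trials)
work = E2 - E1                  # W = E2 - E1 in each trial

values, counts = np.unique(work, return_counts=True)
for w, c in zip(values, counts):
    print(f"P(W = {w:+.0f}) ~ {c / n_trials:.3f}")
```

The histogram concentrates on W = 0 and W = +2 because the Gibbs state mostly populates the lower energy level, so E_1 = −1 dominates.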

Measuring the system’s energy disturbs the system, squashing some of its quantum properties. (The measurement eliminates coherences, relative to the energy eigenbasis, from the state.) Quantum properties star in quantum thermodynamics. So the two-point measurement scheme doesn’t satisfy everyone.

Enter weak measurements. They can provide information about the system’s energy without disturbing the system much. Work probability distributions P(W) give way to quasiprobability distributions.

So propose Solinas and Gasparinetti, in these papers. Other quantum thermodynamicists apply weak measurements and quasiprobabilities differently.^{2} I proposed applying them to characterize chaos, and the scrambling of quantum information in many-body systems, at the conference.^{3} Feel free to add your favorite applications to the “comments” section.

Wednesday afforded an afternoon for touring. Participants congregated at the college of conference co-organizer Felix Binder.^{4} His tour evoked, for me, the ghosts of thermo conferences past: One conference, at the University of Cambridge, had brought me to the grave of thermodynamicist Arthur Eddington. Another conference, about entropies in information theory, had convened near Canada’s Banff Cemetery. Felix’s tour began with St. Edmund Hall’s cemetery. Thermodynamics highlights equilibrium, a state in which large-scale properties—like temperature and pressure—remain constant. Some things never change.

*With thanks to Felix, Janet, and the other coordinators for organizing the conference.*

^{1}Oxford derives its nickname from an elegy by Matthew Arnold. Happy National Poetry Month!

^{2}https://arxiv.org/abs/1508.00438, https://arxiv.org/abs/1610.04285, https://arxiv.org/abs/1607.02404, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.118.070601, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.120.040602

^{3}Michele Campisi joined me in introducing out-of-time-ordered correlators (OTOCs) into the quantum-thermo conference: He, with coauthor John Goold, combined OTOCs with the two-point measurement scheme.

^{4}Oxford University contains 38 colleges, the epicenters of undergraduates’ social, dining, and housing experiences. Graduate students and postdoctoral scholars affiliate with colleges, and senior fellows—faculty members—govern the colleges.

There are two different branches to the origin story. The first is my personal motivation, and the second is how I came into contact with my collaborators (who had begun working on the same project with a different motivation, namely to explain a phase transition described in this paper by Belin, Keller and Zadeh).

During the first year of my PhD at Caltech I was working in the mathematics department and I had a few brief but highly influential interactions with Nikolai Makarov while I was trying to find a PhD advisor. His previous student, Stanislav Smirnov, had recently won a Fields Medal for his work studying Schramm-Loewner evolution (SLE) and I was captivated by the beauty of these objects.

One afternoon, I went to Professor Makarov’s office for a meeting, and while he took a brief phone call I noticed a book on his shelf called Indra’s Pearls, which had a mesmerizing image on its cover. I asked Professor Makarov about it, and he spent 30 minutes explaining some of the key results (which I didn’t understand at the time). When we finished that part of our conversation, Professor Makarov described this area of math as “math for the future, ahead of the tools we have right now,” and he offered to lend me his copy. With a description like that, I was hooked. I spent the next six months devouring this book, which provided a small toehold as I tried to grok the relevant mathematics literature. This year or so of being obsessed with Kleinian groups (the underlying objects in Indra’s Pearls) comes back into the story soon. I also want to mention that during that meeting with Professor Makarov I was exposed to two other ideas that have driven my research as I moved from mathematics to physics: quasiconformal mappings and the simultaneous uniformization theorem, both of which will play heavy roles in the next paper I release. In other words, it was a pretty important 90 minutes of my life.

My life path then hit a discontinuity when I was recruited to work on a DARPA project, which led to taking an 18-month leave of absence from Caltech. It’s an understatement to say that being deployed in Afghanistan led to extreme introspection. While “down range” I had moments of clarity where I knew life was too short to work on anything other than one’s deepest passions. Before math, the thing that got me into science was a childhood obsession with space and black holes. I knew that when I returned to Caltech I wanted to work on quantum gravity with John Preskill. I sent him an e-mail from Afghanistan, and luckily he was willing to take me on as a student. But as a student in the mathematics department, I knew it would be tricky to find a project that involved all of: black holes (my interest), quantum information (John’s primary interest at the time), and mathematics (so I could get the degree).

I returned to Caltech in May of 2012, which was only two months before the Firewall Paradox was introduced by Almheiri, Marolf, Polchinski and Sully. It was obvious that this was where most of the action would be for the next few years, so I spent a great deal of time (years) trying to get sharp enough in the underlying concepts to be able to make comments of my own on the matter. Black holes are probably the closest things we have in Nature to the proverbial bottomless pit, which is an apt metaphor for thinking about the Firewall Paradox. After two years I was stuck. I still wasn’t close to confident enough with AdS/CFT to understand a majority of the promising developments. And then at exactly the right moment, in the summer of 2014, Preskill tipped me off to a paper titled Multiboundary Wormholes and Holographic Entanglement by Balasubramanian, Hayden, Maloney, Marolf and Ross. It was immediately obvious to me that the tools of Indra’s Pearls (Kleinian groups) provided exactly the right language to study these “multiboundary wormholes.” But despite knowing a bridge could be built between these fields, I still didn’t have the requisite physics mastery (AdS/CFT) to build it confidently.

Before mentioning how I met my collaborators and describing the work we did together, let me first describe the worlds that we bridged together.

**3D Gravity and Universality**

As the media has sensationalized to death, one of the most outstanding questions in modern physics is to *discover* and then *understand* a theory of quantum gravity. As a quick aside, “quantum gravity” is just a placeholder name for such a theory. I used italics because physicists have already *discovered* candidate theories, such as string theory and loop quantum gravity (I’m not trying to get into politics, just trying to demonstrate that there are multiple candidate theories). But *understanding* these theories — carrying out all of the relevant computations to confirm that they are consistent with Nature and then doing experiments to verify their novel predictions — is still beyond our ability. Surprisingly, without knowing the specific theory of quantum gravity that guides Nature’s hand, we’re still able to say a number of universal things that must be true for any theory of quantum gravity. The most prominent example is the holographic principle, which comes from the entropy of black holes being proportional to the *surface area* encapsulated by the black hole’s horizon (a naive guess says the entropy should be proportional to the *volume* of the black hole, as with the entropy of a glass of water). Universal statements such as this serve as guideposts and consistency checks as we try to understand quantum gravity.
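For reference, the area scaling mentioned above is the Bekenstein-Hawking entropy formula:

```latex
% Black-hole entropy is proportional to horizon area A, not enclosed volume
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^3}{4\, G\, \hbar}\, A
\;=\; k_B\, \frac{A}{4\, \ell_P^2},
\qquad
\ell_P = \sqrt{\frac{G \hbar}{c^3}}
```

The Planck length ℓ_P sets the scale: the entropy counts horizon area in units of ℓ_P², which is what the holographic principle generalizes.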

It’s exceedingly rare to find universal statements that are true in physically realistic models of quantum gravity. The holographic principle is one such example but it pretty much stands alone in its power and applicability. By physically realistic I mean: 3+1-dimensional and with the curvature of the universe being either flat or very mildly positively curved. However, we can make additional simplifying assumptions where it’s easier to find universal properties. For example, we can reduce the number of spatial dimensions so that we’re considering 2+1-dimensional quantum gravity (3D gravity). Or we can investigate spacetimes that are negatively curved (anti-de Sitter space) as in the AdS/CFT correspondence. Or we can do BOTH! As in the paper that we just posted. The hope is that what’s learned in these limited situations will back-propagate insights towards reality.

The motivation for going to 2+1-dimensions is that gravity (general relativity) is much simpler here. This is explained eloquently in section II of Steve Carlip’s notes here. In 2+1-dimensions, there are no “local”/”gauge” degrees of freedom. This makes thinking about quantum aspects of these spacetimes much simpler.

The standard motivation for considering negatively curved spacetimes is that it puts us in the domain of AdS/CFT, which is the best understood model of quantum gravity. However, it’s worth pointing out that our results don’t rely on AdS/CFT. We consider negatively curved spacetimes (negatively curved Lorentzian manifolds) because they’re related to what mathematicians call hyperbolic manifolds (negatively curved Euclidean manifolds), and mathematicians know a great deal about these objects. It’s just a helpful coincidence that because we’re working with negatively curved manifolds we then get to unpack our statements in AdS/CFT.

**Multiboundary wormholes**

Finding solutions to Einstein’s equations of general relativity is a notoriously hard problem. Some of the more famous examples include Minkowski space, de Sitter space, anti-de Sitter space, and Schwarzschild’s solution (which describes perfectly symmetrical and static black holes). However, there’s a trick! Einstein’s equations only depend on the *local* curvature of spacetime while being insensitive to *global* topology (the number of boundaries and holes and such). If *M* is a solution of Einstein’s equations and Γ is a discrete subgroup of the isometry group of *M*, then the quotient space *M*/Γ will also be a spacetime that solves Einstein’s equations! Here’s an example for intuition. Start with 2+1-dimensional Minkowski space, which is just a stack of flat planes indexed by time. One example of a “discrete subgroup of the isometry group” is the cyclic group generated by a single translation, say the translation along the x-axis by ten meters. Minkowski space quotiented by this group will also be a solution of Einstein’s equations, given as a stack of cylinders of circumference 10 m, indexed by time.
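The cylinder example can be written out explicitly. A minimal sketch, in coordinates (t, x, y) on 2+1-dimensional Minkowski space (the 10 m translation distance is just the example above, not a physically preferred value):

```latex
ds^2 = -dt^2 + dx^2 + dy^2, \qquad
\Gamma = \langle \gamma \rangle, \qquad
\gamma : (t, x, y) \mapsto (t,\, x + 10\,\mathrm{m},\, y).
% The quotient M/\Gamma identifies x ~ x + 10 m. Each constant-t slice
% becomes a cylinder of circumference 10 m, so M/\Gamma is a stack of
% cylinders indexed by time -- still locally flat, hence still a
% solution of Einstein's equations.
```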

(d+1)-dimensional anti-de Sitter space (AdS_{d+1}) is the maximally symmetric (d+1)-dimensional Lorentzian manifold with negative curvature. Our paper is about 3D gravity in negatively curved spacetimes, so our starting point is AdS_3, which can be thought of as a stack of Poincaré disks (or hyperbolic sheets), with the time dimension telling you which disk (sheet) you’re on. The isometry group of AdS_3 is a group called SO(2,2), which in turn is isomorphic (up to discrete factors) to PSL(2,ℝ) × PSL(2,ℝ). The group SO(2,2) isn’t a very common group, but a single copy of PSL(2,ℝ) is a very well-studied group. Discrete subgroups of it are called Fuchsian groups. Every element of the group should be thought of as a 2×2 matrix, which corresponds to a Möbius transformation of the complex plane. The quotients that we obtain from these Fuchsian groups, or from the larger isometry group, yield a rich infinite family of new spacetimes, which are called multiboundary wormholes. Multiboundary wormholes have risen in importance over the last few years as powerful toy models for understanding how entanglement is dispersed near black holes (the Ryu-Takayanagi conjecture) and how the holographic dictionary maps operators in the boundary CFT to fields in the bulk (entanglement wedge reconstruction).

I now want to work through a few examples.

**BTZ black hole: **this is the simplest possible example. It’s obtained by quotienting AdS_3 by a cyclic group Γ = ⟨A⟩, generated by a single hyperbolic matrix *A*, which for example could take the diagonal form diag(λ, 1/λ) with λ > 1. The matrix *A* acts by fractional linear transformation on the complex plane, z ↦ (az + b)/(cz + d), so in this case the point z gets mapped to λ²z.
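To get a feel for how a matrix acts on the complex plane, here’s a minimal numerical sketch. The diagonal matrix below is a hypothetical hyperbolic element of SL(2,ℝ), an illustrative choice rather than the generator used in the paper:

```python
# Möbius (fractional linear) action of a 2x2 matrix on the complex plane.
# The matrix A = diag(e^{l/2}, e^{-l/2}) is a hypothetical hyperbolic
# element of SL(2,R); it acts on the plane as a pure rescaling z -> e^l z.
import math

def mobius(m, z):
    """Apply z -> (a*z + b) / (c*z + d) for m = ((a, b), (c, d))."""
    (a, b), (c, d) = m
    return (a * z + b) / (c * z + d)

l = 2.0  # an assumed "horizon length" parameter, for illustration only
A = ((math.exp(l / 2), 0.0), (0.0, math.exp(-l / 2)))

z = 1.0 + 1.0j
assert abs(mobius(A, z) - math.exp(l) * z) < 1e-9  # rescaling by e^l
```

Quotienting by the cyclic group ⟨A⟩ identifies each point z with its whole orbit {e^{nl} z}; that identification is what rolls the geometry up into a BTZ-like spacetime.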

**Three boundary wormhole: **

There are many parameterizations that we can choose to obtain the three boundary wormhole. I’ll only show schematically how the gluings go. A nice reference with the details is this paper by Henry Maxfield.

**Torus wormhole: **

It’s simpler to write down generators for the torus wormhole, but following along with the gluings is more complicated. To obtain the torus wormhole we quotient AdS_3 by the free group Γ = ⟨A, B⟩ generated by two hyperbolic elements A and B. (Note that this is only one choice of generators, and a highly symmetrical one at that.)

**Lorentzian to Euclidean spacetimes**

So far we have been talking about negatively curved *Lorentzian* manifolds. These are manifolds that have a notion of both “time” and “space.” The technical definition involves differential geometry and is related to the signature of the metric. On the other hand, mathematicians know a great deal about negatively curved *Euclidean* manifolds. Euclidean manifolds only have a notion of “space” (so no time-like directions). Given a multiboundary wormhole, which by definition is a quotient AdS_3/Γ, where Γ is a discrete subgroup of Isom(AdS_3), there’s a procedure to analytically continue it to a Euclidean hyperbolic manifold of the form H³/Γ_E, where H³ is three-dimensional hyperbolic space and Γ_E is a discrete subgroup of the isometry group of H³, which is PSL(2,ℂ). This analytic continuation procedure is well understood for time-symmetric spacetimes, but it’s subtle for spacetimes that don’t have time-reversal symmetry. A discussion of this subtlety will be the topic of my next paper. To keep this blog post at a reasonable level of technical detail, I’m going to need you to take it on faith that to every Lorentzian 3-manifold multiboundary wormhole there’s an associated Euclidean hyperbolic 3-manifold. Basically, you need to believe that given a discrete subgroup of PSL(2,ℝ) × PSL(2,ℝ), there’s a procedure to obtain a discrete subgroup of PSL(2,ℂ). Discrete subgroups of PSL(2,ℂ) are called Kleinian groups, and quotients of H³ by groups of this form yield hyperbolic 3-manifolds. These Euclidean manifolds, obtained by analytic continuation, arise when studying the thermodynamics of these spacetimes or their correlation functions; there’s a sense in which they’re physical.

**TLDR: you start with a 2+1-dimensional Lorentzian 3-manifold obtained as a quotient AdS_3/Γ, and analytic continuation gives a Euclidean 3-manifold obtained as a quotient H³/Γ_E, where H³ is 3-dimensional hyperbolic space and Γ_E is a discrete subgroup of PSL(2,ℂ) (a Kleinian group).**

**Limit sets: **

Every Kleinian group has a fractal that’s naturally associated with it, called its limit set. The fractal is obtained by finding the fixed points of every possible combination of the generators and their inverses. Moreover, there’s a beautiful theorem of Patterson, Sullivan, Bishop and Jones relating the smallest eigenvalue λ₀ of the Laplacian on the quotient Euclidean manifold to the Hausdorff dimension of this fractal (call it D) by the formula λ₀ = D(2 − D). This smallest eigenvalue controls a number of the quantities of interest for this spacetime, but calculating it directly is usually intractable. However, McMullen proposed an algorithm to calculate the Hausdorff dimension of the relevant fractals, so we can get at the spectrum efficiently, albeit indirectly.
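The “fixed points of group elements” step is concrete enough to compute. A minimal sketch (the sample matrix is an arbitrary illustrative element, not one from the paper): a Möbius map z ↦ (az + b)/(cz + d) has fixed points where cz² + (d − a)z − b = 0.

```python
# Fixed points of a Möbius transformation z -> (a z + b) / (c z + d):
# the solutions of c z^2 + (d - a) z - b = 0. The limit set of a Kleinian
# group is the closure of such fixed points over all group elements.
import cmath

def mobius_fixed_points(a, b, c, d):
    if c == 0:
        # Infinity is always fixed; a second finite fixed point exists
        # unless the map is a pure translation (a == d).
        return [complex("inf")] if a == d else [b / (d - a), complex("inf")]
    disc = cmath.sqrt((d - a) ** 2 + 4 * b * c)
    return [((a - d) + disc) / (2 * c), ((a - d) - disc) / (2 * c)]

# An arbitrary illustrative element of SL(2,R): det = 2*1 - 1*1 = 1.
fp = mobius_fixed_points(2, 1, 1, 1)
# Fixed points of a real Möbius map lie on the real axis -- the boundary
# circle of the hyperbolic plane, where the limit-set fractal lives.
assert all(abs(z.imag) < 1e-12 for z in fp)
```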

**What we did**

Our primary result is a generalization of the Hawking-Page phase transition for multiboundary wormholes. To understand the thermodynamics (from a 3D quantum-gravity perspective), one starts with a fixed boundary Riemann surface and then looks at the contributions to the partition function from each of the ways to fill in the boundary (each of which is a hyperbolic 3-manifold). We showed that the expected dominant contributions, which are given by handlebodies, are unstable when the kinetic operator is negative, which happens whenever the Hausdorff dimension of the limit set is greater than the scaling dimension of the lightest scalar field living in the bulk. One has to go pretty far down the quantum gravity rabbit hole (black hole) to understand why this is an interesting research direction to pursue, but at least anyone can appreciate the pretty pictures!

Assyrian art fills a gallery in London’s British Museum. *Lamassu* flank the gallery’s entrance. Carvings fill the interior: depictions of soldiers attacking, captives trudging, and kings hunting lions. The artwork’s vastness, its endurance, and the contact with a three-thousand-year-old civilization floor me. I tore myself away as the museum closed one Sunday night.

I visited the British Museum the night before visiting Jonathan Oppenheim’s research group at University College London (UCL). Jonathan combines quantum information theory with thermodynamics. He and others co-invented thermodynamic resource theories (TRTs), which *Quantum Frontiers* regulars will know of. TRTs are quantum-information-theoretic models for systems that exchange energy with their environments.

Energy is conjugate to time: Hamiltonians, mathematical objects that represent energy, represent also translations through time. We measure time with clocks. Little wonder that one can study quantum clocks using a model for energy exchanges.

Mischa Woods, Ralph Silva, and Jonathan used a resource theory to design an autonomous quantum clock. “Autonomous” means that the clock contains all the parts it needs to operate, needs no periodic winding-up, etc. When might we want an autonomous clock? When building quantum devices that operate independently of classical engineers. Or when performing a quantum computation: Computers must perform logical gates at specific times.

Wolfgang Pauli and others studied quantum clocks, the authors recall. How, Pauli asked, would an ideal clock look? Its Hamiltonian, H, would have eigenstates |E⟩. The labels E denote the possible amounts of energy.

The Hamiltonian would be conjugate to a “time operator” T. Let |t⟩ denote an eigenstate of T. This “time state” would equal an even superposition over the |E⟩’s. The clock would occupy the state |t⟩ at time t.
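In symbols, a standard presentation of Pauli’s idealized clock (my notation; the paper’s conventions may differ):

```latex
\hat{H}\,|E\rangle = E\,|E\rangle, \quad E \in (-\infty, \infty), \qquad
|t\rangle \;\propto\; \int_{-\infty}^{\infty} dE \; e^{-iEt/\hbar}\, |E\rangle,
\qquad [\hat{T}, \hat{H}] = i\hbar .
```

These time states evolve simply, e^{−iĤτ/ħ}|t⟩ = |t + τ⟩, which is what lets the clock tick.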

Imagine measuring the clock, to learn the time, or controlling another system with the clock. The interaction would disturb the clock, changing the clock’s state. The disturbance wouldn’t mar the clock’s timekeeping, if the clock were ideal. What would enable an ideal clock to withstand the disturbances? The ability to have any amount of energy: E must stretch from −∞ to ∞. Such clocks can’t exist.

Approximations to them can. Mischa, Ralph, and Jonathan designed a finite-size clock, then characterized how accurately the clock mimics the ideal. (Experts: The clock corresponds to a Hilbert space of finite dimensionality d. The clock begins in a Gaussian state that peaks at one time state |t⟩. The finite-width Gaussian offers more stability than a single time state.)

Disturbances degrade our ability to distinguish instants by measuring the clock. Imagine gazing at a kitchen clock through blurry lenses: You couldn’t distinguish 6:00 from 5:59 or 6:01. Disturbances also hinder the clock’s ability to implement processes, such as gates in a computation, at desired instants.

Mischa & co. quantified these degradations. The errors made by the clock, they found, decay inverse-exponentially with the clock’s size: Grow the clock a little, and the errors shrink a lot.

Time has degraded the *lamassu*, but only a little. You can distinguish feathers in their wings and strands in their beards. People portray such artifacts as having “withstood the flow of time,” or “evaded” it, or “resisted” it. Such portrayals have never appealed to me. I prefer to think of the *lamassu* as surviving not because they clash with time, but because they harmonize with it. The prospect of harmonizing with time—whatever that means—has enticed me throughout my life. The prospect partially underlies my research into time—perhaps childishly, foolishly, I recognize, if I remove my blurry lenses before gazing in the mirror.

The creation of lasting works, like *lamassu*, has enticed me throughout my life. I’ve scrapbooked, archived, and recorded, and tended memories as though they were Great-Grandma’s cookbook. Ancient civilizations began alluring me at age six, partially due to artifacts’ longevity. No wonder I study the second law of thermodynamics.

Yet doing theoretical physics makes no sense from another perspective. The ancient Egyptians sculpted granite, when they could afford it. Gudea, king of the ancient city-state of Lagash, immortalized himself in diorite. I fashion ideas, which lack substance. Imagine playing, rather than rock-paper-scissors, granite-diorite-idea. The idea wouldn’t stand a chance.

Would it? Because an idea lacks substance, it can manifest in many forms. Plato’s cave allegory has manifested as a story, as classroom lectures, on handwritten pages, on word processors and websites, in cartloads of novels, in the film *The Matrix*, in one of the four most memorable advertisements I received from colleges as a high-school junior, and elsewhere. Plato’s allegory has survived since about the fourth century BCE. King Ashurbanipal’s lion-hunt reliefs have survived for only about 200 years longer.

The lion-hunt reliefs—and the *lamassu*—exude a grandness, a majesty that’s attracted me as their longevity has. The nature of time and the perfect clock have as much grandness. Leaving the British Museum’s Assyrian gallery at 6 PM one Sunday, I couldn’t have asked for a more fitting place to find myself, 24 hours later, than a theoretical-physics conversation.

*With thanks to Jonathan, to Álvaro Martín-Alhambra, and to Mischa for their hospitality at UCL; to Ada Cohen for the “Art history of ancient Egypt and the ancient Near East” course for which I’d been hankering for years; to my brother, for transmitting the ancient-civilizations bug; and to my parents, who fed the infection with museum visits.*

*Click here for a follow-up to the quantum-clock paper.*

About Stephen Hawking.

My good friend

Explained how time can end.

And clued us in

On how time can begin.

Always droll,

He spoke about a hole:

“Now, wait a minute, Jack,

A black hole ain’t so black!”

Those immortal words he said,

Which millions now have duly read,

Hit physics like a ton of bricks.

Well, that’s how Stephen got his kicks.

Always grinning through his glasses,

He brought science to the masses,

Displayed a rare capacity

For humor and audacity.

And that’s why, on this somber day,

With relish we can gladly say:

“Thanks, Stephen, for the things you’ve done.

And most of all, thanks for the fun!”

And though there’s more to say, my friend,

This poem, too, must, sadly, end.

However, with that many physicists, you will find a few trying to make science cool, or at least having fun while they try. One relatively untapped market, in my opinion, is montages. Take the Imagine Dragons song Believer, whose music video has lead singer Dan Reynolds mostly getting his ass kicked by veteran brawler Dolph Lundgren. Who says that training montages can’t also be for mental training? Sub out Dan for a young graduate student, replace Dolph with an imposing physicist, and substitute boxing with drama about writing equations on paper or a blackboard. Don’t believe it can be cool? I don’t blame you, but science montages have been done before, playing to science’s mystical side. And with sufficient experience, creativity, and money, I believe the sky is the limit.

But back to more concrete things. Having fun while trying to promote science is the main goal of the March Meeting Rock ‘n Roll Physics Sing-Along — a social and outreach event where a band of musicians, mostly scientists attending the meeting, plays well-known songs whose lyrics have been replaced with science-themed ones. The audience then sings the new, technically oriented lyrics along with the performers. Below is an example with the Smash Mouth song I’m a Believer, but we play all kinds of genres, from power ballads to Britney Spears.

This year, we have an especially exciting line-up as we are joined by professional science entertainer, Einstein’s Girl Gia Mora! Some of you may remember Gia from her performance with John Preskill at One Entangled Evening. She will join us to perform, among other hits, the funky E=mc^2:

The sing-along is run by the curator of all things related to physics songs, singer and songwriter Prof. Walter F. Smith of Haverford College. Adept at using songs to help teach physics, Walter has carefully collected a database of such songs dating back to the early 20th century; he believes that James Clerk Maxwell may have been the first physics song parodist, with his version of the lyrics to the Scotch air Comin’ Thro’ the Rye. You can see James jamming alongside Emmy Noether, Paul Dirac, and Satyendra Bose below, to questionable lyrics. The most well-known US physics song pioneer is Harvard grad Tom Lehrer, who recorded his first album in the 50s. Contrary to scientists’ general tendency to worry constantly about preserving a neutral academic self-image, Lehrer tackled edgy topics with creativity and humor.

The sing-along started in 2006, when the only accompaniment was a guitar and a bongo, and it grew into a full rock band later on. The drums were first played by a Soviet-born physicist named Victor; that has yet to change today, despite it being a different person. The rest of the band this year consists of Walter, his wife Marian McKenzie on the flute, Lev Krayzman from Yale on the guitar, Prof. Esa Räsänen from Tampere University of Technology on the bass, Lenny Campanello from the University of Maryland on the keyboard, and of course the talented Gia Mora on voice. We hope that you can join us next week, as this year’s sing-along is sure to be one for the books!

**March Meeting Rock-n-Roll Physics Sing-along**

Wednesday, March 7, 2018

9:00 PM–10:30 PM

J.W. Marriott Room: Platinum D

See you there!

So relates the narrator of the short story “The Library of Babel.” The Argentine magical realist Jorge Luis Borges wrote the story in 1941.

Librarians are committing suicide partially because they can’t find the books they seek. The librarians are born in, and curate, a library called “infinite” by the narrator. The library consists of hexagonal cells, of staircases, of air shafts, and of closets for answering nature’s call. The narrator has never heard of anyone’s finding an edge of the library. Each hexagon houses 20 shelves, each of which houses 32 books, each of which contains 410 pages, each of which contains 40 lines, each of which consists of about 80 symbols. Every symbol comes from a set of 25: 22 letters, the period, the comma, and the space.

The library, a sage posited, contains every combination of the 25 symbols that satisfy the 410-40-and-80-ish requirement. His compatriots rejoiced:

*All men felt themselves to be the masters of an intact and secret treasure. There was no personal or world problem whose eloquent solution did not exist in some hexagon. [ . . . ] a great deal was said about the Vindications: books of apology and prophecy which vindicated for all time the acts of every man in the universe and retained prodigious arcana for his future. Thousands of the greedy abandoned their sweet native hexagons and rushed up the stairways, urged on by the vain intention of finding their Vindication.*

Probability punctured their joy: “the possibility of a man’s finding his Vindication, or some treacherous variation thereof, can be computed as zero.”

Many-body quantum physicists can empathize with Borges’s librarian.

A handful of us will huddle over a table or cluster in front of a chalkboard.

“Has anyone found this Hamiltonian’s ground space?” someone will ask.^{1}

A Hamiltonian is an observable, a measurable property. Consider a quantum system *S*, such as a set of particles hopping between atoms. We denote the system’s Hamiltonian by *H*. *H* determines how the system’s state changes in time. A musical about *H* swept Broadway last year.

A quantum system’s energy, *E*, might assume any of many possible values. *H* encodes the possible values. The least possible value, *E*_{0}, we call the *ground-state energy*.

Under what condition does *S* have an amount *E*_{0} of energy? *S* must occupy a *ground state*. Consider Olympic snowboarder Shaun White in a half-pipe. He has kinetic energy, or energy of motion, when sliding along the pipe. He gains gravitational energy upon leaving the ground. He has little energy when sitting still on the snow. A quantum analog of that sitting constitutes a ground state.^{2}

Consider, for example, electrons in a magnetic field. Each electron has a property called *spin*, illustrated with an arrow. The arrow’s direction represents the spin’s state. The system occupies a ground state when every arrow points in the same direction as the magnetic field.

Shaun White has as much energy, sitting on the ground in the half-pipe’s center, as he has sitting at the bottom of an edge of the half-pipe. Similarly, a quantum system might have multiple ground states. These states form the *ground space*.

“Has anyone found this Hamiltonian’s ground space?”

“Find” means, here, “identify the form of.” We want to derive a mathematical expression for the quantum analog of “sitting still, at the bottom of the half-pipe.”

“Find” often means “locate.” How do we locate an object such as a library? By identifying its spatial coordinates. We specify coordinates relative to directions, such as north, east, and up. We specify coordinates also when “finding” ground states.

Libraries occupy the physical space we live in. Ground states occupy an abstract mathematical space, a *Hilbert space*. The Hilbert space consists of the (pure) quantum states accessible to the system—loosely speaking, how the spins can orient themselves.

Libraries occupy a three-dimensional space. An *N*-spin system corresponds to a 2^{N}-dimensional Hilbert space. Finding a ground state amounts to identifying roughly 2^{N} complex numbers.

An exponential quantifies also the size of the librarian’s problem. Imagine trying to locate some book in the Library of Babel. How many books should you expect to have to check? How many books does the library hold? Would you have more hope of finding the book, wandering the Library of Babel, or finding a ground state, wandering the Hilbert space? (Please take this question with a grain of whimsy, not as instructions for calculating ground states.)

A book’s first symbol has one of 25 possible values. So does the second symbol. The pair of symbols has one of 25^{2} = 625 possible values. A trio has one of 25^{3} = 15,625 possible values, and so on.

How many symbols does a book contain? About 410 × 40 × 80 ≈ 1,300,000, or roughly a million. The number of books grows exponentially with the number of symbols per book: The library contains about 25^{1,312,000} books. You contain only about 10^{28} atoms. No wonder librarians are committing suicide.
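A quick sanity check of this arithmetic (a sketch: the 410-pages, 40-lines, 80-symbols format comes from the story; the digit count follows from taking logarithms):

```python
# Counting the Library of Babel.
import math

symbols_per_book = 410 * 40 * 80       # pages * lines * symbols per line
assert symbols_per_book == 1_312_000   # about 1.3 million symbols

# The library holds 25 ** symbols_per_book distinct books. The number is
# too large to print, but its decimal digit count is easy to compute.
digits = math.floor(symbols_per_book * math.log10(25)) + 1
assert digits == 1_834_098             # a number with ~1.8 million digits
```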

Do quantum physicists deserve more hope? Physicists want to find ground states of chemical systems. Example systems are discussed here and here. The second paper refers to 65 electrons distributed across 57 orbitals (spatial regions). How large a Hilbert space does this system have? Each electron has a spin that, loosely speaking, can point upward or downward (that corresponds to a two-dimensional Hilbert space). The 57 orbitals and 2 spin directions offer 57 × 2 = 114 one-electron states. One might expect each electron to correspond to a Hilbert space of dimensionality 114. The 65 electrons would correspond to a Hilbert space of dimensionality 114^{65}.

But no two electrons can occupy the same one-electron state, due to Pauli’s exclusion principle. Hence the Hilbert space has dimensionality (114 choose 65), the number of ways in which you can select 65 states from a set of 114 states.

(114 choose 65) equals approximately 5 × 10^{32}. Mathematica (a fancy calculator) can print a one followed by 34 zeroes. Mathematica refuses to print the number of Babel’s books. Pity the librarians more than the physicists.
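The binomial is small enough to check exactly (a sketch; `math.comb` requires Python 3.8+):

```python
# Dimensionality of the Hilbert space for 65 electrons distributed over
# 114 one-electron states (57 orbitals x 2 spin states), with Pauli's
# exclusion principle forbidding repeats: "114 choose 65".
import math

dim = math.comb(114, 65)
assert 10 ** 32 < dim < 10 ** 33   # roughly 5 x 10^32
assert len(str(dim)) == 33         # a 33-digit number: printable, unlike Babel's count
```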

Pity us less when we have quantum computers (QCs). They could find ground states far more quickly than today’s supercomputers. But building QCs is taking about as long as Borges’s narrator wandered the library, searching for “the catalogue of catalogues.”

What would Borges and his librarians make of QCs? QCs will be able to search unstructured databases quickly, via Grover’s algorithm. Babel’s library lacks structure. Grover’s algorithm outperforms classical algorithms just when fed large databases. 25^{1,312,000} books constitute a large database. Researchers seek a “killer app” for QCs. Maybe Babel’s librarians could vindicate quantum computing, and quantum computing could rescue the librarians. If taken with a grain of magical realism.

^{1}Such questions remind me of an Uncle Alfred who’s misplaced his glasses. I half-expect an Auntie Muriel to march up to us physicists. She, sensible in plaid, will cross her arms.

“Where did you last see your ground space?” she’ll ask. “Did you put it on your dresser before going to bed last night? Did you use it at breakfast, to read the newspaper?”

We’ll bow our heads and shuffle off to double-check the kitchen.

^{2}More accurately, a ground state parallels Shaun White’s lying on the ground, stone-cold.

**How I first learned about Fractons**

Back in the early 2000s, a question that kept attracting and frustrating people in quantum information was how to build a quantum hard drive to store quantum information. This is of course a natural question to ask, as quantum computing has been demonstrated to be possible, at least in theory, and experimental progress has shown great potential. It turned out, however, that the question is one of those deceptively enticing ones that are super easy to state but extremely hard to answer. In a classical hard drive, information is stored using magnetism. Quantum information, instead of being just 0 and 1, is represented using superpositions of 0 and 1, and can be probed in non-commutative ways (that is, measuring along different directions can alter previous answers). To store quantum information, we need “magnetism” in all such non-commutative channels. But how can we do that?

At that time, some proposals had been made, but they either involved actively looking for and correcting errors throughout the time during which the information is stored (something we never have to do with our classical hard drives) or required going into four spatial dimensions. Reliable passive storage of quantum information seemed out of reach in the three-dimensional world we live in, even at the level of a proof-of-principle toy model!

Given all the previously failed attempts and without a clue about where else to look, this problem probably looked like a dead-end to many. But not to Jeongwan Haah, a fearless graduate student in Preskill’s group at IQIM at that time, who turned the problem from guesswork into a systematic computer search (over a constrained set of models). The result of the search surprised everyone. Jeongwan found a three-dimensional quantum spin model with physical properties that had never been seen before, making it a better quantum hard drive than any other model that we know of!

The model looks surprising not only to the quantum information community, but even more so to the condensed matter community. It is a strongly interacting quantum many-body model, a subject that has been under intense study in condensed matter physics. Yet it exhibits some very strange behaviors whose existence had not even been suspected. It is a condensed matter discovery made not from real materials in real experiments, but through computer search!

In condensed matter systems, what we know can happen is that elementary excitations can come in the form of point particles – usually called quasi-particles – which can then move around and interact with other excitations. In Jeongwan’s model, now commonly referred to as Haah’s code, elementary excitations still come in the form of point particles, but they cannot freely move around. Instead, if they want to move, four of them have to coordinate with each other to move together, so that they stay at the vertices of a fractal shaped structure! The restricted motion of the quasi-particles leads to slower dynamics at low energy, making the model much better suited for the purpose of storing quantum information.

But how can something like this happen? This is the question that I want to yell out loud every time I read Jeongwan’s papers or listen to his talks. Leaving aside the motivation of building a quantum hard drive, this model presents a grand challenge to the theoretical framework we now have in condensed matter. All of our intuitions break down in predicting the behavior of this model; even some of the most basic assumptions and definitions do not apply.

I felt so uncomfortable and so excited at the same time because there was something out there that should be related to things I know, yet I totally did not understand how. And there was an even bigger problem. I was like a sick person going to a doctor but unable to pinpoint what was wrong. Something must have been wrong, but I didn’t know what that was and I didn’t know how to even begin to look for it. The model looked so weird. Interaction involved eight spins at a time; there was no obvious symmetry other than translation. Jeongwan, with his magic math power, worked out explicitly many of the amazing properties of the model, but that to me only added to the mystery. Where did all these strange properties come from?

**From the unfathomable to the seemingly approachable**

I remained in this superposition of excited state and powerless state for several years, until Jeongwan moved to MIT and posted some papers with Sagar Vijay and Liang Fu in 2015 and 2016.

In these papers, they listed several other models which, similar to Haah’s code, contain quasi-particle excitations whose motion is constrained. The constraints are weaker, and these models do not make good quantum hard drives, but they still represent new condensed matter phenomena. What’s nice about these models is that the interaction is more symmetric, takes a simpler form, or resembles other models we are familiar with. The quasi-particles do not need a fractal-shaped structure to move around; instead they move along a line, in a plane, or at the corners of a rectangle. In fact, as early as 2005 – six years before Haah’s code – Claudio Chamon at Boston University had already proposed a model of this kind. Together with the previous fractal examples, these models are what’s now being referred to as the fracton models. If the original Haah’s code looks like an ET from beyond the Milky Way, these models at least seem to live somewhere in the solar system. So there must be something that we can do to understand them better!

Obviously, I was not the only one who felt this way. A flurry of papers appeared on these “fracton” models. People came at these models armed with their favorite tools in condensed matter, looking for an entry point to crack them open. The two approaches that I found most attractive were the coupled-layer construction and the higher-rank gauge theory, and I worked on these ideas together with Han Ma, Ethan Lake and Michael Hermele. Each approach comes from a different perspective and establishes a connection between fractons and physical models that we are familiar with. In the coupled-layer construction, the connection is to 2D discrete gauge theories, while in the higher-rank approach it is to the 3D gauge theory of electromagnetism.

I was excited about these results. They each point to simple physical mechanisms underlying the existence of fractons in some particular models. By relating these models to things I already know, I feel a bit relieved. But deep down, I know that this is far from the complete story. Our understanding barely goes beyond the particular models discussed in the paper. In condensed matter, we spend a lot of time studying toy models; but toy models are not the end goal. Toy models are only meaningful if they represent some generic feature in a whole class of models. It is not clear at all to what extent this is the case for fractons.

**Step zero: define “order”, define “topological order”**

I gave a talk about these results at KITP last fall under the title “Fracton Topological Order”. It was actually too big a title because all we did was to study specific realizations of individual models and analyze their properties. To claim topological order, one needs to show much more. The word “order” refers to the common properties of a continuous range of models within the same phase. For example, crystalline order refers to the regular lattice organization of atoms in the solid phase within a continuous range of temperature and pressure. When the word “topological” is added in front of “order”, it signifies that such properties are usually directly related to the topology of the system. A prototypical example is the fractional quantum Hall system, whose ground state degeneracy is directly determined by the topology of the manifold the system lives in. For fractons, we are far from an understanding at this level. We cannot answer basic questions like what range of models form a phase, what is the order (the common properties of this whole range of models) characterizing each phase, and in what sense is the order topological. So, the title was more about what I hope will happen than what has already happened.

But it did lead to discussions that could make things happen. After my talk, Zhenghan Wang, a mathematician at Microsoft Station Q, said to me, “I would agree these fracton models are topological if you can show me how to define them on different three manifolds.” Of course! How can I claim anything related to topology if all I know is one model on a cubic lattice with periodic boundary conditions? It is like claiming a linear relation between two quantities with only one data point.

But how to get more data points? Well, from the paper by Haah, Vijay and Fu, we knew how to define the model on cubic lattices. With periodic boundary conditions, the underlying manifold is a three-torus. Is it possible to have a cubic lattice, or something similar, on other three manifolds as well? Usually, this kind of request would be too much to ask. But it turns out that if you whisper your wish to the right mathematician, even the craziest ones can come true. With insightful suggestions from Michael Freedman (the Fields medalist leading Station Q) and Zhenghan, and through the amazing work of Kevin Slagle (U Toronto) and Wilbur Shirley (Caltech), we found that if we make use of a structure called a total foliation, one of the fracton models can be generalized to different kinds of three manifolds, and we can see how the properties of the model are related to certain topological features of the manifold!

Foliation is the process of decomposing a manifold into parallel lower-dimensional layers, called leaves. A total foliation is a set of three foliations which intersect each other transversally. The xy, yz, and zx planes of a cubic lattice form a total foliation, and similar constructions can be made for other three manifolds as well.
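As a minimal illustration in coordinates (my notation, chosen for concreteness rather than taken from the papers): on the three-torus $T^3 = \mathbb{R}^3/\mathbb{Z}^3$, three mutually transversal foliations are given by the level sets of the three coordinates,

```latex
\mathcal{F}_x = \{\, x = c \,\},\qquad
\mathcal{F}_y = \{\, y = c \,\},\qquad
\mathcal{F}_z = \{\, z = c \,\},\qquad c \in [0,1),
```

each leaf being a two-torus. Leaves from two different foliations intersect transversally in a circle, and the three foliations together form a total foliation, matching the xy, yz, and zx planes of the cubic lattice.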

Things start to get technical from here, but the basic lesson we learned about some of the fracton models is that, structure-wise, they pretty much look like an onion. Even though an onion looks like a three-dimensional object from the outside, it actually grows in a layered structure. Some of the properties of the fracton models are simply determined by the layers, and related to the topology of the layers. Once we peel off all the layers, we find that for some models there is nothing left, while for others there is a nontrivial core. This observation allows us to better address the previous questions: we defined a fracton phase (one type of it) as the set of models smoothly related to each other after adding or removing layers; the topological nature of the order is manifested in how the properties of the model are determined by the topology of the layers.

The onion structure is nice, because it allows us to reduce much of the story from 3D to 2D, where we understand things much better. It clarifies many of the weirdnesses of the fracton model we studied, and there are indications that it may apply to a much wider range of fracton models, so we have an exciting road ahead of us. On the other hand, it is also clear that the onion structure does not cover everything. In particular, it does not cover Haah’s code! Haah’s code cannot be built in a layered way, and its properties are in a sense intrinsically three dimensional. So, after finishing this whole journey through the onion field, I will be back to staring at Haah’s code again and wondering what to do with it, as I have been doing for the eight years since Jeongwan’s paper first came out. But maybe this time I will have some better ideas.
