In late 2020, I was sitting by a window in my home office (a.k.a. living room) in Cambridge, Massachusetts. I’d drafted 15 chapters of my book Quantum Steampunk. The epilogue, I’d decided, would outline opportunities for the future of quantum thermodynamics. So I had to come up with opportunities for the future of quantum thermodynamics. The rest of the book had related the foundational insights that quantum thermodynamics provides about the nature of the universe. For instance, quantum thermodynamics had sharpened the second law of thermodynamics, which helps explain time’s arrow, into more-precise statements. Conventional thermodynamics had not only provided foundational insights, but also accompanied the Industrial Revolution, a paragon of practicality. Could quantum thermodynamics, too, offer practical upshots?
Quantum thermodynamicists had designed quantum engines, refrigerators, batteries, and ratchets. Some of these devices could outperform their classical counterparts, according to certain metrics. Experimentalists had even realized some of these devices. But the devices weren’t useful. For instance, a simple quantum engine consisted of one atom. I expected such an atom to produce one electronvolt of energy per engine cycle. (A light bulb emits about 10²¹ electronvolts of light per second.) Cooling the atom down and manipulating it would cost loads more energy. The engine wouldn’t earn its keep.
Autonomous quantum machines offered greater hope for practicality. By autonomous, I mean not requiring time-dependent external control: nobody need twiddle knobs or push buttons to guide the machine through its operation. Such control requires work—organized, coordinated energy. Rather than receiving work, an autonomous machine accesses a cold environment and a hot environment. Heat—random, disorganized energy cheaper than work—flows from the hot to the cold. The machine transforms some of that heat into work to power itself. That is, the machine sources its own work from cheap heat in its surroundings. Some air conditioners operate according to this principle. So can some quantum machines—autonomous quantum machines.
Thermodynamicists had designed autonomous quantum engines and refrigerators. Trapped-ion experimentalists had realized one of the refrigerators, in a groundbreaking result. Still, the autonomous quantum refrigerator wasn’t practical. Keeping the ion cold and maintaining its quantum behavior required substantial work.
My community needed, I wrote in my epilogue, an analogue of solar panels in southern California. (I probably drafted the epilogue during a Boston winter, thinking wistfully of Pasadena.) If you built a solar panel in SoCal, you could sit back and reap the benefits all year. The panel would fulfill its mission without further effort from you. If you built a solar panel in Rochester, you’d have to scrape snow off of it. Also, the panel would provide energy only a few months per year. The cost might not outweigh the benefit. Quantum thermal machines resembled solar panels in Rochester, I wrote. We needed an analogue of SoCal: an appropriate environment. Most of it would be cold (unlike SoCal), so that maintaining a machine’s quantum nature would cost a user almost no extra energy. The setting should also contain a slightly warmer environment, so that net heat would flow. If you deposited an autonomous quantum machine in such a quantum SoCal, the machine would operate on its own.
Where could we find a quantum SoCal? I had no idea.
A few months later, I received an email from quantum experimentalist Simone Gasparinetti. He was setting up a lab at Chalmers University in Sweden. What, he asked, did I see as opportunities for experimental quantum thermodynamics? We’d never met, but we agreed to Zoom. Quantum Steampunk on my mind, I described my desire for practicality. I described autonomous quantum machines. I described my yearning for a quantum SoCal.
I have it, Simone said.
Simone and his colleagues were building a quantum computer using superconducting qubits. The qubits fit on a chip about the size of my hand. To keep the chip cold, the experimentalists put it in a dilution refrigerator. You’ve probably seen photos of dilution refrigerators from Google, IBM, and the like. The fridges tend to be cylindrical, gold-colored monstrosities from which wires stick out. (That is, they look steampunk.) You can easily develop the impression that the cylinder is a quantum computer, but it’s only the fridge.
Not a quantum computer
The fridge, Simone said, resembles an onion: it has multiple layers. Outer layers are warmer, and inner layers are colder. The quantum computer sits in the innermost layer, so that it behaves as quantum mechanically as possible. But sometimes, even the fridge doesn’t keep the computer cold enough.
Imagine that you’ve finished one quantum computation and you’re preparing for the next. The computer has written quantum information to certain qubits, as you’ve probably written on scrap paper while calculating something in a math class. To prepare for your next math assignment, given limited scrap paper, you’d erase your scrap paper. The quantum computer’s qubits need erasing similarly. Erasing, in this context, means cooling down even more than the dilution refrigerator can manage.
Why not use an autonomous quantum refrigerator to cool the scrap-paper qubits?
I loved the idea, for three reasons. First, we could place the quantum refrigerator beside the quantum computer. The dilution refrigerator would already be cold, for the quantum computations’ sake. Therefore, we wouldn’t have to spend (almost any) extra work on keeping the quantum refrigerator cold. Second, Simone could connect the quantum refrigerator to an outer onion layer via a cable. Heat would flow from the warmer outer layer to the colder inner layer. From the heat, the quantum refrigerator could extract work. The quantum refrigerator would use that work to cool computational qubits—to erase quantum scrap paper. The quantum refrigerator would service the quantum computer. So, third, the quantum refrigerator would qualify as practical.
Over the next three years, we brought that vision to life. (By we, I mostly mean Simone’s group, as my group doesn’t have a lab.)
Artist’s conception of the autonomous-quantum-refrigerator chip. Credit: Chalmers University of Technology/Boid AB/NIST.
Postdoc Aamir Ali spearheaded the experiment. Then-master’s student Paul Jamet Suria and PhD student Claudia Castillo-Moreno assisted him. Maryland postdoc Jeffrey M. Epstein began simulating the superconducting qubits numerically, then passed the baton to PhD student José Antonio Marín Guzmán.
The experiment provided a proof of principle: it demonstrated that the quantum refrigerator could operate. The experimentalists didn’t apply the quantum refrigerator in a quantum computation. Also, they didn’t connect the quantum refrigerator to an outer onion layer. Instead, they pumped warm photons to the quantum refrigerator via a cable. But even in such a stripped-down experiment, the quantum refrigerator outperformed my expectations. I thought it would barely lower the “scrap-paper” qubit’s temperature. But that qubit reached a temperature of 22 millikelvin (mK). For comparison: if the qubit had merely sat in the dilution refrigerator, it would have reached a temperature of 45–70 mK. State-of-the-art protocols had lowered scrap-paper qubits’ temperatures to 40–49 mK. So our quantum refrigerator outperformed our competitors, through the lens of temperature. (Our quantum refrigerator cooled more slowly than they did, though.)
Simone, José Antonio, and I have followed up on our autonomous quantum refrigerator with a forward-looking review about useful autonomous quantum machines. Keep an eye out for a blog post about the review…and for what we hope grows into a subfield.
In summary, yes, publishing a popular-science book can benefit one’s research.
At this week’s American Physical Society Global Physics Summit in Anaheim, California, John Preskill spoke at an event celebrating 100 years of groundbreaking advances in quantum mechanics. Here are his remarks.
Welcome, everyone, to this celebration of 100 years of quantum mechanics hosted by the Physical Review Journals. I’m John Preskill and I’m honored by this opportunity to speak today. I was asked by our hosts to express some thoughts appropriate to this occasion and to feel free to share my own personal journey as a physicist. I’ll embrace that charge, including the second part of it, perhaps even more than they intended. But over the next 20 minutes I hope to distill from my own experience some lessons of broader interest.
I began graduate study in 1975, the midpoint of the first 100 years of quantum mechanics, 50 years ago and 50 years after the discovery of quantum mechanics in 1925 that we celebrate here. So I’ll seize this chance to look back at where quantum physics stood 50 years ago, how far we’ve come since then, and what we can anticipate in the years ahead.
As an undergraduate at Princeton, I had many memorable teachers; I’ll mention just one: John Wheeler, who taught a full-year course for sophomores that purported to cover all of physics. Wheeler, having worked with Niels Bohr on nuclear fission, seemed implausibly old, though he was actually 61. It was an idiosyncratic course, particularly because Wheeler did not refrain from sharing with the class his current research obsessions. Black holes were a topic he shared with particular relish, including the controversy at the time concerning whether evidence for black holes had been seen by astronomers. Especially notably, when covering the second law of thermodynamics, he challenged us to ponder what would happen to entropy lost behind a black hole horizon, something that had been addressed by Wheeler’s graduate student Jacob Bekenstein, who had finished his PhD that very year. Bekenstein’s remarkable conclusion that black holes have an intrinsic entropy proportional to the event horizon area delighted the class, and I’ve had many occasions to revisit that insight in the years since then. The lesson being that we should not underestimate the potential impact of sharing our research ideas with undergraduate students.
Stephen Hawking made that connection between entropy and area precise the very next year when he discovered that black holes radiate; his resulting formula for black hole entropy, a beautiful synthesis of relativity, quantum theory, and thermodynamics, ranks as one of the shining achievements in the first 100 years of quantum mechanics. And it raised a deep puzzle pointed out by Hawking himself with which we have wrestled since then, still without complete success — what happens to information that disappears inside black holes?
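For the record (the formula itself was not written out in the talk), the Bekenstein–Hawking entropy of a black hole with horizon area A is

\[
S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 \hbar G},
\]

whose ingredients (Newton's constant, the speed of light, Planck's constant, and Boltzmann's constant) embody the synthesis of relativity, quantum theory, and thermodynamics mentioned above.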
Hawking’s puzzle ignited a titanic struggle between cherished principles. Quantum mechanics tells us that as quantum systems evolve, information encoded in a system can get scrambled into an unrecognizable form, but cannot be irreversibly destroyed. Relativistic causality tells us that information that falls into a black hole, which then evaporates, cannot possibly escape and therefore must be destroyed. Who wins – quantum theory or causality? A widely held view is that quantum mechanics is the victor, that causality should be discarded as a fundamental principle. This calls into question the whole notion of spacetime — is it fundamental, or an approximate property that emerges from a deeper description of how nature works? If emergent, how does it emerge and from what? Fully addressing that challenge we leave to the physicists of the next quantum century.
I made it to graduate school at Harvard and the second half century of quantum mechanics ensued. My generation came along just a little too late to take part in erecting the standard model of particle physics, but I was drawn to particle physics by that intoxicating experimental and theoretical success. And many new ideas were swirling around in the mid and late 70s of which I’ll mention only two. For one, appreciation was growing for the remarkable power of topology in quantum field theory and condensed matter, for example the theory of topological solitons. While theoretical physics and mathematics had diverged during the first 50 years of quantum mechanics, they have frequently crossed paths in the last 50 years, and topology continues to bring both insight and joy to physicists. The other compelling idea was to seek insight into fundamental physics at very short distances by searching for relics from the very early history of the universe. My first publication resulted from contemplating a question that connected topology and cosmology: Would magnetic monopoles be copiously produced in the early universe? To check whether my ideas held water, I consulted not a particle physicist or a cosmologist, but rather a condensed matter physicist (Bert Halperin) who provided helpful advice. The lesson being that scientific opportunities often emerge where different subfields intersect, a realization that has helped to guide my own research over the following decades.
Looking back at my 50 years as a working physicist, what discoveries can the quantumists point to with particular pride and delight?
I was an undergraduate when Phil Anderson proclaimed that More is Different, but as an arrogant would-be particle theorist at the time I did not appreciate how different more can be. In the past 50 years of quantum mechanics no example of emergence was more stunning than the fractional quantum Hall effect. We all know full well that electrons are indivisible particles. So how can it be that in a strongly interacting two-dimensional electron gas an electron can split into quasiparticles each carrying a fraction of its charge? The lesson being: in a strongly-correlated quantum world, miracles can happen. What other extraordinary quantum phases of matter await discovery in the next quantum century?
Another thing I did not adequately appreciate in my student days was atomic physics. Imagine how shocked those who elucidated atomic structure in the 1920s would be by the atomic physics of today. To them, a quantum measurement was an action performed on a large ensemble of similarly prepared systems. Now we routinely grab ahold of a single atom, move it, excite it, read it out, and induce pairs of atoms to interact in precisely controlled ways. When interest in quantum computing took off in the mid-90s, it was ion-trap clock technology that enabled the first quantum processors. Strong coupling between single photons and single atoms in optical and microwave cavities led to circuit quantum electrodynamics, the basis for today’s superconducting quantum computers. The lesson being that advancing our tools often leads to new capabilities we hadn’t anticipated. Now clocks are so accurate that we can detect the gravitational redshift when an atom moves up or down by a millimeter in the earth’s gravitational field. Where will the clocks of the second quantum century take us?
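To put a rough number on that last claim (my arithmetic, not part of the talk): the fractional frequency shift for a height change of one millimeter in Earth's gravity is

\[
\frac{\Delta \nu}{\nu} = \frac{g\,\Delta h}{c^2} \approx \frac{(9.8\ \mathrm{m/s^2})(10^{-3}\ \mathrm{m})}{(3\times 10^{8}\ \mathrm{m/s})^2} \approx 1\times 10^{-19},
\]

which is the level of fractional precision today's best optical clocks have reached.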
Surely one of the great scientific triumphs of recent decades has been the success of LIGO, the laser interferometer gravitational-wave observatory. If you are a gravitational wave scientist now, your phone buzzes so often to announce another black hole merger that it’s become annoying. LIGO would not be possible without advanced laser technology, but aside from that what’s quantum about LIGO? When I came to Caltech in the early 1980s, I learned about a remarkable idea (from Carl Caves) that the sensitivity of an interferometer can be enhanced by a quantum strategy that did not seem at all obvious — injecting squeezed vacuum into the interferometer’s dark port. Now, over 40 years later, LIGO improves its detection rate by using that strategy. The lesson being that theoretical insights can enhance and transform our scientific and technological tools. But sometimes that takes a while.
What else has changed since 50 years ago? Let’s give thanks for the arXiv. When I was a student few scientists would type their own technical papers. It took skill, training, and patience to operate the IBM typewriters of the era. And to communicate our results, we had no email or world wide web. Preprints arrived by snail mail in Manila envelopes, if you were lucky enough to be on the mailing list. The Internet and the arXiv made scientific communication far faster, more convenient, and more democratic, and LaTeX made producing our papers far easier as well. And the success of the arXiv raises vexing questions about the role of journal publication as the next quantum century unfolds.
I made a mid-career shift in research direction, and I’m often asked how that came about. Part of the answer is that, for my generation of particle physicists, the great challenge and opportunity was to clarify the physics beyond the standard model, which we expected to provide a deeper understanding of how nature works. We had great hopes for the new phenomenology that would be unveiled by the Superconducting Super Collider, which was under construction in Texas during the early 90s. The cancellation of that project in 1993 was a great disappointment. The lesson being that sometimes our scientific ambitions are thwarted because the required resources are beyond what society will support. In which case, we need to seek other ways to move forward.
And then the next year, Peter Shor discovered the algorithm for efficiently finding the factors of a large composite integer using a quantum computer. Though computational complexity had not been part of my scientific education, I was awestruck by this discovery. It meant that the difference between hard and easy problems — those we can never hope to solve, and those we can solve with advanced technologies — hinges on our world being quantum mechanical. That excited me because one could anticipate that observing nature through a computational lens would deepen our understanding of fundamental science. I needed to work hard to come up to speed in a field that was new to me — teaching a course helped me a lot.
Ironically, for 4 ½ years in the mid-1980s I sat on the same corridor as Richard Feynman, who had proposed the idea of simulating nature with quantum computers in 1981. And I never talked to Feynman about quantum computing because I had little interest in that topic at the time. But Feynman and I did talk about computation, and in particular we were both very interested in what one could learn about quantum chromodynamics from Euclidean Monte Carlo simulations on conventional computers, which were starting to ramp up in that era. Feynman correctly predicted that it would be a few decades before sufficient computational power would be available to make accurate quantitative predictions about nonperturbative QCD. But it did eventually happen — now lattice QCD is making crucial contributions to the particle physics and nuclear physics programs. The lesson being that as we contemplate quantum computers advancing our understanding of fundamental science, we should keep in mind a time scale of decades.
Where might the next quantum century take us? What will the quantum computers of the future look like, or the classical computers for that matter? Surely the qubits of 100 years from now will be much different and much better than what we have today, and the machine architecture will no doubt be radically different than what we can currently envision. And how will we be using those quantum computers? Will our quantum technology have transformed medicine and neuroscience and our understanding of living matter? Will we be building materials with astonishing properties by assembling matter atom by atom? Will our clocks be accurate enough to detect the stochastic gravitational wave background and so have reached the limit of accuracy beyond which no stable time standard can even be defined? Will quantum networks of telescopes be observing the universe with exquisite precision and what will that reveal? Will we be exploring the high energy frontier with advanced accelerators like muon colliders and what will they teach us? Will we have identified the dark matter and explained the dark energy? Will we have unambiguous evidence of the universe’s inflationary origin? Will we have computed the parameters of the standard model from first principles, or will we have convinced ourselves that’s a hopeless task? Will we have understood the fundamental constituents from which spacetime itself is composed?
There is an elephant in the room. Artificial intelligence is transforming how we do science at a blistering pace. What role will humans play in the advancement of science 100 years from now? Will artificial intelligence have melded with quantum intelligence? Will our instruments gather quantum data Nature provides, transduce it to quantum memories, and process it with quantum computers to discern features of the world that would otherwise have remained deeply hidden?
To a limited degree, in contemplating the future we are guided by the past. Were I asked to list the great ideas about physics to surface over the 50-year span of my career, there are three in particular I would nominate for inclusion on that list. (1) The holographic principle, our best clue about how gravity and quantum physics fit together. (2) Topological quantum order, providing ways to distinguish different phases of quantum matter when particles strongly interact with one another. (3) And quantum error correction, our basis for believing we can precisely control very complex quantum systems, including advanced quantum computers. It’s fascinating that these three ideas are actually quite closely related. The common thread connecting them is that all relate to the behavior of many-particle systems that are highly entangled.
Quantum error correction is the idea that we can protect quantum information from local noise by encoding the information in highly entangled states such that the protected information is inaccessible locally, when we look at just a few particles at a time. Topological quantum order is the idea that different quantum phases of matter can look the same when we observe them locally, but are distinguished by global properties hidden from local probes — in other words such states of matter are quantum memories protected by quantum error correction. The holographic principle is the idea that all the information in a gravitating three-dimensional region of space can be encoded by mapping it to a local quantum field theory on the two-dimensional boundary of the space. And that map is in fact the encoding map of a quantum error-correcting code. These ideas illustrate how as our knowledge advances, different fields of physics are converging on common principles. Will that convergence continue in the second century of quantum mechanics? We’ll see.
As we contemplate the long-term trajectory of quantum science and technology, we are hampered by our limited imaginations. But one way to loosely characterize the difference between the past and the future of quantum science is this: For the first hundred years of quantum mechanics, we achieved great success at understanding the behavior of weakly correlated many-particle systems relevant to, for example, electronic structure, atomic and molecular physics, and quantum optics. The insights gained regarding, for instance, how electrons are transported through semiconductors or how condensates of photons and atoms behave had invaluable scientific and technological impact. The grand challenge and opportunity we face in the second quantum century is acquiring comparable insight into the complex behavior of highly entangled states of many particles, which are well beyond the reach of current theory or computation. This entanglement frontier is vast, inviting, and still largely unexplored. The wonders we encounter in the second century of quantum mechanics, and their implications for human civilization, are bound to supersede by far those of the first century. So let us gratefully acknowledge the quantum heroes of the past and present, and wish good fortune to the quantum explorers of the future.
In January 2016, Caltech’s Institute for Quantum Information and Matter unveiled a YouTube video featuring an extraordinary chess showdown between actor Paul Rudd (a.k.a. Ant-Man) and the legendary Dr. Stephen Hawking. But this was no ordinary match—Rudd had challenged Hawking to a game of Quantum Chess. At the time, Fast Company remarked, “Here we are, less than 10 days away from the biggest advertising football day of the year, and one of the best ads of the week is a 12-minute video of quantum chess from Caltech.” But a Super Bowl ad for what, exactly?
For the past nine years, Quantum Realm Games, with continued generous support from IQIM and other strategic partnerships, has been tirelessly refining the rudimentary Quantum Chess prototype showcased in that now-viral video, transforming it into a fully realized game—one you can play at home or even on a quantum computer. And now, at long last, we’ve reached a major milestone: the launch of Quantum Chess 1.0. You might be wondering—what took us so long?
The answer is simple: developing an AI capable of playing Quantum Chess.
Before we dive into the origin story of the first-ever AI designed to master a truly quantum game, it’s important to understand what enables modern chess AI in the first place.
Chess AI is a vast and complex field, far too deep to explore in full here. For those eager to delve into the details, the Chess Programming Wiki serves as an excellent resource. Instead, this post will focus on what sets Quantum Chess AI apart from its classical counterpart—and the unique challenges we encountered along the way.
With Chess AI, the name of the game is “depth”, at least for versions based on the Minimax strategy conceived by John von Neumann in 1928 (we’ll say a bit about neural-network-based AI later). The basic idea is that the AI will simulate the possible moves each player can make, down to some depth (number of moves) into the future, then decide which one is best based on a set of evaluation criteria (minimizing the maximum loss the opponent can force on you). The faster it can search, the deeper it can go. And the deeper it can go, the better its evaluation of each potential next move is.
Searching into the future can be modelled as a branching tree, where each branch represents a possible move from a given position (board configuration). The average branching factor for chess is about 35. That means that for a given board configuration, there are about 35 different moves to choose from. So if the AI looks 2 ply (moves) ahead, it sees 35×35 moves on average, and this blows up quickly. By 4 ply, the AI already has 1.5 million moves to evaluate.
Modern chess engines, like Stockfish and Leela, gain their strength by looking far into the future. Depth 10 is considered low in these cases; you really need 20+ if you want the engine to return an accurate evaluation of each move under consideration. To handle that many evaluations, these engines use strong heuristics to prune branches (narrowing the width of the tree), so that they don’t need to calculate the exponentially many leaves of the tree. For example, if one of the branches involves losing your Queen, the algorithm may decide to prune that branch and all the moves that come after. But as experienced players can see already, since a Queen sacrifice can sometimes lead to massive gains down the road, such a “naive” heuristic may need to be refined further before it is implemented. Even so, the tension between depth-first and breadth-first search is ever present.
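To make the search concrete, here is a minimal sketch of depth-limited minimax with alpha-beta pruning, in Python. The GameState interface (legal_moves, apply, evaluate) and the toy takeaway game are stand-ins invented for illustration; they are not the Quantum Chess Engine’s actual API or evaluation function.

```python
import math

def minimax(state, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Best achievable score from `state`, searching `depth` plies ahead."""
    moves = list(state.legal_moves())
    if depth == 0 or not moves:
        return state.evaluate()              # leaf: fall back on the heuristic
    if maximizing:
        best = -math.inf
        for move in moves:
            best = max(best, minimax(state.apply(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:                # opponent already has a better line: prune
                break
        return best
    best = math.inf
    for move in moves:
        best = min(best, minimax(state.apply(move), depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

def best_move(state, depth):
    """Pick the legal move with the highest minimax value for the player to move."""
    return max(state.legal_moves(),
               key=lambda m: minimax(state.apply(m), depth - 1, maximizing=False))

class TakeawayState:
    """Toy game standing in for a board state: players alternately take 1 or 2 stones;
    whoever takes the last stone wins. Same interface a chess GameState would expose."""
    def __init__(self, stones, max_to_move=True):
        self.stones, self.max_to_move = stones, max_to_move
    def legal_moves(self):
        return [n for n in (1, 2) if n <= self.stones]
    def apply(self, move):
        return TakeawayState(self.stones - move, not self.max_to_move)
    def evaluate(self):
        if self.stones == 0:                 # the player who just moved took the last stone
            return -1 if self.max_to_move else 1
        return 0                             # non-terminal leaf: no heuristic preference

print(best_move(TakeawayState(4), depth=6))  # prints 1: leaves the opponent a losing pile of 3
```

Real engines add far more sophisticated evaluation and move ordering, but the tension between search depth and branching factor is already visible in this skeleton.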
The addition of split and merge moves in Quantum Chess absolutely explodes the branching factor. Early simulations have shown that it may be in the range of 100-120, but more work is needed to get an accurate count. For all we know, branching could be much bigger. We can get a sense by looking at a single piece, the Queen.
On an otherwise empty chess board, a single Queen on d4 has 27 possible moves (we leave it to the reader to find them all). In Quantum Chess, we add the split move: every piece, besides pawns, can move to any two empty squares it can reach legally. This adds every possible paired combination of standard moves to the list.
But wait, there’s more!
Order matters in Quantum Chess. The Queen can split to d3 and c4, but it can also split to c4 and d3. These subtly different moves can yield different underlying phase structures (given their implementation via a square-root iSWAP gate between the source square and the first target, followed by an iSWAP gate between the source and the second target), potentially changing how interference works on, say, a future merge move. So you get 27*26 = 702 possible moves! And that doesn’t include possible merge moves, which might add another 15-20 branches to each node of our tree.
Do the math and we see that there are roughly 30 times as many moves in Quantum Chess for that queen. Even if we assume the branching factor is only 100, by ply 4 we have 100 million moves to search. We obviously need strong heuristics to do some very aggressive pruning.
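Before moving on: to see the gate-level description of a split move in action, here is a small numpy sketch under the convention stated above, namely a square-root iSWAP between the source square’s occupancy qubit and the first target, followed by an iSWAP between the source and the second target. The one-qubit-per-square encoding and the embedding helper are simplifications for illustration, not the engine’s actual implementation.

```python
import numpy as np

ISWAP = np.array([[1, 0,  0,  0],
                  [0, 0,  1j, 0],
                  [0, 1j, 0,  0],
                  [0, 0,  0,  1]], dtype=complex)
SQRT_ISWAP = np.array([[1, 0,             0,             0],
                       [0, 1/np.sqrt(2),  1j/np.sqrt(2), 0],
                       [0, 1j/np.sqrt(2), 1/np.sqrt(2),  0],
                       [0, 0,             0,             1]], dtype=complex)

def embed(gate, q1, q2, n):
    """Embed a two-qubit gate acting on qubits q1, q2 of an n-qubit register (qubit 0 = MSB)."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for col in range(dim):
        bits = [(col >> (n - 1 - k)) & 1 for k in range(n)]
        sub_in = 2 * bits[q1] + bits[q2]
        for sub_out in range(4):
            amp = gate[sub_out, sub_in]
            if amp == 0:
                continue
            new_bits = list(bits)
            new_bits[q1], new_bits[q2] = sub_out >> 1, sub_out & 1
            row = sum(b << (n - 1 - k) for k, b in enumerate(new_bits))
            U[row, col] += amp
    return U

# Qubits: 0 = source square (e.g., d4), 1 = first target (d3), 2 = second target (c4).
n = 3
split = embed(ISWAP, 0, 2, n) @ embed(SQRT_ISWAP, 0, 1, n)

state = np.zeros(2 ** n, dtype=complex)
state[0b100] = 1.0                      # queen on the source square, both targets empty
state = split @ state

for idx, amp in enumerate(state):
    if abs(amp) > 1e-12:
        print(f"|{idx:03b}>  amplitude {amp:.3f}")
# Prints equal-magnitude amplitudes on |010> and |001>: the queen is now split across
# the two target squares. The gate ordering fixes the phase bookkeeping that a future
# merge move acts on, which is why d3,c4 and c4,d3 count as distinct moves.
```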
But where do we get strong heuristics for a new game? We don’t have centuries of play to study and determine which sequences of moves are good and which aren’t. This brings us to our first attempt at a Quantum Chess AI. Enter StoQfish.
StoQfish
Quantum Chess is based on chess (in fact, you can play regular Chess all the way through if you and your opponent decide to make no quantum moves), which means that chess skill matters. Could we make a strong chess engine work as a quantum chess AI? Stockfish is open source, and incredibly strong, so we started there.
Given the nature of quantum states, the first thing you think of when adapting a classical strategy to a quantum game is to split the quantum superposition underlying the state of the game into a series of classical states, then sample them according to their (squared) amplitudes in the superposition. And that is exactly what we did. We used the Quantum Chess Engine to generate several chess boards by sampling the current state of the game, which can be thought of as a quantum superposition of classical chess configurations, according to the underlying probability distribution. We then passed these boards to Stockfish. Stockfish would, in theory, return its own weighted distribution of the best classical moves. We had some ideas on how to derive split moves from this distribution, but let’s not get ahead of ourselves.
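Schematically, the sampling loop looked something like the Python sketch below. The board representation, the engine_best_move callable, and the vote tallying are hypothetical stand-ins invented for this post; they convey the idea rather than reproduce the actual Quantum Chess Engine or Stockfish interfaces.

```python
import random
from collections import Counter

def suggest_classical_moves(amplitudes, engine_best_move, num_samples=200, rng=random):
    """Tally a classical engine's preferred moves over boards sampled from a superposition.

    `amplitudes` maps classical board descriptions to complex amplitudes (a toy stand-in
    for the game's quantum state); `engine_best_move(board)` is a hypothetical Stockfish
    wrapper that returns the engine's preferred move, or None if it cannot process the
    position (a kingless board, say).
    """
    boards = list(amplitudes)
    probs = [abs(amplitudes[b]) ** 2 for b in boards]   # Born-rule sampling weights
    votes = Counter()
    for _ in range(num_samples):
        board = rng.choices(boards, weights=probs, k=1)[0]
        move = engine_best_move(board)
        if move is None:
            continue   # silently dropping these is exactly the bias problem described below
        votes[move] += 1
    return votes.most_common()

# Tiny demo with made-up data: two classical "worlds", one of which the engine rejects.
demo_amps = {"board_A": 0.5 ** 0.5, "board_B": 0.5 ** 0.5}
def demo_engine(board):
    return {"board_A": "Qd3", "board_B": None}[board]   # board_B is "kingless"
print(suggest_classical_moves(demo_amps, demo_engine, num_samples=50))
```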
This approach had limited success and significant failures. Stockfish is highly optimized for classical chess, which means that there are some positions it cannot process. For example, consider a King in a superposition of two squares: if one of those Kings is captured, some boards sampled after that move will contain no King at all! Similarly, what if a King in superposition is in check, but you’re not worried because the other half of the King is well protected, so you don’t move to protect it? The concept of check is a problem all around, because Quantum Chess doesn’t recognize it. Things like moving “through check” are completely fine.
You can imagine, then, why Stockfish crashes whenever it encounters a board without a King. In classical Chess, there is always a King on the board. In Quantum Chess, the King is somewhere in the chess multiverse, but not necessarily in every board returned by the sampling procedure.
You might wonder whether we couldn’t just throw away the boards that weren’t valid. That’s one strategy, but we’re estimating probabilities by sampling, so discarding some of the data introduces bias into the calculation, which leads to poor outcomes overall.
We tried to introduce a King onto boards where he was missing, but that became its own computational problem: how do you reintroduce the King in a way that doesn’t change the assessment of the position?
We even tried to hack Stockfish to abandon its obsession with the King, but that caused a cascade of other failures, and tracing through the Stockfish codebase became a problem that wasn’t likely to yield a good result.
This approach wasn’t working, but we weren’t done with Stockfish just yet. Instead of asking Stockfish for the next best move given a position, we tried asking Stockfish to evaluate a position. The idea was that we could use the board evaluations in our own Minimax algorithm. However, we ran into similar problems, including the illegal position problem.
So we decided to try writing our own minimax search, with our own evaluation heuristics. The basics are simple enough. A board’s value is related to the value of the pieces on the board and their location. And we could borrow from Stockfish’s heuristics as we saw fit.
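For flavor, here is a sketch of the simplest such heuristic, a probability-weighted material count, in Python. The piece values are the textbook ones; the occupancy input format is a hypothetical simplification of how a quantum board might be exposed, and a real evaluation would also score piece placement, as noted above.

```python
# Textbook piece values in centipawns; uppercase = White, lowercase = Black.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900, "K": 0}

def expected_material(occupancy):
    """Probability-weighted material balance (positive favors White).

    `occupancy` is a hypothetical view of the quantum board: a dict mapping each
    square to a list of (piece, probability) pairs, where the probability is the
    chance that the piece is found on that square upon measurement.
    """
    score = 0.0
    for square, candidates in occupancy.items():
        for piece, prob in candidates:
            value = PIECE_VALUES[piece.upper()]
            score += prob * (value if piece.isupper() else -value)
    return score

# A White queen split equally between d3 and c4 still counts, in expectation, as one queen.
board = {"d3": [("Q", 0.5)], "c4": [("Q", 0.5)], "e1": [("K", 1.0)], "e8": [("k", 1.0)]}
print(expected_material(board))   # 900.0
```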
This gave us Hal 9000. We were sure we’d finally mastered quantum AI. Right? Find out what happened, in the next post.
I’ve worked on topological quantum computation, one of Alexei Kitaev’s brilliant innovations, for around 15 years now. It’s hard to find a more beautiful physics problem, combining spectacular quantum phenomena (non-Abelian anyons) with the promise of transformative technological advances (inherently fault-tolerant quantum computing hardware). Problems offering that sort of combination originally inspired me to explore quantum matter as a graduate student.
Non-Abelian anyons are emergent particles born within certain exotic phases of matter. Their utility for quantum information descends from three deeply related defining features:
(1) Nucleating a collection of well-separated non-Abelian anyons within a host platform generates a set of quantum states with the same energy (at least to an excellent approximation). Local measurements give one essentially no information about which of those quantum states the system populates—i.e., any evidence of what the system is doing is hidden from the observer and, crucially, the environment. In turn, qubits encoded in that space enjoy intrinsic resilience against local environmental perturbations.
(2) Swapping the positions of non-Abelian anyons manipulates the state of the qubits. Swaps can be enacted either by moving anyons around each other as in a shell game, or by performing a sequence of measurements that yields the same effect. Exquisitely precise qubit operations follow depending only on which pairs the user swaps and in what order. Properties (1) and (2) together imply that non-Abelian anyons offer a pathway both to fault-tolerant storage and manipulation of quantum information.
(3) A pair of non-Abelian anyons brought together can “fuse” into multiple different kinds of particles, for instance a boson or a fermion. Detecting the outcome of such a fusion process provides a method for reading out the qubit states that are otherwise hidden when all the anyons are mutually well-separated. Alternatively, non-local measurements (e.g., interferometry) can effectively fuse even well-separated anyons, thus also enabling qubit readout.
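As a concrete example of property (3) (an illustration of mine, not drawn from the list above): for Ising-type anyons, the kind expected in Majorana-based platforms, the fusion rule reads

\[
\sigma \times \sigma = \mathbb{1} + \psi,
\]

meaning that two σ anyons brought together are found either in the vacuum channel or in the fermion channel ψ; which channel occurs is precisely the bit of information that fusion or interferometric readout extracts.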
I entered the field back in 2009, during the last year of my postdoc. Topological quantum computing—once confined largely to the quantum Hall realm—was then in the early stages of a renaissance driven by an explosion of new candidate platforms as well as measurement and manipulation schemes that promised to deliver long-sought control over non-Abelian anyons. The years that followed were phenomenally exciting, with broadly held, palpable enthusiasm for near-term prospects not yet tempered by the practical challenges that would eventually rear their heads.
A PhD comics cartoon on non-Abelian anyons from 2014.
In 2018, near the height of my optimism, I gave an informal blackboard talk in which I speculated on a new kind of forthcoming NISQ era defined by the birth of a Noisy Individual Semi-topological Qubit. To less blatantly rip off John Preskill’s famous acronym, I also—jokingly of course—proposed the alternative nomenclature POST-Q (Piece Of S*** Topological Qubit) era to describe the advent of such a device. The rationale behind those playfully sardonic labels is that the inaugural topological qubit would almost certainly be far from ideal, just as the original transistor appears shockingly crude when compared to modern electronics. You always have to start somewhere. But what does it mean to actually create a topological qubit, and how do you tell that you’ve succeeded—especially given likely POST-Q-era performance?
To my knowledge those questions admit no widely accepted answers, despite implications for both quantum science and society. I would like to propose defining an elementary topological qubit as follows:
A device that leverages non-Abelian anyons to demonstrably encode and manipulate a single qubit in a topologically protected fashion.
Some of the above words warrant elaboration. As alluded to above, non-Abelian anyons can passively encode quantum information—a capability that by itself furnishes a quantum memory. That’s the “encode” part. The “manipulate” criterion additionally entails exploiting another aspect of what makes non-Abelian anyons special—their behavior under swaps—to enact gate operations. Both the encoding and manipulation should benefit from intrinsic fault-tolerance, hence the “topologically protected fashion” qualifier. And very importantly, these features should be “demonstrably” verified. For instance, creating a device hosting the requisite number of anyons needed to define a qubit does not guarantee the all-important property of topological protection. Hurdles can still arise, among them: if the anyons are not sufficiently well-separated, then the qubit states will lack the coveted immunity from environmental perturbations; thermal and/or non-equilibrium effects might still induce significant errors (e.g., by exciting the system into other unwanted states); and measurements—for readout and possibly also manipulation—may lack the fidelity required to fruitfully exploit topological protection even if present in the qubit states themselves.
The preceding discussion raises a natural follow-up question: How do you verify topological protection in practice? One way forward involves probing qubit lifetimes, and fidelities of gates resulting from anyon swaps, upon varying some global control knob like magnetic field or gate voltage. As the system moves deeper into the phase of matter hosting non-Abelian anyons, both the lifetime and gate fidelities ought to improve dramatically—reflecting the onset of bona fide topological protection. First-generation “semi-topological” devices will probably fare modestly at best, though one can at least hope to recover general trends in line with this expectation.
By the above proposed definition, which I contend is stringent yet reasonable, realization of a topological qubit remains an ongoing effort. Fortunately the journey to that end offers many significant science and engineering milestones worth celebrating in their own right. Examples include:
Platform verification. This most indirect milestone evidences the formation of a non-Abelian phase of matter through (thermal or charge) Hall conductance measurements, detection of some anticipated quantum phase transition, etc.
Detection of non-Abelian anyons. This step could involve conductance, heat capacity, magnetization, or other types of measurements designed to support the emergence of either individual anyons or a collection of anyons. Notably, such techniques need not reveal the precise quantum state encoded by the anyons—which presents a subtler challenge.
Establishing readout capabilities. Here one would demonstrate experimental techniques, interferometry for example, that in principle can address that key challenge of quantum state readout, even if not directly applied yet to a system hosting non-Abelian anyons.
Fusion protocols. Readout capabilities open the door to more direct tests of the hallmark behavior predicted for a putative topological qubit. One fascinating experiment involves protocols that directly test non-Abelian anyon fusion properties. Successful implementation would solidify readout capabilities applied to an actual candidate topological qubit device.
Probing qubit lifetimes. Fusion protocols further pave the way to measuring the qubit coherence times, e.g., T1 and T2—addressing directly the extent of topological protection of the states generated by non-Abelian anyons. Behavior clearly conforming to the trends highlighted above could certify the device as a topological quantum memory. (Personally, I most anxiously await this milestone.)
Fault-tolerant gates from anyon swaps. Likely the most advanced milestone, successfully implementing anyon swaps, again with appropriate trends in gate fidelity, would establish the final component of an elementary topological qubit.
Most experiments to date focus on the first two items above, platform verification and anyon detection. Microsoft’s recent Nature paper, together with the simultaneous announcement of supplementary new results, combines efforts in those areas with experiments aiming to establish the interferometric readout capabilities needed for a topological qubit. Fusion, (idle) qubit lifetime measurements, and anyon swaps have yet to be demonstrated in any candidate topological quantum computing platform, but at least partially feature in Microsoft’s future roadmap. It will be fascinating to see how that effort evolves, especially given the aggressive timescales predicted by Microsoft for useful topological quantum hardware. Public reactions so far range from cautious optimism to ardent skepticism; data will hopefully settle the situation one way or another in the near future. My own take is that while Microsoft’s progress towards qubit readout is a welcome advance that has value regardless of the nature of the system to which those techniques are currently applied, convincing evidence of topological protection may still be far off.
In the meantime, I maintain the steadfast conviction that topological qubits are most certainly worth pursuing—in a broad range of platforms. Non-Abelian quantum Hall states seem to be resurgent candidates, and should not be discounted. Moreover, the advent of ultra-pure, highly tunable 2D materials provides new settings in which one can envision engineering non-Abelian anyon devices with complementary advantages (and disadvantages) compared to previously explored settings. Other, less obvious contenders may also rise at some point. The prospect of discovering new emergent phenomena that mitigate the need for quantum error correction warrants continued effort with an open mind.