The most ingenious invention to surprise me at CERN was a box of chocolates. CERN is a multinational particle-physics collaboration. Based in Geneva, CERN is famous for having “the world’s largest and most powerful accelerator,” according to its website. So a physicist will take for granted its colossal magnets, subatomic finesse, and petabytes of experimental data.
But I wasn’t expecting the chocolates.
In the main cafeteria, beside the cash registers, stood stacks of Toblerone. Sweet-tooth owners worldwide recognize the yellow triangular prisms stamped with Toblerone’s red logo. But I’d never seen such a prism emblazoned with CERN’s name. Scientists visit CERN from across the globe, and probably many return with Swiss-chocolate souvenirs. What better way to promulgate CERN’s influence than by coupling Switzerland’s scientific might with its culinary?1
I visited CERN last November for Sparks!, an annual public-outreach event. The evening’s speakers and performers offer perspectives on a scientific topic relevant to CERN. This year’s event highlighted quantum technologies. Physicist Sofia Vallecorsa described CERN’s Quantum Technology Initiative, and IBM philosopher Mira Wolf-Bauwens discussed ethical implications of quantum technologies. (Yes, you read that correctly: “IBM philosopher.”) Dancers Wenchi Su and I-Fang Lin presented an audiovisual performance, Rachel Maze elucidated government policies, and I spoke about quantum steampunk.
Around Sparks!, I played the physicist tourist: presented an academic talk, descended to an underground detector site, and shot the scientific breeze with members of the Quantum Technology Initiative. (What, don’t you present academic talks while touristing?) I’d never visited CERN before, but much of it felt eerily familiar.
A theoretical-physics student studies particle physics and quantum field theory (the mathematical framework behind particle physics) en route to a PhD. CERN scientists accelerate particles to high speeds, smash them together, and analyze the resulting debris. The higher the particles’ initial energies, the smaller the debris’s components, and the more elementary the physics we can infer. CERN made international headlines in 2012 for observing evidence of the Higgs boson, the particle that endows other particles with masses. As a scientist noted during my visit, one can infer CERN’s impact from how even Auto World (if I recall correctly) covered the Higgs discovery. Friends of mine process data generated by CERN, and faculty I met at Caltech helped design CERN experiments. When I mentioned to a colleague that I’d be flying to Geneva, they responded, “Oh, are you visiting CERN?” All told, a physicist can avoid CERN about as easily as one can avoid the Panama Canal en route from the Atlantic Ocean to the Pacific through Central America. So, although I’d never visited, CERN felt almost like a former stomping ground. It was the details that surprised me.
Familiar book, new (CERN) bookstore.
Take the underground caverns. CERN experiments take place deep underground, where too few cosmic rays reach to muck with observations much. I visited the LHCb experiment, which spotlights a particle called the “beauty quark” in Europe and the less complimentary “bottom quark” in the US. LHCb is the first experiment that I learned has its own X/Twitter account. Colloquia (weekly departmental talks at my universities) had prepared me for the 100-meter descent underground, for the hard hats we’d have to wear, and for the detector many times larger than I.
A photo of the type bandied about in particle-physics classes
A less famous hard-hat photo, showing a retired detector’s size.
But I hadn’t anticipated the bright, single-tone colors. Between the hard hats and experimental components, I felt as though I were inside the Google logo.
Or take CERN’s campus. I wandered around it for a while before a feeling of nostalgia brought me up short: I was feeling lost in precisely the same way in which I’d felt lost countless times at MIT. Numbers, rather than names, label both MIT’s and CERN’s buildings. Somebody must have chosen which number goes where by throwing darts at a map while blindfolded. Part of CERN’s hostel, building 39, neighbors buildings 222 and 577. I shouldn’t wonder to discover, someday, that the CERN building I’m searching for has wandered off to MIT.
Part of the CERN map. Can you explain it?
Between the buildings wend streets named after famous particle physicists. I nodded greetings to Einstein, Maxwell, Democritus (or Démocrite, as the French Swiss write), and Coulomb. But I hadn’t anticipated how much civil engineers venerate particle physicists. So many physicists did CERN’s designers stuff into walkways that the campus ran out of streets and had to recycle them. Route W. F. Weisskopf turns into Route R. P. Feynman at a…well, at nothing notable—not a fork or even a spoon. I applaud the enthusiasm for history; CERN just achieves feats in navigability that even MIT hasn’t.
The familiar mingled with the unfamiliar even in the crowd on campus. I was expecting to recognize only the personnel I’d coordinated with electronically. But three faces surprised me at my academic talk. I’d met those three physicists through different channels—a summer school in Malta, Harvard collaborators, and the University of Maryland—at different times over the years. But they happened to be visiting CERN at the same time as I, despite their not participating in Sparks! I’m half-reminded of the book Roughing It, which describes how Mark Twain traveled the American West via stagecoach during the 1860s. He ran into a long-lost friend “on top of the Rocky Mountains thousands of miles from home.” Exchange “on top of the Rockies” for “near the Alps” and “thousands of miles” for “even more thousands of miles.”
CERN unites physicists. We learn about its discoveries in classes, we collaborate on its research or have friends who do, we see pictures of its detectors in colloquia, and we link to its science-communication pages in blog posts. We respect CERN, and I hope we can be forgiven for fondly poking a little fun at it. So successfully has CERN spread its influence, I felt a sense of recognition upon arriving.
I didn’t buy any CERN Toblerones. But I arrived home with 4.5 pounds of other chocolates, which I distributed to family and friends, the thermodynamics lunch group I run at the University of Maryland, and—perhaps most importantly—my research group. I’ll take a leaf out of CERN’s book: to hook students on fundamental physics, start early, and don’t stint on the sweets.
With thanks to Claudia Marcelloni, Alberto Di Meglio, Michael Doser, Antonella Del Rosso, Anastasiia Lazuka, Salome Rohr, Lydia Piper, and Paulina Birtwistle for inviting me to, and hosting me at, CERN.
1After returning home, I learned that an external company runs CERN’s cafeterias and that the company orders and sells the Toblerones. Still, the idea is brilliant.
On December 6, I gave a keynote address at the Q2B 2023 Conference in Silicon Valley. Here is a transcript of my remarks. The slides I presented are here. A video of my presentation is here.
Toward quantum value
The theme of this year’s Q2B meeting is “The Roadmap to Quantum Value.” I interpret “quantum value” as meaning applications of quantum computing that have practical utility for end-users in business. So I’ll begin by reiterating a point I have made repeatedly in previous appearances at Q2B. As best we currently understand, the path to economic impact is the road through fault-tolerant quantum computing. And that poses daunting challenges for our field and for the quantum industry.
We are in the NISQ era. NISQ (rhymes with “risk”) is an acronym meaning “Noisy Intermediate-Scale Quantum.” Here “intermediate-scale” conveys that current quantum computing platforms with of order 100 qubits are difficult to simulate by brute force using the most powerful currently existing supercomputers. “Noisy” reminds us that today’s quantum processors are not error-corrected, and noise is a serious limitation on their computational power. NISQ technology already has noteworthy scientific value. But as of now there is no proposed application of NISQ computing with commercial value for which quantum advantage has been demonstrated when compared to the best classical hardware running the best algorithms for solving the same problems. Furthermore, currently there are no persuasive theoretical arguments indicating that commercially viable applications will be found that do not use quantum error-correcting codes and fault-tolerant quantum computing.
A useful survey of quantum computing applications, over 300 pages long, recently appeared, providing rough estimates of end-to-end run times for various quantum algorithms. This is hardly the last word on the subject — new applications are continually proposed, and better implementations of existing algorithms continually arise. But it is a valuable snapshot of what we understand today, and it is sobering.
There can be quantum advantage in some applications of quantum computing to optimization, finance, and machine learning. But in this application area, the speedups are typically at best quadratic, meaning the quantum run time scales as the square root of the classical run time. So the advantage kicks in only for very large problem instances and deep circuits, which we won’t be able to execute without error correction.
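To get a feel for how large “very large” can be, here is a back-of-envelope sketch in Python. All the constants are illustrative assumptions on my part, not measured figures for any device: a classical machine executing a billion primitive steps per second versus a fault-tolerant quantum machine executing a hundred thousand logical gates per second.

```python
# Back-of-envelope crossover for a quadratic quantum speedup.
# The rates below are illustrative assumptions, not measured figures.
CLASSICAL_OPS_PER_SEC = 1e9  # assumed classical step rate
QUANTUM_OPS_PER_SEC = 1e5    # assumed fault-tolerant logical-gate rate

def classical_seconds(n):
    """Classical run time for an instance requiring n primitive steps."""
    return n / CLASSICAL_OPS_PER_SEC

def quantum_seconds(n):
    """Quantum run time with a quadratic speedup: ~sqrt(n) logical gates."""
    return n**0.5 / QUANTUM_OPS_PER_SEC

# Find the first power of ten at which the quantum machine wins.
n = 1
while quantum_seconds(n) >= classical_seconds(n):
    n *= 10
print(f"crossover near n = {n:.0e}")  # ~1e9 steps under these assumptions
```

Under these (charitable) assumptions, the quantum machine pulls ahead only once the classical computation itself takes about a second; a slower logical-gate rate pushes the crossover out further.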
Larger polynomial advantage and perhaps superpolynomial advantage are possible in applications to chemistry and materials science, but these may require at least hundreds of very well-protected logical qubits, and hundreds of millions of very high-fidelity logical gates, if not more. Quantum fault tolerance will be needed to run these applications, and fault tolerance has a hefty cost in both the number of physical qubits and the number of physical gates required. We should also bear in mind that the speed of logical gates is relevant, since the run time as measured by the wall clock will be an important determinant of the value of quantum algorithms.
Overcoming noise in quantum devices
Already in today’s quantum processors steps are taken to address limitations imposed by the noise — we use error mitigation methods like zero noise extrapolation or probabilistic error cancellation. These methods work effectively at extending the size of the circuits we can execute with useful fidelity. But the asymptotic cost scales exponentially with the size of the circuit, so error mitigation alone may not suffice to reach quantum value. Quantum error correction, on the other hand, scales much more favorably, like a power of a logarithm of the circuit size. But quantum error correction is not practical yet. To make use of it, we’ll need better two-qubit gate fidelities, many more physical qubits, robust systems to control those qubits, as well as the ability to perform fast and reliable mid-circuit measurements and qubit resets; all these are technically demanding goals.
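For readers who haven’t seen error mitigation in action, here is a minimal sketch of the zero-noise-extrapolation idea mentioned above, with entirely made-up numbers: run the circuit at several artificially amplified noise levels, then extrapolate the measured expectation value back to zero noise.

```python
import numpy as np

# Toy zero-noise extrapolation (ZNE). The noise model and numbers are
# made up for illustration; real experiments amplify noise (e.g., by
# stretching gates) and measure expectation values at each amplification.
rng = np.random.default_rng(7)
ideal = 1.0  # the noiseless expectation value we are trying to recover

def noisy_expectation(scale):
    """Toy model: exponential damping in the noise-scale factor, plus shot noise."""
    return ideal * np.exp(-0.05 * scale) + rng.normal(0.0, 0.003)

scales = np.array([1.0, 1.5, 2.0, 2.5, 3.0])  # amplified noise levels
values = np.array([noisy_expectation(s) for s in scales])

# Fit a low-degree polynomial in the scale factor; evaluate at scale = 0.
coeffs = np.polyfit(scales, values, deg=2)
print(f"raw value at scale 1: {values[0]:.4f}")
print(f"ZNE estimate at zero: {np.polyval(coeffs, 0.0):.4f} (ideal: {ideal})")
```

The exponential damping in this toy model also hints at why the cost of mitigation grows exponentially with circuit size: the signal being extrapolated shrinks exponentially as circuits deepen.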
To get a feel for the overhead cost of fault-tolerant quantum computing, consider the surface code — it’s presumed to be the best near-term prospect for achieving quantum error correction, because it has a high accuracy threshold and requires only geometrically local processing in two dimensions. Once the physical two-qubit error rate is below the threshold value of about 1%, the probability of a logical error per error correction cycle declines exponentially as we increase the code distance d:
$P_{\rm logical} = (0.1)\left(P_{\rm physical}/P_{\rm threshold}\right)^{(d+1)/2},$
where the number of physical qubits in the code block (which encodes a single protected qubit) is the distance squared.
Suppose we wish to execute a circuit with 1000 qubits and 100 million time steps. Then we want the probability of a logical error per cycle to be $10^{-11}$. Assuming the physical error rate is $10^{-3}$, better than what is currently achieved in multi-qubit devices, from this formula we infer that we need a code distance of 19, and hence 361 physical qubits to encode each logical qubit, and a comparable number of ancilla qubits for syndrome measurement — hence over 700 physical qubits per logical qubit, or a total of nearly a million physical qubits. If the physical error rate improves to $10^{-4}$ someday, that cost is reduced, but we’ll still need hundreds of thousands of physical qubits if we rely on the surface code to protect this circuit.
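For readers who want to turn the crank themselves, here is the same arithmetic as a short script, using the formula and round numbers above (rough estimates, not guarantees about any particular device):

```python
# Surface-code overhead estimate, per the formula above:
#   P_logical = 0.1 * (P_physical / P_threshold) ** ((d + 1) / 2)
P_THRESHOLD = 1e-2  # approximate surface-code accuracy threshold

def required_distance(p_physical, p_logical_target):
    """Smallest odd code distance meeting the target logical error rate."""
    d = 3
    while 0.1 * (p_physical / P_THRESHOLD) ** ((d + 1) / 2) > p_logical_target:
        d += 2
    return d

# The circuit above: 1000 logical qubits, 1e8 time steps,
# so we want a logical error rate of ~1e-11 per qubit per cycle.
target = 1e-11
for p_phys in (1e-3, 1e-4):
    d = required_distance(p_phys, target)
    per_logical = 2 * d**2  # d^2 data qubits, roughly doubled for ancillas
    print(f"p_phys = {p_phys:.0e}: distance {d}, "
          f"~{per_logical} physical qubits per logical qubit, "
          f"~{1000 * per_logical:,} physical qubits in total")
```

Running it reproduces the numbers quoted above: distance 19 and nearly a million physical qubits at an error rate of $10^{-3}$, versus distance 9 and on the order of $10^5$ at $10^{-4}$.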
Progress toward quantum error correction
The study of error correction is gathering momentum, and I’d like to highlight some recent experimental and theoretical progress. Specifically, I’ll remark on three promising directions, all with the potential to hasten the arrival of the fault-tolerant era: erasure conversion, biased noise, and more efficient quantum codes.
Erasure conversion
Error correction is more effective if we know when and where the errors occurred. To appreciate the idea, consider the case of a classical repetition code that protects against bit flips. If we don’t know which bits have errors we can decode successfully by majority voting, assuming that fewer than half the bits have errors. But if errors are heralded then we can decode successfully by just looking at any one of the undamaged bits. In quantum codes the details are more complicated but the same principle applies — we can recover more effectively if so-called erasure errors dominate; that is, if we know which qubits are damaged and in which time steps. “Erasure conversion” means fashioning a processor such that the dominant errors are erasure errors.
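A toy Monte Carlo makes the contrast concrete. The sketch below (illustrative only) compares a five-bit repetition code decoded by majority vote, with error locations unknown, against the same code when every error is heralded as an erasure:

```python
import random

def majority_vote_succeeds(n_bits, p, rng):
    """Unheralded flips: decoding succeeds iff fewer than half the bits flip."""
    flips = sum(rng.random() < p for _ in range(n_bits))
    return flips < (n_bits + 1) // 2

def erasure_decoding_succeeds(n_bits, p, rng):
    """Heralded erasures: decoding succeeds iff at least one bit survives."""
    erased = sum(rng.random() < p for _ in range(n_bits))
    return erased < n_bits

rng = random.Random(42)
n_bits, p, shots = 5, 0.3, 100_000
maj = sum(majority_vote_succeeds(n_bits, p, rng) for _ in range(shots)) / shots
era = sum(erasure_decoding_succeeds(n_bits, p, rng) for _ in range(shots)) / shots
print(f"majority-vote success:    {maj:.3f}")  # ~0.84 at these parameters
print(f"erasure-decoding success: {era:.3f}")  # ~0.998
```

With a 30% error rate, majority voting fails whenever three or more bits flip, while erasure decoding fails only when all five bits are erased.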
We can make use of this idea if the dominant errors exit the computational space of the qubit, so that an error can be detected without disturbing the coherence of undamaged qubits. One realization is with alkaline-earth Rydberg atoms in optical tweezers, where 0 is encoded as a low-energy state, and 1 is a highly excited Rydberg state. The dominant error is the spontaneous decay of the 1 to a lower-energy state. But if the atomic level structure and the encoding allow, 1 usually decays not to a 0, but rather to another state g. We can check whether the g state is occupied, to detect whether or not the error occurred, without disturbing a coherent superposition of 0 and 1.
Erasure conversion can also be arranged in superconducting devices, by using a so-called dual-rail encoding of the qubit in a pair of transmons or a pair of microwave resonators. With two resonators, for example, we can encode a qubit by placing a single photon in one resonator or the other. The dominant error is loss of the photon, causing either the 01 state or the 10 state to decay to 00. One can check whether the state is 00, detecting whether the error occurred, without disturbing a coherent superposition of 01 and 10.
Erasure detection has been successfully demonstrated in recent months, for both atomic (here and here) and superconducting (here and here) qubit encodings.
Biased noise
Another setting in which the effectiveness of quantum error correction can be enhanced is when the noise is highly biased. Quantum error correction is more difficult than classical error correction partly because more types of errors can occur — a qubit can flip in the standard basis, or it can flip in the complementary basis, what we call a phase error. In suitably designed quantum hardware the bit flips are highly suppressed, so we can concentrate the error-correcting power of the code on protecting against phase errors. For this scheme to work, it is important that phase errors occurring during the execution of a quantum gate do not propagate to become bit-flip errors. And it was realized just a few years ago that such bias-preserving gates are possible for qubits encoded in continuous variable systems like microwave resonators.
Specifically, we may consider a cat code, in which the encoded 0 and encoded 1 are coherent states, well separated in phase space. Then bit flips are exponentially suppressed as the mean photon number in the resonator increases. The main source of error, then, is photon loss from the resonator, which induces a phase error for the cat qubit, with an error rate that increases only linearly with photon number. We can then strike a balance, choosing a photon number in the resonator large enough to provide physical protection against bit flips, and then use a classical code like the repetition code to build a logical qubit well protected against phase flips as well.
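The balance can be illustrated with a schematic error model. In the sketch below, the prefactors and exponents are illustrative assumptions rather than device parameters; the point is the structure, with bit flips suppressed exponentially, phase flips growing linearly, and a repetition code mopping up the latter.

```python
from math import comb, exp

KAPPA = 1e-3  # assumed baseline error scale per cycle (illustrative)

def bit_flip_rate(nbar):
    """Bit flips: suppressed exponentially in the mean photon number."""
    return KAPPA * exp(-2 * nbar)

def phase_flip_rate(nbar):
    """Phase flips (from photon loss): grow only linearly with photon number."""
    return KAPPA * nbar

def repetition_failure(p, n):
    """Majority-vote failure probability for an n-qubit repetition code."""
    k0 = (n + 1) // 2
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k0, n + 1))

n = 11  # repetition-code length, protecting against phase flips
for nbar in (2, 4, 8):
    print(f"nbar = {nbar}: bit flip ~{bit_flip_rate(nbar):.1e}, "
          f"phase flip ~{phase_flip_rate(nbar):.1e}, "
          f"logical phase flip ~{repetition_failure(phase_flip_rate(nbar), n):.1e}")
```

Raising the photon number crushes the bit-flip rate but feeds the phase-flip rate, so the photon number and the repetition-code length must be chosen jointly.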
Work on such repetition cat codes is ongoing (see here, here, and here), and we can expect to hear about progress in that direction in the coming months.
More efficient codes
Another exciting development has been the recent discovery of quantum codes that are far more efficient than the surface code. These include constant-rate codes, in which the number of protected qubits scales linearly with the number of physical qubits in the code block, in contrast to the surface code, which protects just a single logical qubit per block. Furthermore, such codes can have constant relative distance, meaning that the distance of the code, a rough measure of how many errors can be corrected, scales linearly with the block size rather than the square root scaling attained by the surface code.
These new high-rate codes can have a relatively high accuracy threshold, can be efficiently decoded, and schemes for executing fault-tolerant logical gates are currently under development.
A drawback of the high-rate codes is that, to extract error syndromes, geometrically local processing in two dimensions is not sufficient — long-range operations are needed. Nonlocality can be achieved through movement of qubits in neutral atom tweezer arrays or ion traps, or one can use the native long-range coupling in an ion trap processor. Long-range coupling is more challenging to achieve in superconducting processors, but should be possible.
An example with potential near-term relevance is a recently discovered code with distance 12 and 144 physical qubits. In contrast to the surface code with similar distance and length, which encodes just a single logical qubit, this code protects 12 logical qubits, a significant improvement in encoding efficiency.
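A quick comparison of qubit counts conveys the gain (a rough comparison that ignores ancilla overheads, decoding, and logical-gate costs):

```python
# Encoding-efficiency comparison at code distance d = 12.
d = 12

surface_qubits, surface_logical = d * d, 1  # surface code: 144 data qubits, 1 logical
new_qubits, new_logical = 144, 12           # the [[144, 12, 12]] code from the text

print(f"surface code:  {surface_qubits / surface_logical:.0f} data qubits per logical qubit")
print(f"[[144,12,12]]: {new_qubits / new_logical:.0f} data qubits per logical qubit")
```

At the same distance and block length, the data-qubit cost per logical qubit drops from 144 to 12.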
The quest for practical quantum error correction offers numerous examples like these of co-design. Quantum error correction schemes are adapted to the features of the hardware, and ideas about quantum error correction guide the realization of new hardware capabilities. This fruitful interplay will surely continue.
An exciting time for Rydberg atom arrays
In this year’s hardware news, now is a particularly exciting time for platforms based on Rydberg atoms trapped in optical tweezer arrays. We can anticipate that Rydberg platforms will lead the progress in quantum error correction for at least the next few years, if two-qubit gate fidelities continue to improve. Thousands of qubits can be controlled, and geometrically nonlocal operations can be achieved by reconfiguring the atomic positions. Further improvement in error correction performance might be possible by means of erasure conversion. Significant progress in error correction using Rydberg platforms is reported in a paper published today.
But there are caveats. So far, repeatable error syndrome measurement has not been demonstrated. For that purpose, continuous loading of fresh atoms needs to be developed. And both the readout and atomic movement are relatively slow, which limits the clock speed.
Movability of atomic qubits will be highly enabling in the short run. But in the longer run, movement imposes serious limitations on clock speed unless much faster movement can be achieved. As things currently stand, one can’t rapidly accelerate an atom without shaking it loose from an optical tweezer, or rapidly accelerate an ion without heating its motional state substantially. To attain practical quantum computing using Rydberg arrays, or ion traps, we’ll eventually need to make the clock speed much faster.
Cosmic rays!
To be fair, other platforms face serious threats as well. One is the vulnerability of superconducting circuits to ionizing radiation. Cosmic ray muons for example will occasionally deposit a large amount of energy in a superconducting circuit, creating many phonons which in turn break Cooper pairs and induce qubit errors in a large region of the chip, potentially overwhelming the error-correcting power of the quantum code. What can we do? We might go deep underground to reduce the muon flux, but that’s expensive and inconvenient. We could add an additional layer of coding to protect against an event that wipes out an entire surface code block; that would increase the overhead cost of error correction. Or maybe modifications to the hardware can strengthen robustness against ionizing radiation, but it is not clear how to do that.
Outlook
Our field and the quantum industry continue to face a pressing question: How will we scale up to quantum computing systems that can solve hard problems? The honest answer is: We don’t know yet. All proposed hardware platforms need to overcome serious challenges. Whatever technologies may seem to be in the lead over, say, the next 10 years might not be the best long-term solution. For that reason, it remains essential at this stage to develop a broad array of hardware platforms in parallel.
Today’s NISQ technology is already scientifically useful, and that scientific value will continue to rise as processors advance. The path to business value is longer, and progress will be gradual. Above all, we have good reason to believe that to attain quantum value, to realize the grand aspirations that we all share for quantum computing, we must follow the road to fault tolerance. That awareness should inform our thinking, our strategy, and our investments now and in the years ahead.
Crossing the quantum chasm (image generated using Midjourney)
Mid-afternoon, one Saturday late in September, I forgot where I was. I forgot that I was visiting Seattle for the second time; I forgot that I’d just finished co-organizing a workshop partially about nuclear physics for the first time. I’d arrived at a crowded doorway in the Chihuly Garden and Glass museum, and a froth of blue was towering above the onlookers in front of me. Glass tentacles, ranging from ultramarine through turquoise to clear, extended from the froth. Golden conch shells, starfish, and mollusks rode the waves below. The vision drove everything else from my mind for an instant.
Much had been weighing on my mind that week. The previous day had marked the end of a workshop hosted by the Inqubator for Quantum Simulation (IQuS, pronounced eye-KWISS) at the University of Washington. I’d co-organized the workshop with IQuS member Niklas Mueller, NIST physicist Alexey Gorshkov, and nuclear theorist Raju Venugopalan (although Niklas deserves most of the credit). We’d entitled the workshop “Thermalization, from Cold Atoms to Hot Quantum Chromodynamics.” Quantum chromodynamics describes the strong force that binds together a nucleus’s constituents, so I call the workshop “Journey to the Center of the Atom” to myself.
We aimed to unite researchers studying thermal properties of quantum many-body systems from disparate perspectives. Theorists and experimentalists came; and quantum information scientists and nuclear physicists; and quantum thermodynamicists and many-body physicists; and atomic, molecular, and optical physicists. Everyone cared about entanglement, equilibration, and what else happens when many quantum particles crowd together and interact.
We quantum physicists crowded together and interacted from morning till evening. We presented findings to each other, questioned each other, coagulated in the hallways, drank tea together, and cobbled together possible projects. The week electrified us like a chilly ocean wave but also wearied me like an undertow. Other work called for attention, and I’d be presenting four more talks at four more workshops and campus visits over the next three weeks. The day after the workshop, I worked in my hotel half the morning and then locked away my laptop. I needed refreshment, and little refreshes like art.
Strongly interacting physicists
Chihuly Garden and Glass, in downtown Seattle, succeeded beyond my dreams: the museum drew me into somebody else’s dreams. Dale Chihuly grew up in Washington state during the mid-twentieth century. He studied interior design and sculpture before winning a Fulbright Fellowship to learn glass-blowing techniques in Murano, Italy. After that, Chihuly transformed the world. I’ve encountered glass sculptures of his in Pittsburgh; Florida; Boston; Jerusalem; Washington, DC; and now Seattle—and his reach dwarfs my travels.
Chihuly chandelier at the Renwick Gallery in Washington, DC
After the first few encounters, I began recognizing sculptures as Chihuly’s before checking their name plates. Every work by his team reflects his style. Tentacles, bulbs, gourds, spheres, and bowls evidence what I never expected glass to do but what, having now seen it, I’m glad it does.
This sentiment struck home a couple of galleries beyond the Seaforms. The exhibit Mille Fiori drew inspiration from the garden cultivated by Chihuly’s mother. The name means A Thousand Flowers, although I spied fewer flowers than what resembled grass, toadstools, and palm fronds. Visitors feel like grasshoppers amongst the red, green, and purple stalks, which dwarf some of us. The narrator of Jules Verne’s Journey to the Center of the Earth must have felt similarly, encountering mastodons and dinosaurs underground. I encircled the garden before registering how much my mind had lightened. Responsibilities and cares felt miles away—or, to a grasshopper, backyards away. Wonder does wonders.
Mille Fiori
Near the end of the path around the museum, a theater plays documentaries about Chihuly’s projects. The documentaries include interviews with the artist, and several quotes reminded me of the science I’d been trained to seek out: “I really wanted to take glass to its glorious height,” Chihuly said, “you know, really make something special.” “Things—pieces got bigger, pieces got taller, pieces got wider.” He felt driven to push art forms to scales as large as the glass would permit his team. Similarly, my PhD advisor John Preskill encouraged me to “think big.” What physics is worth doing—what would create an impact?
How did a boy from Tacoma, Washington impact not only fellow blown-glass artists—not only artists—not only an exhibition here and there in his home country—but experiences across the globe, including that of a physicist one weekend in September?
One idea from the IQuS workshop caught my eye. Some particle colliders accelerate heavy ions to high energies and then smash the ions together. Examples include the lead ions studied at CERN in Geneva and the gold ions studied at Brookhaven National Laboratory. After a collision, the matter expands and cools. Nuclear physicists don’t understand how the matter cools; models predict cooling times longer than those observed. This mismatch has persisted across decades of experiments. The post-collision matter evades attempts at computer simulation; it’s literally a hot mess. Can recent advances in many-body physics help?
The exhibit Persian Ceiling at Chihuly Garden and Glass. Doesn’t it look like it could double as an artist’s rendering of a heavy-ion collision?
Martin Savage, the director of IQuS, hopes so. He hopes that IQuS will impact nuclear physics across the globe. Every university and its uncle boasts a quantum institute nowadays, but IQuS seems to me to have carved out a niche for itself. IQuS has grown up in the bosom of the Institute for Nuclear Theory at the University of Washington, which has guided nuclear theory for decades. IQuS is smashing that history together with the future of quantum simulators. IQuS doesn’t strike me as just another glass bowl in the kitchen of quantum science. A bowl worthy of Chihuly? I don’t know, but I’d like to hope so.
I left Chihuly Garden and Glass with respect for the past week and energy for the week ahead. Whether you find it in physics or in glass or in both—or in plunging into a dormant Icelandic volcano in search of the Earth’s core—I recommend the occasional dose of awe.
Participants in the final week of the workshop
With thanks to Martin Savage, IQuS, and the University of Washington for their hospitality.
This July, I came upon a museum called the Haus der Musik in one of Vienna’s former palaces. The museum contains a room dedicated to Johann Strauss II, king of the waltz. The room, dimly lit, resembles a twilit gazebo. I could almost believe that a hidden orchestra was playing the rendition of “The Blue Danube” that filled the room. Glass cases displayed dance cards and accessories that dancers would bring to a nineteenth-century ball.
A ball. Who hasn’t read about one in a novel or seen one in a film? A throng of youngsters and their chaperones, rustling in silk. The glint of candles, the vigor of movement, the thrill of interaction, the anxiety of establishing one’s place in society.
Another throng gathered a short walk from the Haus der Musik this summer. The Vienna University of Technology hosted the conference Quantum Thermodynamics (QTD) in the heart of the city. Don’t tell the other annual conferences, but QTD is my favorite. It spotlights the breed of quantum thermodynamics that’s surged throughout the past decade—the breed saturated with quantum information theory. I began attending QTD as a PhD student, and the conference shifts from city to city from year to year. I reveled in returning in person for the first time since the pandemic began.
Yet this QTD felt different. First, instead of being a PhD student, I brought a PhD student of my own. Second, granted, I enjoyed catching up with colleagues-cum-friends as much as ever. I especially relished seeing the “classmates” who belonged to my academic generation. Yet we were now congratulating each other on having founded research groups, and we were commiserating about the workload of principal investigators.
Third, I found myself a panelist in the annual discussion traditionally called “Quo vadis, quantum thermodynamics?” The panel presented bird’s-eye views on quantum thermodynamics, analyzing trends and opining on the direction our field was taking (or should take).1 Fourth, at the end of the conference, almost the last sentence spoken into any microphone was “See you in Maryland next year.” Colleagues and I will host QTD 2024.
One of my dearest quantum-thermodynamic “classmates,” Nelly Ng, participated in the panel discussion, too. We met as students (see these two blog posts), and she’s now an assistant professor at Nanyang Technological University. Photo credit: Jakub Czartowski.
The day after QTD ended, I boarded an Austrian Airlines flight. Waltzes composed by Strauss played over the loudspeakers. They flipped a switch in my mind: I’d come of age, I thought. I’d attended QTD 2017 as a debutante, presenting my first invited talk at the conference series. I’d danced through QTD 2018 in Santa Barbara, as well as the online iterations held during the pandemic. I’d reveled in the vigor of scientific argumentation, the thrill of learning, the glint of slides shining on projector screens (not really). Now, I was beginning to shoulder responsibilities like a ballgown-wearing chaperone.
As I came of age, so did QTD. The conference series budded around the time I started grad school and embarked upon quantum-thermodynamics research. In 2017, approximately 80 participants attended QTD. This year, 250 people registered to attend in person, and others attended online. Two hundred fifty! Quantum thermodynamics scarcely existed as a field of research fifteen years ago.
I’ve heard that organizers of another annual conference, Quantum Information Processing (QIP), reacted similarly to a 250-person registration list some years ago. Aram Harrow, a professor and quantum information theorist at MIT, has shared stories about co-organizing the first QIPs. As a PhD student, he’d sat in his advisor’s office, taking notes, while the local quantum-information theorists chose submissions to highlight. Nowadays, a small army of reviewers and subreviewers processes the hordes of submissions. And, from what I heard about this year’s attendance, navigating the QIP crowd is almost like navigating a Disney theme park on a holiday.
Will QTD continue to grow like QIP? Would such growth strengthen or fracture the community? Perhaps we’ll discuss those questions at a “Quo vadis?” session in Maryland next year. But I, at least, hope to continue always to grow—and to dance.2
Ludwig Boltzmann, a granddaddy of thermodynamics, worked in Vienna. I’ve waited for years to make a pilgrimage.
1My opinion: Now that quantum thermodynamics has showered us with fundamental insights, we should apply it in practical applications. How? Collaborators and I suggest one path here.
2I confess to having danced the waltz step (gleaned during my 14 years of ballet training) around that Strauss room in the Haus der Musik. I didn’t waltz around the conference auditorium, though.
Late in the summer of 2021, I visited a physics paradise in a physical paradise: the Kavli Institute for Theoretical Physics (KITP). The KITP sits at the edge of the University of California, Santa Barbara like a bougainvillea bush at the edge of a yard. I was eating lunch outside the KITP one afternoon, across the street from the beach. PhD student Arman Babakhani, whom a colleague had just introduced me to, had joined me.
The KITP’s Kohn Hall
What physics was I working on nowadays? Arman wanted to know.
Thermodynamic exchanges.
The world consists of physical systems exchanging quantities with other systems. When a rose blooms outside the Santa Barbara mission, it exchanges pollen with the surrounding air. The total amount of pollen across the rose-and-air whole remains constant, so we call the amount a conserved quantity. Quantum physicists usually analyze conservation of particles, energy, and magnetization. But quantum systems can conserve quantities that participate in uncertainty relations. Such quantities are called incompatible, because you can’t measure them simultaneously. The $x$-, $y$-, and $z$-components of a qubit’s spin are incompatible.
The Santa Barbara mission…
…and its roses
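(If you’d like to check that incompatibility directly, a few lines of Python suffice; this is the textbook commutator computation, nothing specific to our project.)

```python
import numpy as np

# A qubit's spin components, in units of hbar/2: the Pauli matrices.
Sx = np.array([[0, 1], [1, 0]], dtype=complex)
Sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Incompatible observables fail to commute: [Sx, Sy] = 2i Sz, not zero.
commutator = Sx @ Sy - Sy @ Sx
print(np.allclose(commutator, 2j * Sz))  # True
print(np.allclose(commutator, 0))        # False: no shared eigenbasis
```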
Exchanging and conserving incompatible quantities, systems can violate thermodynamic expectations. If one system is much larger than the other, we expect the smaller system to thermalize; yet incompatibility invalidates derivations of the thermal state’s form. Incompatibility reduces the thermodynamic entropy produced by exchanges. And incompatibility can raise the average amount of entanglement in the pair of systems—the total system.
If the total system conserves incompatible quantities, what happens to the eigenstate thermalization hypothesis (ETH)? Last month’s blog post overviewed the ETH, a framework for understanding how quantum many-particle systems thermalize internally. That post labeled Mark Srednicki, a professor at the KITP, a high priest of the ETH. I want, I told Arman, to ask Mark what happens when you combine the ETH with incompatible conserved quantities.
I’ll do it, Arman said.
Soon after, I found myself in the fishbowl. High up in the KITP, a room filled with cushy seats overlooks the ocean. The circular windows lend the room its nickname. Arrayed on the armchairs and couches were Mark, Arman, Mark’s PhD student Fernando Iniguez, and Mark’s recent PhD student Chaitanya Murthy. The conversation went like this:
Mark was frustrated about not being able to answer the question. I was delighted to have stumped him. Over the next several weeks, the group continued meeting, and we emailed out notes for everyone to criticize. I particularly enjoyed watching Mark and Chaitanya interact. They’d grown so intellectually close throughout Chaitanya’s PhD studies, they reminded me of an old married couple. One of them had to express only half an idea for the other to realize what he’d meant and to continue the thread. Neither had any qualms with challenging the other, yet they trusted each other’s judgment.1
In vintage KITP fashion, we’d nearly completed a project by the time Chaitanya and I left Santa Barbara. Physical Review Letters published our paper this year, and I’m as proud of it as a gardener of the first buds from her garden. Here’s what we found.
Southern California spoiled me for roses.
Incompatible conserved quantities conflict with the ETH and the ETH’s prediction of internal thermalization. Why? For three reasons. First, when inferring thermalization from the ETH, we assume that the Hamiltonian lacks degeneracies (that no energy equals any other). But incompatible conserved quantities force degeneracies on the Hamiltonian.2
Second, when inferring from the ETH that the system thermalizes, we assume that the system begins in a microcanonical subspace. That’s an eigenspace shared by the conserved quantities (other than the Hamiltonian)—usually, an eigenspace of the total particle number or the total spin’s $z$-component. But, if incompatible, the conserved quantities share no eigenbasis, so they might not share eigenspaces, so microcanonical subspaces won’t exist in abundance.
Third, let’s focus on a system of qubits. Say that the Hamiltonian conserves the total spin components $S_x$, $S_y$, and $S_z$. The Hamiltonian obeys the Wigner–Eckart theorem, which sounds more complicated than it is. Suppose that the qubits begin in a state $| s, m \rangle$ labeled by a spin quantum number $s$ and a magnetic spin quantum number $m$. Let a particle hit the qubits, acting on them with an operator $T^{(k)}_q$. With what probability (amplitude) do the qubits end up with quantum numbers $s'$ and $m'$? The answer is $\langle s', m' | T^{(k)}_q | s, m \rangle$. The Wigner–Eckart theorem dictates this probability amplitude’s form.
$| s, m \rangle$ and $| s', m' \rangle$ are Hamiltonian eigenstates, thanks to the conservation law. The ETH is an ansatz for the form of $\langle s', m' | T^{(k)}_q | s, m \rangle$—of the elements of matrices that represent operators relative to the energy eigenbasis. The ETH butts heads with the Wigner–Eckart theorem, which also predicts the matrix element’s form.
The Wigner–Eckart theorem wins, being a theorem—a proved claim. The ETH is, as the H in the acronym relates, only a hypothesis.
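(For the numerically inclined, here is a small check of the theorem, using sympy’s Clebsch–Gordan coefficients and a single spin-1 instead of a many-qubit system. It illustrates the general statement, not our paper’s setup. The theorem predicts that each matrix element equals a Clebsch–Gordan coefficient times one constant, the reduced matrix element, so the ratios printed below should all coincide.)

```python
import numpy as np
from sympy import S
from sympy.physics.quantum.cg import CG

# Spin-1 operators in the |1,1>, |1,0>, |1,-1> basis (hbar = 1).
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sp = np.array([[0, np.sqrt(2), 0],
               [0, 0, np.sqrt(2)],
               [0, 0, 0]], dtype=complex)

# Rank-1 spherical tensor components built from the spin operators.
T = {0: Sz, +1: -Sp / np.sqrt(2)}
m_values = [1, 0, -1]  # magnetic quantum number for each basis index

# Wigner-Eckart: <1,m'| T^(1)_q |1,m> = <1,m; 1,q | 1,m'> x (reduced element).
for q, op in T.items():
    for i, m_out in enumerate(m_values):
        for j, m_in in enumerate(m_values):
            cg = float(CG(S(1), S(m_in), S(1), S(q), S(1), S(m_out)).doit())
            if abs(cg) > 1e-12:
                ratio = op[i, j] / cg
                print(f"q={q:+d}, m={m_in:+d} -> m'={m_out:+d}: ratio = {ratio.real:.4f}")
```

Every printed ratio equals the same number ($\sqrt{2}$ here), as the theorem demands.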
If conserved quantities are incompatible, we have to kiss the ETH and its thermalization predictions goodbye. But must we set ourselves adrift entirely? Can we cling to no buoy from physics’s best toolkit for quantum many-body thermalization?
No, and yes, respectively. Our clan proposed a non-Abelian ETH for Hamiltonians that conserve incompatible quantities—or, equivalently, that have non-Abelian symmetries. The non-Abelian ETH depends on the reduced matrix element $\langle s' \| T^{(k)} \| s \rangle$ and on Clebsch–Gordan coefficients—conversion factors between total-spin eigenstates $| s, m \rangle$ and products of the individual spins’ eigenstates.
Using the non-Abelian ETH, we proved that many systems thermalize internally, despite conserving incompatible quantities. Yet the incompatibility complicates the proof enormously, extending it from half a page to several pages. Also, under certain conditions, incompatible quantities may alter thermalization. According to the conventional ETH, time-averaged expectation values come to equal thermal expectation values to within $O(1/N)$ corrections, where $N$ denotes the system size, as I explained last month. The correction can grow polynomially larger in the system size, to $O(1/\sqrt{N})$, if conserved quantities are incompatible. Our conclusion holds under an assumption that we argue is physically reasonable.
So incompatible conserved quantities do alter the ETH, yet another thermodynamic expectation. Physicist Jae Dong Noh began checking the non-Abelian ETH numerically, and more testing is underway. And I’m looking forward to returning to the KITP this fall. Tales do say that paradise is a garden.
View through my office window at the KITP
1Not that married people always trust each other’s judgment.
2The reason is Schur’s lemma, a group-theoretic result. Appendix A of this paper explains the details.
Editor’s note: On 10 August 2023, Caltech celebrated the groundbreaking for the Dr. Allen and Charlotte Ginsburg Center for Quantum Precision Measurement, which will open in 2025. At a lunch following the ceremony, John Preskill made these remarks.
Rendering of the facade of the Ginsburg Center
Hello everyone. I’m John Preskill, a professor of theoretical physics at Caltech, and I’m honored to have this opportunity to make some brief remarks on this exciting day.
In 2025, the Dr. Allen and Charlotte Ginsburg Center for Quantum Precision Measurement will open on the Caltech campus. That will certainly be a cause for celebration. Quite fittingly, in that same year, we’ll have something else to celebrate — the 100th anniversary of the formulation of quantum mechanics in 1925. In 1900, it had become clear that the physics of the 19th century had serious shortcomings that needed to be addressed, and for 25 years a great struggle unfolded to establish a firm foundation for the science of atoms, electrons, and light; the momentous achievements of 1925 brought that quest to a satisfying conclusion. No comparably revolutionary advance in fundamental science has occurred since then.
For 98 years now we’ve built on those achievements of 1925 to arrive at a comprehensive understanding of much of the physical world, from molecules to materials to atomic nuclei and exotic elementary particles, and much else besides. But a new revolution is in the offing. And the Ginsburg Center will arise at just the right time and at just the right place to drive that revolution forward.
Up until now, most of what we’ve learned about the quantum world has resulted from considering the behavior of individual particles. A single electron propagating as a wave through a crystal, unfazed by barriers that seem to stand in its way. Or a single photon, bouncing hundreds of times between mirrors positioned kilometers apart, dutifully tracking the response of those mirrors to gravitational waves from black holes that collided in a galaxy billions of light years away. Understanding that single-particle physics has enabled us to explore nature in unprecedented ways, and to build information technologies that have profoundly transformed our lives.
At the groundbreaking: Physics, Math and Astronomy Chair Fiona Harrison, California Assemblymember Chris Holden, President Tom Rosenbaum, Charlotte Ginsburg, Dr. Allen Ginsburg, Pasadena Mayor Victor Gordo, Provost Dave Tirrell.
What’s happening now is that we’re getting increasingly adept at instructing particles to move in coordinated ways that can’t be accurately described in terms of the behavior of one particle at a time. The particles, as we like to say, can become entangled. Many particles, like electrons or photons or atoms, when highly entangled, exhibit an extraordinary complexity that we can’t capture with the most powerful of today’s supercomputers, or with our current theories of how Nature works. That opens extraordinary opportunities for new discoveries and new applications.
We’re very proud of the role Caltech has played in setting the stage for the next quantum revolution. Richard Feynman envisioning quantum computers that far surpass the computers we have today. Kip Thorne proposing ways to use entangled photons to perform extraordinarily precise measurements. Jeff Kimble envisioning and executing ingenious methods for entangling atoms and photons. Jim Eisenstein creating and studying extraordinary phenomena in a soup of entangled electrons. And much more besides. But far greater things are yet to come.
How can we learn to understand and exploit the behavior of many entangled particles that work together? For that, we’ll need many scientists and engineers who work together. I joined the Caltech faculty in August 1983, almost exactly 40 years ago. These have been 40 good years, but I’m having more fun now than ever before. My training was in elementary particle physics. But as our ability to manipulate the quantum world advances, I find that I have more and more in common with my colleagues from different specialties. To fully realize my own potential as a researcher and a teacher, I need to stay in touch with atomic physics, condensed matter physics, materials science, chemistry, gravitational wave physics, computer science, electrical engineering, and much else. Even more important, that kind of interdisciplinary community is vital for broadening the vision of the students and postdocs in our research groups.
Nurturing that community — that’s what the Ginsburg Center is all about. That’s what will happen there every day. That sense of a shared mission, enhanced by colocation, will enable the Ginsburg Center to lead the way as quantum science and technology becomes increasingly central to Caltech’s research agenda in the years ahead, and increasingly important for science and engineering around the globe. And I just can’t wait for 2025.
During the past seven months, I’ve steamed across the Atlantic, sailed in a flying castle, teleported across the globe, and shuttled forward and backward in time. Literarily, not literally—the Quantum-Steampunk Short-Story Contest began welcoming submissions in October 2022. We challenged everybody aged 13 and over to write a steampunk narrative that involves a real or imagined quantum technology. One hundred sixty-seven entries arrived from 29 countries. Professional writers submitted stories, as did 13-year-olds. Tenured physics professors, librarians, English and math teachers, undergraduates, physicians, graduate students, and a United States Senate staffer entered. Thanks to their creativity, I now have a folder full of other worlds.
I’m over the moon (in a steam-powered ship) to announce the winners. David Wakeham received the $1,500 grand prize for the story The Creature of Ashen House. First runner-up Gerard McCaul won $1,000 for Doctor Up and Mister Down, and second runner-up Paulo Barreto won $500 for Eikonal. The People’s Choice Award ($500) went to Cristina Legarda for Pursuit, also nominated by two judges for a “Please Turn This into a Novel” award. Thanks to the 261 of you who voted in the People’s Choice competition!
In addition to traditional awards, we created four idiosyncratic ones, each entailing $250. We recognized Jeff Provine’s Stealing Buttons for its badass steampunk heroine; Matt King’s Three Imperiled Scientists for its wit and (relatedly) its portrayal of academia; Rick Searle’s The Recurrence Machine for its steampunk atmosphere; and Claudia Clarke’s Looking Forward, Looking Back, for its heart-capturing automaton. You can read all the finalist stories here.
Quantum-steampunk graphic by contest entrant Kayla Phan, who used YouChat Imagine
Sending our judges the finalists in March, I felt not only exhilaration (and relief, as whittling down 167 entries entails no little hand wringing), but also anxiety. Would the stories measure up? So I must have glowed when the first judge submitted his evaluations: Speculative-fiction author Ken Liu enthused, “The entries were so fun to read.” Similar reactions followed from across the panel, which featured experts in mathematics, philosophy, creative writing, experimental quantum physics, and history: “I had a very good time reading these stories,” another panelist wrote. “This was fun and some excellent spring break airplane (no dirigibles, I’m afraid) reading,” said another. Many thanks to our judges and short-listing committee for their input. University of Maryland undergraduates Hannah Cho and Jade Leschack led the team of students who narrowed down the candidates. I couldn’t resist treating the committee to a Victorian-inspired thank-you upon announcing the winners.
Thank-yous baked by Panera, not me
Although this year’s contest has ended, quantum-steampunk literature has just shipped out from its berth. Two contest entrants have posted their stories on their own online domains: You can read the mystery by Duke physics professor Ken Brown here and the adventure by quantum-algorithm designer Brian Siegelwax here. All other entrants, please feel free to post your stories and to submit them to other literary contests. Drop me a line, and leave a link in the chat below, when your story is published. I’d love to hear how your journey continues.
Also, stay tuned for v2.0 of the Quantum-Steampunk Short-Story Contest. An organization has expressed interest in a reboot during the 2024–2025 academic year. AI-collaboration category, anyone? Bonus points if you use a quantum neural network. Please email me if you’d like to support the effort!
Quantum-steampunk graphic by contest entrant Necklace Devkota
The opportunity to helm this contest has been a privilege and a dream. Many thanks to our writers, readers, funder (the John Templeton Foundation), staff (especially webmaster Anıl Zenginoğlu), judges, and shortlisting committee. Keep writing, and keep experimenting.
Upon receiving my speaking assignments for the Tucson Festival of Books, I mentally raised my eyebrows. I’d be participating in a panel discussion with Mike Evans, the founder of Grubhub? But I hadn’t created an app that’s a household name. I hadn’t transformed 30 million people’s eating habits. I’m a theoretical physicist; I build universes in my head for a living. I could spend all day trying to prove a theorem and failing, and no stocks would tumble as a result.
Once the wave of incredulity had crested, I noticed that the panel was entitled “The Future of Tech.” Grubhub has transformed technology, I reasoned, and quantum computing is in the process of doing so. Fair enough.
Besides, my husband pointed out, the food industry requires fridges. Physicists building quantum computers from superconductors need fridges. The latter fridges require temperatures ten million times lower than restaurateurs’ fridges do, but we still share an interest.
Very well, I thought. Game on.
Tucson hosts the third-largest book festival in the United States. And why shouldn’t it, as the festival takes place in early March, when much of the country is shivering and eyeing Arizona’s T-shirt temperatures with envy? If I had to visit any institution in the winter, I couldn’t object to the festival’s home, the University of Arizona.
The day before the festival, I presented a colloquium at the university, for the Arizona Quantum Alliance. The talk took place in the Wyant College of Optical Sciences, the home of an optical-instruments museum. Many of the instruments date to the 1800s and, built from brass and wood, smack of steampunk. I approved. Outside the optics building, workers were setting up tents to house the festival’s science activities.
The next day—a Saturday—dawned clear and bright. Late in the morning, I met Mike and our panel’s moderator, Bob Griffin, another startup veteran. We sat down at a table in the back of a broad tent, the tent filled up with listeners, and the conversation began.
I relished the conversation as I’d relished an early-morning ramble along the trails by my hotel at the base of the Santa Catalina Mountains. I joined theoretical physics for the love of ideas, and this exchange of ideas offered an intellectual workout. One of Mike’s points resonated with me most: Grubhub didn’t advance technology much. He shifted consumers from ordering pizza via phone call to ordering pizza via computer, then to ordering pizza via apps on phones. Yet these small changes, accumulated across a population and encouraged by a pandemic, changed society. Food-delivery services exploded and helped establish the gig economy (despite Mike’s concerns about worker security). One small step for technology, adopted by tens of millions, can constitute one giant leap for commerce.
To me, Grubhub offered a foil for quantum computing, which offers a giant leap in technology: The physical laws best-suited to describing today’s computers can’t describe quantum computers. Some sources portray this advance as bound to transform all our lives in countless ways. This portrayal strikes some quantum scientists as hype that can endanger quality work.
Quantum computers will transform cybersecurity, being able to break the safeguards that secure our credit-card information when we order food via Grubhub. Yet most consumers don’t know what safeguards are protecting us. We simply trust that safeguards exist. How they look under the hood will change by the time large-scale quantum computers exist—will metamorphose perhaps as dramatically as did Gregor Samsa before he woke up as an insect. But consumers’ lives might not metamorphose.
Quantum scientists hope and anticipate that quantum computers will enable discoveries in chemistry, materials science, and pharmacology. Molecules are quantum, and many materials exhibit quantum properties. Simulating quantum systems takes classical (everyday) computers copious amounts of time and memory—in some cases, so much that a classical computer the size of the universe would take ages. Quantum computers will be able to simulate quantum subjects naturally. But how these simulations will impact everyday life remains a question.
For example, consider my favorite potential application of quantum computers: fertilizer production, as envisioned by Microsoft’s quantum team. Humanity spends about 3% of the world’s energy on producing fertilizer, using a technique developed in 1909. Bacteria accomplish the same goal far more efficiently. But those bacteria use a molecule—nitrogenase—too complicated for us to understand using classical computers. Being quantum, the molecule invites quantum computation. Quantum computers may crack the molecule’s secrets and transform fertilizer production and energy use. The planet and humanity would benefit. We might reduce famines or avert human-driven natural disasters. But would the quantum computation change my neighbor’s behavior as Grubhub has? I can’t say.
Finally, evidence suggests that quantum computers can assist with optimization problems. Imagine a company that needs to transport supplies to various places at various times. How can the company optimize this process—implement it most efficiently? Quantum computers seem likely to be able to help. The evidence isn’t watertight, however, and quantum computers might not solve optimization problems exactly. If the evidence winds up correct, industries will benefit. But would this advance change Jane Doe’s everyday habits? Or will she only receive pizza deliveries a few minutes more quickly?
Don’t get me wrong; quantum technology has transformed our lives. It’s enabled the most accurate, most precise clocks in the world, which form the infrastructure behind GPS. Quantum physics has awed us, enabling the detection of gravitational waves—ripples, predicted by Einstein, in spacetime. But large-scale quantum computers—the holy grail of quantum technology—don’t suit all problems, such as totting up the miles I traveled en route to Tucson; and consumers might not notice quantum computers’ transformation of cybersecurity. I expect quantum computing to change the world, but let’s think twice about whether quantum computing will change everyone’s life like a blockbuster app.
I’ve no idea how many people have made this pun about Mike’s work, but the panel discussion left me with food for thought. He earned his undergraduate degree at MIT, by the way; so scientifically inclined Quantum Frontiers readers might enjoy his memoir, Hangry. It conveys a strong voice and dishes on data and diligence through stories. (For the best predictor of whether you’ll enjoy a burrito, ignore the starred reviews. Check how many people have reordered the burrito.)
The festival made my week. After the panel, I signed books; participated in a discussion about why “The Future Is Quantum!” with law professor Jane Bambauer; and narrowly missed a talk by Lois Lowry, a Newbery Award winner who wrote novels that I read as a child. (The auditorium filled up before I reached the door, but I’m glad that it did; Lois Lowry deserves a packed house and then some.) I learned—as I’d wondered—that yes, there’s something magical to being an author at a book festival. And I learned about how the future of tech depends on more than tech.
My best friend—who’s held the title of best friend since kindergarten—calls me the keeper of her childhood memories. I recall which toys we played with, the first time I visited her house,1 and which beverages our classmates drank during snack time in kindergarten.2 She wouldn’t be surprised to learn that the first workshop I’ve co-organized centered on memory.
Memory—and the loss of memory—stars in thermodynamics. As an example, take what my husband will probably do this evening: bake tomorrow’s breakfast. I don’t know whether he’ll bake fruit-and-oat cookies, banana muffins, pear muffins, or pumpkin muffins. Whichever he chooses, his baking will create a scent. That scent will waft across the apartment, seep into air vents, and escape into the corridor—will disperse into the environment. By tomorrow evening, nobody will be able to tell by sniffing what my husband will have baked.
That is, the kitchen’s environment lacks a memory. This lack contributes to our experience of time’s arrow: We sense that time passes partially by smelling less and less of breakfast. Physicists call memoryless systems and processes Markovian.
Our kitchen’s environment is Markovian because it’s large and particles churn through it randomly. But not all environments share these characteristics. Metaphorically speaking, a dispersed memory of breakfast may re-collect, return to the kitchen, and influence the following week’s baking. For instance, imagine an atom in a quantum computer, rather than a kitchen in an apartment. A few other atoms may form our atom’s environment. Quantum information may leak from our atom into that environment, swish around in the environment for a time, and then return to haunt our atom. We’d call the atom’s evolution and environment non-Markovian.
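For concreteness, here is a minimal numerical sketch (my own illustration, not any particular experiment): one system qubit couples to a one-qubit environment through a Heisenberg interaction. The system’s purity dips as information leaks out, then revives as the information returns, rather than decaying monotonically as in a Markovian setting:

```python
# One qubit (the system) exchanging quantum information with a
# one-qubit environment. Purity revivals signal non-Markovianity.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

# Heisenberg (swap-type) coupling between system and environment.
H = 0.5 * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))

# System starts pure (|0>); environment starts maximally mixed.
rho_sys = np.diag([1.0 + 0j, 0.0])
rho_env = np.eye(2) / 2
rho = np.kron(rho_sys, rho_env)

for t in np.linspace(0, 2 * np.pi, 9):
    U = expm(-1j * H * t)
    rho_t = U @ rho @ U.conj().T
    # Trace out the environment to obtain the system's reduced state.
    rho_s = np.trace(rho_t.reshape(2, 2, 2, 2), axis1=1, axis2=3)
    purity = np.real(np.trace(rho_s @ rho_s))
    print(f"t = {t:4.2f}, purity = {purity:.3f}")
```

The printout shows the purity falling from 1 to 1/2 and climbing back to 1: the leaked information returns to haunt the system.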
I had the good fortune to co-organize a workshop about non-Markovianity—about memory—this February. The workshop took place at the Banff International Research Station, abbreviated BIRS, which you pronounce like the plural of what you say when shivering outdoors in Canada. BIRS operates in the Banff Centre for Arts and Creativity, high in the Rocky Mountains. The Banff Centre could accompany a dictionary entry for pristine, to my mind. The air feels crisp, the trees on nearby peaks stand out against the snow like evergreen fringes on white velvet, and the buildings balance a rustic-mountain-lodge style with the avant-garde.
The workshop balanced styles, too, but skewed toward the theoretical and abstract. We learned about why the world behaves classically in our everyday experiences; about information-theoretic measures of the distances between quantum states; and about how to simulate, on quantum computers, chemical systems that interact with their environments. One talk, though, brought our theory back down to (the snow-dusted) Earth.
Gabriela Schlau-Cohen runs a chemistry lab at MIT. She wants to understand how plants transport energy. Energy arrives at a plant from the sun in the form of light. The light hits a pigment-and-protein complex. If the plant is lucky, the light transforms into a particle-like packet of energy called an exciton. The exciton traverses the receptor complex, then other complexes. Eventually, the exciton finds a spot where it can enable processes such as leaf growth.
A high fraction of the impinging photons—85%—transform into excitons. How do plants convert and transport energy as efficiently as they do?
Gabriela’s group aims to find out—not by testing natural light-harvesting complexes, but by building complexes of its own. The experimentalists mimic a complex’s protein using DNA. You can fold DNA into almost any shape you want by choosing the DNA’s base pairs (basic units) adroitly and by using “staples” formed from more DNA scraps. The sculpted molecules are called DNA origami.
Gabriela’s group engineers different DNA structures, analogous to complexes’ proteins, to have different properties. For instance, the experimentalists engineer rigid structures and flexible structures. Then, the group assesses how energy moves through each structure. Each structure forms an environment that influences excitons’ behaviors, similarly to how a memory-containing environment influences an atom.
Courtesy of Gabriela Schlau-Cohen
The Banff environment influenced me, stirring up memories like powder displaced by a skier on the slopes above us. I first participated in a BIRS workshop as a PhD student, and then I returned as a postdoc. Now, I was co-organizing a workshop to which I brought a PhD student of my own. Time flows, as we’re reminded while walking down the mountain from the Banff Centre into town: A cemetery borders part of the path. Time flows, but we belong to that thermodynamically remarkable class of systems that retain memories…memories and a few other treasures that resist change, such as friendships held since kindergarten.
1Plushy versions of Simba and Nala from The Lion King. I remain grateful to her for letting me play at being Nala.
2I’d request milk, another kid would request apple juice, and everyone else would request orange juice.
I didn’t fancy the research suggestion emailed by my PhD advisor.
A 2016 email from John Preskill led to my publishing a paper about quantum complexity in 2022, as I explained in last month’s blog post. But I didn’t explain what I thought of his email upon receiving it.
It didn’t float my boat. (Hence my not publishing on it until 2022.)
The suggestion contained ingredients that ordinarily would have caulked any cruise ship of mine: thermodynamics, black-hole-inspired quantum information, and the concept of resources. John had forwarded a paper drafted by Stanford physicists Adam Brown and Lenny Susskind. They act as grand dukes of the community sussing out what happens to information swallowed by black holes.
From Rare-Gallery
We’re not sure how black holes work. However, physicists often model a black hole with a clump of particles squeezed close together and so forced to interact with each other strongly. The interactions entangle the particles. The clump’s quantum state—let’s call it $|\psi(t)\rangle$—grows not only complicated with time ($t$), but also complex in a technical sense: Imagine taking a fresh clump of particles and preparing it in the state $|\psi(t)\rangle$ via a sequence of basic operations, such as quantum gates performable with a quantum computer. The minimal number of basic operations needed is called the complexity of $|\psi(t)\rangle$. A black hole’s state has a complexity believed to grow in time—and grow and grow and grow—until plateauing.
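In symbols, using one common formalization (details such as the gate set and the error tolerance $\epsilon$ vary across the literature), the complexity of an $n$-particle state is

\[
\mathcal{C}\big(|\psi(t)\rangle\big) \;=\; \min\Big\{\, k \,:\, \big\|\, |\psi(t)\rangle - U_k \cdots U_2 U_1 |0\rangle^{\otimes n} \big\| \leq \epsilon \,\Big\},
\]

where each $U_j$ denotes a basic gate. Adam and Lenny conjectured that a black hole’s $\mathcal{C}$ grows linearly in $t$ for a time exponential in $n$, then plateaus at a value exponential in $n$.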
This growth echoes the second law of thermodynamics, which helps us understand why time flows in only one direction. According to the second law, every closed, isolated system’s entropy grows until plateauing.1 Adam and Lenny drew parallels between the second law and complexity’s growth.
The less complex a quantum state is, the better it can serve as a resource in quantum computations. Recall, as we did last month, performing calculations in math class. You needed clean scratch paper on which to write the calculations. So does a quantum computer. “Scratch paper,” to a quantum computer, consists of qubits—basic units of quantum information, realized in, for example, atoms or ions. The scratch paper is “clean” if the qubits are in a simple, unentangled quantum state—a low-complexity state. A state’s greatest possible complexity, minus the actual complexity, we can call the state’s uncomplexity. Uncomplexity—a quantum state’s blankness—serves as a resource in quantum computation.
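In the same notation (my rendering of the prose in symbols, not a quote from anyone’s paper),

\[
\mathrm{uncomplexity}\big(|\psi\rangle\big) \;=\; \mathcal{C}_{\max} - \mathcal{C}\big(|\psi\rangle\big),
\]

where $\mathcal{C}_{\max}$ denotes the greatest complexity that any state of the system can achieve. A fresh register of qubits in the state $|0\rangle^{\otimes n}$ has complexity zero, so it offers the most uncomplexity: the blankest possible scratch paper.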
Manny Knill and Ray Laflamme realized this point in 1998, while quantifying the “power of one clean qubit.” Lenny arrived at a similar conclusion while reasoning about black holes and firewalls. For an introduction to firewalls, see this blog post by John. Suppose that someone—let’s call her Audrey—falls into a black hole. If it contains a firewall, she’ll burn up. But suppose that someone tosses a qubit into the black hole before Audrey falls. The qubit kicks the firewall farther away from the event horizon, so Audrey will remain safe for longer. Also, the qubit increases the uncomplexity of the black hole’s quantum state. Uncomplexity serves as a resource also to Audrey.
A resource is something that’s scarce, valuable, and useful for accomplishing tasks. Different things qualify as resources in different settings. For instance, imagine wanting to communicate quantum information to a friend securely. Entanglement will serve as a resource. How can we quantify and manipulate entanglement? How much entanglement do we need to perform a given communicational or computational task? Quantum scientists answer such questions with a resource theory, a simple information-theoretic model. Theorists have defined resource theories for entanglement, randomness, and more. In many a blog post, I’ve eulogized resource theories for thermodynamic settings. Can anyone define, Adam and Lenny asked, a resource theory for quantum uncomplexity?
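Schematically (the schema is standard, though notations differ), a resource theory designates certain easy-to-implement operations as free and quantifies the resource with a monotone $M$, a function that no free operation can increase:

\[
M\big(\Lambda(\rho)\big) \;\leq\; M(\rho) \quad \text{for every free operation } \Lambda \text{ and every state } \rho.
\]

In entanglement theory, the free operations are local operations and classical communication; in thermodynamic resource theories, they are thermal operations. Adam and Lenny’s question amounts to choosing free operations and a monotone suited to uncomplexity.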
Resource thinking pervades our world.
By late 2016, I was a quantum thermodynamicist, I was a resource theorist, and I’d just debuted my first black-hole–inspired quantum information theory. Moreover, I’d coauthored a review about the already-extant resource theory that looked closest to what Adam and Lenny sought. Hence John’s email, I expect. Yet that debut had uncovered reams of questions—questions that, as a budding physicist heady with the discovery of discovery, I could own. Why would I answer a question of someone else’s instead?
So I thanked John, read the paper draft, and pondered it for a few days. Then, I built a research program around my questions and waited for someone else to answer Adam and Lenny.
Three and a half years later, I was still waiting. The notion of uncomplexity as a resource had enchanted the black-hole-information community, so I was preparing a resource-theory talk for a quantum-complexity workshop. The preparations set wheels churning in my mind, and inspiration struck during a long walk.2
After watching my workshop talk, Philippe Faist reached out about collaborating. Philippe is a coauthor, a friend, and a fellow quantum thermodynamicist and resource theorist. Caltech’s influence had sucked him, too, into the black-hole community. We Zoomed throughout the pandemic’s first spring, widening our circle to include Teja Kothakonda, Jonas Haferkamp, and Jens Eisert of Freie Universität Berlin. Then, Anthony Munson joined from my nascent group in Maryland. Physical Review A published our paper, “Resource theory of quantum uncomplexity,” in January.
The next four paragraphs, I’ve geared toward experts. An agent in the resource theory manipulates a set of qubits. The agent can attempt to perform any gate $U$ on any two qubits. Noise corrupts every real-world gate implementation, though. Hence the agent effects a gate chosen randomly from near $U$. Such fuzzy gates are free. The agent can’t append or discard any system for free: Appending even a maximally mixed qubit increases the state’s uncomplexity, as Knill and Laflamme showed.
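To give the flavor in code (a minimal sketch under my own assumptions; the paper specifies the noise model more carefully), picture a fuzzy gate as the target unitary $U$ perturbed by a small random Hamiltonian:

```python
# A "fuzzy gate": the agent aims for a two-qubit unitary U but enacts
# a randomly chosen unitary V near U. This noise model is illustrative.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(seed=0)

def random_hermitian(dim):
    # Hermitian matrix with Gaussian entries, to generate the jitter.
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (A + A.conj().T) / 2

def fuzzy_gate(U, epsilon=0.05):
    # Sample a unitary near U: V = exp(-i * epsilon * H) U.
    H = random_hermitian(U.shape[0])
    return expm(-1j * epsilon * H) @ U

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

V = fuzzy_gate(CNOT)
print(np.linalg.norm(V - CNOT))  # small: V lies near the target gate
```

The randomness is the point: because the agent cannot dial in $U$ exactly, she cannot uncompute a complex state for free.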
Fuzzy gates’ randomness prevents the agent from mapping complex states to uncomplex states for free (with any considerable probability). Under appropriate conditions, complexity only grows or remains constant under fuzzy operations. This growth echoes the second law of thermodynamics.
We also defined operational tasks—uncomplexity extraction and uncomplexity expenditure, analogous to work extraction and work expenditure in thermodynamics. Then, we bounded the efficiencies with which the agent can perform these tasks. The efficiencies depend on a complexity entropy that we defined—and that’ll star in part trois of this blog-post series.
Now, I want to know what purposes the resource theory of uncomplexity can serve. Can we recast black-hole problems in terms of the resource theory, then leverage resource-theory results to solve those problems? What about problems in condensed matter? Can our resource theory, which quantifies the difficulty of preparing quantum states, merge with the resource theory of magic, which quantifies that difficulty differently?
Unofficial mascot for fuzzy operations
I don’t regret having declined my PhD advisor’s recommendation six years ago. Doing so led me to explore probability theory and measurement theory, collaborate with two experimental labs, and write ten papers with 21 coauthors whom I esteem. But I take my hat off to Adam and Lenny for their question. And I remain grateful to the advisor who kept my goals and interests in mind while checking his email. I hope to serve Anthony and his fellow advisees as well.
1At least, in the thermodynamic limit—if the system is infinitely large. If the system is finite-size, its entropy grows on average.
2…en route to obtaining a marriage license. My husband and I married four months after the pandemic throttled government activities. Hours before the relevant office’s calendar filled up, I scored an appointment to obtain our license. Regarding the metro as off-limits, my then-fiancé and I walked from Cambridge, Massachusetts to downtown Boston for our appointment. I thank him for enduring my requests to stop so that I could write notes.