As a physicist, one grows used to answering audience questions at the end of a talk one presents. As a quantum physicist, one grows used to answering questions about futuristic technologies. As a quantum-steampunk physicist, one grows used to the question “Which are your favorite steampunk books?”
Literary Hub has now published my answer.
According to its website, “Literary Hub is an organizing principle in the service of literary culture, a single, trusted, daily source for all the news, ideas and richness of contemporary literary life. There is more great literary content online than ever before, but it is scattered, easily lost—with the help of its editorial partners, Lit Hub is a site readers can rely on for smart, engaged, entertaining writing about all things books.”
In lieu of composing a blog post this month, I’m publishing an article in Quanta Magazine. The article provides an introduction to fluctuation relations, souped-up variations on the second law of thermodynamics, which helps us understand why time flows in only one direction. The earliest fluctuation relations described classical systems, such as single strands of DNA. Many quantum versions have been proved since. Their proliferation contrasts with the stereotype of physicists as obsessed with unification—with slimming down a cadre of equations into one über-equation. Will one quantum fluctuation relation emerge to rule them all? Maybe, and maybe not. Maybe the multiplicity of quantum fluctuation relations reflects the richness of quantum thermodynamics.
You can read more in Quanta Magazine here and yet more in chapter 9 of my book. For recent advances in fluctuation relations, as opposed to the broad introduction there, check out earlier Quantum Frontiers posts here, here, here, here, and here.
Caltech condensed-matter theorist Gil Refael explained his scientific raison d’être early in my grad-school career: “What really gets me going is seeing a plot [of experimental data] and being able to say, ‘I can explain that.’” The quote has stuck with me almost word for word. When I heard it, I was working deep in abstract quantum information theory and thermodynamics, proving theorems about thought experiments. Embedding myself in pure ideas has always held an aura of romance for me, so I nodded along without seconding Gil’s view.
Throughout the universe, small systems exchange stuff with their environments. For instance, the Earth exchanges heat and light with the rest of the solar system. After exchanging stuff for long enough, the small system equilibrates with the environment: Large-scale properties of the small system (such as its volume and energy) remain fairly constant; and as much stuff enters the small system as leaves, on average. The Earth remains far from equilibrium, which is why we aren’t dead yet.
In many cases, in equilibrium, the small system shares properties of the environment, such as the environment’s temperature. In these cases, we say that the small system has thermalized and, if it’s quantum, has reached a thermal state.
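For readers who like equations: a thermal (Gibbs) state has the form ρ = e^(−βH)/Z, where H is the system’s Hamiltonian, β = 1/(k_B T) is the inverse temperature, and Z normalizes the state. Here’s a minimal numerical sketch (the Hamiltonian and temperature are toy choices of mine, not any experiment’s):

```python
import numpy as np

def thermal_state(H, beta):
    # Gibbs state rho = exp(-beta * H) / Z, built from the eigendecomposition
    # of the Hermitian Hamiltonian H. Z is the partition function, which
    # normalizes the state so that its trace equals 1.
    evals, evecs = np.linalg.eigh(H)
    weights = np.exp(-beta * evals)
    weights /= weights.sum()                  # divide by Z
    return (evecs * weights) @ evecs.conj().T

# Toy example: one spin in a magnetic field (Pauli-z Hamiltonian).
sigma_z = np.diag([1.0, -1.0])
rho = thermal_state(sigma_z, beta=1.0)

print(np.trace(rho).real)   # 1.0 -- a valid quantum state
print(rho[1, 1].real)       # ~0.88 -- the lower-energy level holds more weight
```

At finite temperature, the lower-energy level carries more population, as expected; at higher temperatures (smaller β), the populations even out.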
The stuff exchanged can consist of energy, particles, electric charge, and more. Unlike classical planets, quantum systems can exchange things that participate in quantum uncertainty relations (experts: that fail to commute). Quantum uncertainty mucks up derivations of the thermal state’s mathematical form. Some of us quantum thermodynamicists discovered the mucking up—and identified exchanges of quantum-uncertain things as particularly nonclassical thermodynamics—only a few years ago. We reworked conventional thermodynamic arguments to accommodate this quantum uncertainty. The small system, we concluded, likely equilibrates to near a thermal state whose mathematical form depends on the quantum-uncertain stuff—what we termed a non-Abelian thermal state. I wanted to see this equilibration in the lab. So I proposed an experiment with theory collaborators; and Manoj, Florian, and Christian took a risk on us.
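For the mathematically inclined: the non-Abelian thermal state takes the form ρ ∝ exp(−β(H − Σ_a μ_a Q_a)), where the charges Q_a—here, the spin components—fail to commute with one another, and the μ_a play roles analogous to chemical potentials. Below is a toy sketch of that form; the Hamiltonian, temperature, and potentials are illustrative values of mine, not the experiment’s:

```python
import numpy as np

# Pauli matrices: the noncommuting "charges" (spin components) of the story.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def nats(H, charges, mus, beta):
    # Non-Abelian thermal state: rho proportional to
    # exp(-beta * (H - sum_a mu_a Q_a)), normalized to unit trace.
    # The charges Q_a need not commute with one another.
    G = H - sum(mu * Q for mu, Q in zip(mus, charges))
    evals, evecs = np.linalg.eigh(G)
    w = np.exp(-beta * evals)
    w /= w.sum()
    return (evecs * w) @ evecs.conj().T

# The spin components fail to commute: X @ Y differs from Y @ X.
assert not np.allclose(X @ Y, Y @ X)

rho = nats(H=Z, charges=[X, Y, Z], mus=[0.3, 0.2, 0.1], beta=1.0)
print(np.trace(rho).real)   # 1.0 -- a valid quantum state
```

The noncommutation of the charges is exactly what makes deriving (and testing) this state’s form a nonclassical problem.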
The experimentalists arrayed between six and fifteen ions in a line. Two ions formed the small system, and the rest formed the quantum environment. The ions exchanged the x-, y-, and z-components of their spin angular momentum—stuff that participates in quantum uncertainty relations. The ions began with a fairly well-defined amount of each spin component, as described in another blog post. The ions exchanged stuff for a while, and then the experimentalists measured the small system’s quantum state.
The small system equilibrated to near the non-Abelian thermal state, we found. No conventional thermal state modeled the results as accurately. Score!
My postdoc and numerical-simulation wizard Aleks Lasek modeled the experiment on his computer. The small system, he found, remained farther from the non-Abelian thermal state in his simulation than in the experiment. Aleks plotted the small system’s distance to the non-Abelian thermal state against the ion chain’s length. The points produced experimentally sat lower down than the points produced numerically. Why?
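The “distance” in such a plot can be any of several measures of how distinguishable two quantum states are. One standard choice—an illustration on my part, not necessarily the measure in Aleks’s plot—is the trace distance:

```python
import numpy as np

def trace_distance(rho, sigma):
    # Trace distance (1/2) * ||rho - sigma||_1. It ranges from 0 (identical
    # states) to 1 (perfectly distinguishable states).
    evals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(evals))

# Example: a pure |0><0| state vs. the maximally mixed single-qubit state.
pure = np.diag([1.0, 0.0])
mixed = np.eye(2) / 2
print(trace_distance(pure, mixed))   # 0.5
```

The smaller this number, the closer the measured state sits to the predicted thermal state.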
I think I can explain that, I said. The two ions exchange stuff with the rest of the ions, which serve as a quantum environment. But the two ions exchange stuff also with the wider world, such as stray electromagnetic fields. The latter exchanges may push the small system farther toward equilibrium than the extra ions alone do.
Fortunately for the development of my explanatory skills, collaborators prodded me to hone my argument. The wider world, they pointed out, effectively has a very high temperature—an infinite temperature.1 Equilibrating with that environment, the two ions would acquire an infinite temperature themselves. The two ions would approach an infinite-temperature thermal state, which differs from the non-Abelian thermal state we aimed to observe.
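An infinite-temperature thermal state is easy to write down: as β = 1/(k_B T) approaches 0, exp(−βH) approaches the identity, so the thermal state approaches the maximally mixed state I/d, regardless of the Hamiltonian. A quick numerical check (the toy spectrum is mine):

```python
import numpy as np

d = 4                                        # e.g., two qubits
energies = np.array([0.0, 1.0, 2.0, 3.0])    # any spectrum will do
beta = 1e-9                                  # nearly infinite temperature

weights = np.exp(-beta * energies)
rho = np.diag(weights / weights.sum())       # thermal state at this beta

maximally_mixed = np.eye(d) / d
print(np.max(np.abs(rho - maximally_mixed)))  # tiny: effectively I / d
```

At infinite temperature, every energy level is equally populated—the state carries no memory of the Hamiltonian at all, unlike the non-Abelian thermal state we were hunting.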
Fair, I said. But the extra ions probably have a fairly high temperature themselves. So the non-Abelian thermal state is probably close to the infinite-temperature thermal state. Analogously, if someone cooks goulash similarly to his father, and the father cooks goulash similarly to his grandfather, then the youngest chef cooks goulash similarly to his grandfather. If the wider world pushes the two ions to equilibrate to infinite temperature, then, because the infinite-temperature state lies near the non-Abelian thermal state, the wider world pushes the two ions to equilibrate to near the non-Abelian thermal state.
I plugged numbers into a few equations to check that the extra ions do have a high temperature. (Perhaps I should have done so before proposing the argument above, but my collaborators were kind enough not to call me out.)
Aleks hammered the nail into the problem’s coffin by incorporating into his simulations the two ions’ interaction with an infinite-temperature wider world. His numerical data points dropped to near the experimental data points. The new plot supported my story.
I can explain that! Aleks’s results buoyed me the whole next day; I found myself smiling at random times throughout the afternoon. Not that I’d explained a grand mystery, like the unexpected hiss heard by Arno Penzias and Robert Wilson when they turned on a powerful antenna in 1964. The hiss turned out to come from the cosmic microwave background (CMB), a collection of photons that fill the visible universe. The CMB provided evidence for the then-controversial Big Bang theory of the universe’s origin. Discovering the CMB earned Penzias and Wilson a Nobel Prize. If the noise caused by the CMB was music to cosmologists’ ears, the noise in our experiment is the quiet wailing of a shy banshee. But it’s our experiment’s noise, and we understand it now.
The experience hasn’t weaned me off the romance of proving theorems about thought experiments. Theorems about thermodynamic quantum uncertainty inspired the experiment that yielded the plot that confused us. But I now second Gil’s sentiment. In the throes of an experiment, “I can explain that” can feel like a battle cry.
1Experts: The wider world effectively has an infinite temperature because (i) the dominant decoherence is dephasing relative to the z product eigenbasis and (ii) the experimentalists rotate their qubits often, to simulate a rotationally invariant Hamiltonian evolution. So the qubits effectively undergo dephasing relative to the x, y, and z eigenbases.
I felt like a gum ball trying to squeeze my way out of a gum-ball machine.
I was one of 50-ish physicists crammed into the lobby—and in the doorway, down the stairs, and onto the sidewalk—of a Manhattan hotel last December. Everyone had received a COVID vaccine, and the omicron variant hadn’t yet begun chewing up North America. Everyone had arrived on the same bus that evening, feeding on the neon-bright views of Fifth Avenue through dinnertime. Everyone wanted to check in and offload suitcases before experiencing firsthand the reason for the nickname “the city that never sleeps.” So everyone was jumbled together in what passed for a line.
We’d just passed the halfway point of the week during which I was pretending to be a string theorist. I do that whenever my research butts up against black holes, chaos, quantum gravity (the attempt to unify quantum physics with Einstein’s general theory of relativity), and alternative space-times. These topics fall under the heading “It from Qubit,” which calls for understanding puzzling physics (“It”) by analyzing how quantum systems process information (“Qubit”). The “It from Qubit” crowd convenes for one week each December, to share progress and collaborate.1 The group spends Monday through Wednesday at Princeton’s Institute for Advanced Study (IAS), dogged by photographs of Einstein, busts of Einstein, and roads named after Einstein. A bus ride later, the group spends Thursday and Friday at the Simons Foundation in New York City.
I don’t usually attend “It from Qubit” gatherings, as I’m actually a quantum information theorist and quantum thermodynamicist. Having admitted as much during the talk I presented at the IAS, I failed at pretending to be a string theorist. Happily, I adore being the most ignorant person in a roomful of experts, as the experience teaches me oodles. At lunch and dinner, I’d plunk down next to people I hadn’t spoken to and ask what they see as trending in the “It from Qubit” community.
One buzzword (replicas), I’d first picked up on shortly before the pandemic began. Having lived a frenetic life, that trend seemed to be declining. Rising buzzwords (factorization and islands), I hadn’t heard in black-hole contexts before. People were still tossing around terms from when I’d first forayed into “It from Qubit” (scrambling and out-of-time-ordered correlator), but differently from then. Five years ago, the terms identified the latest craze. Now, they sounded entrenched, as though everyone expected everyone else to know and accept their significance.
One buzzword labeled my excuse for joining the workshops: complexity. Complexity wears as many meanings as the stereotypical New Yorker wears items of black clothing. Last month, guest blogger Logan Hillberry wrote about complexity that emerges in networks such as brains and social media. To “It from Qubit,” complexity quantifies the difficulty of preparing a quantum system in a desired state. Physicists have conjectured that a certain quantum state’s complexity parallels properties of gravitational systems, such as the length of a wormhole that connects two black holes. The wormhole’s length grows steadily for a time exponentially large in the gravitational system’s size. So, to support the conjecture, researchers have been trying to prove that complexity typically grows similarly. Collaborators and I proved that it does, as I explained in my talk and as I’ll explain in a future blog post. Other speakers discussed experimental complexities, as well as the relationship between complexity and a simplified version of Einstein’s equations for general relativity.
I learned a bushel of physics, moonlighting as a string theorist that week. The gum-ball-machine lobby, though, retaught me something I’d learned long before the pandemic. Around the time I squeezed inside the hotel, a postdoc struck up a conversation with the others of us who were clogging the doorway. We had a decent fraction of an hour to fill; so we chatted about quantum thermodynamics, grant applications, and black holes. I asked what the postdoc was working on, he explained a property of black holes, and it reminded me of a property of thermodynamics. I’d nearly reached the front desk when I realized that, out of the sheer pleasure of jawing about physics with physicists in person, I no longer wanted to reach the front desk. The moment dangles in my memory like a crystal ornament from the lobby’s tree—pendant from the pandemic, a few inches from the vaccines suspended on one side and from omicron on the other. For that moment, in a lobby buoyed by holiday lights, wrapped in enough warmth that I’d forgotten the December chill outside, I belonged to the “It from Qubit” community as I hadn’t belonged to any community in 22 months.
Happy new year.
1In person or virtually, pandemic-dependently.
Thanks to the organizers of the IAS workshop—Ahmed Almheiri, Adam Bouland, Brian Swingle—for the invitation to present and to the organizers of the Simons Foundation workshop—Patrick Hayden and Matt Headrick—for the invitation to attend.
Shortly after becoming a Fellow of QuICS, the Joint Center for Quantum Information and Computer Science, I received an email from a university communications office. The office wanted to take professional photos of my students and postdocs and me. You’ve probably seen similar photos, in which theoretical physicists are writing equations, pointing at whiteboards, and thinking deep thoughts. No surprise there.
A big surprise followed: Tom Ventsias, the director of communications at the University of Maryland Institute for Advanced Computer Studies (UMIACS), added, “I wanted to hear your thoughts about possibly doing a dual photo shoot for you—one more ‘traditional,’ one ‘quantum steampunk’ style.”
Steampunk, as Quantum Frontiers regulars know, is a genre of science fiction. It combines futuristic technologies, such as time machines and automata, with Victorian settings. I call my research “quantum steampunk,” as it combines the cutting-edge technology of quantum information science with the thermodynamics—the science of energy—developed during the 1800s. I’ve written a thesis called “Quantum steampunk”; authored a trade nonfiction book with the same title; and presented enough talks about quantum steampunk that, strung together, they’d give one laryngitis. But I don’t own goggles, hoop skirts, or petticoats. The most steampunk garb I’d ever donned before this autumn, I wore for a few minutes at age six or so, for dress-up photos at a theme park. I don’t even like costumes.
But I earned my PhD under the auspices of fellow Quantum Frontiers blogger John Preskill,1 whose career suggests a principle to live by: While unravelling the universe’s nature and helping to shape humanity’s intellectual future, you mustn’t take yourself too seriously. This blog has exhibited a photo of John sitting in Caltech’s information-sciences building, exuding all the gravitas of a Princeton degree, a Harvard degree, and a world-impacting career—sporting a baseball glove you’d find in a high-school gym class, as though it were a Tag Heuer watch. John adores baseball, and the photographer who documented Caltech’s Institute for Quantum Information and Matter brought out the touch of whimsy like the ghost of a smile.
Let’s try it, I told Tom.
One rust-colored November afternoon, I climbed to the top of UMIACS headquarters—the Iribe Center—whose panoramic view of campus begs for photographs. Two students were talking in front of a whiteboard, and others were lunching on the sandwiches, fruit salad, and cheesecake ordered by Tom’s team. We took turns brandishing markers, gesturing meaningfully, and looking contemplative.
Then, the rest of my team dispersed, and the clock rewound 150 years.
The professionalism and creativity of Tom’s team impressed me. First, they’d purchased a steampunk hat, complete with goggles and silver wires. Recalling the baseball-glove photo, I suggested that I wear the hat while sitting at a table, writing calculations as I ordinarily would.
Then, the team upped the stakes. Earlier that week, Maria Herd, a member of the communications office, had driven me to the University of Maryland performing-arts center. We’d sifted through the costume repository until finding skirts, vests, and a poofy white shirt reminiscent of the 1800s. I swapped clothes near the photo-shoot area, while the communications team beamed a London street in from the past. Not really, but they nearly did: They’d found a backdrop suitable for the 2020 Victorian-era Netflix hit Enola Holmes and projected the backdrop onto a screen. I stood in front of the screen, and a sheet of glass stood in front of me. I wrote equations on the glass while the photographer, John Consoli, snapped away.
The final setup, I would never have dreamed of. Days earlier, the communications team had located an elevator lined, inside, with metal links. They’d brought colorful, neon-lit rods into the elevator and experimented with creating futuristic backdrops. On photo-shoot day, they positioned me in the back of the elevator and held the light-saber-like rods up.
But we couldn’t stop anyone from calling the elevator. We’d ride up to the third or fourth floor, and the door would open. A student would begin to step in; halt; and stare at my floor-length skirt, the neon lights, and the photographer’s back.
“Feel free to get in.” John’s assistant, Gail Marie Rupert, would wave them inside. The student would shuffle inside—in most cases—and the door would close.
“What floor?” John would ask.
John would twist around, press the appropriate button, and then turn back to his camera.
Once, when the door opened, the woman who entered complimented me on my outfit. Another time, a student asked if he was really in the Iribe Center. I regard that question as evidence of success.
John Consoli took 654 photos. I found the process fascinating, as a physicist. I have a domain of expertise; and I know the feeling of searching for—working toward—pushing for—a theorem or a conceptual understanding that satisfies me, in that domain. John’s area of expertise differs from mine, so I couldn’t say what he was searching for. But I recognized his intent and concentration, as Gail warned him that time had run out and he then made an irritated noise, inched sideways, and stole a few more snapshots. I felt like I was seeing myself in a reflection—not in the glass I was writing on, but in another sphere of the creative life.
The communications team’s eagerness to engage in quantum steampunk—to experiment with it, to introduce it into photography, to make it their own—bowled me over. Quantum steampunk isn’t just a stack of papers by one research group; it’s a movement. Seeing a team invest its time, energy, and imagination in that movement felt like receiving a deep bow or curtsy. Thanks to the UMIACS communications office for bringing quantum steampunk to life.
1Who hasn’t blogged in a while. How about it, John?
Tourism websites proclaim, “There’s beautiful…and then there’s Santa Barbara.” I can’t accuse them of hyperbole, after living in Santa Barbara for several months. Santa Barbara’s beauty manifests in its whitewashed buildings, capped with red tiles; in the glint of sunlight on ocean waves; and in the pockets of tranquility enfolded in meadows and copses. An example lies about an hour’s walk from the Kavli Institute for Theoretical Physics (KITP), where I spent the late summer and early fall: an estuary. According to National Geographic, “[a]n estuary is an area where a freshwater river or stream meets the ocean.” The meeting of freshwater and saltwater echoed the meeting of disciplines at the KITP.
The KITP fosters science as a nature reserve fosters an ecosystem. Every year, the institute hosts several programs, each centered on one scientific topic. A program lasts a few weeks or months, during which scientists visit from across the world. We present our perspectives on the program topic, identify intersections of interests, collaborate, and exclaim over the ocean views afforded by our offices.
From August to October, the KITP hosted two programs about energy and information. The first program was called “Energy and Information Transport in Non-Equilibrium Quantum Systems,” or “Information,” for short. The second program was called “Non-Equilibrium Universality: From Classical to Quantum and Back,” or “Universality.” The programs’ topics and participant lists overlapped, so the KITP merged “Information” and “Universality” to form “Infoversality.” Don’t ask me which program served as the saltwater and which as the fresh.
But the mingling of minds ran deeper. Much of “Information” centered on quantum many-body physics, the study of behaviors emergent in collections of quantum particles. But the program introduced many-body quantum physicists to quantum thermodynamics and vice versa. (Quantum thermodynamicists re-envision thermodynamics, the Victorian science of energy, for quantum, small, information-processing, and far-from-equilibrium systems.) Furthermore, quantum thermodynamicists co-led the program and presented research at it. Months ago, someone advertised the program in the quantum-thermodynamics Facebook group as an activity geared toward group members.
The ocean of many-body physics was to meet the river of quantum thermodynamics, and I was thrilled as a trout swimming near a hiker who’s discovered cracker crumbs in her pocket.
A few of us live in this estuary, marrying quantum thermodynamics and many-body physics. I waded into the waters in 2016, by codesigning an engine (the star of Victorian thermodynamics) formed from a quantum material (studied in many-body physics). We can use tools from one field to solve problems in the other, draw inspiration from one to design questions in the other, and otherwise do what the United States Food and Drug Administration recently announced that we can do with COVID-19 vaccines: mix and match.
It isn’t easy being interdisciplinary, so I wondered how this estuary would fare when semi-institutionalized in a program. I collected observations like seashells—some elegantly molded, some liable to cut a pedestrian’s foot, and some both.
A sand dollar washed up early in the program, as I ate lunch with a handful of many-body physicists. An experimentalist had just presented a virtual talk about nanoscale clocks, which grew from studies of autonomous quantum clocks. The latter run on their own, without needing any external system to wind or otherwise control them. You’d want such clocks if building quantum engines, computers, or drones that operate remotely. Clocks measure time, time complements energy mathematically in physics, and thermodynamics is the study of energy; so autonomous quantum clocks have taken root in quantum thermodynamics. So I found myself explaining autonomous quantum clocks over sandwiches. My fellow diners expressed interest alongside confusion.
A scallop shell, sporting multiple edges, washed up later in the program: Many-body physicists requested an introduction to quantum thermodynamics. I complied one afternoon, at a chalkboard in the KITP’s outdoor courtyard. The discussion lasted for an hour, whereas most such conversations lasted for two. But three participants peppered me with questions over the coming weeks.
A conch shell surfaced, whispering when held to an ear. One program participant, a member of one community, had believed the advertising that had portrayed the program as intended for his cohort. The portrayal didn’t match reality, to him, and he’d have preferred to dive more deeply into his own field.
I dove into a collaboration with other KITPists—a many-body project inspired by quantum thermodynamics. Keep an eye out for a paper and a dedicated blog post.
A conference talk served as a polished shell, reflecting light almost as a mirror. The talk centered on erasure, a process that unites thermodynamics with information processing: Imagine performing computations in math class. You need blank paper (or the neurological equivalent) on which to scribble. Upon computing a great deal, you have to erase the paper—to reset it to a clean state. Erasing calls for rubbing an eraser across the paper and so for expending energy. This conclusion extends beyond math class and paper: To compute—or otherwise process information—for a long time, we have to erase information-storage systems and so to expend energy. This conclusion renders erasure sacred to us thermodynamicists who study information processing. Erasure litters our papers, conferences, and conversations.
Erasure’s energy cost trades off with time: The more time you can spend on erasure, the less energy you need.1 The conference talk explored this tradeoff, absorbing the quantum thermodynamicist in me. A many-body physicist asked, at the end of the talk, why we were discussing erasure. What quantum thermodynamicists took for granted, he hadn’t heard of. He reflected back at our community an image of ourselves from an outsider’s perspective. The truest mirror might not be the flattest and least clouded.
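The least possible cost, reached only in the infinitely slow limit, is Landauer’s bound: k_B T ln 2 per bit erased. At room temperature, that’s a minuscule amount of energy:

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, joules per kelvin (exact, SI)
T = 300.0                 # roughly room temperature, in kelvins

landauer_bound = k_B * T * math.log(2)
print(landauer_bound)     # ~2.87e-21 joules per erased bit
```

Tiny as that number is, it sets a fundamental floor on the energy cost of computation—which is why erasure earns so much attention from thermodynamicists of information.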
Plants and crustaceans, mammals and birds, grow in estuaries. Call me a bent-nosed clam, but I prefer a quantum estuary to all other environments. Congratulations to the scientists who helped create a quantum estuary this summer and fall, and I look forward to the harvest.
1The least amount of energy that erasure can cost, on average over trials, is called Landauer’s bound. You’d pay this bound’s worth of energy if you erased infinitely slowly.
I’m publishing a book! Quantum Steampunk: The Physics of Yesterday’s Tomorrow is hitting bookstores next spring, and you can preorder it now.
As Quantum Frontiers regulars know, steampunk is a genre of literature, art and film. Steampunkers fuse 19th-century settings (such as Victorian England, the Wild West, and Meiji Japan) with futuristic technologies (such as dirigibles, time machines, and automata). So does my field of research, a combination of thermodynamics, quantum physics, and information processing.
Thermodynamics, the study of energy, developed during the Industrial Revolution. The field grew from practical concerns (How efficiently can engines pump water out of mines?) but wound up addressing fundamental questions (Why does time flow in only one direction?). Thermodynamics needs re-envisioning for 21st-century science, which spotlights quantum systems—electrons, protons, and other basic particles. Early thermodynamicists couldn’t even agree that atoms existed, let alone dream that quantum systems could process information in ways impossible for nonquantum systems. Over the past few decades, we’ve learned that quantum technologies can outperform their everyday counterparts in solving certain computational problems, in securing information, and in transmitting information. The study of quantum systems’ information-processing power forms a mathematical and conceptual toolkit, quantum information science. My colleagues and I leverage this toolkit to reconceptualize thermodynamics. As we combine a 19th-century framework (thermodynamics) with advanced technology (quantum information), I call our field quantum steampunk.
Glimpses of quantum steampunk have surfaced on this blog throughout the past eight years. The book is another animal, a 15-chapter closeup of the field. The book sets the stage with introductions to information processing, quantum physics, and thermodynamics. Then, we watch these three perspectives meld into one coherent whole. We tour the landscape of quantum thermodynamics—the different viewpoints and discoveries championed by different communities. These viewpoints, we find, offer a new lens onto the rest of science, including chemistry, black holes, and materials physics. Finally, we peer through a brass telescope to where quantum steampunk is headed next. Throughout the book, the science interleaves with anecdotes, history, and the story of one woman’s (my) journey into physics—and with snippets from a quantum-steampunk novel that I’ve dreamed up.
On this blog, different parts of my posts are intended for different audiences. Each post contains something for everyone, but not everyone will understand all of each post. In contrast, the book targets the general educated layperson. One of my editors majored in English, and another majored in biology, so the physics should come across clearly to everyone (and if it doesn’t, blame my editors). But the book will appeal to physicists, too. Reviewer Jay Lawrence, a professor emeritus of Dartmouth College’s physics department, wrote, “Presenting this vision [of quantum thermodynamics] in a manner accessible to laypeople discovering new interests, Quantum Steampunk will also appeal to specialists and aspiring specialists.” This book is for you.
Strange to say, I began writing Quantum Steampunk under a year ago. I was surprised to receive an email from Tiffany Gasbarrini, a senior acquisitions editor at Johns Hopkins University Press, in April 2020. Tiffany had read the article I’d written about quantum steampunk for Scientific American. She wanted to expand the press’s offerings for the general public. Would I be interested in writing a book proposal? she asked.
Not having expected such an invitation, I poked around. The press’s roster included books that caught my eye, by thinkers I wanted to meet. From Wikipedia, I learned that Johns Hopkins University Press is “the oldest continuously running university press in the United States.” Senior colleagues of mine gave the thumbs-up. So I let my imagination run.
I developed a table of contents while ruminating on long walks, which I’d begun taking at the start of the pandemic. In late July, I submitted my book proposal. As the summer ended, I began writing the manuscript.
Writing the first draft—73,000 words—took about five months. The process didn’t disrupt life much. I’m used to writing regularly; I’ve written one blog post per month here since 2013, and I wrote two novels during and after college. I simply picked up my pace. At first, I wrote only on weekends. Starting in December 2020, I wrote 1,000 words per day. The process wasn’t easy, but it felt like a morning workout—healthy and productive. That productivity fed into my science, which fed back into the book. One of my current research projects grew from the book’s epilogue. A future project, I expect, will evolve from Chapter 5.
As soon as I finished draft one—last January—Tiffany and I hunted for an illustrator. We were fortunate to find Todd Cahill, a steampunk artist. He transformed the poor sketches that I’d made into works of art.
Early this spring, I edited the manuscript. That edit was to a stroll as the next edit was to the Boston Marathon. Editor Michael Zierler coached me through the marathon. He identified concepts that needed clarification, discrepancies between explanations, and analogies that had run away with me—as well as the visions and turns of phrase that delighted him, to balance the criticism. As Michael and I toiled, 17 of my colleagues were kind enough to provide feedback. They read sections about their areas of expertise, pointed out subtleties, and confirmed facts.
Soon after Michael and I crossed the finish line, copyeditor Susan Matheson took up the baton. She hunted for typos, standardized references, and more. Come June, I was editing again—approving and commenting on her draft. Simultaneously, Tiffany designed the cover, shown above, with more artists. The marketing team reached out, and I began planning this blog post. Scratch that—I’ve been dreaming about this blog post for almost a year. But I forced myself not to spill the beans here till I told the research group I’ve been building. I shared about the book with them two Thursdays ago, and I hope that book critics respond as they did.
Every time I’ve finished a draft, my husband and I have celebrated by ordering takeout sandwiches from our favorite restaurant. Three sandwich meals are down, and we have one to go.
Having dreamed about this blog post for a year, I’m thrilled to bits to share my book with you. It’s available for preordering, and I encourage you to support your local bookstore by purchasing through bookshop.org. The book is available also through Barnes & Noble, Amazon, Waterstones, and the other usual suspects. For press inquiries, or to request a review copy, contact Kathryn Marguy at email@example.com.
Over the coming year, I’ll continue sharing about my journey into publishing—the blurbs we’ll garner for the book jacket, the first copies hot off the press, the reviews and interviews. I hope that you’ll don your duster coat and goggles (every steampunker wears goggles), hop into your steam-powered gyrocopter, and join me.
I had a relative to whom my parents referred, when I was little, as “that great-aunt of yours who walked into a glass door at your cousin’s birthday party.” I was a small child in a large family that mostly lived far away; little else distinguished this great-aunt from other relatives, in my experience. She’d intended to walk from my grandmother’s family room to the back patio. A glass door stood in the way, but she didn’t see it. So my great-aunt whammed into the glass; spent part of the party on the couch, nursing a nosebleed; and earned the epithet via which I identified her for years.
After growing up, I came to know this great-aunt as a kind, gentle woman who adored her family and was adored in return. After growing into a physicist, I came to appreciate her as one of my earliest instructors in necessary and sufficient conditions.
My great-aunt’s intended path satisfied one condition necessary for her to reach the patio: Nothing visible obstructed the path. But the path failed to satisfy a sufficient condition: The invisible obstruction—the glass door—had been neither slid nor swung open. Sufficient conditions, my great-aunt taught me, mustn’t be overlooked.
Her lesson underlies a paper I published this month, with coauthors from the Cambridge other than mine—Cambridge, England: David Arvidsson-Shukur and Jacob Chevalier Drori. The paper concerns, rather than pools and patios, quasiprobabilities, which I’ve blogged about many times [1,2,3,4,5,6,7]. Quasiprobabilities are quantum generalizations of probabilities. Probabilities describe everyday, classical phenomena, from Monopoly to March Madness to the weather in Massachusetts (and especially the weather in Massachusetts). Probabilities are real numbers (not dependent on the square root of −1); they’re at least zero; and they compose in certain ways (the probability of sun or hail equals the probability of sun plus the probability of hail). Also, the probabilities that form a distribution, or a complete set, sum to one (if there’s a 70% chance of rain, there’s a 30% chance of no rain).
In contrast, quasiprobabilities can be negative and nonreal. We call such values nonclassical, as they’re unavailable to the probabilities that describe classical phenomena. Quasiprobabilities represent quantum states: Imagine some clump of particles in a quantum state described by some quasiprobability distribution. We can imagine measuring the clump however we please. We can calculate the possible outcomes’ probabilities from the quasiprobability distribution.
My favorite quasiprobability is an obscure fellow unbeknownst even to most quantum physicists: the Kirkwood-Dirac distribution. John Kirkwood defined it in 1933, and Paul Dirac defined it independently in 1945. Then, quantum physicists forgot about it for decades. But the quasiprobability has undergone a renaissance over the past few years: Experimentalists have measured it to infer particles’ quantum states in a new way. Also, colleagues and I have generalized the quasiprobability and discovered applications of the generalization across quantum physics, from quantum chaos to metrology (the study of how we can best measure things) to quantum thermodynamics to the foundations of quantum theory.
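To make the Kirkwood-Dirac distribution concrete, here is a minimal numerical sketch of my own. The bases, the state, and all variable names are illustrative choices of mine, not anything from a specific experiment; the sketch computes the entries Q[i, j] = ⟨b_j|a_i⟩⟨a_i|ρ|b_j⟩ for a single qubit and two bases.

```python
import numpy as np

# Toy Kirkwood-Dirac (KD) distribution for one qubit.
# Q[i, j] = <b_j|a_i> <a_i|rho|b_j> for bases {|a_i>} and {|b_j>}.
# The bases and state below are my own illustrative choices.

a = np.eye(2, dtype=complex)                                  # computational basis
b = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard basis

# A pure state |psi> = cos(pi/8)|0> - sin(pi/8)|1>
psi = np.array([np.cos(np.pi / 8), -np.sin(np.pi / 8)], dtype=complex)
rho = np.outer(psi, psi.conj())

Q = np.empty((2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        Q[i, j] = (b[:, j].conj() @ a[:, i]) * (a[:, i].conj() @ rho @ b[:, j])

print(Q.real)        # one entry is negative -- a nonclassical value
print(Q.sum().real)  # yet the entries sum to 1, like a probability distribution
```

For this particular state, one entry of Q comes out negative, illustrating the nonclassicality discussed above, while the entries still sum to one.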
In some applications, nonclassical quasiprobabilities enable a system to achieve a quantum advantage—to usefully behave in a manner impossible for classical systems. Examples include metrology: Imagine wanting to measure a parameter that characterizes some piece of equipment. You’ll perform many trials of an experiment. In each trial, you’ll prepare a system (for instance, a photon) in some quantum state, send it through the equipment, and measure one or more observables of the system. Say that you follow the protocol described in this blog post. A Kirkwood-Dirac quasiprobability distribution describes the experiment.1 From each trial, you’ll obtain information about the unknown parameter. How much information can you obtain, on average over trials? Potentially more information if some quasiprobabilities are negative than if none are. The quasiprobabilities can be negative only if the state and observables fail to commute with each other. So noncommutation—a hallmark of quantum physics—underlies exceptional metrological results, as shown by Kirkwood-Dirac quasiprobabilities.
Exceptional results are useful, and we might aim to design experiments that achieve them. We can by designing experiments described by nonclassical Kirkwood-Dirac quasiprobabilities. When can the quasiprobabilities become nonclassical? Whenever the relevant quantum state and observables fail to commute, the quantum community used to believe. This belief turns out to mirror the expectation that one could access my grandmother’s back patio from the living room whenever no visible barriers obstructed the path. As a lack of visible barriers was necessary for patio access, noncommutation is necessary for Kirkwood-Dirac nonclassicality. But noncommutation doesn’t suffice, according to my paper with David and Jacob. We identified a sufficient condition, sliding back the metaphorical glass door on Kirkwood-Dirac nonclassicality. The condition depends on simple properties of the system, state, and observables. (Experts: Examples include the Hilbert space’s dimensionality.) We also quantified and upper-bounded the amount of nonclassicality that a Kirkwood-Dirac quasiprobability can contain.
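To see why noncommutation alone can fail to deliver nonclassicality, here is a toy counterexample of my own construction (not the paper’s): a qubit state that fails to commute with one of the measured observables, yet whose Kirkwood-Dirac distribution is a bona fide probability distribution, real and nonnegative.

```python
import numpy as np

# My own illustrative counterexample: the state rho fails to commute with the
# observable X (diagonal in basis B), yet the Kirkwood-Dirac distribution
# Q[i, j] = <b_j|a_i> <a_i|rho|b_j> is real and nonnegative.

a = np.eye(2, dtype=complex)                                  # basis A: |0>, |1>
b = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # basis B: |+>, |->

rho = np.diag([1.0, 0.0]).astype(complex)                     # rho = |0><0|
X = b @ np.diag([1.0, -1.0]).astype(complex) @ b.conj().T     # observable diagonal in B

assert np.linalg.norm(rho @ X - X @ rho) > 1e-9               # they don't commute

Q = np.empty((2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        Q[i, j] = (b[:, j].conj() @ a[:, i]) * (a[:, i].conj() @ rho @ b[:, j])

# Despite the noncommutation, every entry of Q is real and nonnegative:
print(Q.real)
```

So a lack of commutation is like a lack of visible barriers: necessary for nonclassicality, but, as this toy example shows, not enough on its own.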
From an engineering perspective, our results can inform the design of experiments intended to achieve certain quantum advantages. From a foundational perspective, the results help illuminate the sources of certain quantum advantages. To achieve certain advantages, noncommutation doesn’t cut the mustard—but we now know a condition that does.
For another take on our paper, check out this news article in Physics Today.
1Really, a generalized Kirkwood-Dirac quasiprobability. But that phrase contains a horrendous number of syllables, so I’ll elide the “generalized.”
Intelligent beings have the ability to receive, process, and store information and, based on the processed information, to predict what will happen in the future and act accordingly.
We, as intelligent beings, receive, process, and store classical information. The information comes from vision, hearing, smell, and tactile sensing. The data are encoded as analog classical information in the electrical pulses sent through our nerve fibers. Our brain processes this information classically through neural circuits (at least that is our current understanding, but one should check out this blogpost). We then store this processed classical information in our hippocampus, which allows us to retrieve it later and combine it with future information that we obtain. Finally, we use the stored classical information to make predictions about the future (imagining the outcomes of performing certain actions) and choose the action that would most likely be in our favor.
Such abilities have enabled us to make remarkable accomplishments: soaring in the sky by constructing accurate models of how air flows around objects, or building weak forms of intelligent beings capable of holding basic conversations and playing different board games. Instead of receiving/processing/storing classical information, one could imagine some form of quantum intelligence that deals with quantum information. These quantum beings could receive quantum information through quantum sensors built from tiny photons and atoms. They would then process this quantum information with quantum-mechanical evolutions (such as quantum computers) and store the processed qubits in a quantum memory (protected with a surface code or toric code).
It is natural to wonder what a world of quantum intelligence would be like. While we have never encountered such a strange creature in the real world (yet), the mathematics of quantum mechanics, machine learning, and information theory allows us to peek into what such a fantastic world would be like. The physical world we live in is intrinsically quantum. So one may imagine that a quantum being is capable of making more powerful predictions than a classical being. Maybe he/she/they could better predict events that happened further away, such as telling us how a distant black hole was engulfing another? Or perhaps he/she/they could improve our lives, for example by presenting us with an entirely new approach for capturing energy from sunlight?
One may be skeptical about finding quantum intelligent beings in nature (and rightfully so). But it may not be so absurd to synthesize a weak form of quantum (artificial) intelligence in an experimental lab, or to enhance our classical human intelligence with quantum devices to approximate a quantum-mechanical being. Many famous companies, like Google, IBM, Microsoft, and Amazon, as well as many academic labs and startups, have been building better quantum machines/computers day by day. By combining the concepts of machine learning on classical computers with these quantum machines, the future of us interacting with some form of quantum (artificial) intelligence may not be so distant.
Before the day comes, could we peek into the world of quantum intelligence? And could one understand how much more powerful it could be than classical intelligence?
In a recent publication, my advisor John Preskill, my good friend Richard Kueng, and I made some progress toward these questions. We consider a quantum-mechanical world where classical beings could obtain classical information by measuring the world (performing POVM measurements). In contrast, quantum beings could retrieve quantum information through quantum sensors and store the data in a quantum memory. We study how much better quantum beings could learn from the physical world than classical beings, in order to accurately predict the outcomes of unseen events (with a focus on the number of interactions with the physical world rather than computation time). We cast these problems in a rigorous mathematical framework and utilize high-dimensional probability and quantum information theory to understand their respective prediction powers. Rigorously, one refers to a classical/quantum being as a classical/quantum model, algorithm, protocol, or procedure. This is because the actions of these classical/quantum beings are the center of the mathematical analysis.
Formally, we consider the task of learning an unknown physical evolution described by a CPTP map $\mathcal{E}$ that takes in an $n$-qubit state and maps it to an $m$-qubit state. The classical model can select an arbitrary classical input $x$ to the CPTP map and measure the output state of the CPTP map with some POVM measurement. The quantum model can access the CPTP map coherently and obtain quantum data from each access, which is equivalent to composing multiple CPTP maps with quantum computations to learn about the CPTP map. The task is to predict a property of the output state $\mathcal{E}(\lvert x \rangle\!\langle x \rvert)$, given by $\mathrm{tr}\big(O \, \mathcal{E}(\lvert x \rangle\!\langle x \rvert)\big)$, for a new classical input $x$. And the goal is to achieve the task while accessing $\mathcal{E}$ as few times as possible (i.e., with fewer interactions or experiments in the physical world). We denote the numbers of interactions needed by classical and quantum models as $N_{\mathrm{C}}$ and $N_{\mathrm{Q}}$, respectively.
In general, quantum models could learn from fewer interactions with the physical world (or experiments in the physical world) than classical models. This is because coherent quantum information can facilitate better information synthesis with information obtained from previous experiments. Nevertheless, in our publication, we show that there is a fundamental limit to how much more efficient quantum models can be. In order to achieve a prediction error

$$\mathbb{E}_{x \sim \mathcal{D}} \, \big\lvert h(x) - \mathrm{tr}\big(O \, \mathcal{E}(\lvert x \rangle\!\langle x \rvert)\big) \big\rvert \leq \epsilon,$$

where $h(x)$ is the hypothesis learned by the classical/quantum model and $\mathcal{D}$ is an arbitrary distribution over the input space, we found that the speed-up $N_{\mathrm{C}} / N_{\mathrm{Q}}$ is upper bounded by $\mathcal{O}(m / \epsilon)$, where $m$ is the number of qubits each experiment provides (the number of output qubits of the CPTP map $\mathcal{E}$), and $\epsilon$ is the desired prediction error (smaller $\epsilon$ means we want to predict more accurately).
In contrast, when we want to accurately predict all unseen events, we prove that quantum models could use exponentially fewer experiments than classical models. We give a construction for predicting properties of quantum systems showing that quantum models could substantially outperform classical models. These rigorous results show that quantum intelligence shines when we seek stronger prediction performance.
We have only scratched the surface of what is possible with quantum intelligence. As the future unfolds, I am hopeful that we will discover more that can be done only by quantum intelligence, through mathematical analysis, rigorous numerical studies, and physical experiments.
A classical model that can be used to accurately predict properties of quantum systems is the classical shadow formalism that we proposed a year ago. In many tasks, this model can be shown to be one of the strongest rivals that quantum models have to surpass.
Even if a quantum model only receives and stores classical data, the ability to process the data using a quantum-mechanical evolution can still be advantageous. However, obtaining a large advantage will be harder in this case, as the computational power in data can slightly boost classical machines/intelligence.
Another nice paper by Dorit Aharonov, Jordan Cotler, and Xiao-Liang Qi also proved advantages of quantum models over classical ones in some classification tasks.
 Huang, Hsin-Yuan, Richard Kueng, and John Preskill. “Predicting many properties of a quantum system from very few measurements.” Nature Physics 16: 1050-1057 (2020). https://doi.org/10.1038/s41567-020-0932-7
The autumn of my sophomore year of college was mildly hellish. I took the equivalent of three semester-long computer-science and physics courses, atop other classwork; co-led a public-speaking self-help group; and coordinated a celebrity visit to campus. I lived at my desk and in office hours, always declining my flatmates’ invitations to watch The West Wing.
Hard as I studied, my classmates enjoyed greater facility with the computer-science curriculum. They saw immediately how long an algorithm would run, while I hesitated and then computed the run time step by step. I felt behind. So I protested when my professor said, “You’re good at this.”
I now see that we were focusing on different facets of learning. I rued my lack of intuition. My classmates had gained intuition by exploring computer science in high school, then slow-cooking their experiences on a mental back burner. Their long-term exposure to the material provided familiarity—the ability to recognize a new problem as belonging to a class they’d seen examples of. I was cooking course material in a mental microwave set on “high,” as a semester’s worth of material was crammed into ten weeks at my college.
My professor wasn’t measuring my intuition. He only saw that I knew how to compute an algorithm’s run time. I’d learned the material required of me—more than I realized, being distracted by what I hadn’t learned that difficult autumn.
We can learn a staggering amount when pushed far from our comfort zones—and not only we humans can. So can simple collections of particles.
One example is a classical spin glass. A spin glass is a collection of particles that shares some properties with a magnet. Both a magnet and a spin glass consist of tiny mini-magnets called spins. Although I’ve blogged about quantum spins before, I’ll focus on classical spins here. We can imagine a classical spin as a little arrow that points upward or downward. A bunch of spins can form a material. If the spins tend to point in the same direction, the material may be a magnet of the sort that’s sticking the faded photo of Fluffy to your fridge.
The spins may interact with each other, similarly to how electrons interact with each other. Not entirely similarly, though—electrons push each other away. In contrast, a spin may coax its neighbors into aligning or anti-aligning with it. Suppose that the interactions are random: Any given spin may force one neighbor into alignment, gently ask another neighbor to align, entreat a third neighbor to anti-align, and have nothing to say to neighbors four and five.
The spin glass can interact with the external world in two ways. First, we can stick the spins in a magnetic field, as by placing magnets above and below the glass. If aligned with the field, a spin has negative energy; and, if antialigned, positive energy. We can sculpt the field so that it varies across the spin glass. For instance, spin 1 can experience a strong upward-pointing field, while spin 2 experiences a weak downward-pointing field.
Second, say that the spins occupy a fixed-temperature environment, as I occupy a 74-degree-Fahrenheit living room. The spins can exchange heat with the environment. If releasing heat to the environment, a spin flips from having positive energy to having negative—from antialigning with the field to aligning.
Let’s perform an experiment on the spins. First, we design a magnetic field using random numbers. Whether the field points upward or downward at any given spin is random, as is the strength of the field experienced by each spin. We sculpt three of these random fields and call the trio a drive.
Let’s randomly select a field from the drive and apply it to the spin glass for a while; again, randomly select a field from the drive and apply it; and continue many times. The energy absorbed by the spins from the fields spikes, then declines.
Now, let’s create another drive of three random fields. We’ll randomly pick a field from this drive and apply it; again, randomly pick a field from this drive and apply it; and so on. Again, the energy absorbed by the spins spikes, then tails off.
Here comes the punchline. Let’s return to applying the initial fields. The energy absorbed by the glass will spike—but not as high as before. The glass responds differently to a familiar drive than to a new drive. The spin glass recognizes the original drive—has learned the first fields’ “fingerprint.” This learning happens when the fields push the glass far from equilibrium,1 as I learned when pushed during my mildly hellish autumn.
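The experiment above can be sketched in a toy Metropolis simulation. This is a minimal model of my own: the couplings, field strengths, temperature, step counts, and names like `drive_A` are arbitrary choices, far simpler than the systems in the actual studies. It applies drives of three random fields to a spin glass and tallies the energy absorbed at each field switch.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32    # number of spins (toy size, my choice)
T = 0.5   # temperature of the fixed-temperature environment

# Random symmetric couplings with zero diagonal: spin i nudges spin j to
# align (J[i, j] > 0) or anti-align (J[i, j] < 0), or says nothing (J ~ 0).
J = rng.normal(0, 1 / np.sqrt(N), (N, N))
J = np.triu(J, 1)
J = J + J.T

def energy(s, h):
    # Spin-glass energy: interactions plus coupling to the magnetic field h
    return -0.5 * s @ J @ s - h @ s

def sweep(s, h):
    # One Metropolis sweep: accept each flip with probability min(1, e^(-dE/T))
    for i in rng.permutation(N):
        dE = 2 * s[i] * (J[i] @ s + h[i])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]

def apply_drive(s, drive, steps=300):
    # Repeatedly pick a random field from the drive, tally the energy jump
    # at each field switch (the energy absorbed), then let the glass relax.
    work = 0.0
    h = drive[rng.integers(len(drive))]
    for _ in range(steps):
        h_new = drive[rng.integers(len(drive))]
        work += energy(s, h_new) - energy(s, h)
        h = h_new
        sweep(s, h)
    return work

drive_A = [rng.normal(0, 2, N) for _ in range(3)]  # trio of random fields
drive_B = [rng.normal(0, 2, N) for _ in range(3)]  # a second, novel trio

s = rng.choice([-1, 1], N).astype(float)
w1 = apply_drive(s, drive_A)   # first exposure to drive A
w2 = apply_drive(s, drive_B)   # novel drive B
w3 = apply_drive(s, drive_A)   # return to the now-familiar drive A
```

Comparing `w1` (first exposure), `w2` (novel drive), and `w3` (return to the familiar drive) probes the memory effect; a model this small is noisy, so the actual studies quantify the effect far more carefully.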
Scientists have detected many-particle learning by measuring thermodynamic observables. Examples include the energy absorbed by the spin glass—what thermodynamicists call work. But thermodynamics developed during the 1800s, to describe equilibrium systems, not to study learning.
One study of learning—the study of machine learning—has boomed over the past two decades. As described by the MIT Technology Review, “[m]achine-learning algorithms use statistics to find patterns in massive amounts of data.” Users don’t tell the algorithms how to find those patterns.
It seems natural and fitting to use machine learning to learn about the learning by many-particle systems. That’s what I did with collaborators from the group of Jeremy England, a GlaxoSmithKline physicist who studies complex behaviors of many-particle systems. Weishun Zhong, Jacob Gold, Sarah Marzen, Jeremy, and I published our paper last month.
Using machine learning, we detected and measured many-particle learning more reliably and precisely than thermodynamic measures seem able to. Our technique works on multiple facets of learning, analogous to the intuition and the computational ability I encountered in my computer-science course. We illustrated our technique on a spin glass, but one can apply our approach to other systems, too. I’m exploring such applications with collaborators at the University of Maryland.
The project pushed me far from my equilibrium: I’d never worked with machine learning or many-body learning. But it’s amazing, what we can learn when pushed far from equilibrium. I first encountered this insight sophomore fall of college—and now, we can quantify it better than ever.
1Equilibrium is a quiet, restful state in which the glass’s large-scale properties change little. No net flow of anything—such as heat or particles—enters or leaves the system.