How Captain Okoli got his name

About two years ago, I dreamt up a character called Captain Okoli. He features in the imaginary steampunk novel from which I drew snippets to begin the chapters of my otherwise nonfiction book. Captain Okoli is innovative, daring, and kind; he helps the imaginary novel’s heroine, Audrey, on her globe-spanning quest. 

Captain Okoli inherited his name from Chiamaka Okoli, who was a classmate and roommate of mine while we pursued our master’s degrees at the Perimeter Institute for Theoretical Physics. Unfortunately, an illness took Chiamaka’s life shortly after she completed her PhD. Captain Okoli is my tribute to her memory, but my book lacked the space for an explanation of who Chiamaka was or how Captain Okoli got his name. The Perimeter Institute offered a platform in its publication Inside the Perimeter. You can find the article—a story about an innovative, daring, and kind woman—here.

These are a few of my favorite steampunk books

As a physicist, one grows used to answering audience questions at the end of a talk one presents. As a quantum physicist, one grows used to answering questions about futuristic technologies. As a quantum-steampunk physicist, one grows used to the question “Which are your favorite steampunk books?”

Literary Hub has now published my answer.

According to its website, “Literary Hub is an organizing principle in the service of literary culture, a single, trusted, daily source for all the news, ideas and richness of contemporary literary life. There is more great literary content online than ever before, but it is scattered, easily lost—with the help of its editorial partners, Lit Hub is a site readers can rely on for smart, engaged, entertaining writing about all things books.”

My article, “Five best books about the romance of Victorian science,” appeared there last week. You’ll find fiction, nonfiction as imaginative as fiction, and crossings of the border between the two. 

My contribution to literature about the romance of Victorian science—my (mostly) nonfiction book, Quantum Steampunk: The Physics of Yesterday’s Tomorrow—was published two weeks ago. Where’s a hot-air-balloon emoji when you need one?

One equation to rule them all?

In lieu of composing a blog post this month, I’m publishing an article in Quanta Magazine. The article provides an introduction to fluctuation relations, souped-up variations on the second law of thermodynamics, which helps us understand why time flows in only one direction. The earliest fluctuation relations described classical systems, such as single strands of DNA. Many quantum versions have been proved since. Their proliferation contrasts with the stereotype of physicists as obsessed with unification—with slimming down a cadre of equations into one über-equation. Will one quantum fluctuation relation emerge to rule them all? Maybe, and maybe not. Maybe the multiplicity of quantum fluctuation relations reflects the richness of quantum thermodynamics.
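For readers who’d like one concrete example (my choice for illustration here; the Quanta article surveys the landscape more broadly), consider the classical fluctuation relation proved by Chris Jarzynski. Drive a system out of equilibrium and record the work $W$ invested; averaging over many repetitions of the protocol,

$$\langle e^{-\beta W} \rangle = e^{-\beta \Delta F},$$

where $\beta$ denotes the inverse temperature and $\Delta F$ the equilibrium free-energy difference between the protocol’s endpoints. Jensen’s inequality turns this equality into the familiar second-law statement $\langle W \rangle \geq \Delta F$—the sense in which fluctuation relations are souped-up second laws.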

You can read more in Quanta Magazine here and yet more in chapter 9 of my book. For recent advances in fluctuation relations, as opposed to the broad introduction there, check out earlier Quantum Frontiers posts here, here, here, here, and here.

The power of being able to say “I can explain that”

Caltech condensed-matter theorist Gil Refael explained his scientific raison d’être early in my grad-school career: “What really gets me going is seeing a plot [of experimental data] and being able to say, ‘I can explain that.’” The quote has stuck with me almost word for word. When I heard it, I was working deep in abstract quantum information theory and thermodynamics, proving theorems about thought experiments. Embedding myself in pure ideas has always held an aura of romance for me, so I nodded along without seconding Gil’s view.

Roughly nine years later, I concede his point.

The revelation walloped me last month, as I was polishing a paper with experimental collaborators. Members of the Institute for Quantum Optics and Quantum Information (IQOQI) in Innsbruck, Austria—Florian Kranzl, Manoj Joshi, and Christian Roos—had performed an experiment in trapped-ion guru Rainer Blatt’s lab. Their work realized an experimental proposal that I’d designed with fellow theorists near the beginning of my postdoc stint. We aimed to observe signatures of particularly quantum thermalization.

Throughout the universe, small systems exchange stuff with their environments. For instance, the Earth exchanges heat and light with the rest of the solar system. After exchanging stuff for long enough, the small system equilibrates with the environment: Large-scale properties of the small system (such as its volume and energy) remain fairly constant; and as much stuff enters the small system as leaves, on average. The Earth remains far from equilibrium, which is why we aren’t dead yet.

Far from equilibrium and proud of it

In many cases, in equilibrium, the small system shares properties of the environment, such as the environment’s temperature. In these cases, we say that the small system has thermalized and, if it’s quantum, has reached a thermal state.

The stuff exchanged can consist of energy, particles, electric charge, and more. Unlike classical planets, quantum systems can exchange things that participate in quantum uncertainty relations (experts: that fail to commute). Quantum uncertainty mucks up derivations of the thermal state’s mathematical form. Some of us quantum thermodynamicists discovered the mucking up—and identified exchanges of quantum-uncertain things as particularly nonclassical thermodynamics—only a few years ago. We reworked conventional thermodynamic arguments to accommodate this quantum uncertainty. The small system, we concluded, likely equilibrates to near a thermal state whose mathematical form depends on the quantum-uncertain stuff—what we termed a non-Abelian thermal state. I wanted to see this equilibration in the lab. So I proposed an experiment with theory collaborators; and Manoj, Florian, and Christian took a risk on us.
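Schematically (and glossing over details in the theory papers), the non-Abelian thermal state looks like an ordinary grand-canonical state, except that the charges $Q_a$ in the exponent fail to commute with one another:

$$\rho_{\rm NATS} = \frac{1}{Z} \exp\!\left[-\beta \left(H - \sum_a \mu_a Q_a\right)\right], \qquad Z = {\rm Tr}\, \exp\!\left[-\beta \left(H - \sum_a \mu_a Q_a\right)\right].$$

Here, $H$ denotes the small system’s Hamiltonian, $\beta$ an inverse temperature, and the $\mu_a$ chemical-potential-like quantities, one per conserved charge. In our setup, the $Q_a$ are the small system’s spin components.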

The experimentalists arrayed between six and fifteen ions in a line. Two ions formed the small system, and the rest formed the quantum environment. The ions exchanged the x-, y-, and z-components of their spin angular momentum—stuff that participates in quantum uncertainty relations. The ions began with a fairly well-defined amount of each spin component, as described in another blog post. The ions exchanged stuff for a while, and then the experimentalists measured the small system’s quantum state.

The small system equilibrated to near the non-Abelian thermal state, we found. No conventional thermal state modeled the results as accurately. Score!

My postdoc and numerical-simulation wizard Aleks Lasek modeled the experiment on his computer. The small system, he found, remained farther from the non-Abelian thermal state in his simulation than in the experiment. Aleks plotted the small system’s distance to the non-Abelian thermal state against the ion chain’s length. The points produced experimentally sat lower down than the points produced numerically. Why?

I think I can explain that, I said. The two ions exchange stuff with the rest of the ions, which serve as a quantum environment. But the two ions exchange stuff also with the wider world, such as stray electromagnetic fields. The latter exchanges may push the small system farther toward equilibrium than the extra ions alone do.

Fortunately for the development of my explanatory skills, collaborators prodded me to hone my argument. The wider world, they pointed out, effectively has a very high temperature—an infinite temperature.1 Equilibrating with that environment, the two ions would acquire an infinite temperature themselves. The two ions would approach an infinite-temperature thermal state, which differs from the non-Abelian thermal state we aimed to observe.

Fair, I said. But the extra ions probably have a fairly high temperature themselves. So the non-Abelian thermal state is probably close to the infinite-temperature thermal state. Analogously, if someone cooks goulash similarly to his father, and the father cooks goulash similarly to the grandfather, then the youngest chef cooks goulash similarly to his grandfather. If the wider world pushes the two ions to equilibrate to infinite temperature, then, because the infinite-temperature state lies near the non-Abelian thermal state, the wider world pushes the two ions to equilibrate to near the non-Abelian thermal state.
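The goulash argument compresses into one line: at high temperature, $\beta \to 0$, the exponent in the non-Abelian thermal state shrinks, so the state approaches the maximally mixed—infinite-temperature—state,

$$\rho_{\rm NATS} = \frac{e^{-\beta (H - \sum_a \mu_a Q_a)}}{Z} \;\longrightarrow\; \frac{\mathbb{1}}{d} \quad \text{as } \beta \to 0,$$

where $d$ denotes the dimension of the small system’s Hilbert space. So pushing the two ions toward infinite temperature also pushes them toward the neighborhood of the non-Abelian thermal state.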

Tasty, tasty thermodynamics

I plugged numbers into a few equations to check that the extra ions do have a high temperature. (Perhaps I should have done so before proposing the argument above, but my collaborators were kind enough not to call me out.) 

Aleks hammered the nail into the problem’s coffin by incorporating into his simulations the two ions’ interaction with an infinite-temperature wider world. His numerical data points dropped to near the experimental data points. The new plot supported my story.

I can explain that! Aleks’s results buoyed me the whole next day; I found myself smiling at random times throughout the afternoon. Not that I’d explained a grand mystery, like the unexpected hiss heard by Arno Penzias and Robert Wilson when they turned on a powerful antenna in 1964. The hiss turned out to come from the cosmic microwave background (CMB), a collection of photons that fill the visible universe. The CMB provided evidence for the then-controversial Big Bang theory of the universe’s origin. Discovering the CMB earned Penzias and Wilson a Nobel Prize. If the noise caused by the CMB was music to cosmologists’ ears, the noise in our experiment is the quiet wailing of a shy banshee. But it’s our experiment’s noise, and we understand it now.

The experience hasn’t weaned me off the romance of proving theorems about thought experiments. Theorems about thermodynamic quantum uncertainty inspired the experiment that yielded the plot that confused us. But I now second Gil’s sentiment. In the throes of an experiment, “I can explain that” can feel like a battle cry.

1Experts: The wider world effectively has an infinite temperature because (i) the dominant decoherence is dephasing relative to the $\sigma_z$ product eigenbasis and (ii) the experimentalists rotate their qubits often, to simulate a rotationally invariant Hamiltonian evolution. So the qubits effectively undergo dephasing relative to the $\sigma_x$, $\sigma_y$, and $\sigma_z$ eigenbases.

Building a Koi pond with Lie algebras

When I was growing up, one of my favourite places was the shabby all-you-can-eat buffet near our house. We’d walk in, my mom would approach the hostess to explain that, despite my being abnormally large for my age, I qualified for kids-eat-free, and I would peel away to stare at the Koi pond. The display of different fish rolling over one another was bewitching. Ten-year-old me would have been giddy to build my own Koi pond, and now I finally have. However, I built one using Lie algebras.

The different fish swimming in the Koi pond are, in many ways, like charges being exchanged between subsystems. A “charge” is any globally conserved quantity. Examples of charges include energy, particles, electric charge, and angular momentum. Consider a system consisting of a cup of coffee in your office. The coffee will dynamically exchange charges with your office in the form of heat energy. Still, the total energy of the coffee and office is conserved (assuming your office walls are really well insulated). In this example, we had one type of charge (heat energy) and two subsystems (coffee and office). Consider now a closed system consisting of many subsystems and many different types of charges. The closed system is like the finite Koi pond, with the different charges playing the role of the different fish species. The charges can move around locally, but the total amount of each charge is globally fixed, like how the fish swim around but can’t escape the pond. Also, the presence of one type of charge can alter another’s movement, just as a big fish might block a little one’s path.

Unfortunately, the Koi pond analogy reaches its limit when we move to quantum charges. Classically, charges commute. This means that we can simultaneously determine the amount of each charge in our system at each given moment. In quantum mechanics, this isn’t necessarily true. In other words, classically, I can count the number of glossy fish and matte fish. But, in quantum mechanics, I can’t.
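For readers who like to see the noncommutation explicitly, here is a minimal numerical check (my own illustration, using the Pauli matrices as stand-ins for the three spin components—the fish I can’t count simultaneously):

```python
import numpy as np

# Pauli matrices: stand-ins for the x-, y-, and z-components of spin
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def commutator(a, b):
    """[A, B] = AB - BA; it vanishes exactly when A and B commute."""
    return a @ b - b @ a

# The x- and y-components fail to commute: [sx, sy] = 2i * sz, not zero
print(np.allclose(commutator(sx, sy), 0))        # False
print(np.allclose(commutator(sx, sy), 2j * sz))  # True
```

Because the commutator doesn’t vanish, no measurement can pin down all three spin components at once—the quantum analogue of not being able to count the glossy and matte fish simultaneously.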

So why does this matter? Subsystems exchanging charges are prevalent in thermodynamics. Quantum thermodynamics extends thermodynamics to include small systems and quantum effects. Noncommutation underlies many important quantum phenomena. Hence, studying the exchange of noncommuting charges is pivotal in understanding quantum thermodynamics. Consequently, noncommuting charges have emerged as a rapidly growing subfield of quantum thermodynamics. Many interesting results have been discovered from no longer assuming that charges commute (such as these). Until recently, most of these discoveries have been theoretical. Bridging these discoveries to experimental reality requires Hamiltonians (functions that tell you how your system evolves in time) that move charges locally but conserve them globally. Last year it was unknown whether these Hamiltonians exist, what they look like generally, how to build them, and for what charges you could find them.

Nicole Yunger Halpern (NIST physicist, my co-advisor, and Quantum Frontiers blogger) and I developed a prescription for building Koi ponds for noncommuting charges. Our prescription allows you to systematically build Hamiltonians that overtly move noncommuting charges between subsystems while conserving the charges globally. These Hamiltonians are built using Lie algebras, abstract mathematical tools that can describe many physical quantities (including everything in the standard model of particle physics and the space-time metric). Our results were recently published in npj QI. We hope that our prescription will bolster the efforts to bridge the results of noncommuting charges to experimental reality.
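The general prescription lives in the paper; as a hedged illustration of the key property, the textbook two-qubit Heisenberg exchange interaction already behaves like a tiny Koi pond for spin charges. It fails to commute with each local spin component (so the charges move between the two sites) yet commutes with every global spin component (so the charges are conserved overall):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)
paulis = [sx, sy, sz]

def comm(a, b):
    return a @ b - b @ a

# Two-site Heisenberg exchange: H = sum over a of sigma_a (tensor) sigma_a
H = sum(np.kron(s, s) for s in paulis)

for s in paulis:
    local_charge = np.kron(s, id2)                      # the charge on site 1 alone
    global_charge = np.kron(s, id2) + np.kron(id2, s)   # the total charge on both sites
    print(np.allclose(comm(H, local_charge), 0),        # False: the charge moves locally
          np.allclose(comm(H, global_charge), 0))       # True: the charge is conserved globally
```

The same check—local commutators nonzero, global commutators zero—is what any Hamiltonian of the kind described above must pass, whatever Lie algebra generates the charges.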

In the end, a little group theory was all I needed for my Koi pond. Maybe I’ll build a treehouse next with calculus or a remote control car with combinatorics.

Space-time and the city

I felt like a gum ball trying to squeeze my way out of a gum-ball machine. 

I was one of 50-ish physicists crammed into the lobby—and in the doorway, down the stairs, and onto the sidewalk—of a Manhattan hotel last December. Everyone had received a COVID vaccine, and the omicron variant hadn’t yet begun chewing up North America. Everyone had arrived on the same bus that evening, feeding on the neon-bright views of Fifth Avenue through dinnertime. Everyone wanted to check in and offload suitcases before experiencing firsthand the reason for the nickname “the city that never sleeps.” So everyone was jumbled together in what passed for a line.

We’d just passed the halfway point of the week during which I was pretending to be a string theorist. I do that whenever my research butts up against black holes, chaos, quantum gravity (the attempt to unify quantum physics with Einstein’s general theory of relativity), and alternative space-times. These topics fall under the heading “It from Qubit,” which calls for understanding puzzling physics (“It”) by analyzing how quantum systems process information (“Qubit”). The “It from Qubit” crowd convenes for one week each December, to share progress and collaborate.1 The group spends Monday through Wednesday at Princeton’s Institute for Advanced Study (IAS), dogged by photographs of Einstein, busts of Einstein, and roads named after Einstein. A bus ride later, the group spends Thursday and Friday at the Simons Foundation in New York City.

I don’t usually attend “It from Qubit” gatherings, as I’m actually a quantum information theorist and quantum thermodynamicist. Having admitted as much during the talk I presented at the IAS, I failed at pretending to be a string theorist. Happily, I adore being the most ignorant person in a roomful of experts, as the experience teaches me oodles. At lunch and dinner, I’d plunk down next to people I hadn’t spoken to and ask what they see as trending in the “It from Qubit” community. 

One buzzword (replicas), I’d first picked up on shortly before the pandemic began. That trend, having lived a frenetic life, seemed to be declining. Rising buzzwords (factorization and islands), I hadn’t heard in black-hole contexts before. People were still tossing around terms from when I’d first forayed into “It from Qubit” (scrambling and out-of-time-ordered correlator), but differently from then. Five years ago, the terms identified the latest craze. Now, they sounded entrenched, as though everyone expected everyone else to know and accept their significance.

One buzzword labeled my excuse for joining the workshops: complexity. Complexity wears as many meanings as the stereotypical New Yorker wears items of black clothing. Last month, guest blogger Logan Hillberry wrote about complexity that emerges in networks such as brains and social media. To “It from Qubit,” complexity quantifies the difficulty of preparing a quantum system in a desired state. Physicists have conjectured that a certain quantum state’s complexity parallels properties of gravitational systems, such as the length of a wormhole that connects two black holes. The wormhole’s length grows steadily for a time exponentially large in the gravitational system’s size. So, to support the conjecture, researchers have been trying to prove that complexity typically grows similarly. Collaborators and I proved that it does, as I explained in my talk and as I’ll explain in a future blog post. Other speakers discussed experimental complexities, as well as the relationship between complexity and a simplified version of Einstein’s equations for general relativity.

Inside the Simons Foundation on Fifth Avenue in Manhattan

I learned a bushel of physics, moonlighting as a string theorist that week. The gum-ball-machine lobby, though, retaught me something I’d learned long before the pandemic. Around the time I squeezed inside the hotel, a postdoc struck up a conversation with the others of us who were clogging the doorway. We had a decent fraction of an hour to fill; so we chatted about quantum thermodynamics, grant applications, and black holes. I asked what the postdoc was working on, he explained a property of black holes, and it reminded me of a property of thermodynamics. I’d nearly reached the front desk when I realized that, out of the sheer pleasure of jawing about physics with physicists in person, I no longer wanted to reach the front desk. The moment dangles in my memory like a crystal ornament from the lobby’s tree—pendant from the pandemic, a few inches from the vaccines suspended on one side and from omicron on the other. For that moment, in a lobby buoyed by holiday lights, wrapped in enough warmth that I’d forgotten the December chill outside, I belonged to the “It from Qubit” community as I hadn’t belonged to any community in 22 months.

Happy new year.

Presenting at the IAS was a blast. Photo credit: Jonathan Oppenheim.

1In person or virtually, pandemic-dependently.

Thanks to the organizers of the IAS workshop—Ahmed Almheiri, Adam Bouland, Brian Swingle—for the invitation to present and to the organizers of the Simons Foundation workshop—Patrick Hayden and Matt Headrick—for the invitation to attend.

A quantum-steampunk photo shoot

Shortly after becoming a Fellow of QuICS, the Joint Center for Quantum Information and Computer Science, I received an email from a university communications office. The office wanted to take professional photos of my students and postdocs and me. You’ve probably seen similar photos, in which theoretical physicists are writing equations, pointing at whiteboards, and thinking deep thoughts. No surprise there. 

A big surprise followed: Tom Ventsias, the director of communications at the University of Maryland Institute for Advanced Computer Studies (UMIACS), added, “I wanted to hear your thoughts about possibly doing a dual photo shoot for you—one more ‘traditional,’ one ‘quantum steampunk’ style.”

Steampunk, as Quantum Frontiers regulars know, is a genre of science fiction. It combines futuristic technologies, such as time machines and automata, with Victorian settings. I call my research “quantum steampunk,” as it combines the cutting-edge technology of quantum information science with the thermodynamics—the science of energy—developed during the 1800s. I’ve written a thesis called “Quantum steampunk”; authored a trade nonfiction book with the same title; and presented enough talks about quantum steampunk that, strung together, they’d give one laryngitis. But I don’t own goggles, hoop skirts, or petticoats. The most steampunk garb I’d ever donned before this autumn, I wore for a few minutes at age six or so, for dress-up photos at a theme park. I don’t even like costumes.

But I earned my PhD under the auspices of fellow Quantum Frontiers blogger John Preskill,1 whose career suggests a principle to live by: While unravelling the universe’s nature and helping to shape humanity’s intellectual future, you mustn’t take yourself too seriously. This blog has exhibited a photo of John sitting in Caltech’s information-sciences building, exuding all the gravitas of a Princeton degree, a Harvard degree, and a world-impacting career—sporting a baseball glove you’d find in a high-school gym class, as though it were a Tag Heuer watch. John adores baseball, and the photographer who documented Caltech’s Institute for Quantum Information and Matter brought out the touch of whimsy like the ghost of a smile.

Let’s try it, I told Tom.

One rust-colored November afternoon, I climbed to the top of UMIACS headquarters—the Iribe Center—whose panoramic view of campus begs for photographs. Two students were talking in front of a whiteboard, and others were lunching on the sandwiches, fruit salad, and cheesecake ordered by Tom’s team. We took turns brandishing markers, gesturing meaningfully, and looking contemplative.

Then, the rest of my team dispersed, and the clock rewound 150 years.

The professionalism and creativity of Tom’s team impressed me. First, they’d purchased a steampunk hat, complete with goggles and silver wires. Recalling the baseball-glove photo, I suggested that I wear the hat while sitting at a table, writing calculations as I ordinarily would.

What hat? Quit bothering me while I’m working.

Then, the team upped the stakes. Earlier that week, Maria Herd, a member of the communications office, had driven me to the University of Maryland performing-arts center. We’d sifted through the costume repository until finding skirts, vests, and a poofy white shirt reminiscent of the 1800s. I swapped clothes near the photo-shoot area, while the communications team beamed a London street in from the past. Not really, but they nearly did: They’d found a backdrop suitable for the 2020 Victorian-era Netflix hit Enola Holmes and projected the backdrop onto a screen. I stood in front of the screen, and a sheet of glass stood in front of me. I wrote equations on the glass while the photographer, John Consoli, snapped away.

The final setup, I would never have dreamed of. Days earlier, the communications team had located an elevator lined, inside, with metal links. They’d brought colorful, neon-lit rods into the elevator and experimented with creating futuristic backdrops. On photo-shoot day, they positioned me in the back of the elevator and held the light-saber-like rods up. 

But we couldn’t stop anyone from calling the elevator. We’d ride up to the third or fourth floor, and the door would open. A student would begin to step in; halt; and stare at my floor-length skirt, the neon lights, and the photographer’s back.

“Feel free to get in.” John’s assistant, Gail Marie Rupert, would wave them inside. The student would shuffle inside—in most cases—and the door would close.

“What floor?” John would ask.

“Um…one.”

John would twist around, press the appropriate button, and then turn back to his camera.

Once, when the door opened, the woman who entered complimented me on my outfit. Another time, the student asked if he was really in the Iribe Center. I regard that question as evidence of success.

John Consoli took 654 photos. I found the process fascinating, as a physicist. I have a domain of expertise; and I know the feeling of searching for—working toward—pushing for—a theorem or a conceptual understanding that satisfies me, in that domain. John’s area of expertise differs from mine, so I couldn’t say what he was searching for. But I recognized his intent and concentration, as Gail warned him that time had run out and he then made an irritated noise, inched sideways, and stole a few more snapshots. I felt like I was seeing myself in a reflection—not in the glass I was writing on, but in another sphere of the creative life.

The communications team’s eagerness to engage in quantum steampunk—to experiment with it, to introduce it into photography, to make it their own—bowled me over. Quantum steampunk isn’t just a stack of papers by one research group; it’s a movement. Seeing a team invest its time, energy, and imagination in that movement felt like receiving a deep bow or curtsy. Thanks to the UMIACS communications office for bringing quantum steampunk to life.

The Quantum-Steampunk Lab. Not pictured: Shayan Majidy.

1Who hasn’t blogged in a while. How about it, John?

Balancing the tradeoff

So much to do, so little time. Tending to one task comes inevitably at the cost of another, so how does one decide how to spend one’s time? In the first few years of my PhD, I balanced problem sets, literature reviews, and group meetings, but to the detriment of my hobbies. I have played drums my entire life, but I largely fell out of practice in graduate school. Recently, I made time to play with a group of musicians, even landing a couple of gigs in downtown Austin, Texas, “live music capital of the world.” I have found attending to my non-physics interests makes my research hours more productive and less taxing. Finding the right balance of on- versus off-time has been key to my success as my PhD enters its final year.

Of course, life within physics is also full of tradeoffs. My day job is as an experimentalist. I use tightly focused laser beams, known as optical tweezers, to levitate micrometer-sized glass spheres. I monitor a single microsphere’s motion as it undergoes collisions with air molecules, and I study the system as an environmental sensor of temperature, fluid flow, and acoustic waves; however, by night I am a computational physicist. I code simulations of interacting qubits subject to kinetic constraints, so-called quantum cellular automata (QCA). My QCA work started a few years ago for my Master’s degree, but my interest in the subject persists. I recently co-authored one paper summarizing the work so far and another detailing an experimental implementation.

The author doing his part to “keep Austin weird” by playing the drums dressed as a grackle (note the beak), the central-Texas bird notorious for overrunning grocery store parking lots.
Balancing research interests: Trapping a glass microsphere with optical tweezers.
Balancing research interests: Visualizing the time evolution of four different QCA rules.

QCA, the subject of this post, are themselves tradeoff-aware systems. To see what I mean, first consider their classical counterparts, cellular automata. In their simplest construction, the system is a one-dimensional string of bits. Each bit takes a value of 0 or 1 (white or black). The bitstring changes in discrete time steps based on a simultaneously applied local update rule: Each bit, along with its two nearest neighbors, determines the next state of the central bit. Put another way, a bit either flips, i.e., changes 0 to 1 or 1 to 0, or remains unchanged over a time step, depending on the state of that bit’s local neighborhood. Thus, by choosing a particular rule, one encodes a tradeoff between activity (bit flips) and inactivity (bit remains unchanged). Despite their simple construction, cellular automata dynamics are diverse; they can produce fractals and encryption-quality random numbers. One rule even has the ability to run arbitrary computer algorithms, a property known as universal computation.

Classical cellular automata. Left: rule 90 producing the fractal Sierpiński’s triangle. Middle: rule 30 can be used to generate random numbers. Right: rule 110 is capable of universal computation.
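For the curious, here is a minimal sketch of an elementary cellular automaton (my own toy implementation, not code from the papers); setting `rule_number` to 90, 30, or 110 reproduces the three behaviors pictured above.

```python
import numpy as np

def run_eca(rule_number, n_cells=101, n_steps=50):
    """Evolve a 1D elementary cellular automaton with periodic boundaries."""
    # Bit p of the rule number gives the update for neighborhood pattern p,
    # where p = 4*left + 2*center + 1*right encodes the three-cell neighborhood.
    rule_bits = [(rule_number >> p) & 1 for p in range(8)]
    state = np.zeros(n_cells, dtype=int)
    state[n_cells // 2] = 1                      # start from a single black cell
    history = [state.copy()]
    for _ in range(n_steps):
        left = np.roll(state, 1)
        right = np.roll(state, -1)
        patterns = 4 * left + 2 * state + right  # each cell's neighborhood as 0..7
        state = np.array([rule_bits[p] for p in patterns])
        history.append(state.copy())
    return np.array(history)                     # rows = time steps, columns = cells

spacetime = run_eca(90)   # rule 90: the Sierpinski-triangle fractal
```

Plotting `spacetime` as an image, with time running downward, gives space-time diagrams like those shown above.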

In QCA, bits are promoted to qubits. Instead of being just 0 or 1 like a bit, a qubit can be a continuous mixture of both 0 and 1, a property called superposition. In QCA, a qubit’s two neighbors being 0 or 1 determine whether or not it changes. For example, when in an active neighborhood configuration, a qubit can be coded to change from 0 to “0 plus 1” or from 1 to “0 minus 1”. This is already a head-scratcher, but things get even weirder. If a qubit’s neighbors are in a superposition, then the center qubit can become entangled with those neighbors. Entanglement correlates qubits in a way that is not possible with classical bits.
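Incidentally, changing 0 to “0 plus 1” and 1 to “0 minus 1” is exactly what the Hadamard gate does. So one way to picture a single QCA update—a sketch under my own simplifying assumptions, not the exact circuits from our papers—is a Hadamard applied to the center qubit only when its neighbors sit in an “active” configuration, such as one neighbor in 0 and one in 1 (the condition that reappears below in the Goldilocks principle):

```python
import numpy as np

# Hadamard: |0> -> (|0> + |1>)/sqrt(2),  |1> -> (|0> - |1>)/sqrt(2)
Hd = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)

def conditional_update(active):
    """Three-qubit gate (left, center, right): Hadamard the center qubit
    only for neighbor configurations listed in `active`, e.g. {(0, 1), (1, 0)}."""
    U = np.zeros((8, 8), dtype=complex)
    for left in (0, 1):
        for right in (0, 1):
            gate = Hd if (left, right) in active else I2
            proj_l = np.zeros((2, 2), dtype=complex); proj_l[left, left] = 1
            proj_r = np.zeros((2, 2), dtype=complex); proj_r[right, right] = 1
            U += np.kron(proj_l, np.kron(gate, proj_r))
    return U

# Activate only when the neighbors hold one 1 and one 0
U = conditional_update(active={(0, 1), (1, 0)})
print(np.allclose(U.conj().T @ U, np.eye(8)))   # True: the update is unitary
```

Because the activation depends on the neighbors, feeding in neighbors that are themselves in superposition entangles the center qubit with them.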

Do QCA support the emergent complexity observed in their classical cousins? What are the effects of a continuous state space, superposition, and entanglement? My colleagues and I attacked these questions by re-examining many-body physics tools through the lens of complexity science. Singing the lead, we have a workhorse of quantum and solid-state physics: two-point correlations. Singing harmony, we have the bread-and-butter of network analysis: complex-network measures. The duet between the two tells the story of structured correlations in QCA dynamics.

In a bit more detail, at each QCA time step we calculate the mutual information between each qubit i and each other qubit j. Doing so reveals how much there is to learn about one qubit by measuring another, including effects of quantum entanglement. Visualizing each qubit as a node, we depict the mutual information as weighted links between nodes: the more correlated two qubits are, the more strongly they are linked. The collection of nodes and links makes a network. Some QCA form unstructured, randomly-linked networks while others are highly structured.
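As a hedged sketch of the bookkeeping (a minimal version of my own, not the production code behind the paper), here is how a pure many-qubit state can be turned into a mutual-information network: compute the von Neumann entropy of each one- and two-qubit reduced state, then weight the link between qubits i and j by M_ij = S_i + S_j − S_ij.

```python
import numpy as np
from itertools import combinations

def reduced_density_matrix(psi, keep, n):
    """Trace a pure n-qubit state vector down to the qubits listed in `keep`."""
    psi = np.asarray(psi, dtype=complex).reshape([2] * n)
    keep = sorted(keep)
    psi = np.moveaxis(psi, keep, list(range(len(keep))))   # kept qubits to the front
    psi = psi.reshape(2 ** len(keep), 2 ** (n - len(keep)))
    return psi @ psi.conj().T

def von_neumann_entropy(rho):
    """Entropy in bits, ignoring numerically zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def mutual_information_network(psi, n):
    """Weighted adjacency matrix M[i, j] = S_i + S_j - S_ij for all qubit pairs."""
    S1 = [von_neumann_entropy(reduced_density_matrix(psi, [i], n)) for i in range(n)]
    M = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        S2 = von_neumann_entropy(reduced_density_matrix(psi, [i, j], n))
        M[i, j] = M[j, i] = S1[i] + S1[j] - S2
    return M

# Quick check on a 6-qubit GHZ state: every pair shares exactly 1 bit of mutual information
n = 6
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = psi[-1] = 1 / np.sqrt(2)
print(mutual_information_network(psi, n))
```

Feeding the matrix M into standard complex-network measures is then a matter of ordinary network analysis.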

Complex-network measures are designed to highlight certain structural patterns within a network. Historically, these measures have been used to study diverse networked systems like friend groups on Facebook, biomolecule pathways in metabolism, and functional connectivity in the brain. Remarkably, the most structured QCA networks we observed quantitatively resemble those of the complex systems just mentioned, despite their simple construction and quantum unitary dynamics.

Visualizing mutual information networks. Left: A Goldilocks-QCA generated network. Right: a random network.

What’s more, the particular QCA that generate the most complex networks are those that balance the activity-inactivity tradeoff. From this observation, we formulate what we call the Goldilocks principle: QCA that generate the most complexity are those that change a qubit if and only if the qubit’s neighbors contain an equal number of 1’s and 0’s. The Goldilocks rules are neither too inactive nor too active, balancing the tradeoff to be “just right.” We demonstrated the Goldilocks principle for QCA with nearest-neighbor constraints as well as QCA with nearest- and next-nearest-neighbor constraints.

To my delight, the scientific conclusions of my QCA research resonate with broader lessons learned from my time as a PhD student: Life is full of tradeoffs, and finding the right balance is key to achieving that “just right” feeling.

I wish you a Charles River

Three-and-a-quarter years ago, I was on a subway train juddering along the tracks. I gripped my suitcase tightly and—knowing myself—likely gripped a physics paper, too, so that I could read during the trip. I was moving, for my postdoctoral fellowship, to Cambridge from Pasadena, where I’d completed my PhD.

The Charles River separates Cambridge from Boston, at whose Logan Airport I’d arrived with a suitcase just under the societal size limit and ideas that I hoped weren’t. But as the metro car juddered onto the Longfellow Bridge, all physics papers vanished from my mind. So did concerns about how I’d find my new apartment, how much I had to accomplish before night fell (buy breakfast ingredients, retrieve boxes I’d shipped, unpack, …), and how strongly I smelled like airplane fuel.

The Charles stretched below us, sparkling with silver threads embroidered in blue, a carpet too grand for a king. On the river bobbed boats that resembled toys, their sails smaller than my paper. Boston’s skyline framed the river’s right-hand side, and Cambridge’s skyline framed the left. And what skylines they were—filled with glass and red brick; with rectangles, trapezoids, hemispheres, and turrets. I felt blessed for such a welcome to a new Cantab.

I vowed that afternoon that, every time I crossed the Charles via metro in the next three years, I’d stop reading my paper, or drafting my email, or planning my next talk. I’d look up through a window, recall the river’s beauty, and feel grateful—grateful for the privilege of living nearby and in an intellectual hub that echoes across centuries; for the freedom to pursue the ideas I dream up; and for the ability to perceive the beauty before me.

Humans have a knack for accustoming themselves to gifts. One day, we’re reveling over an acceptance letter or the latest Apple product; a year later, we’re chasing the next acceptance or cursing technology’s slowness. But an “attitude of gratitude,” as my high-school physics teacher put it, enhances our relationships, our health, and our satisfaction with life. I’m grateful for the nudge that, whenever I traveled to or from home throughout the past three years, reminded me to feel grateful.

I wish you a Charles River, this season and every season.

Entangled Fields and Post-Anthropocene Computation: Quantum Perspectives for a Healthy Planet

Natalie Klco
Institute for Quantum Information and Matter (IQIM) and Walter Burke Institute for Theoretical Physics
California Institute of Technology, Pasadena CA 91125, USA, Earth1
November 7, 2021 (COP26 Day 8)

If a quantum field is replaced with lower-complexity versions of itself, the field systematically falls apart into ever smaller pieces. Rather than permeating all of space, the entanglement—a unique form of correlations tying together the quantum world—abruptly vanishes2. Where once there was global connection, now there are only fragments. If the biological fields of our planet are replaced with lower-complexity versions of themselves, analogous collapse ensues. The natural world requires and celebrates complexity.

Visual representation of complexity reduction in biological fields from (left) diverse coherent ecosystems with phenomena across many length scales to (right) fragmented precision monocultures with phenomena in limited bandwidth. Analogous complexity reductions in quantum fields cause harsh decoherence.
(Images by Ruvim Miksanskiy and NASA Earth Observatory Landsat)

Humans have a fascinating relationship with nature’s complexity—they love it when they understand it, and tend to destroy it when they don’t: Maxwellian demons reducing complexity rather than entropy, with the reduction scaling inversely with insight. This is most often not malicious at the beginning; we enjoy understanding how things work and modifying them to our desires. If we don’t understand something, an effective way to learn is to simplify it, dissect and put it back together, sometimes with fewer pieces so the purpose and importance of each piece can be deduced. As we become comfortable with the basic elements, embellishments can be added. This is how elaborate modern technology has been developed; this is how discoveries in abstract mathematics are made; this is how symphonies are composed. While valuable for developments of human creation, this perspective is dangerous when applied to societal structures that now glorify the simple homogeneous monoculture, both of the Earth and of its inhabitants.

Naïvely, uncontrollable pursuit of understanding and quickly adaptable manipulation skills, along with systematic analysis and pattern recognition capabilities to deduce large-scale and long-time trends, are logical characteristics to incorporate in an intellectually-powerful-but-otherwise-non-remarkable species within your ecosystem. Such a species would be able to recognize the occasional imbalance in nature and to provide a slight rectifying tilt. Not only could they do so, but they are designed to enjoy it! “The pleasure of finding things out” followed by the satisfaction of influencing the world external.

The naïvety of this suggested ecological role for humans overlooks the reality that our society is increasingly being driven by forces programmed to incentivize actions knowingly contrary to personal, community, and planetary best interests. Though we tend to try to put things back together, the process is often slow and filled with hubris for control. Analogous to the qubit becoming a natural tool for entanglement restoration, we reintroduce locally extirpated phenomena and species as restricted tools of ecosystem recovery: from natural airflow that better dissipates pathogens, to beavers gracefully orchestrating water distribution and the construction of entire ecosystems, to wolves creating trophic cascades that erupt in species diversity and even stabilize geographic features of the land, to earthworms aerating the soil, recycling nutrients, and supporting the diverse microbial life enveloping the Earth3. Nature is full of astoundingly fascinating examples of collaboratively created cycles, thoroughly intertwining the lives of species throughout the taxonomic ranks. Though the above naïvety identifies one theoretically plausible respectable function of the human species, we have yet to mature into any truly valuable ecological role. If humans suddenly disappeared, would any (non-domesticated) species be interested in fighting for our reintroduction? Or would there simply be…relief?

It is not a winning strategy for our understanding4 to be a necessary condition for “allowing” natural processes to occur. Though humans have become devastatingly linear creatures5, we fuel a vicious cycle of destruction, appreciation, and monetized mono-restoration. Dangerously, our efficiency in step-one often breaks this cycle, too. From free-flowing rivers to the communication networks of mycorrhizal fungi in old-growth forests, we are losing vital information of healthy ecosystems faster than they can be appreciated. How will we know what actions help to restore nature when we have lost all examples of her complex beauty? What will we do when she is gone?


As I modify my life to reflect growing concerns, the following perspective is important:

Changing the system, not perfecting our own lives, is the point. “Hypocrisy” is the price of admission in this battle. –Bill McKibben [NYT, 2016]

I will keep riding my bike for primary transportation (fun, healthy, and responsible!), adding plants to my diet (tasty, healthy, and responsible!), and showering by the light pollution through my bathroom window. I do these things not with the illusion that they are impactful, but to keep stoked an internal fire focused on brainstorming ways that a quantum physicist/musician could convince the powerful humans to make decisions that reprogram our society to calm, rather than cause, the storms spiraling our planet out of balance.


Results (so far):

Quantum physicists have been forced to realize for decades how ignoring the structure of nature can leave you struggling exponentially far from your goals. The way that nature processes information is fundamentally different than the way that we currently do…and her techniques are often exponentially superior. In the same way that we are turning to nature for solutions in simulating the quantum world, so too must we turn to nature for solutions in planetary stability. We are not going to technologically innovate ourselves out of this problem—nature has a multi-million-year head start in her R&D investments. While there are absolutely technologies (both quantum and classical) that may make the transition more rapid and comfortable, if technological solutions worked, we would have innovated ourselves out of this predicament a half-century ago when oil companies with planetary-sized spheres of influence were well-informed of the situation.

Right now, we are being asked by the Earth, still relatively kindly, to stop; to stop haphazardly taking apart her cyclic and entangled systems; to let nature heal the reductive wounds caused by our curiosity and by the endless extraction of our growth economy incompatible with a finite planet; to relish when her healthy wholeness leads to a complexity that boggles our minds; to provide planetary reprieve with a global performance of John Cage’s silent composition, 4’33”, hopefully temporally dilated; to actively stand aside. While we do so, we can peacefully ponder and reframe the vision of what our lives on this Earth should be.

In the quantum community, we have already developed the necessary mentality. We daily envision a more natural way of interacting with the world. This vision has successfully been passed through multiple generations of researchers—communicable inspiration being an essential ingredient for developments that began a century ago and may take a century more to come to full fruition. In one case, we must break from a past driven by exploitation, developed in a time when the fallacy of infinite resources was a functional approximation. In the other case, we must create an entirely new programming language built on the laws of quantum mechanics and must develop unprecedented levels of precision control necessary to compute with nature’s quantum bits. One of these challenges should be easier than the other.

In artistic performance, learning a new piece can feel like learning a new way of living. To engrain and genuinely express a new perspective, it can be helpful to work at multiple levels of abstraction. This assures that all sides of your being—intellectual, emotional, linguistic, kinesthetic, etc.—are evenly integrated, optimized for an honest portrayal of the artistic vision. In the context of planetary fragmentation decimating ecosystem coherence, quantum information provides one such valuable abstraction.

We have a story to tell that parallels, from the quantum world, our current planetary challenges. Our story is one of past destructive reduction and an ongoing pursuit of redemption reintroducing fundamental pieces of nature back into our calculations. Quantum physicists have already been through the exponentially diminished darkness and are joyously engaged in creating a future where nature’s complexities are respected and honored. We have turned phenomena famously lamented for their tortuously tangled interpretations into cherished and invaluable resources capable of achieving far more than their simplified predecessors. This endeavor is requiring (and achieving!) extensive global coordination and institutional support from local to federal levels.

Our community has learned to celebrate the complexity of the natural world. To share that vision is something important we can do. In this context, quantum physicists are natural “complexity therapists”. As society rewilds the land and reconnects our lives to nature, we can help usher in an era of corporately treasuring the invaluable resources of diverse and complex natural processes, not only for computational advantage but for the survival of all remaining life.

1. Affiliation provided for context. The views and opinions expressed herein are solely those of the author and do not reflect the official policy or position of Caltech or affiliated institutes.
2. The following reflections accompany recent research quantifying entanglement in the scalar field vacuum [1][2][3].
3. Fun Fact: Ancient Egypt imposed a death penalty for tampering with earthworms!
4. and subsequent ability to secure legislative/judicial protections
5. e.g., plastic, million-year oil and ground water resources consumed in a generation, carcass removal (both plant and animal) systematically depleting nutrients from ecosystems that have specific mechanisms for nutrient retention and reintegration, etc.