Catching up with the quantum-thermo crowd

You have four hours to tour Oxford University.

What will you visit? The Ashmolean Museum, home to da Vinci drawings, samurai armor, and Egyptian mummies? The Bodleian, one of Europe’s oldest libraries? Turf Tavern, where former president Bill Clinton reportedly “didn’t inhale” marijuana?

Felix Binder showed us a cemetery.

Of course he showed us a cemetery. We were at a thermodynamics conference.

The Fifth Quantum Thermodynamics Conference took place in the City of Dreaming Spires.1 Participants enthused about energy, information, engines, and the flow of time. About 160 scientists attended—roughly 60 more than attended the first conference, co-organizer Janet Anders estimated.

Weak measurements and quasiprobability distributions were trending. The news delighted me, Quantum Frontiers regulars won’t be surprised to hear.

Measurements disturb quantum systems, as early-20th-century physicist Werner Heisenberg intuited. Measure a system’s position strongly, and you forfeit your ability to predict the outcomes of future momentum measurements. Weak measurements don’t disturb the system much. In exchange, weak measurements provide little information about the system. But you can recoup information by performing a weak measurement in each of many trials, then processing the outcomes.

Strong measurements lead to probability distributions: Imagine preparing a particle in some quantum state, then measuring its position strongly, in each of many trials. From the outcomes, you can infer a probability distribution $\{ p(x) \}$, wherein $p(x)$ denotes the probability that the next trial will yield position $x$.

Weak measurements lead analogously to quasiprobability distributions. Quasiprobabilities resemble probabilities but can misbehave: Probabilities are real numbers no less than zero. Quasiprobabilities can dip below zero and can assume nonreal values.
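As a toy illustration of that misbehavior, consider the Kirkwood–Dirac quasiprobability of a single qubit. A minimal numerical sketch (the state and the two measurement bases are arbitrary choices of mine, not taken from any paper):

```python
import numpy as np

# Kirkwood-Dirac quasiprobability for one qubit (illustrative example):
# q(a, b) = <b|a><a|psi><psi|b>, with |a> drawn from the Z eigenbasis and
# |b> from the X eigenbasis. The entries sum to 1, like probabilities,
# but individual entries can be negative or even nonreal.

psi = np.array([np.cos(0.3), np.exp(1j * 0.5) * np.sin(0.3)])   # qubit state

z_basis = [np.array([1, 0]), np.array([0, 1])]                   # |0>, |1>
x_basis = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]

q = np.zeros((2, 2), dtype=complex)
for i, a in enumerate(z_basis):
    for j, b in enumerate(x_basis):
        q[i, j] = (b.conj() @ a) * (a.conj() @ psi) * (psi.conj() @ b)

print(q)          # some entries have nonzero imaginary parts
print(q.sum())    # sums to 1, like a probability distribution
```

Summing over either index recovers an ordinary probability distribution; only the joint entries misbehave.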

What relevance have weak measurements and quasiprobabilities to quantum thermodynamics? Thermodynamics involves work and heat. Work is energy harnessed to perform useful tasks, like propelling a train from London to Oxford. Heat is energy that jiggles systems randomly.

Quantum properties obscure the line between work and heat. (Here’s an illustration for experts: Consider an isolated quantum system, such as a spin chain. Let $H(t)$ denote the Hamiltonian that evolves with the time $t \in [0, t_f]$. Consider preparing the system in an energy eigenstate $| E_i(0) \rangle$. This state has zero diagonal entropy: Measuring the energy yields $E_i(0)$ deterministically. Consider tuning $H(t)$, as by changing a magnetic field. This change constitutes work, we learn in electrodynamics class. But if $H(t)$ changes quickly, the state can acquire weight on multiple energy eigenstates. The diagonal entropy rises. The system’s energetics have gained an unreliability characteristic of heat absorption. But the system has remained isolated from any heat bath. Work mimics heat.)
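A minimal numerical sketch of this illustration, with a single qubit standing in for the spin chain (the Hamiltonians are my own choices):

```python
import numpy as np

# "Work mimics heat" in miniature: a qubit begins in an eigenstate of
# H(0) = sigma_z. Suddenly quenching the Hamiltonian to H(t_f) = sigma_x
# leaves the state unchanged but spreads its weight over the new energy
# eigenbasis, raising the diagonal entropy from 0 to ln 2.

def diagonal_entropy(state, hamiltonian):
    """Shannon entropy of the state's populations in the energy eigenbasis."""
    _, vecs = np.linalg.eigh(hamiltonian)
    pops = np.abs(vecs.conj().T @ state) ** 2
    pops = pops[pops > 1e-12]
    return -np.sum(pops * np.log(pops))

sigma_z = np.diag([1.0, -1.0])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

state = np.array([1.0, 0.0])                 # eigenstate of sigma_z

print(diagonal_entropy(state, sigma_z))      # 0: the energy is deterministic
print(diagonal_entropy(state, sigma_x))      # ln 2: heat-like unreliability
```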

Quantum thermodynamicists have defined work in terms of a two-point measurement scheme: Initialize the quantum system, such as by letting heat flow between the system and a giant, fixed-temperature heat reservoir until the system equilibrates. Measure the system’s energy strongly, and call the outcome $E_i$. Isolate the system from the reservoir. Tune the Hamiltonian, performing the quantum equivalent of propelling the London train up a hill. Measure the energy, and call the outcome $E_f$.

Any change $\Delta E$ in a system’s energy comes from heat $Q$ and/or from work $W$, by the First Law of Thermodynamics, $\Delta E = Q + W.$  Our system hasn’t exchanged energy with any heat reservoir between the measurements. So the energy change consists of work: $E_f - E_i =: W$.

Imagine performing this protocol in each of many trials. Different trials will require different amounts $W$ of work. Upon recording the amounts, you can infer a distribution $\{ p(W) \}$. $p(W)$ denotes the probability that the next trial will require an amount $W$ of work.
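The two-point measurement scheme can be sketched numerically. A toy qubit quench (the Hamiltonians and temperature are arbitrary illustrative choices):

```python
import numpy as np

# Two-point measurement (TPM) work distribution for a sudden qubit quench:
# p(W) = sum over i, f of p(E_i) |<E_f(t_f)| U |E_i(0)>|^2, with
# W = E_f - E_i. For a sudden quench, the state doesn't evolve between
# the two measurements (U = identity).

beta = 1.0                                   # inverse temperature
H0 = np.diag([0.0, 1.0])                     # initial Hamiltonian
Hf = np.array([[0.0, 0.5], [0.5, 1.5]])      # final Hamiltonian

E0, V0 = np.linalg.eigh(H0)
Ef, Vf = np.linalg.eigh(Hf)

p_init = np.exp(-beta * E0)
p_init /= p_init.sum()                       # thermal initial populations

work_dist = {}
for i in range(2):
    for f in range(2):
        W = round(Ef[f] - E0[i], 6)
        prob = p_init[i] * abs(Vf[:, f].conj() @ V0[:, i]) ** 2
        work_dist[W] = work_dist.get(W, 0.0) + prob

print(work_dist)                             # {W: p(W)}
print(sum(work_dist.values()))               # probabilities sum to 1
```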

Measuring the system’s energy disturbs the system, squashing some of its quantum properties. (The measurement eliminates coherences, relative to the energy eigenbasis, from the state.) Quantum properties star in quantum thermodynamics. So the two-point measurement scheme doesn’t satisfy everyone.

Enter weak measurements. They can provide information about the system’s energy without disturbing the system much. Work probability distributions $\{ p(W) \}$ give way to quasiprobability distributions $\{ \tilde{p}(W) \}$.

So propose Solinas and Gasparinetti, in these papers. Other quantum thermodynamicists apply weak measurements and quasiprobabilities differently.2 I proposed applying them to characterize chaos, and the scrambling of quantum information in many-body systems, at the conference. Feel free to add your favorite applications to the “comments” section.

All the quantum ladies: The conference’s female participants gathered for dinner one conference night.

Wednesday afforded an afternoon for touring. Participants congregated at the college of conference co-organizer Felix Binder.3 His tour evoked, for me, the ghosts of thermo conferences past: One conference, at the University of Cambridge, had brought me to the grave of thermodynamicist Arthur Eddington. Another conference, about entropies in information theory, had convened near Canada’s Banff Cemetery. Felix’s tour began with St. Edmund Hall’s cemetery. Thermodynamics highlights equilibrium, a state in which large-scale properties—like temperature and pressure—remain constant. Some things never change.

With thanks to Felix, Janet, and the other coordinators for organizing the conference.

1Oxford derives its nickname from an elegy by Matthew Arnold. Happy National Poetry Month!

2Michele Campisi joined me in introducing out-of-time-ordered correlators (OTOCs) into the quantum-thermo conference: He, with coauthor John Goold, combined OTOCs with the two-point measurement scheme.

3Oxford University contains 38 colleges, the epicenters of undergraduates’ social, dining, and housing experiences. Graduate students and postdoctoral scholars affiliate with colleges, and senior fellows—faculty members—govern the colleges.

Rock-paper-scissors, granite-clock-idea

I have a soft spot for lamassu. Ten-foot-tall statues of these winged bull-men guarded the entrances to ancient Assyrian palaces. Show me lamassu, or apkallu—human-shaped winged deities—or other reliefs from the Neo-Assyrian capital of Nineveh, and you’ll have trouble showing me the door.

Assyrian art fills a gallery in London’s British Museum. Lamassu flank the gallery’s entrance. Carvings fill the interior: depictions of soldiers attacking, captives trudging, and kings hunting lions. The artwork’s vastness, its endurance, and the contact with a three-thousand-year-old civilization floor me. I tore myself away as the museum closed one Sunday night.

I visited the British Museum the night before visiting Jonathan Oppenheim’s research group at University College London (UCL). Jonathan combines quantum information theory with thermodynamics. He and others co-invented thermodynamic resource theories (TRTs), which Quantum Frontiers regulars will know of. TRTs are quantum-information-theoretic models for systems that exchange energy with their environments.

Energy is conjugate to time: Hamiltonians, mathematical objects that represent energy, represent also translations through time. We measure time with clocks. Little wonder that one can study quantum clocks using a model for energy exchanges.

Mischa Woods, Ralph Silva, and Jonathan used a resource theory to design an autonomous quantum clock. “Autonomous” means that the clock contains all the parts it needs to operate, needs no periodic winding-up, etc. When might we want an autonomous clock? When building quantum devices that operate independently of classical engineers. Or when performing a quantum computation: Computers must perform logical gates at specific times.

Wolfgang Pauli and others studied quantum clocks, the authors recall. How, Pauli asked, would an ideal clock look? Its Hamiltonian, $\hat{H}_{\rm C}$, would have eigenstates $| E \rangle$. The labels $E$ denote possible amounts of energy.

The Hamiltonian would be conjugate to a “time operator” $\hat{t}$. Let $| \theta \rangle$ denote an eigenstate of $\hat{t}$. This “time state” would equal an even superposition over the $| E \rangle$’s. The clock would occupy the state $| \theta \rangle$ at time $t_\theta$.

Imagine measuring the clock, to learn the time, or controlling another system with the clock. The interaction would disturb the clock, changing the clock’s state. The disturbance wouldn’t mar the clock’s timekeeping, if the clock were ideal. What would enable an ideal clock to withstand the disturbances? The ability to have any amount of energy: $E$ must stretch from $- \infty$ to $\infty$. Such clocks can’t exist.

Approximations to them can. Mischa, Ralph, and Jonathan designed a finite-size clock, then characterized how accurately the clock mimics the ideal. (Experts: The clock corresponds to a Hilbert space of finite dimensionality $d$. The clock begins in a Gaussian state that peaks at one time state $| \theta \rangle$. The finite-width Gaussian offers more stability than a single time state.)
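A toy version of such a clock, in the same spirit but far simpler than the paper’s construction (all parameters are my own choices): energy levels $0, \dots, d-1$, time states given by the discrete Fourier transform, and a Gaussian superposition whose peak ticks forward under evolution.

```python
import numpy as np

# A finite-dimensional toy clock: "time states" form the discrete-Fourier-
# transform basis conjugate to the energy eigenbasis. A Gaussian
# superposition of energies peaks at one time state, and evolving for one
# "tick" shifts the peak to the next time state.

d = 32
E = np.arange(d)

def time_state(k):
    return np.exp(-2j * np.pi * k * E / d) / np.sqrt(d)

# Gaussian energy-space amplitudes, centered mid-spectrum -> peaked in time.
amps = np.exp(-((E - d / 2) ** 2) / (2 * 4.0 ** 2)).astype(complex)
psi = amps / np.linalg.norm(amps)            # peaks at time state k = 0

def peak(state):
    overlaps = [abs(time_state(k).conj() @ state) ** 2 for k in range(d)]
    return int(np.argmax(overlaps))

U = np.diag(np.exp(-2j * np.pi * E / d))     # evolution for one tick
print(peak(psi))                              # 0
print(peak(U @ psi))                          # 1: the clock has ticked
```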

Disturbances degrade our ability to distinguish instants by measuring the clock. Imagine gazing at a kitchen clock through blurry lenses: You couldn’t distinguish 6:00 from 5:59 or 6:01. Disturbances also hinder the clock’s ability to implement processes, such as gates in a computation, at desired instants.

Mischa & co. quantified these degradations. The errors made by the clock, they found, decay inverse-exponentially with the clock’s size: Grow the clock a little, and the errors shrink a lot.

Time has degraded the lamassu, but only a little. You can distinguish feathers in their wings and strands in their beards. People portray such artifacts as having “withstood the flow of time,” or “evaded,” or “resisted.” Such portrayals have never appealed to me. I prefer to think of the lamassu as surviving not because they clash with time, but because they harmonize with it. The prospect of harmonizing with time—whatever that means—has enticed me throughout my life. The prospect partially underlies my research into time—perhaps childishly, foolishly—I recognize if I remove my blurry lenses before gazing in the mirror.

The creation of lasting works, like lamassu, has enticed me throughout my life. I’ve scrapbooked, archived, and recorded, and tended memories as though they were Great-Grandma’s cookbook. Ancient civilizations began alluring me at age six, partially due to artifacts’ longevity. No wonder I study the second law of thermodynamics.

Yet doing theoretical physics makes no sense from another perspective. The ancient Egyptians sculpted granite, when they could afford it. Gudea, king of the ancient city-state of Lagash, immortalized himself in diorite. I fashion ideas, which lack substance. Imagine playing, rather than rock-paper-scissors, granite-diorite-idea. The idea wouldn’t stand a chance.

Would it? Because an idea lacks substance, it can manifest in many forms. Plato’s cave allegory has manifested as a story, as classroom lectures, on handwritten pages, on word processors and websites, in cartloads of novels, in the film The Matrix, in one of the four most memorable advertisements I received from colleges as a high-school junior, and elsewhere. Plato’s allegory has survived since about the fourth century BCE. King Ashurbanipal’s lion-hunt reliefs have survived for only about 200 years longer.

The lion-hunt reliefs—and lamassu—exude a grandness, a majesty that’s attracted me as their longevity has. The nature of time and the perfect clock have as much grandness. Leaving the British Museum’s Assyrian gallery at 6 PM one Sunday, I couldn’t have asked for a more fitting place to find myself, 24 hours later, than in a theoretical-physics conversation.

With thanks to Jonathan, to Álvaro Martín-Alhambra, and to Mischa for their hospitality at UCL; to Ada Cohen for the “Art history of ancient Egypt and the ancient Near East” course for which I’d been hankering for years; to my brother, for transmitting the ancient-civilizations bug; and to my parents, who fed the infection with museum visits.

Click here for a follow-up to the quantum-clock paper.

What makes extraordinary science extraordinary?

My article for this month appears on Sean Carroll’s blog, Preposterous Universe. Sean is a theoretical physicist who practices cosmology at Caltech. He interfaces with philosophy, which tinges the question I confront: What distinguishes extraordinary science from good science? The topic seemed an opportunity to take Sean up on an invitation to guest-post on Preposterous Universe. Head there for my article. Thanks to Sean for hosting!

Gently yoking yin to yang

The architecture at the University of California, Berkeley mystified me. California Hall evokes a Spanish mission. The main library consists of white stone pillared by Ionic columns. A sea-green building scintillates in the sunlight like a scarab. The buildings straddle the map of styles.

So do Berkeley’s quantum scientists, information-theory users, and statistical-mechanics practitioners.

The chemists rove from abstract quantum information (QI) theory to experiments. Physicists experiment with superconducting qubits, trapped ions, and numerical simulations. Computer scientists invent algorithms for quantum computers to perform.

Few activities light me up more than bouncing from quantum group to info-theory group to stat-mech group, hunting commonalities. I was honored to bounce from group to group at Berkeley this September.

What a trampoline Berkeley has.

The groups fan out across campus and science, but I found compatibility. Including a collaboration that illuminated quantum incompatibility.

Quantum incompatibility originated in studies by Werner Heisenberg. He and colleagues cofounded quantum mechanics during the early 20th century. Measuring one property of a quantum system, Heisenberg intuited, can affect another property.

The most famous example involves position and momentum. Say that I hand you an electron. The electron occupies some quantum state represented by $| \Psi \rangle$. Suppose that you measure the electron’s position. The measurement outputs one of many possible values $x$ (unless $| \Psi \rangle$ has an unusual form, the form of a Dirac delta function).

But we can’t say that the electron occupies any particular point $x = x_0$ in space. Measurement devices have limited precision. You can measure the position only to within some error $\varepsilon$: $x = x_0 \pm \varepsilon$.

Suppose that, immediately afterward, you measure the electron’s momentum. This measurement, too, outputs one of many possible values. What probability $q(p) dp$ does the measurement have of outputting some value $p$? We can calculate $q(p) dp$, knowing the mathematical form of $| \Psi \rangle$ and knowing the values of $x_0$ and $\varepsilon$.

$q(p)$ is a probability density, which you can think of as a set of probabilities. The density can vary with $p$. Suppose that $q(p)$ varies little: The probabilities spread evenly across the possible $p$ values. You have no idea which value your momentum measurement will output. Suppose, instead, that $q(p)$ peaks sharply at some value $p = p_0$. You can likely predict the momentum measurement’s outcome.

The certainty about the momentum measurement trades off with the precision $\varepsilon$ of the position measurement. The smaller the $\varepsilon$ (the more precisely you measured the position), the greater the momentum’s unpredictability. We call position and momentum complementary, or incompatible.
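This tradeoff can be checked numerically with Gaussian wave packets. A minimal sketch (units with $\hbar = 1$; the grid sizes are arbitrary):

```python
import numpy as np

# Position-momentum tradeoff: the more sharply a Gaussian wave function
# peaks in position (smaller sigma_x), the more its momentum-space
# density q(p) spreads out. For a Gaussian, sigma_p = 1 / (2 sigma_x).

def momentum_spread(sigma_x, n=4096, L=200.0):
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / (4 * sigma_x**2))           # Gaussian wave function
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    phi = np.fft.fftshift(np.fft.fft(psi))           # momentum-space amplitudes
    p = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * 2 * np.pi
    dp = p[1] - p[0]
    q = np.abs(phi)**2
    q /= np.sum(q) * dp                              # normalize density q(p)
    return np.sqrt(np.sum(p**2 * q) * dp)            # std dev (mean p = 0)

for sigma in [2.0, 1.0, 0.5]:
    # Sharper position (smaller sigma) -> broader momentum spread.
    print(sigma, momentum_spread(sigma))
```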

You can’t measure incompatible properties, with high precision, simultaneously. Imagine trying to do so. Upon measuring the momentum, you ascribe a tiny range of momentum values $p$ to the electron. If you measured the momentum again, an instant later, you could likely predict that measurement’s outcome: The second measurement’s $q(p)$ would peak sharply (encode high predictability). But, in the first instant, you measure also the position. Hence, by the discussion above, $q(p)$ would spread out widely. But we just concluded that $q(p)$ would peak sharply. This contradiction illustrates that you can’t measure position and momentum, precisely, at the same time.

But you can simultaneously measure incompatible properties weakly. A weak measurement has an enormous $\varepsilon$. A weak position measurement barely spreads out $q(p)$. If you want more details, ask a Quantum Frontiers regular; I’ve been harping on weak measurements for months.

Blame Berkeley for my harping this month. Irfan Siddiqi’s and Birgitta Whaley’s groups collaborated on weak measurements of incompatible observables. They tracked how the measured quantum state $| \Psi (t) \rangle$ evolved in time (represented by $t$).

Irfan’s group manipulates superconducting qubits.1 The qubits sit in the physics building, a white-stone specimen stamped with an egg-and-dart motif. Across the street sit chemists, including members of Birgitta’s group. The experimental physicists and theoretical chemists teamed up to study a quantum lack of teaming up.

The experiment involved one superconducting qubit. The qubit has properties analogous to position and momentum: A ball, called the Bloch ball, represents the set of states that the qubit can occupy. Imagine an arrow pointing from the sphere’s center to any point in the ball. This Bloch vector represents the qubit’s state. Consider an arrow that points upward from the center to the surface. This arrow represents the qubit state $| 0 \rangle$. $| 0 \rangle$ is the quantum analog of the possible value 0 of a bit, or unit of information. The analogous downward-pointing arrow represents the qubit state $| 1 \rangle$, analogous to 1.

Infinitely many axes intersect the sphere. Different axes represent different observables that Irfan’s group can measure. Nonparallel axes represent incompatible observables. For example, the $x$-axis represents an observable $\sigma_x$ analogous to position. The $y$-axis represents an observable $\sigma_y$ analogous to momentum.
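A minimal sketch of these ingredients: the Bloch vector of a qubit state, and the noncommutation that makes $\sigma_x$ and $\sigma_y$ incompatible.

```python
import numpy as np

# The Bloch vector of a qubit state is the triple of expectation values
# (<sigma_x>, <sigma_y>, <sigma_z>). Nonparallel axes represent
# incompatible observables: sigma_x and sigma_y don't commute.

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1, 1]) / np.sqrt(2)              # (|0> + |1>) / sqrt(2)

bloch = [np.real(psi.conj() @ s @ psi) for s in (sigma_x, sigma_y, sigma_z)]
print(bloch)                                     # [1.0, 0.0, 0.0]: along +x

commutator = sigma_x @ sigma_y - sigma_y @ sigma_x
print(np.allclose(commutator, 2j * sigma_z))     # True: incompatible axes
```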

Siddiqi lab, decorated with the trademark for the paper’s tug-of-war between incompatible observables. Photo credit: Leigh Martin, one of the paper’s leading authors.

Irfan’s group stuck their superconducting qubit in a cavity, or box. The cavity contained light that interacted with the qubit. The interactions transferred information from the qubit to the light: The light measured the qubit’s state. The experimentalists controlled the interactions, controlling the axes “along which” the light was measured. The experimentalists weakly measured along two axes simultaneously.

Suppose that the axes coincided—say, at the $x$-axis $\hat{x}$. The qubit would collapse to the state $| \Psi \rangle = \frac{1}{ \sqrt{2} } ( | 0 \rangle + | 1 \rangle )$, represented by the arrow that points along $\hat{x}$ to the sphere’s surface, or to the state $| \Psi \rangle = \frac{1}{ \sqrt{2} } ( | 0 \rangle - | 1 \rangle )$, represented by the opposite arrow.

(Projection of) the Bloch Ball after the measurement. The system can access the colored points. The lighter a point, the greater the late-time state’s weight on the point.

Let $\hat{x}'$ denote an axis near $\hat{x}$—say, 18° away. Suppose that the group weakly measured along $\hat{x}$ and $\hat{x}'$. The state would partially collapse. The system would access points in the region straddled by $\hat{x}$ and $\hat{x}'$, as well as points straddled by $- \hat{x}$ and $- \hat{x}'$.

Finally, suppose that the group weakly measured along $\hat{x}$ and $\hat{y}$. These axes stand in for position and momentum. The state would, loosely speaking, swirl around the Bloch ball.

The Berkeley experiment illuminates foundations of quantum theory. Incompatible observables, physics students learn, can’t be measured simultaneously. This experiment blasts our expectations, using weak measurements. But the experiment doesn’t just destroy. It rebuilds the blast zone, by showing how $| \Psi (t) \rangle$ evolves.

“Position” and “momentum” can hang together. So can experimentalists and theorists, physicists and chemists. So, somehow, can the California mission and the Ionic columns. Maybe I’ll understand the scarab building when we understand quantum theory.2

With thanks to Birgitta’s group, Irfan’s group, and the rest of Berkeley’s quantum/stat-mech/info-theory community for its hospitality. The Bloch-sphere figures come from http://www.nature.com/articles/nature19762.

1The qubit is the quantum analog of a bit. The bit is the basic unit of information. A bit can be in one of two possible states, which we can label as 0 and 1. Qubits can manifest in many physical systems, including superconducting circuits. Such circuits are tiny quantum circuits through which current can flow, without resistance, forever.

2Soda Hall dazzled but startled me.

The word “paradise” dominates chapter one of Richard Holmes’s book The Age of Wonder. Holmes writes biographies of Romantic-Era writers: Mary Wollstonecraft, Percy Shelley, and Samuel Taylor Coleridge populate his bibliography. They have cameos in Age. But their scientific counterparts star.

Their “natural-philosopher” counterparts, I should say. The word “scientist” emerged as the Romantic Era closed. Romanticism, a literary and artistic movement, flourished during the late 1700s and early 1800s. Romantics championed self-expression, individuality, and emotion over convention and artificiality. Romantics wondered at, and drew inspiration from, the natural world. So, Holmes argues, did Romantic-Era natural philosophers. They explored, searched, and innovated with Wollstonecraft’s, Shelley’s, and Coleridge’s zest.

Holmes depicts Wilhelm and Caroline Herschel, a German brother and sister, discovering the planet Uranus. Humphry Davy, an amateur poet from Penzance, inventing a lamp that saved miners’ lives. Michael Faraday, a working-class Londoner, inspired by Davy’s chemistry lectures.

“Joseph Banks in Paradise”: so Holmes entitled chapter one.

Banks studied natural history as a young English gentleman during the 1760s. He then sailed around the world, a botanist on exploratory expeditions. The second expedition brought Banks aboard the HMS Endeavour. Captain James Cook steered the ship to Brazil, Tahiti, Australia, and New Zealand. Banks brought a few colleagues onboard. They studied the native flora, fauna, skies, and tribes.

Banks, with fellow botanist Daniel Solander, accumulated over 30,000 plant samples. Artist Sydney Parkinson drew the plants during the voyage. Parkinson’s drawings underlay 743 copper engravings that Banks commissioned upon returning to England. Banks planned to publish the engravings as the book Florilegium. He never succeeded. Two institutions executed Banks’s plan more than 200 years later.

Banks’s Florilegium crowns an exhibition at the University of California at Santa Barbara (UCSB). UCSB’s Special Research Collections will host “Botanical Illustrations and Scientific Discovery—Joseph Banks and the Exploration of the South Pacific, 1768–1771” until May 2018. The exhibition features maps of Banks’s journeys, biographical sketches of Banks and Cook, contemporary art inspired by the engravings, and the Florilegium.

The exhibition spotlights “plants that have subsequently become important ornamental plants on the UCSB campus, throughout Santa Barbara, and beyond.” One sees, roaming Santa Barbara, slivers of Banks’s paradise.

In Santa Barbara resides the Kavli Institute for Theoretical Physics (KITP). The KITP is hosting a program about the physics of quantum information (QI). QI scientists are congregating from across the world. Everyone visits for a few weeks or months, meeting some participants and missing others (those who have left or will arrive later). Participants attend and present tutorials, explore beyond their areas of expertise, and initiate research collaborations.

A conference capstoned the program, one week this October. Several speakers had founded subfields of physics: quantum error correction (how to fix errors that dog quantum computers), quantum computational complexity (how quickly quantum computers can solve hard problems), topological quantum computation, AdS/CFT (a parallel between certain gravitational systems and certain quantum systems), and more. Swaths of science exist because of these thinkers.

One evening that week, I visited the Joseph Banks exhibition.

I’d thought that, by “paradise,” Holmes had meant “physical attractions”: lush flowers, vibrant colors, fresh fish, and warm sand. Another meaning occurred to me, after the conference talks, as I stood before a glass case in the library.

Joseph Banks, disembarking from the Endeavour, didn’t disembark onto just an island. He disembarked onto terra incognita. Never had he or his colleagues seen the blossoms, seed pods, or sprouts before him. Swaths of science awaited. What could the natural philosopher have craved more?

QI scientists of a certain age reminisce about the 1990s, the cowboy days of QI. When impactful theorems, protocols, and experiments abounded. When they dangled, like ripe fruit, just above your head. All you had to do was look up, reach out, and prove a pineapple.

Typical 1990s quantum-information scientist

That generation left mine few simple theorems to prove. But QI hasn’t suffered extinction. Its frontiers have advanced into other fields of science. Researchers are gaining insight into thermodynamics, quantum gravity, condensed matter, and chemistry from QI. The KITP conference highlighted connections with quantum gravity.

What could a natural philosopher crave more?

Artwork commissioned by the UCSB library: “Sprawling Neobiotic Chimera (After Banks’ Florilegium),” by Rose Briccetti

Most KITP talks are recorded and released online. You can access talks from the conference here. My talk, about quantum chaos and thermalization, appears here.

With gratitude to the KITP, and to the program organizers and the conference organizers, for the opportunity to participate.

Decoding (the allure of) the apparent horizon

I took 32 hours to unravel why Netta Engelhardt’s talk had struck me.

We were participating in Quantum Information in Quantum Gravity III, a workshop hosted by the University of British Columbia (UBC) in Vancouver. Netta studies quantum gravity as a Princeton postdoc. She discussed a feature of black holes—an apparent horizon—I’d not heard of. After hearing of it, I had to grasp it. I peppered Netta with questions three times in the following day. I didn’t understand why, for 32 hours.

After 26 hours, I understood apparent horizons like so.

Imagine standing beside a glass sphere, an empty round shell. Imagine light radiating from a point source in the sphere’s center. Think of the point source as a minuscule flash light. Light rays spill from the point source.

Which paths do the rays follow through space? They fan outward from the sphere’s center, hit the glass, and fan out more. Imagine turning your back to the sphere and looking outward. Light rays diverge as they pass you.

At least, rays diverge in flat space-time. We live in nearly flat space-time. We wouldn’t if we neighbored a supermassive object, like a black hole. Mass curves space-time, as described by Einstein’s theory of general relativity.

Imagine standing beside the sphere near a black hole. Let the sphere have roughly the black hole’s diameter—around 10 kilometers, according to astrophysical observations. You can’t see much of the sphere. So—imagine—you recruit your high-school-physics classmates. You array yourselves around the sphere, planning to observe light and compare observations. Imagine turning your back to the sphere. Light rays would converge, or flow toward each other. You’d know yourself to be far from Kansas.

Picture you, your classmates, and the sphere falling into the black hole. When would everyone agree that the rays switch from diverging to converging? Sometime after you passed the event horizon, the point of no return.1 Before you reached the singularity, the black hole’s center, where space-time warps infinitely. The rays would switch when you reached an in-between region, the apparent horizon.

Imagine pausing at the apparent horizon with your sphere, facing away from the sphere. Light rays would neither diverge nor converge; they’d point straight. Continue toward the singularity, and the rays would converge. Reverse away from the singularity, and the rays would diverge.

UBC near twilight

Rays diverged from the horizon beyond UBC at twilight. Twilight suits UBC as marble suits the Parthenon; and UBC’s twilight suits musing. You can reflect while gazing on reflections in glass buildings, or reflections in a pool by a rose garden. Your mind can roam as you roam paths lined by elms, oaks, and willows. I wandered while wondering why the sphere intrigued me.

Science thrives on instrumentation. Galileo improved the telescope, which unveiled Jupiter’s moons. Alexander von Humboldt measured temperatures and pressures with thermometers and barometers, charting South America around the turn of the 19th century. The Large Hadron Collider revealed the Higgs particle’s mass in 2012.

The sphere reminded me of a thermometer. As thermometers register temperature, so does the sphere register space-time curvature. Not that you’d need a sphere to distinguish a black hole from Kansas. Nor do you need a thermometer to distinguish Vancouver from a Brazilian jungle. But thermometers quantify the distinction. A sphere would sharpen your observations’ precision.

A sphere and a light source—free of supercolliders, superconductors, and superfridges. The instrument boasts not only profundity, but also simplicity.

Alexander von Humboldt

Netta proved a profound theorem about apparent horizons, with coauthor Aron Wall. Jacob Bekenstein and Stephen Hawking had studied event horizons during the 1970s. An event horizon’s area, Bekenstein and Hawking showed, is proportional to the black hole’s thermodynamic entropy. Netta and Aron proved a proportionality between another area and another entropy.

They calculated an apparent horizon’s area, $A$. The math that represents their black hole represents also a quantum system, by a duality called AdS/CFT. The quantum system can occupy any of several states. Different states encode different information about the black hole. Consider the information needed to describe, fully and only, the region outside the apparent horizon. Some quantum state $\rho$ encodes this information. $\rho$ encodes no information about the region behind the apparent horizon, closer to the black hole. How would you quantify this lack of information? With the von Neumann entropy $S(\rho)$. This entropy is proportional to the apparent horizon’s area: $S( \rho ) \propto A$.
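The von Neumann entropy is simple to compute for small examples. A sketch with generic density matrices (unrelated to any particular black hole):

```python
import numpy as np

# Von Neumann entropy S(rho) = -Tr(rho log rho): it vanishes for a pure
# state (full information) and is maximal for the maximally mixed state
# (no information).

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]             # convention: 0 log 0 = 0
    return -np.sum(evals * np.log(evals))

pure = np.array([[1, 0], [0, 0]], dtype=float)   # pure state: S = 0
mixed = np.eye(2) / 2                            # maximally mixed: S = ln 2

print(von_neumann_entropy(pure))                 # 0.0
print(von_neumann_entropy(mixed))                # 0.693... = ln 2
```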

Netta and Aron entitled their paper “Decoding the apparent horizon.” Decoding the apparent horizon’s allure took me 32 hours and took me to an edge of campus. But I didn’t mind. Edges and horizons suited my visit as twilight suits UBC. Where can we learn, if not at edges, as where quantum information meets other fields?

With gratitude to Mark van Raamsdonk and UBC for hosting Quantum Information in Quantum Gravity III; to Mark, the other organizers, and the “It from Qubit” Simons Foundation collaboration for the opportunity to participate; and to Netta Engelhardt for sharing her expertise.

1Nothing that draws closer to a black hole than the event horizon can turn around and leave, according to general relativity. The black hole’s gravity pulls too strongly. Quantum mechanics implies that information leaves, though, in Hawking radiation.

Taming wave functions with neural networks

Note from Nicole Yunger Halpern: One sunny Saturday this spring, I heard Sam Greydanus present about his undergraduate thesis. Sam was about to graduate from Dartmouth with a major in physics. He had worked with quantum-computation theorist Professor James Whitfield. The presentation — about applying neural networks to quantum computation — so intrigued me that I asked him to share his research on Quantum Frontiers. Sam generously agreed; this is his story.

Wave functions in the wild

The wave function, $\psi$, is a mixed blessing. At first, it causes unsuspecting undergrads (me) some angst via the Schrödinger’s cat paradox. This angst morphs into full-fledged panic when they encounter concepts such as nonlocality and Bell’s theorem (which, by the way, is surprisingly hard to verify experimentally). The real trouble with $\psi$, though, is that it grows exponentially with the number of entangled particles in a system. We couldn’t even hope to write down the wave function of 100 entangled particles, much less perform computations on it…but there’s a lot to gain from doing just that.
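To see how quickly the cost explodes: an $n$-particle entangled state of qubits needs $2^n$ complex amplitudes. A back-of-the-envelope sketch:

```python
# An n-qubit state needs 2**n complex amplitudes.
# At 16 bytes per complex128 amplitude:
for n in [10, 30, 100]:
    amplitudes = 2 ** n
    bytes_needed = amplitudes * 16
    print(f"{n} qubits: 2^{n} = {amplitudes:.3e} amplitudes, ~{bytes_needed:.3e} bytes")
```

At 100 qubits, the storage runs to roughly $10^{31}$ bytes — far beyond every hard drive on Earth combined.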

The thing is, we (a couple of luckless physicists) love $\psi$. Manipulating wave functions can give us ultra-precise timekeeping, secure encryption, and polynomial-time factoring of integers (read: break RSA). Harnessing quantum effects can also produce better machine learning, better physics simulations, and even quantum teleportation.

Taming the beast

Though $\psi$ grows exponentially with the number of particles in a system, most physical wave functions can be described with a lot less information. Two algorithms for doing this are the Density Matrix Renormalization Group (DMRG) and Quantum Monte Carlo (QMC).

Density Matrix Renormalization Group (DMRG). Imagine we want to learn about trees, but studying a full-grown, 50-foot tall tree in the lab is too unwieldy. One idea is to keep the tree small, like a bonsai tree. DMRG is an algorithm which, like a bonsai gardener, prunes the wave function while preserving its most important components. It produces a compressed version of the wave function called a Matrix Product State (MPS). One issue with DMRG is that it doesn’t extend particularly well to 2D and 3D systems.
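Here is a toy illustration of the bonsai idea in numpy — not DMRG itself, just the compressed MPS format it produces. The 4-site GHZ state can be written with one small tensor repeated per site instead of $2^4$ amplitudes (the construction below is a standard textbook example, not from my thesis):

```python
import numpy as np
from itertools import product
from functools import reduce

# GHZ on n sites as an MPS with bond dimension 2:
# A[0] = diag(1, 0), A[1] = diag(0, 1); amplitude = Tr(A[s1] ... A[sn]).
n = 4
A = np.zeros((2, 2, 2))  # indexed as A[physical spin][left bond][right bond]
A[0] = np.diag([1.0, 0.0])
A[1] = np.diag([0.0, 1.0])

# Reconstruct the full wave function from the compressed tensors.
psi = np.zeros(2 ** n)
for idx, s in enumerate(product([0, 1], repeat=n)):
    psi[idx] = np.trace(reduce(np.dot, [A[si] for si in s]))
psi /= np.linalg.norm(psi)

print(psi[0], psi[-1])  # both ≈ 0.7071 = 1/sqrt(2); all other entries are 0
```

The MPS stores a number of parameters that grows linearly in $n$ (one small tensor per site), while the full wave function grows as $2^n$ — that’s the pruning.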

Quantum Monte Carlo (QMC). Another way to study the concept of “tree” in a lab (bear with me on this metaphor) would be to study a bunch of leaf, seed, and bark samples. Quantum Monte Carlo algorithms do this with wave functions, taking “samples” of a wave function (pure states) and using the properties and frequencies of these samples to build a picture of the wave function as a whole. The difficulty with QMC is that it treats the wave function as a black box. We might ask, “how does flipping the spin of the third electron affect the total energy?” and QMC wouldn’t have much of a physical answer.
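The flavor of this sampling can be sketched with numpy on a toy 3-spin “wave function” (real QMC uses Markov chains on much larger systems; everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random normalized "wave function" over 3 spins (8 basis states):
psi = rng.normal(size=8)
psi /= np.linalg.norm(psi)
probs = psi ** 2  # Born rule: |psi|^2 gives the sampling weights

# A diagonal observable: total magnetization of each basis state.
def magnetization(index, n=3):
    bits = [(index >> k) & 1 for k in range(n)]
    return sum(1 if b == 0 else -1 for b in bits)

m_vals = np.array([magnetization(i) for i in range(8)])

# Exact expectation value vs. a Monte Carlo estimate built from samples:
exact = np.sum(probs * m_vals)
samples = rng.choice(8, size=100_000, p=probs)
estimate = m_vals[samples].mean()
print(exact, estimate)  # the estimate converges to the exact value
```

Notice the black-box nature: the sampler only ever sees which basis states pop up and how often — it has no structural handle on $\psi$ itself.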

Brains $\gg$ Brawn

Neural Quantum States (NQS). Some state spaces are far too large for even Monte Carlo to sample adequately. Suppose now we’re studying a forest full of different species of trees. If one type of tree vastly outnumbers the others, choosing samples from random trees isn’t an efficient way to map biodiversity. Somehow, we need to make the sampling process “smarter”. Last year, Google DeepMind used a technique called deep reinforcement learning to do just that – and achieved fame for defeating the world champion human Go player. A recent Science paper by Carleo and Troyer (2017) used the same technique to make QMC “smarter” and effectively compress wave functions with neural networks. This approach, called “Neural Quantum States (NQS)”, produced several state-of-the-art results.
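The ansatz behind NQS is a restricted Boltzmann machine whose weights parameterize the wave function: $\psi(s) \propto e^{\sum_i a_i s_i} \prod_j 2\cosh(b_j + \sum_i W_{ji} s_i)$. A bare-bones sketch (untrained random weights; the sizes and names are mine, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_spins, n_hidden = 4, 8  # illustrative sizes

# RBM parameters. In NQS these are tuned variationally to minimize energy;
# here they are just random.
a = rng.normal(scale=0.1, size=n_spins)
b = rng.normal(scale=0.1, size=n_hidden)
W = rng.normal(scale=0.1, size=(n_hidden, n_spins))

def rbm_amplitude(s):
    """Unnormalized psi(s) = exp(a.s) * prod_j 2 cosh(b_j + W_j . s)."""
    s = np.asarray(s, dtype=float)
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

# The network stores only O(n_spins * n_hidden) numbers, yet assigns an
# amplitude to any of the 2**n_spins spin configurations on demand:
print(rbm_amplitude([1, -1, 1, -1]))
```

That on-demand evaluation is what makes the sampling “smarter”: the network is a compact, differentiable stand-in for the exponentially large $\psi$.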

The general idea of my thesis.

My thesis. My undergraduate thesis centered on much the same idea. In fact, I had to abandon some of my initial work after reading the NQS paper. I then focused on using machine learning techniques to obtain MPS coefficients. Like Carleo and Troyer, I used neural networks to approximate $\psi$. Unlike Carleo and Troyer, I trained my model to output a set of Matrix Product State coefficients which have physical meaning (MPS coefficients always correspond to a certain state and site, e.g. “spin up, electron number 3”).

Cool – but does it work?

Yes – for small systems. In my thesis, I considered a toy system of 4 spin-$\frac{1}{2}$ particles interacting via the Heisenberg Hamiltonian. Solving this system is not difficult, so I was able to focus on fitting the two disparate parts – machine learning and Matrix Product States – together.
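For scale, this toy system yields to brute force. A numpy sketch of the 4-site chain (open boundary conditions assumed; this is the exact-diagonalization baseline one checks a model against, not the neural-network model itself):

```python
import numpy as np
from functools import reduce

# Heisenberg Hamiltonian H = J * sum_i S_i . S_{i+1} on 4 spin-1/2 sites,
# built from explicit tensor products of spin operators.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, i, n=4):
    """Operator op acting on site i, identity on every other site."""
    ops = [I2] * n
    ops[i] = op
    return reduce(np.kron, ops)

n, J = 4, 1.0
H = sum(J * site_op(s, i) @ site_op(s, i + 1)
        for i in range(n - 1) for s in (sx, sy, sz))

energies, states = np.linalg.eigh(H)
print(energies[0])  # ground-state energy ≈ -1.6160 for J = 1
```

A 16×16 matrix is trivial to diagonalize, which left room to wire the machine-learning and MPS pieces together against a known answer.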

Success! My model solved for ground states with arbitrary precision. Even more interestingly, I used it to automatically obtain MPS coefficients. Shown below, for example, is a visualization of my model’s coefficients for the GHZ state, compared with coefficients taken from the literature.

A visual comparison of a 4-site Matrix Product State for the GHZ state a) listed in the literature b) obtained from my neural network model. Colored squares correspond to real-valued elements of 2×2 matrices.

Limitations. The careful reader might point out that, according to the schema of my model (above), I still have to write out the full wave function. To scale my model up, I instead trained it variationally over a subspace of the Hamiltonian (just as the authors of the NQS paper did). Results are decent for larger (10-20 particle) systems, but the training itself is still unstable. I’ll finish ironing out the details soon, so keep an eye on arXiv* :).

Outside the ivory tower

A quantum computer developed by Joint Quantum Institute, U. Maryland.

Quantum computing is a field that’s poised to take on commercial relevance. Taming the wave function is one of the big hurdles we need to clear before that happens. Hopefully my findings will play a small role in clearing it.

On a more personal note, thank you for reading about my work. As a recent undergrad, I’m still new to research and I’d love to hear constructive comments or criticisms. If you found this post interesting, check out my research blog.

*arXiv is an online library for electronic preprints of scientific papers