Gently yoking yin to yang

The architecture at the University of California, Berkeley mystified me. California Hall evokes a Spanish mission. The main library consists of white stone pillared by Ionic columns. A sea-green building scintillates in the sunlight like a scarab. The buildings straddle the map of styles.


So do Berkeley’s quantum scientists, information theorists, and statistical mechanicians.

The chemists rove from abstract quantum information (QI) theory to experiments. Physicists experiment with superconducting qubits and trapped ions, and they run numerical simulations. Computer scientists invent algorithms for quantum computers to perform.

Few activities light me up more than bouncing from quantum group to info-theory group to stat-mech group, hunting commonalities. I was honored to bounce from group to group at Berkeley this September.

What a trampoline Berkeley has.

The groups fan out across campus and science, but I found compatibility. Including a collaboration that illuminated quantum incompatibility.

Quantum incompatibility originated in studies by Werner Heisenberg. He and colleagues cofounded quantum mechanics during the early 20th century. Measuring one property of a quantum system, Heisenberg intuited, can affect another property.

The most famous example involves position and momentum. Say that I hand you an electron. The electron occupies some quantum state represented by | \Psi \rangle. Suppose that you measure the electron’s position. The measurement outputs one of many possible values x (unless | \Psi \rangle has an unusual form, the form of a Dirac delta function).

But we can’t say that the electron occupies any particular point x = x_0 in space. Measurement devices have limited precision. You can measure the position only to within some error \varepsilon: x = x_0 \pm \varepsilon.

Suppose that, immediately afterward, you measure the electron’s momentum. This measurement, too, outputs one of many possible values. What probability q(p) dp does the measurement have of outputting a value between p and p + dp? We can calculate q(p) dp, knowing the mathematical form of | \Psi \rangle and knowing the values of x_0 and \varepsilon.

q(p) is a probability density, which you can think of as a set of probabilities. The density can vary with p. Suppose that q(p) varies little: The probabilities spread evenly across the possible p values. You have no idea which value your momentum measurement will output. Suppose, instead, that q(p) peaks sharply at some value p = p_0. You can likely predict the momentum measurement’s outcome.

The certainty about the momentum measurement trades off with the precision \varepsilon of the position measurement. The smaller the \varepsilon (the more precisely you measured the position), the greater the momentum’s unpredictability. We call position and momentum complementary, or incompatible.
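To see the tradeoff in numbers, here is a minimal numerical sketch (a toy illustration with ħ set to 1 and arbitrary widths, not tied to any experiment): it prepares Gaussian wave packets of different position spreads, Fourier-transforms them to obtain the momentum density q(p), and checks that squeezing the position spread stretches the momentum spread.

```python
import numpy as np

# Toy illustration of the position-momentum tradeoff (hbar = 1, arbitrary units).
hbar = 1.0
x = np.linspace(-100, 100, 8192)
dx = x[1] - x[0]

for sigma in (0.5, 1.0, 2.0):                        # position-space width of the packet
    psi = np.exp(-x**2 / (4 * sigma**2))             # Gaussian |Psi> in position space
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize

    # Momentum-space amplitude via FFT; p = hbar * k
    phi = np.fft.fftshift(np.fft.fft(psi))
    p = hbar * 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
    dp = p[1] - p[0]

    q = np.abs(phi)**2
    q /= np.sum(q) * dp                              # momentum probability density q(p)

    delta_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)   # spread in position
    delta_p = np.sqrt(np.sum(p**2 * q) * dp)                # spread in momentum
    print(f"sigma = {sigma}:  dx = {delta_x:.3f},  dp = {delta_p:.3f},  product = {delta_x * delta_p:.3f}")
```

Each line of output shows the product of spreads hovering near 1/2, the bound that Gaussian packets saturate.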

You can’t measure incompatible properties, with high precision, simultaneously. Imagine trying to do so. Upon measuring the momentum, you ascribe a tiny range of momentum values p to the electron. If you measured the momentum again, an instant later, you could likely predict that measurement’s outcome: The second measurement’s q(p) would peak sharply (encode high predictability). But, in the first instant, you measure also the position. Hence, by the discussion above, q(p) would spread out widely. But we just concluded that q(p) would peak sharply. This contradiction illustrates that you can’t measure position and momentum, precisely, at the same time.

But you can simultaneously measure incompatible properties weakly. A weak measurement has an enormous \varepsilon. A weak position measurement barely spreads out q(p). If you want more details, ask a Quantum Frontiers regular; I’ve been harping on weak measurements for months.

Blame Berkeley for my harping this month. Irfan Siddiqi’s and Birgitta Whaley’s groups collaborated on weak measurements of incompatible observables. They tracked how the measured quantum state | \Psi (t) \rangle evolved in time (represented by t).

Irfan’s group manipulates superconducting qubits.1 The qubits sit in the physics building, a white-stone specimen stamped with an egg-and-dart motif. Across the street sit chemists, including members of Birgitta’s group. The experimental physicists and theoretical chemists teamed up to study a quantum lack of teaming up.

The physics and chemistry buildings

The experiment involved one superconducting qubit. The qubit has properties analogous to position and momentum: A ball, called the Bloch ball, represents the set of states that the qubit can occupy. Imagine an arrow pointing from the sphere’s center to any point in the ball. This Bloch vector represents the qubit’s state. Consider an arrow that points upward from the center to the surface. This arrow represents the qubit state | 0 \rangle. | 0 \rangle is the quantum analog of the possible value 0 of a bit, or unit of information. The analogous downward-pointing arrow represents the qubit state | 1 \rangle, analogous to 1.

Infinitely many axes intersect the sphere. Different axes represent different observables that Irfan’s group can measure. Nonparallel axes represent incompatible observables. For example, the x-axis represents an observable \sigma_x analogous to position. The y-axis represents an observable \sigma_y analogous to momentum.
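Here is a minimal sketch of the Bloch-vector bookkeeping (a toy illustration, not the group’s code): a qubit state’s arrow has components equal to the expectation values of σ_x, σ_y, and σ_z.

```python
import numpy as np

# Pauli matrices: the observables associated with the x, y, and z axes
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(psi):
    """Return (<sx>, <sy>, <sz>), the arrow in the Bloch ball for the state |psi>."""
    psi = psi / np.linalg.norm(psi)
    return tuple(np.real(np.conj(psi) @ (s @ psi)) for s in (sx, sy, sz))

ket0 = np.array([1, 0], dtype=complex)        # |0>: arrow pointing up (+z)
ket1 = np.array([0, 1], dtype=complex)        # |1>: arrow pointing down (-z)
plus = (ket0 + ket1) / np.sqrt(2)             # (|0> + |1>)/sqrt(2): arrow along +x

for name, state in [("|0>", ket0), ("|1>", ket1), ("|+>", plus)]:
    print(name, "->", np.round(bloch_vector(state), 3))
```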


Siddiqi lab, decorated with the trademark for the paper’s tug-of-war between incompatible observables. Photo credit: Leigh Martin, one of the paper’s leading authors.

Irfan’s group stuck their superconducting qubit in a cavity, or box. The cavity contained light that interacted with the qubit. The interactions transferred information from the qubit to the light: The light measured the qubit’s state. The experimentalists controlled the interactions, controlling the axes “along which” the qubit was measured. The experimentalists weakly measured along two axes simultaneously.

Suppose that the axes coincided—say, at the x-axis \hat{x}. The qubit would collapse to the state | \Psi \rangle = \frac{1}{ \sqrt{2} } ( | 0 \rangle + | 1 \rangle ), represented by the arrow that points along \hat{x} to the sphere’s surface, or to the state | \Psi \rangle = \frac{1}{ \sqrt{2} } ( | 0 \rangle - | 1 \rangle ), represented by the opposite arrow.

Axes 0° apart

(Projection of) the Bloch Ball after the measurement. The system can access the colored points. The lighter a point, the greater the late-time state’s weight on the point.

Let \hat{x}' denote an axis near \hat{x}—say, 18° away. Suppose that the group weakly measured along \hat{x} and \hat{x}'. The state would partially collapse. The system would access points in the region straddled by \hat{x} and \hat{x}', as well as points straddled by - \hat{x} and - \hat{x}'.

Axes 18° apart

Finally, suppose that the group weakly measured along \hat{x} and \hat{y}. These axes stand in for position and momentum. The state would, loosely speaking, swirl around the Bloch ball.

Axes 90° apart

The Berkeley experiment illuminates foundations of quantum theory. Incompatible observables, physics students learn, can’t be measured simultaneously. This experiment blasts our expectations, using weak measurements. But the experiment doesn’t just destroy. It rebuilds the blast zone, by showing how | \Psi (t) \rangle evolves.

“Position” and “momentum” can hang together. So can experimentalists and theorists, physicists and chemists. So, somehow, can the California mission and the ionic columns. Maybe I’ll understand the scarab building when we understand quantum theory.2

With thanks to Birgitta’s group, Irfan’s group, and the rest of Berkeley’s quantum/stat-mech/info-theory community for its hospitality. The Bloch-sphere figures come from http://www.nature.com/articles/nature19762.

1The qubit is the quantum analog of a bit. The bit is the basic unit of information. A bit can be in one of two possible states, which we can label as 0 and 1. Qubits can manifest in many physical systems, including superconducting circuits. Such circuits are tiny quantum circuits through which current can flow, without resistance, forever.

2Soda Hall dazzled but startled me.

Paradise

The word dominates chapter one of Richard Holmes’s book The Age of Wonder. Holmes writes biographies of Romantic-Era writers: Mary Wollstonecraft, Percy Shelley, and Samuel Taylor Coleridge populate his bibliography. They have cameos in Age. But their scientific counterparts star.

Their “natural-philosopher” counterparts, I should say. The word “scientist” emerged as the Romantic Era closed. Romanticism, a literary and artistic movement, flourished during the late 1700s and early 1800s. Romantics championed self-expression, individuality, and emotion over convention and artificiality. Romantics wondered at, and drew inspiration from, the natural world. So, Holmes argues, did Romantic-Era natural philosophers. They explored, searched, and innovated with Wollstonecraft’s, Shelley’s, and Coleridge’s zest.


Holmes depicts Wilhelm and Caroline Herschel, a German brother and sister, discovering the planet Uranus. Humphry Davy, an amateur poet from Penzance, inventing a lamp that saved miners’ lives. Michael Faraday, a working-class Londoner, inspired by Davy’s chemistry lectures.

Joseph Banks in paradise.

So Holmes entitled chapter one.

Banks studied natural history as a young English gentleman during the 1760s. He then sailed around the world, a botanist on exploratory expeditions. The second expedition brought Banks aboard the HMS Endeavour. Captain James Cook steered the ship to Brazil, Tahiti, Australia, and New Zealand. Banks brought a few colleagues onboard. They studied the native flora, fauna, skies, and tribes.

Banks, with fellow botanist Daniel Solander, accumulated over 30,000 plant samples. Artist Sydney Parkinson drew the plants during the voyage. Parkinson’s drawings underlay 743 copper engravings that Banks commissioned upon returning to England. Banks planned to publish the engravings as the book Florilegium. He never succeeded. Two institutions executed Banks’s plan more than 200 years later.

Banks’s Florilegium crowns an exhibition at the University of California at Santa Barbara (UCSB). UCSB’s Special Research Collections will host “Botanical Illustrations and Scientific Discovery—Joseph Banks and the Exploration of the South Pacific, 1768–1771” until May 2018. The exhibition features maps of Banks’s journeys, biographical sketches of Banks and Cook, contemporary art inspired by the engravings, and the Florilegium.


The exhibition spotlights “plants that have subsequently become important ornamental plants on the UCSB campus, throughout Santa Barbara, and beyond.” One sees, roaming Santa Barbara, slivers of Banks’s paradise.

Two bougainvilleas

In Santa Barbara resides the Kavli Institute for Theoretical Physics (KITP). The KITP is hosting a program about the physics of quantum information (QI). QI scientists are congregating from across the world. Everyone visits for a few weeks or months, meeting some participants and missing others (those who have left or will arrive later). Participants attend and present tutorials, explore beyond their areas of expertise, and initiate research collaborations.

A conference capstoned the program, one week this October. Several speakers had founded subfields of physics: quantum error correction (how to fix errors that dog quantum computers), quantum computational complexity (how quickly quantum computers can solve hard problems), topological quantum computation, AdS/CFT (a parallel between certain gravitational systems and certain quantum systems), and more. Swaths of science exist because of these thinkers.


One evening that week, I visited the Joseph Banks exhibition.

Joseph Banks in paradise.

I’d thought that, by “paradise,” Holmes had meant “physical attractions”: lush flowers, vibrant colors, fresh fish, and warm sand. Another meaning occurred to me, after the conference talks, as I stood before a glass case in the library.

Joseph Banks, disembarking from the Endeavour, didn’t disembark onto just an island. He disembarked onto terra incognita. Never had he or his colleagues seen the blossoms, seed pods, or sprouts before him. Swaths of science awaited. What could the natural philosopher have craved more?

QI scientists of a certain age reminisce about the 1990s, the cowboy days of QI. When impactful theorems, protocols, and experiments abounded. When they dangled, like ripe fruit, just above your head. All you had to do was look up, reach out, and prove a pineapple.


Typical 1990s quantum-information scientist

That generation left mine few simple theorems to prove. But QI hasn’t suffered extinction. Its frontiers have advanced into other fields of science. Researchers are gaining insight into thermodynamics, quantum gravity, condensed matter, and chemistry from QI. The KITP conference highlighted connections with quantum gravity.

…in paradise.

What could a natural philosopher crave more?


Artwork commissioned by the UCSB library: “Sprawling Neobiotic Chimera (After Banks’ Florilegium),” by Rose Briccetti

Most KITP talks are recorded and released online. You can access talks from the conference here. My talk, about quantum chaos and thermalization, appears here. 

With gratitude to the KITP, and to the program organizers and the conference organizers, for the opportunity to participate. 

Decoding (the allure of) the apparent horizon

I took 32 hours to unravel why Netta Engelhardt’s talk had struck me.

We were participating in Quantum Information in Quantum Gravity III, a workshop hosted by the University of British Columbia (UBC) in Vancouver. Netta studies quantum gravity as a Princeton postdoc. She discussed a feature of black holes—an apparent horizon—I’d not heard of. After hearing of it, I had to grasp it. I peppered Netta with questions three times in the following day. I didn’t understand why, for 32 hours.

After 26 hours, I understood apparent horizons like so.

Imagine standing beside a glass sphere, an empty round shell. Imagine light radiating from a point source in the sphere’s center. Think of the point source as a minuscule flashlight. Light rays spill from the point source.

Which paths do the rays follow through space? They fan outward from the sphere’s center, hit the glass, and fan out more. Imagine turning your back to the sphere and looking outward. Light rays diverge as they pass you.

At least, rays diverge in flat space-time. We live in nearly flat space-time. We wouldn’t if we neighbored a supermassive object, like a black hole. Mass curves space-time, as described by Einstein’s theory of general relativity.


Imagine standing beside the sphere near a black hole. Let the sphere have roughly the black hole’s diameter—around 10 kilometers, according to astrophysical observations. You can’t see much of the sphere. So—imagine—you recruit your high-school-physics classmates. You array yourselves around the sphere, planning to observe light and compare observations. Imagine turning your back to the sphere. Light rays would converge, or flow toward each other. You’d know yourself to be far from Kansas.

Picture you, your classmates, and the sphere falling into the black hole. When would everyone agree that the rays switch from diverging to converging? Sometime after you passed the event horizon, the point of no return.1 Before you reached the singularity, the black hole’s center, where space-time warps infinitely. The rays would switch when you reached an in-between region, the apparent horizon.

Imagine pausing at the apparent horizon with your sphere, facing away from the sphere. Light rays would neither diverge nor converge; they’d point straight. Continue toward the singularity, and the rays would converge. Reverse away from the singularity, and the rays would diverge.


UBC near twilight

Rays diverged from the horizon beyond UBC at twilight. Twilight suits UBC as marble suits the Parthenon; and UBC’s twilight suits musing. You can reflect while gazing on reflections in glass buildings, or reflections in a pool by a rose garden. Your mind can roam as you roam paths lined by elms, oaks, and willows. I wandered while wondering why the sphere intrigued me.

Science thrives on instrumentation. Galileo improved the telescope, which unveiled Jupiter’s moons. Alexander von Humboldt measured temperatures and pressures with thermometers and barometers, charting South America around 1800. The Large Hadron Collider revealed the Higgs particle’s mass in 2012.

The sphere reminded me of a thermometer. As thermometers register temperature, so does the sphere register space-time curvature. Not that you’d need a sphere to distinguish a black hole from Kansas. Nor do you need a thermometer to distinguish Vancouver from a Brazilian jungle. But thermometers quantify the distinction. A sphere would sharpen your observations’ precision.

A sphere and a light source—free of supercolliders, superconductors, and superfridges. The instrument boasts not only profundity, but also simplicity.


Alexander von Humboldt

Netta proved a profound theorem about apparent horizons, with coauthor Aron Wall. Jacob Bekenstein and Stephen Hawking had studied event horizons during the 1970s. An event horizon’s area, Bekenstein and Hawking showed, is proportional to the black hole’s thermodynamic entropy. Netta and Aron proved a proportionality between another area and another entropy.

They calculated an apparent horizon’s area, A. The math that represents their black hole represents also a quantum system, by a duality called AdS/CFT. The quantum system can occupy any of several states. Different states encode different information about the black hole. Consider the information needed to describe, fully and only, the region outside the apparent horizon. Some quantum state \rho encodes this information. \rho encodes no information about the region behind the apparent horizon, closer to the black hole. How would you quantify this lack of information? With the von Neumann entropy S(\rho). This entropy is proportional to the apparent horizon’s area: S( \rho )  \propto  A.
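As an aside on the entropy itself, here is a minimal sketch (a toy example far removed from Netta and Aron’s AdS/CFT calculation): the von Neumann entropy S(ρ) = −Tr(ρ ln ρ), computed from a density matrix’s eigenvalues.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from rho's eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop zero eigenvalues (0 ln 0 = 0)
    return float(-np.sum(evals * np.log(evals)))

# A pure state lacks no information about itself: zero entropy.
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
# The maximally mixed qubit state encodes maximal ignorance: entropy ln 2.
mixed = np.eye(2) / 2

print(von_neumann_entropy(pure))    # ~0.0
print(von_neumann_entropy(mixed))   # ~0.693 = ln 2
```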

Netta and Aron entitled their paper “Decoding the apparent horizon.” Decoding the apparent horizon’s allure took me 32 hours and took me to an edge of campus. But I didn’t mind. Edges and horizons suited my visit as twilight suits UBC. Where can we learn, if not at edges, as where quantum information meets other fields?

 

With gratitude to Mark van Raamsdonk and UBC for hosting Quantum Information in Quantum Gravity III; to Mark, the other organizers, and the “It from Qubit” Simons Foundation collaboration for the opportunity to participate; and to Netta Engelhardt for sharing her expertise.

1Nothing that draws closer to a black hole than the event horizon can turn around and leave, according to general relativity. The black hole’s gravity pulls too strongly. Quantum mechanics implies that information leaves, though, in Hawking radiation.

Taming wave functions with neural networks

Note from Nicole Yunger Halpern: One sunny Saturday this spring, I heard Sam Greydanus present about his undergraduate thesis. Sam was about to graduate from Dartmouth with a major in physics. He had worked with quantum-computation theorist Professor James Whitfield. The presentation — about applying neural networks to quantum computation — so intrigued me that I asked him to share his research on Quantum Frontiers. Sam generously agreed; this is his story.

Wave functions in the wild


The wave function, \psi, is a mixed blessing. At first, it causes unsuspecting undergrads (me) some angst via the Schrödinger’s-cat paradox. This angst morphs into full-fledged panic when they encounter concepts such as nonlocality and Bell’s theorem (which, by the way, is surprisingly hard to verify experimentally). The real trouble with \psi, though, is that it grows exponentially with the number of entangled particles in a system. We couldn’t even hope to write down the wave function of 100 entangled particles, much less perform computations on it…but there’s a lot to gain from doing just that.

The thing is, we (a couple of luckless physicists) love \psi . Manipulating wave functions can give us ultra-precise timekeeping, secure encryption, and polynomial-time factoring of integers (read: break RSA). Harnessing quantum effects can also produce better machine learning, better physics simulations, and even quantum teleportation.

Taming the beast

Though \psi grows exponentially with the number of particles in a system, most physical wave functions can be described with a lot less information. Two algorithms for doing this are the Density Matrix Renormalization Group (DMRG) and Quantum Monte Carlo (QMC).


Density Matrix Renormalization Group (DMRG). Imagine we want to learn about trees, but studying a full-grown, 50-foot tall tree in the lab is too unwieldy. One idea is to keep the tree small, like a bonsai tree. DMRG is an algorithm which, like a bonsai gardener, prunes the wave function while preserving its most important components. It produces a compressed version of the wave function called a Matrix Product State (MPS). One issue with DMRG is that it doesn’t extend particularly well to 2D and 3D systems.


Quantum Monte Carlo (QMC). Another way to study the concept of “tree” in a lab (bear with me on this metaphor) would be to study a bunch of leaf, seed, and bark samples. Quantum Monte Carlo algorithms do this with wave functions, taking “samples” of a wave function (pure states) and using the properties and frequencies of these samples to build a picture of the wave function as a whole. The difficulty with QMC is that it treats the wave function as a black box. We might ask, “how does flipping the spin of the third electron affect the total energy?” and QMC wouldn’t have much of a physical answer.
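As a cartoon of the sampling idea (a toy illustration, not a real QMC algorithm, which relies on cleverer importance sampling), here is a minimal sketch: draw computational-basis “samples” from a toy wave function’s Born-rule probabilities, then estimate an observable from the samples’ frequencies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                               # qubits in the toy system
dim = 2**n

# A random normalized vector standing in for the wave function psi
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

probs = np.abs(psi)**2              # Born-rule probabilities of the basis states

# "Samples" of the wave function: computational-basis measurement outcomes
samples = rng.choice(dim, size=20_000, p=probs)

# Estimate <sigma_z> on the first qubit from the samples' frequencies
first_bit = (samples >> (n - 1)) & 1              # leading bit of each outcome
z_sampled = np.mean(1 - 2 * first_bit)            # |0> -> +1, |1> -> -1

# Exact value, for comparison
z_exact = np.sum(probs * (1 - 2 * ((np.arange(dim) >> (n - 1)) & 1)))
print("sampled <Z_1> =", round(z_sampled, 3), "  exact <Z_1> =", round(z_exact, 3))
```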

Brains \gg Brawn

Neural Quantum States (NQS). Some state spaces are far too large for even Monte Carlo to sample adequately. Suppose now we’re studying a forest full of different species of trees. If one type of tree vastly outnumbers the others, choosing samples from random trees isn’t an efficient way to map biodiversity. Somehow, we need to make the sampling process “smarter”. Last year, Google DeepMind used a technique called deep reinforcement learning to do just that – and achieved fame for defeating the world champion human Go player. A recent Science paper by Carleo and Troyer (2017) used the same technique to make QMC “smarter” and effectively compress wave functions with neural networks. This approach, called “Neural Quantum States (NQS)”, produced several state-of-the-art results.


The general idea of my thesis.

My thesis. My undergraduate thesis centered upon much the same idea. In fact, I had to abandon some of my initial work after reading the NQS paper. I then focused on using machine learning techniques to obtain MPS coefficients. Like Carleo and Troyer, I used neural networks to approximate  \psi . Unlike Carleo and Troyer, I trained my model to output a set of Matrix Product State coefficients which have physical meaning (MPS coefficients always correspond to a certain state and site, e.g. “spin up, electron number 3”).

Cool – but does it work?

Yes – for small systems. In my thesis, I considered a toy system of 4 spin-\frac{1}{2} particles interacting via the Heisenberg Hamiltonian. Solving this system is not difficult, so I was able to focus on fitting the two disparate parts – machine learning and Matrix Product States – together.
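For context, the toy system is small enough to solve by brute force. Here is a minimal sketch (not the thesis code; it assumes open boundary conditions and J = 1): it builds the 4-site spin-1/2 Heisenberg Hamiltonian from Kronecker products and finds the ground-state energy by exact diagonalization, the benchmark a learned model gets compared against.

```python
import numpy as np
from functools import reduce

# Pauli matrices and the single-site identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on_site(op, site, n):
    """Embed a single-site operator into an n-site chain via Kronecker products."""
    factors = [I2] * n
    factors[site] = op
    return reduce(np.kron, factors)

def heisenberg(n, J=1.0):
    """H = J * sum_i S_i . S_{i+1}, with S = sigma/2 and open boundaries (an assumption)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for s in (sx, sy, sz):
            H += (J / 4) * op_on_site(s, i, n) @ op_on_site(s, i + 1, n)
    return H

energies, states = np.linalg.eigh(heisenberg(4))
print("ground-state energy:", energies[0].real)
```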

Success! My model solved for ground states with arbitrary precision. Even more interestingly, I used it to automatically obtain MPS coefficients. Shown below, for example, is a visualization of my model’s coefficients for the GHZ state, compared with coefficients taken from the literature.


A visual comparison of a 4-site Matrix Product State for the GHZ state a) listed in the literature b) obtained from my neural network model. Colored squares correspond to real-valued elements of 2×2 matrices.
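For readers curious what MPS coefficients look like, here is a minimal sketch of the textbook bond-dimension-2 MPS for the GHZ state (a generic illustration; these are not the coefficients from the figure or from the model): contracting one 2×2 matrix per site reproduces the state (|0000⟩ + |1111⟩)/√2.

```python
import numpy as np
from itertools import product

# Textbook MPS for the GHZ state: one 2x2 matrix per site per local basis state.
A = {0: np.array([[1.0, 0.0], [0.0, 0.0]]),   # matrix for local state |0>
     1: np.array([[0.0, 0.0], [0.0, 1.0]])}   # matrix for local state |1>

n = 4
psi = np.zeros(2**n)
for idx, bits in enumerate(product((0, 1), repeat=n)):
    # coefficient = trace of the product of the site matrices
    M = np.eye(2)
    for b in bits:
        M = M @ A[b]
    psi[idx] = np.trace(M)

psi /= np.linalg.norm(psi)
print(np.round(psi, 3))   # amplitude 1/sqrt(2) on |0000> and |1111>, zero elsewhere
```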

Limitations. The careful reader might point out that, according to the schema of my model (above), I still have to write out the full wave function. To scale my model up, I instead trained it variationally over a subspace of the Hamiltonian (just as the authors of the NQS paper did). Results are decent for larger (10-20 particle) systems, but the training itself is still unstable. I’ll finish ironing out the details soon, so keep an eye on arXiv* :).

Outside the ivory tower


A quantum computer developed by the Joint Quantum Institute, University of Maryland.

Quantum computing is a field that’s poised to take on commercial relevance. Taming the wave function is one of the big hurdles we need to clear before that happens. Hopefully my findings will play a small role in clearing it.

On a more personal note, thank you for reading about my work. As a recent undergrad, I’m still new to research and I’d love to hear constructive comments or criticisms. If you found this post interesting, check out my research blog.

*arXiv is an online library for electronic preprints of scientific papers

The sign problem(s)

The thirteen-month-old had mastered the word “dada” by the time I met her. Her parents were teaching her to communicate other concepts through sign language. Picture her, dark-haired and bibbed, in a high chair. Banana and mango slices litter the tray in front of her. More fruit litters the floor in front of the tray. The baby lifts her arms and flaps her hands.

Dada looks up from scrubbing the floor.

“Look,” he calls to Mummy, “she’s using sign language! All done.” He performs the gesture that his daughter seems to have aped: He raises his hands and rotates his forearms about his ulnas, axes perpendicular to the floor. “All done!”

The baby looks down, seizes another morsel, and stuffs it into her mouth.

“Never mind,” Dada amends. “You’re not done, are you?”

His daughter had a sign(-language) problem.


So does Dada, MIT professor Aram Harrow. Aram studies quantum information theory. His interests range from complexity to matrices, from resource theories to entropies. He’s blogged for The Quantum Pontiff, and he studies—including with IQIM postdoc Elizabeth Crosson—the quantum sign problem.

Imagine calculating properties of a chunk of fermionic quantum matter. The chunk consists of sites, each inhabited by one particle or by none. Translate as “no site can house more than one particle” the jargon “the particles are fermions.”

The chunk can have certain amounts of energy. Each amount E_j corresponds to some particle configuration indexed by j: If the system has some amount E_1 of energy, particles occupy certain sites and might not occupy others. If the system has a different amount E_2 \neq E_1 of energy, particles occupy different sites. A Hamiltonian, a mathematical object denoted by H, encodes the energies E_j and the configurations. We represent H with a matrix, a square grid of numbers.

Suppose that the chunk has a temperature T = \frac{ 1 }{ k_{\rm B} \beta }. We could calculate the system’s heat capacity, the energy required to raise the chunk’s temperature by one Kelvin. We could calculate the free energy, how much work the chunk could perform in powering a motor or lifting a weight. To calculate those properties, we calculate the system’s partition function, Z.

How? We would list the configurations j. With each configuration, we would associate the weight e^{ - \beta E_j }. We would sum the weights: Z = e^{ - \beta E_1 }  +  e^{ - \beta E_2}  +  \ldots  =  \sum_j e^{ - \beta E_j}.

Easier—like feeding a 13-month-old—said than done. Let N denote the number of sites in the chunk. If N is large, the number of configurations, 2^N, is gigantic. Our computers can’t process so many configurations. This inability underlies quantum computing’s promise of speeding up certain calculations.
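For a chunk small enough to brute-force, the recipe looks like this (a minimal sketch in which a made-up Hermitian matrix stands in for H, with β = 1 chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)

n_sites = 4                        # brute force is hopeless once this grows large
dim = 2**n_sites

# A made-up Hermitian matrix standing in for the Hamiltonian H
M = rng.normal(size=(dim, dim))
H = (M + M.T) / 2

beta = 1.0                         # inverse temperature, 1/(k_B T)
E = np.linalg.eigvalsh(H)          # the energies E_j

Z = np.sum(np.exp(-beta * E))      # partition function: sum of the weights e^{-beta E_j}
free_energy = -np.log(Z) / beta    # one property Z unlocks: F = -(1/beta) ln Z
print("Z =", Z, "  F =", free_energy)
```

Swap in 30 or 40 sites, and the matrix no longer fits in a computer’s memory. That inability is the whole problem.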

We don’t have quantum computers, and we can’t calculate Z. Can we  approximate Z?

Yes, if H “lacks the sign problem.” The math that models our system models also a classical system. If our system has D dimensions, the classical system has D+1 dimensions. Suppose, for example, that our sites form a line. The classical system forms a square.

We replace the weights e^{ - \beta E_j } with different weights—numbers formed from a matrix that represents H. If H lacks the sign problem, the new weights are nonnegative and behave like probabilities. Many mathematical tools suit probabilities. Aram and Elizabeth apply such tools to Z, here and here, as do many other researchers.
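Here is a minimal sketch of the check (a toy illustration that tests a whole matrix, glossing over the term-by-term wording in footnote 1): a Hamiltonian passes if its off-diagonal entries are real and nonpositive, so that the derived weights come out nonnegative.

```python
import numpy as np

def is_stoquastic(H, tol=1e-12):
    """Crude whole-matrix check: are H's off-diagonal entries real and nonpositive?

    (The careful definition, per footnote 1, is stated term-by-term for local Hamiltonians.)
    """
    off_diag = H - np.diag(np.diag(H))
    return bool(np.all(np.abs(off_diag.imag) < tol) and np.all(off_diag.real <= tol))

# Transverse-field Ising chain on 2 sites: -Z1 Z2 - X1 - X2. The off-diagonal
# entries are 0 or -1, so it lacks the sign problem in the computational basis.
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)
H_tfim = -np.kron(Z, Z) - np.kron(X, I) - np.kron(I, X)
print(is_stoquastic(H_tfim))      # True

# Flip the field's sign: positive off-diagonal entries, so the naive check fails.
# (Here the problem is curable by a basis change; in general it isn't.)
H_flipped = -np.kron(Z, Z) + np.kron(X, I) + np.kron(I, X)
print(is_stoquastic(H_flipped))   # False
```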

We call Hamiltonians that lack the sign problem “stoquastic,” which I think fanquastic.1 Stay tuned for a blog post about stoquasticity by Elizabeth.

What if H has the sign problem? The new weights can assume negative and nonreal values. The weights behave unlike probabilities; we can’t apply those tools. We find ourselves knee-deep in banana and mango chunks.


Solutions to the sign problem remain elusive. Theorists keep trying to mitigate the problem, though. Aram, Elizabeth, and others are improving calculations of properties of sign-problem-free systems. One scientist-in-the-making has achieved a breakthrough: Aram’s daughter now rotates her hands upon finishing meals and when she wants to leave her car seat or stroller.

One sign problem down; one to go.


With gratitude to Aram’s family for its hospitality and to Elizabeth Crosson for sharing her expertise.

1For experts: A local Hamiltonian is stoquastic relative to the computational basis if each local term is represented, relative to the computational basis, by a matrix whose off-diagonal entries are real and nonpositive.

The world of hackers and secrets

I’m Evgeny Mozgunov, and some of you may remember my earlier posts on Quantum Frontiers. I’ve recently graduated with a PhD after 6 years in the quantum information group at Caltech. As I’m navigating the job market in quantum physics, it was only a matter of time before I got dragged into a race between startups. Those who can promise impressive quantum speedups for practical tasks get a lot of money from venture capitalists. Maybe there’s something about my mind and getting paid: when I’m paid to do something, I suddenly start coming up with solutions that never occurred to me while I was wandering around as a student. And this time, I’ve noticed a possibility of impressing the public with quantum speedups that nobody has ever used before.

Three former members of John Preskill’s group, Gorjan Alagic, Stacey Jeffery and Stephen Jordan, have already proposed this idea (Circuit Obfuscation Using Braids, p.10), but none of the startups seems to have picked it up. You only need a small quantum computer. Imagine you are in the audience. I ask you to come up with a number. Don’t tell it out loud: instead, write it on a secret piece of paper, and take a little time to do a few mathematical operations based on the number. Then announce the result of those operations. Once you are done, people will automatically be split into two categories. Those with access to a small quantum computer (like the one at IBM) will be able to put on a magic hat (the computer…) and recover your number. But the rest of the audience will be left in awe, with no clue as to how this is even possible. There’s nothing they could do to guess your number based only on the result you announced, unless you’re willing to wait for a few days and they have access to the world’s supercomputing powers.

So far I’ve described the general setting of encryption – a cipher is announced, the key to the cipher is destroyed, and only those who can break the code can decipher.  For instance, if RSA encryption is used for the magic show above, indeed only people with a big quantum computer will be able to recover the secret number. To complete my story, I need to describe what the result that you announce (the cipher) looks like:

A sequence of instructions for a small quantum computer that is equivalent to a simple instruction for spitting out your number. However, the announced sequence of instructions is obfuscated, such that you can’t just read off the number from it.

You really need to feed the sequence into a quantum computer, and see what it outputs. Obfuscation is more general than encryption, but here we’re going to use it as a method of encryption.

Alagic et al. taught us how to do something called obfuscation by compiling for a quantum computer: much like when you compile a .c file in your CS class, you can’t really understand the .out file. Of course you can just execute the .out file, but not if it describes a quantum circuit, unless you have access to a quantum computer. The proposed classical compiler turns either a classical or a quantum algorithm into a hard-to-read quantum circuit that looks like braids. Unfortunately, any obfuscation-by-compiling scheme has the problem that whoever understands the compiler well enough will be able to actually read the .out file (or notice a pattern in braids reduced to a compact “normal” form) and guess your number without resorting to a quantum computer. Surprisingly, even though Alagic et al.’s scheme doesn’t claim any protection under this attack, it still satisfies one of the theoretical definitions of obfuscation: if two people write two different sets of instructions to perform the same operation, and then each obfuscates their own set of instructions by a restricted set of tricks, then it should be impossible to tell from the end result which program was obtained by whom.


Theoretical obfuscation can be illustrated by these video game Nier cosplayers: when they put on their wig and blindfold, they look like the same person. The character named 2B is an android, whose body is disposable, and whose mind is a set of instructions stored on a server. Other characters try to hack her mind as the story progresses.

Quantum scientists can have their own little world of hackers and secrets, organized in the following way: some researchers present their obfuscated code outputting a secret message, and other researchers become hackers trying to break it. Thanks to another result by Alagic et al., we know that hard-to-break obfuscated circuits secure against classical computers exist. But we don’t know what an obfuscator that reliably produces those worst-case instances looks like, so a bit of crowdsourcing to find it is in order. It’s a free-for-all, where all tools and tricks are allowed. In fact, even you can enter! All you need to know is a universal gate set H, T = R(π/4), CNOT and good old matrix multiplication. Come up with a product of these matrices that multiplies to a bunch of X’s (X = HT⁴H), but such that only you know on which qubits the X’s are applied. This code will spit out your secret bitstring on an input of all 0’s. Publish it and wait until some hacker breaks it!
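Here is a minimal sketch of the ingredients (an illustration only, not the obfuscated circuit in the picture below): the H and T matrices, a numerical check that HT⁴H = X, and the brute-force attack, multiplying out a published toy single-qubit sequence to recover a hidden bit.

```python
import numpy as np

# Single-qubit gates from the universal set
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
X = np.array([[0, 1], [1, 0]], dtype=complex)

# The identity at the heart of the challenge: H T^4 H = X
print(np.allclose(H @ np.linalg.matrix_power(T, 4) @ H, X))   # True

# A toy "obfuscated" sequence hiding one secret bit: does it act as X or as I?
# (Illustration only; the real challenge uses CNOTs and all five qubits.)
sequence = [H, T, T, T, T, H, H, T, T, T, T, H]    # = X @ X = I  ->  secret bit 0

product = np.eye(2, dtype=complex)
for gate in sequence:
    product = gate @ product            # earlier gates in the list act first

ket0 = np.array([1, 0], dtype=complex)
out = product @ ket0
secret_bit = int(np.abs(out[1])**2 > 0.5)   # measure: |0> -> bit 0, |1> -> bit 1
print("recovered secret bit:", secret_bit)
```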

Here’s mine; can anyone see what my secret bitstring is?

Obfuscated circuit

One can run it on a 5-qubit quantum computer in less than 1 ms. But if you try to multiply the corresponding 32×32 matrices on your laptop, it takes more than 1 ms. Quantum speedup right there. Of course, I didn’t prove that there’s no better way of finding out my secret than multiplying matrices. In fact, had I used only even powers of the matrix T in the picture above, a classical algorithm available in open source (Aaronson, Gottesman) would have recovered the number without multiplying large matrices.

I’m in luck: startups and venture capitalists never cared about theoretical proofs; it only has to work until it fails. I think they should give millions to me instead of D-Wave. Seriously, there’s plenty of applications for practical obfuscation besides magic shows. One can set up a social network where posts are gibberish except to those who have a quantum computer (that would make a good conspiracy theory some years from now). One can verify a private company’s claim to sell a small quantum computer.

I’d like to end on a more general note: small quantum computers are already faster than classical hardware at multiplying certain kinds of matrices. This has already been proven for a restricted class of quantum computers and a task called boson sampling. If there’s a competition in matrix multiplication somewhere in the world, we can already win.

The power of information

Sara Imari Walker studies ants. Her entomologist colleague Gabriele Valentini cultivates ant swarms. Gabriele coaxes a swarm from its nest, hides the nest, and offers two alternative nests. Gabriele observes the ants’ responses, then analyzes the data with Sara.

Sara doesn’t usually study ants. She trained in physics, information theory, and astrobiology. (Astrobiology is the study of life; life’s origins; and conditions amenable to life, on Earth and anywhere else life may exist.) Sara analyzes how information reaches, propagates through, and manifests in the swarm.

Some ants inspect one nest; some, the other. Few ants encounter both choices. Yet most of the ants choose simultaneously. (How does Gabriele know when an ant chooses? Decided ants carry other ants toward the chosen nest. Undecided ants don’t.)

Gabriele and Sara plotted each ant’s status (decided or undecided) at each instant. All the ants’ lines start in the “undecided” region, high up in the graph. Most lines drop to the “decided” region together. Physicists call such dramatic, large-scale changes in many-particle systems “phase transitions.” The swarm transitions from the “undecided” phase to the “decided,” as moisture transitions from vapor to downpour.


Sara versus the ants

Look from afar, and you’ll see evidence of a hive mind: The lines clump and slump together. Look more closely, and you’ll find lags between ants’ decisions. Gabriele and Sara grouped the ants according to their behaviors. Sara explained the grouping at a workshop this spring.

The green lines, she said, are undecided ants.

My stomach dropped like Gabriele and Sara’s ant lines.

People call data “cold” and “hard.” Critics lambast scientists for not appealing to emotions. Politicians weave anecdotes into their numbers, to convince audiences to care.

But when Sara spoke, I looked at her green lines and thought, “That’s me.”

I’ve blogged about my indecisiveness. Postdoc Ning Bao and I formulated a quantum voting scheme in which voters can superpose—form quantum combinations of—options. Usually, when John Preskill polls our research group, I abstain from voting. Politics, and questions like “Does building a quantum computer require only engineering or also science?”,1 have many facets. I want to view such questions from many angles, to pace around the questions as around a sculpture, to hear other onlookers, to test my impressions on them, and to cogitate before choosing.2 However many perspectives I’ve gathered, I’m missing others worth seeing. I commiserated with the green-line ants.


I first met Sara in the building behind the statue. Sara earned her PhD in Dartmouth College’s physics department, with Professor Marcelo Gleiser.

Sara presented about ants at a workshop hosted by the Beyond Center for Fundamental Concepts in Science at Arizona State University (ASU). The organizers, Paul Davies of Beyond and Andrew Briggs of Oxford, entitled the workshop “The Power of Information.” Participants represented information theory, thermodynamics and statistical mechanics, biology, and philosophy.

Paul and Andrew posed questions to guide us: What status does information have? Is information “a real thing” “out there in the world”? Or is information only a mental construct? What roles can information play in causation?

We paced around these questions as around a Chinese viewing stone. We sat on a bench in front of those questions, stared, debated, and cogitated. We taught each other about ants, artificial atoms, nanoscale machines, and models for information processing.


Chinese viewing stone in Yuyuan Garden in Shanghai

I wonder if I’ll acquire opinions about Paul and Andrew’s questions. Maybe I’ll meander from “undecided” to “decided” over a career. Maybe I’ll phase-transition like Sara’s ants. Maybe I’ll remain near the top of her diagram, a green holdout.

I know little about information’s power. But Sara’s plot revealed one power of information: Information can move us—from homeless to belonging, from ambivalent to decided, from a plot’s top to its bottom, from passive listener to finding yourself in a green curve.

 

With thanks to Sara Imari Walker, Paul Davies, Andrew Briggs, Katherine Smith, and the Beyond Center for their hospitality and thoughts.

 

1By “only engineering,” I mean not “merely engineering” pejoratively, but “engineering and no other discipline.”

2I feel compelled to perform these activities before choosing. I try to. Psychological experiments, however, suggest that I might decide before realizing that I’ve decided.