Theoretical physics has not gone to the dogs.

I was surprised to learn, last week, that my profession has gone to the dogs. I’d introduced myself to a nonscientist as a theoretical physicist.

“I think,” he said, “that theoretical physics has lost its way in symmetry and beauty and math. It’s too far from experiments to be science.”

The accusation triggered an identity crisis. I lost my faith in my work, bit my nails to the quick, and enrolled in workshops about machine learning and Chinese.

Or I might have, if all theoretical physicists pursued quantum gravity.

Quantum-gravity physicists attempt to reconcile two physical theories, quantum mechanics and general relativity. Quantum theory manifests on small length scales, such as those of atoms and electrons. General relativity manifests in massive systems, such as the solar system. A few settings unite smallness with massiveness, such as black holes and the universe’s origin. Understanding these settings requires a unification of quantum theory and general relativity.

Try to unify the theories, and you’ll find yourself writing equations that contain infinities. Such infinities can’t describe physical reality, but they’ve withstood decades of onslaughts. For guidance, many quantum-gravity theorists appeal to mathematical symmetries. Symmetries, they reason, helped 20th-century particle theorists predict experimental outcomes with accuracies unmatched by any other scientific theory. Perhaps symmetries can extend particle physics to a theory of quantum gravity.

Some physicists have criticized certain approaches to quantum gravity, certain approaches to high-energy physics more generally, and the high-energy community’s philosophy and sociology. Much criticism has centered on string theory, according to which our space-time has up to 26 dimensions, most too small for you to notice. Critics include Lee Smolin, the author of The Trouble with Physics, Peter Woit, who blogs on Not Even Wrong, and Sabine Hossenfelder, who published Lost in Math this year. This article contains no criticism of their crusade. I see merit in arguments of theirs, as in arguments of string theorists.

Science requires criticism to progress. So thank goodness that Smolin, Woit, Hossenfelder, and others are criticizing string theory. Thank goodness that the criticized respond. Thank goodness that debate rages, like the occasional wildfire needed to maintain a forest’s health.

The debate might appear to impugn the integrity of theoretical physics. But quantum gravity constitutes one pot in the greenhouse of theoretical physics. Theoretical physicists study lasers, star formation, atomic clocks, biological cells, gravitational waves, artificial materials, and more. Theoretical physicists are explaining, guiding, and collaborating on experiments. So many successes have piled up recently, I had trouble picking examples for this article. 

One example—fluctuation relations—I’ve blogged about before. These equalities generalize the second law of thermodynamics, which illuminates why time flows in just one direction. Fluctuation relations also provide a route to measuring an energetic quantity applied in pharmacology, biology, and chemistry. Experimentalists have shown, over the past 15 years, that fluctuation relations govern RNA, DNA, electronic systems, and trapped ions (artificial atoms).
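(For the curious: the most famous fluctuation relation, Jarzynski’s equality \langle e^{-\beta W} \rangle = e^{-\beta \Delta F}, relates measured work W to that energetic quantity, the free-energy difference \Delta F. Below is a minimal numerical check—a toy that assumes Gaussian work statistics, for which the equality holds exactly.)

```python
import numpy as np

# Toy check of Jarzynski's equality, <exp(-beta W)> = exp(-beta dF).
# Assumption: Gaussian work statistics, W ~ N(mu, sigma^2), for which
# the equality holds exactly when dF = mu - beta * sigma**2 / 2.
rng = np.random.default_rng(42)
beta, mu, sigma = 1.0, 2.0, 1.5
dF = mu - beta * sigma**2 / 2

W = rng.normal(mu, sigma, size=1_000_000)  # simulated work measurements
print(np.mean(np.exp(-beta * W)))          # Monte Carlo estimate of <exp(-beta W)>
print(np.exp(-beta * dF))                  # the two numbers should agree closely
```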

Second, experimentalists are exercising, over quantum systems, control that physicists didn’t dream of decades ago. Harvard physicists can position over 50 atoms however they please, using tweezers formed from light. Google has built a noisy quantum computer of 72 superconducting qubits, circuits through which charge flows without resistance. Trapped ions, defects in diamonds, photonics, and topological materials are also breaking barriers. These experiments advance partially due to motivation from theorists and partially through collaborations with theorists. In turn, experimental data guide theorists’ explanations and our proposals of experiments.

In one example, theorists teamed with experimentalists to probe quantum correlations spread across space and time. In another example, theorists posited a mechanism by which superconducting qubits interact with a hot environment. Other illustrations from the past five years include discrete time crystals, many-body scars, magic-angle materials, and quantum chaos.

These collaborations even offer hope for steering quantum gravity with experiments. Certain quantum-gravity systems share properties with certain many-particle quantum systems. This similarity, we call “the AdS/CFT duality.” Experimentalists have many-particle quantum systems and are stretching those systems toward the AdS/CFT regime. Experimental results, with the duality, might illuminate where quantum-gravity theorists should and shouldn’t search. Perhaps no such experiments will take place for decades. Perhaps AdS/CFT can’t shed light on our universe. But theorists and experimentalists are partnering to try.

These illustrations demonstrate that theoretical physics, on the whole, remains healthy, grounded, and thriving. This thriving is failing to register with part of the public. Evidence thwacked me in the face last week, as explained at the start of this article. The Wall Street Journal published another example last month: John Horgan wrote that “physics, which should serve as the bedrock of science, is in some respects the most troubled field of” science. The evidence presented consists of one neighborhood in the theoretical fraction of the metropolis of physics: string and multiverse models.

Horgan’s article reflects decades of experience in science journalism, a field I respect. I sympathize, moreover, with those who interface so much with quantum gravity that the subfield appears to eclipse the rest of theoretical physics. Horgan was reviewing books by Stephen Hawking and Martin Rees, who discuss string and multiverse models. Smolin, Woit, Hossenfelder, and others garner much press, which they deserve: They provoke debate and articulate their messages eloquently. Such press can blot out, say, profiles of the theoretical astrophysicists licking their lips over gravitational-wave data.

If any theory bears flaws, those flaws need correcting. But most theoretical physicists don’t pursue quantum gravity, let alone string theory. Any flaws of string theory do not mar all theoretical physics. These points need a megaphone, because misconceptions about theoretical physics endanger society. First, companies need workers who have technical skills and critical reasoning. Both come from training in theoretical physics. Besmirching theoretical physics can divert students from programs that can benefit the economy and nurture thoughtful citizens.1 

Second, some nonscientists are attempting to discredit the scientific community for political gain. Misconceptions about theoretical physics can appear to support these nonscientists’ claims. The ensuing confusion can lead astray voters and parents who face choices about vaccination, global health, national security, and budget allocations.

Last week, I heard that my profession has wandered too far from experiments. Hours earlier, I’d skyped with an experimentalist with whom I’m collaborating. A disconnect separates the reality of theoretical physicists from impressions harbored by part of the public. Let’s clear up the misconceptions. Theoretical physics, as a whole, remains healthy, grounded, and thriving.

 

 

1Nurturing thoughtful citizens also takes humanities, social-science, language, and arts programs.

A Roman in a Modern Court

Yesterday I spent some time wondering how to explain the modern economy to an ancient Roman brought forward from the first millennium BCE. For now I’ll assume language isn’t a barrier, but not much more. Here’s my rough take:

“There have been five really important things discovered between when you left and now.

First, every living thing has a tiny blueprint inside it. We learned how to rewrite those blueprints, and now we can make crops that resist pests, grow healthy, and take minimal effort to cultivate. The same tool also let us make creatures that manufacture medicine, as well as animals different from anything that existed before. Food became cheap because of this.

Second, we learned that hot air and steam expand. This means you can burn oil or coal and use that to push air around, which in turn can push against solid objects. With this we’ve made vehicles that can go the span of the Empire from Rome to Londinium and back in hours rather than weeks. Similar mechanisms can be used to work farms, forge metal, and so on. Manufactured goods became cheap as a result.

Third, we discovered an invisible fluid that lives in metals. It flows unimaginably quickly and with minimal force through even very narrow channels, so by pushing on it in one city it may be made to move almost instantly in another. That lets you work with energy as a kind of commodity, rather than something generated separately for each device.

Fourth, we found that this fluid can be pushed around by light, including a kind human eyes can’t see. This lets a device make light in one place and push on the fluid in a different device with no metal in between. Communication became fast, cheap, and easy.

Finally, and this one takes some explaining, our machines can make decisions. Imagine you had a channel for water with a fork. You can insert a blade to control which route the water takes. If you attach that blade to a lever you can change the direction of the flow. If you dip that lever in another channel of water, then what flows in one channel can set which way another channel goes. It turns out that that’s all you need to make simple decisions like “If water is in this channel, flow down that other one,” which can then be turned into useful statements like “Put water in this channel if you’re attacked. It’ll redirect the other channel and release boiling oil.” With enough of these water switches you can do really complicated things like tracking money, searching for patterns, predicting the weather, and so on. While water is hard to work with, you can make these channels and switches almost perfect for the invisible fluid, and you can make them tiny, vastly smaller than the width of a hair. A device that fits in your hand might have more switches than there are grains in a cubic meter of sand. The number of switches we’ve made so far outnumbers all the grains of sand on Earth, and we’re just getting started.”
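(A note for readers at home: the water switch is a transistor. Here’s the same idea in a few lines of code—a toy of mine, with the switch modeled as routing a flow down one of two outlets.)

```python
def switch(control: bool, flow: bool) -> tuple[bool, bool]:
    # The blade routes the flow down one of two outlets, chosen by
    # whether the control channel carries water.
    return (flow and control, flow and not control)

# Routing gives you NOT (feed a constant flow, read the "control dry" outlet)...
def NOT(a: bool) -> bool:
    return switch(a, True)[1]

# ...and AND (gate one signal by another). NOT and AND together suffice
# to build any decision-making circuit.
def AND(a: bool, b: bool) -> bool:
    return switch(a, b)[0]

# "Release the oil only if attacked AND the reservoir is full."
print(AND(True, True), AND(True, False), NOT(False))  # True False True
```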

“Methinks, I know one kind like you.”

I was expecting to pore over a poem handwritten by one of history’s most influential chemists. Sir Humphry Davy lived in Britain around the turn of the 19th century. He invented a lamp that saved miners’ lives, discovered and isolated chemical elements, coined the term “laughing gas,” and inspired younger researchers through public lectures.


Humphry Davy

Davy wrote not only scientific papers, but also poetry. He befriended contemporaries known today as “Romantic poets,” including Samuel Taylor Coleridge. English literature and the history of science rank among the specialties of the Huntington Library in San Marino, CA. The Huntington collects manuscripts and rare books, and I secured a reader card this July. I aspired to find a poem by Davy.

Bingo: The online catalogue contained an entry entitled “To the glow worm.” I requested the manuscript and settled into the hushed, wood-paneled reading room.

Davy had written scarcely legibly, in black ink, on a page that had creased and torn. I glanced over the lines, then realized that the manuscript folder contained two other pages. The pages had stuck together, so I gently flipped the lot over.


Poem “To the glow worm,” by Humphry Davy

A line at the top of the back page seized the wheel of my attention.

“Methinks, I know one kind like you.”

The line’s intimacy arrested me. I heard a speaker contemplating someone whom he or she had met recently, turning the person over in the speaker’s mind, gaining purchase on the person’s identity. “I know you,” I heard the speaker saying, and I saw the speaker wagging a finger at the person. “I know your type…I think.”

The line’s final six words suggested impulsiveness. How can you know someone you’re still wrapping your head around? I felt inclined to suggest a spoonful of circumspection. But perhaps the speaker was reflecting more than I’d allowed: “Methinks” suggested temperance, an acknowledgement of uncertainty.

I backpedaled to the folder’s cover. “Includes verse and letter by Lady Davy,” it read. Jane Apreece, a wealthy widow, acquired the title Lady Davy upon marrying Sir Humphry. She enjoyed a reputation for social savvy, fashionableness, and sharpness. I’d intruded on her poem, a response to Davy’s. Apreece’s pages begged for a transcription, which I struggled through until the reading room closed 45 minutes later. Dan Lewis, the Huntington’s Dibner Senior Curator of the History of Science and Technology, later improved upon my attempt (parenthesized text ours):

Methinks, I know one kind like you,

Thine(?) to peace, & Nature true;

Kindled by Feeling’s purest flame,

In Storm, or Calm, for ages(?) the same.

Bestowing most its brilliant Light,

Amidst the tranquil shades of Night;

And prompt to solace, raise, & cheer(?),

The heart, subdued by Doubt or Care.

Though not of busy Life afraid

Yet loving best, the pastoral Shade;

Shedding a Ray, more clear & pure,

A Ray, which longer shall endure,

As Friendships light must ever prove

More steadfast than the Flame of Love.

Light recurs throughout the verse: The speaker refers to two flames, to a “Ray,” and to a “brilliant Light // Amidst the tranquil shades of Night.” Comparisons with light suit a scientist, who reveals aspects of nature never witnessed before. (I expect that the speaker directs the apostrophe toward Davy.) Comparisons with light suit Davy not only professionally, but also, to Apreece, personally: Each member of the couple inspired the other to learn. Their poems reflect their intellectual symbiosis: Apreece’s references to light complement the glow worm, which Davy called “lively living lamp of night.”

The final two lines arrested me as the first line did. The speaker contrasts “Friendship[’]s light” with “the Flame of Love.” Finite resources can’t sustain flames, which consume candles, wood, and oxygen. Once its fuel disappears, flame proves less than “steadfast.” Similarly, love can’t survive on passion’s flames. Love should rest on friendship, which sheds the “light” extolled throughout the poem. Light enhances our vision, providing the wisdom needed to sustain love throughout life’s vicissitudes. 

These two lines reveal the temperance hinted at by the “Methinks.” The speaker argues for levelheadedness, for balancing emotion with sustainability. Spoonful of circumspection retracted.


The clock struck 4:45, and readers began returning their manuscripts and books to the circulation desk. I stood up—and pricked myself on a thorn of realization. The catalogue dated the manuscript to “perhaps [ . . . ] 1811 – they [Davy and Apreece] were married in 1812.” The lovers exchanged these poems without knowing that their marriage would sour years later. I’d read about their relationship—as about Davy’s science and poetry—in Richard Holmes’s The Age of Wonder. 

At least the Davys reunited when Sir Humphry’s last illness struck. At least they remained together until he died. At least a reader can step, through the manuscript, into the couple’s patch of happiness. One can hope to see more clearly for their—a scientist’s, a societal navigator’s, and two human beings’—light.


Letter and poem by Jane Apreece (p. 1). The top segment constitutes a letter written “by Lady Davy to a ‘Miss Talbot’ (1852, January 2),” according to the catalogue.


Poem by Jane Apreece (p. 2)

If anyone has insights or corrections to the transcription, please comment. I haven’t transcribed Davy’s poem, which might illuminate Lady Davy’s response.

With thanks to the Huntington Library of San Marino, CA, for the use of its collection. With thanks to Dan Lewis for improving upon my transcription and for prodding me, for five years, toward a reader card.

Doctrine of the (measurement) mean

Don’t invite me to dinner the night before an academic year begins.

You’ll find me in an armchair or sitting on my bed, laptop on my lap, journaling. I initiated the tradition the night before beginning college. I take stock of the past year, my present state, and hopes for the coming year.

Much of the exercise fosters what my high-school physics teacher called “an attitude of gratitude”: I reflect on cities I’ve visited, projects firing me up, family events attended, and subfields sampled. Other paragraphs, I want off my chest: Have I pushed this collaborator too hard or that project too little? Miscommunicated or misunderstood? Strayed too far into heuristics or into mathematical formalisms?

If only the “too much” errors, I end up thinking, could cancel the “too little.”

In one quantum-information context, they can.


Imagine that you’ve fabricated the material that will topple steel and graphene; let’s call it a supermetatopoconsulator. How, you wonder, do charge, energy, and particles move through this material? You’ll learn by measuring correlators.

A correlator signals how much, if you poke this piece here, that piece there responds. At least, a two-point correlator does: \langle A(0) B(\tau) \rangle. A(0) represents the poke, which occurs at time t = 0. B(\tau) represents the observable measured there at t = \tau. The \langle . \rangle encapsulates which state \rho the system started in.
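(For concreteness, here is a minimal numerical sketch of a two-point correlator for one qubit—a toy of mine, with an arbitrary Hamiltonian and state standing in for a real material.)

```python
import numpy as np
from scipy.linalg import expm

# Toy two-point correlator <A(0) B(tau)> = Tr[rho A B(tau)], where
# B(tau) = e^{iH tau} B e^{-iH tau} is B evolved in the Heisenberg picture.
X = np.array([[0, 1], [1, 0]], dtype=complex)    # the poke A: Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)   # the response B: Pauli Z
H = 0.5 * Z + 0.3 * X                            # arbitrary toy Hamiltonian
rho = np.diag([0.7, 0.3]).astype(complex)        # arbitrary initial state

def correlator(tau: float) -> complex:
    B_tau = expm(1j * H * tau) @ Z @ expm(-1j * H * tau)
    return np.trace(rho @ X @ B_tau)

print(correlator(1.0))
```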

Condensed-matter, quantum-optics, and particle experimentalists have measured two-point correlators for years. But consider the three-point correlator \langle A(0) B(\tau) C (\tau' ) \rangle, or a k-point \langle \underbrace{ A(0) \ldots M (\tau^{(k)}) }_k \rangle, for any k \geq 2. Higher-point correlators capture more-complicated relationships amongst events. Four-point1 correlators associated with multiple times signal quantum chaos and information scrambling. Quantum information scrambles upon spreading across a system through many-body entanglement. Could you measure arbitrary-point, arbitrary-time correlators?


Supermetatopoconsulator (artist’s conception)

Yes, collaborators and I have written, using weak measurements. Weak measurements barely disturb the system being measured. But they extract little information about the measured system. So, to measure a correlator, you’d have to perform many trials. Moreover, your postdocs and students might have little experience with weak measurements. They might not want to learn the techniques required, to recalibrate their detectors, etc. Could you measure these correlators easily?

Yes, if the material consists of qubits,2 according to a paper I published with Justin Dressel, José Raúl González Alonso, and Mordecai Waegell this summer. You could build such a system from, e.g., superconducting circuits, trapped ions, or quantum dots.

You can measure \langle \underbrace{ A(0) B (\tau') C (\tau'') \ldots M (\tau^{(k)}) }_k \rangle, we show, by measuring A at t = 0, waiting until t = \tau', measuring B, and so on until measuring M at t = \tau^{(k)}. The t-values needn’t increase sequentially: \tau'' could be less than \tau', for instance. In that case, you’d have to effectively reverse the flow of time experienced by the qubits. Experimentalists can do so by, for example, flipping magnetic fields upside-down.

Each measurement requires an ancilla, or helper qubit. The ancilla acts as a detector that records the measurement’s outcome. Suppose that A is an observable of qubit #1 of the system of interest. You bring an ancilla to qubit 1, entangle the qubits (force them to interact), and look at the ancilla. (Experts: You perform a controlled rotation on the ancilla, conditioning on the system qubit.)

Each trial yields k measurement outcomes. They form a sequence S, such as (1, 1, 1, -1, -1, \ldots). You should compute a number \alpha, according to a formula we provide, from each measurement outcome and from the measurement’s settings. These numbers form a new sequence S' = \mathbf{(} \alpha_S(1), \alpha_S(2), \ldots \mathbf{)}. Why bother? So that you can force errors to cancel.

Multiply the \alpha’s together, \alpha_S(1) \times \alpha_S(2) \times \ldots, and average the product over the possible sequences S. This average equals the correlator \langle \underbrace{ A(0) \ldots M (\tau^{(k)}) }_k \rangle. Congratulations; you’ve characterized transport in your supermetatopoconsulator.
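(In code, the estimation loop has this skeleton—a structural sketch only. The function alpha below is a placeholder for our paper’s formula, which I won’t reproduce here.)

```python
import numpy as np

def alpha(outcome: int, settings) -> float:
    # Placeholder for the paper's formula, which maps a measurement
    # outcome and the measurement's settings to a number.
    raise NotImplementedError("see the paper for the actual formula")

def estimate_correlator(run_trial, settings_list, num_trials: int) -> float:
    products = []
    for _ in range(num_trials):
        outcomes = run_trial()                    # one sequence S of k readouts
        alphas = [alpha(o, s) for o, s in zip(outcomes, settings_list)]
        products.append(np.prod(alphas))          # alpha_S(1) x alpha_S(2) x ...
    return float(np.mean(products))               # averages to the correlator
```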


When measuring, you can couple the ancillas to the system weakly or strongly, disturbing the system a little or a lot. Wouldn’t strong measurements perturb the state \rho whose properties you hope to measure? Wouldn’t the perturbations by measurements one through \ell throw off measurement \ell + 1?

Yes. But the errors introduced by those perturbations cancel in the average. The reason stems from how we construct \alpha’s: Our formula makes some products positive and some negative. The positive and negative terms sum to zero.


The cancellation offers hope for my journal assessment: Errors can come out in the wash. Not of their own accord, not without forethought. But errors can cancel out in the wash—if you soap your \alpha’s with care.

 

1and six-point, eight-point, etc.

2Rather, each measured observable must square to the identity, e.g., A^2 = 1. Qubit Pauli operators satisfy this requirement.

 

With apologies to Aristotle.

I get knocked down…

“You’ll have to have a thick skin.”

Marcelo Gleiser, a college mentor of mine, emailed the warning. I’d sent a list of physics PhD programs and requested advice about which to attend. Marcelo’s and my department had fostered encouragement and consideration.

Suit up, Marcelo was saying.

Criticism fuels science, as Oxford physicist David Deutsch has written. We have choices about how we criticize. Some criticism styles reflect consideration for the criticized work’s creator. Tufts University philosopher Daniel Dennett has devised guidelines for “criticizing with kindness”:1

1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).

3. You should mention anything you have learned from your target.

4. Only then are you permitted to say so much as a word of rebuttal or criticism.

Scientists skip to step four often—when refereeing papers submitted to journals, when posing questions during seminars, when emailing collaborators, when colleagues sketch ideas at a blackboard. Why? Listening and criticizing require time, thought, and effort—three of a scientist’s most valuable resources. Should any scientist spend those resources on an idea of mine, s/he deserves my gratitude. Spending empathy atop time, thought, and effort can feel supererogatory. Nor do all scientists prioritize empathy and kindness. Others of us prioritize empathy but—as I have over the past five years—have grown so used to its latency that we forget to demonstrate it.

Doing science requires facing not only criticism, but also “That doesn’t make sense,” “Who cares?” “Of course not,” and other morale boosters.

Doing science requires resilience.


So do measurements of quantum information (QI) scrambling. Scrambling is a subtle, late, quantum stage of equilibration2 in many-body systems. Example systems include chains of spins,3 such as in ultracold atoms, that interact with each other strongly. Exotic examples include black holes in anti-de Sitter space.4

Imagine whacking one side of a chain of interacting spins. Information about the whack will disseminate throughout the chain via entanglement.5 After a long interval (the scrambling time, t_*), spins across the system will share many-body entanglement. No measurement of any few, close-together spins can disclose much about the whack. Information will have scrambled across the system.

QI scrambling has the subtlety of an assassin treading a Persian carpet at midnight. Can we observe scrambling?


A Stanford team proposed a scheme for detecting scrambling using interferometry.6 Justin Dressel, Brian Swingle, and I proposed a scheme based on weak measurements, which refrain from disturbing the measured system much. Other teams have proposed alternatives.

Many schemes rely on effective time reversal: The experimentalist must perform the quantum analog of inverting particles’ momenta. One must negate the Hamiltonian \hat{H}, the observable that governs how the system evolves: \hat{H} \mapsto - \hat{H}.

At least, the experimentalist must try. The experimentalist will likely map \hat{H} to - \hat{H} + \varepsilon. The small error \varepsilon could wreak havoc: QI scrambling relates to chaos, exemplified by the butterfly effect. Tiny perturbations, such as the flap of a butterfly’s wings, can snowball in chaotic systems, as by generating tornadoes. Will the \varepsilon snowball, obscuring observations of scrambling?
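(Here is a toy numerical rendition of the worry—my illustration, with a random matrix standing in for a chaotic Hamiltonian: evolve forward under H, “reverse” under -(H + \varepsilon V), and watch the return probability sag as t grows.)

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim = 16
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2           # random Hermitian "chaotic" Hamiltonian
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
V = (B + B.conj().T) / 2           # the direction of the reversal error
eps = 0.01

psi = np.zeros(dim, dtype=complex)
psi[0] = 1.0
for t in (1.0, 5.0, 10.0, 20.0):
    forward = expm(-1j * H * t)
    backward = expm(1j * (H + eps * V) * t)   # imperfect time reversal
    fidelity = abs(psi.conj() @ backward @ forward @ psi) ** 2
    print(t, fidelity)                        # drifts below 1 as t grows
```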


It needn’t, Brian and I wrote in a recent paper. You can divide out much of the error until t_*.

You can detect scrambling by measuring an out-of-time-ordered correlator (OTOC), an object I’ve effused about elsewhere. Let’s denote the time-t correlator by F(t). You can infer an approximation \tilde{F}(t) to F(t) upon implementing an \varepsilon-ridden interferometry or weak-measurement protocol. Remove some steps from that protocol, Brian and I say. Infer a simpler, easier-to-measure object \tilde{F}_{\rm simple}(t). Divide the two measurement outcomes to approximate the OTOC:

F(t)  \approx \frac{ \tilde{F}(t) }{ \tilde{F}_{\rm simple}(t) }.

OTOC measurements exhibit resilience to error.


Physicists need resilience. Brian criticizes with such grace, he could serve as the poster child for Daniel Dennett’s guidelines. But not every scientist could. How can we withstand kindness-lite criticism?

By drawing confidence from what we’ve achieved, with help from mentors like Marcelo. As an undergrad, I couldn’t tell what about me—if anything—could serve as a rock on which to plant a foot. Mentors identified what I had too little experience to appreciate. You question what you don’t understand, they said. You assimilate perspectives from textbooks, lectures, practice problems, and past experiences. You scrutinize details while keeping an eye on the big picture. So don’t let so-and-so intimidate you.

I still lack my mentors’ experience, but I’ve imbibed a drop of their insight. I savor calculations that I nail, congratulate myself upon nullifying referees’ concerns, and celebrate the theorems I prove.

I’ve also created an email folder entitled “Nice messages.” In go “I loved your new paper; combining those topics was creative,” “Well done on the seminar; I’m now thinking of exploring that field,” and other rarities. The folder affords an umbrella when physics clouds gather.

Finally, I try to express appreciation of others’ work.7 Science thrives on criticism, but scientists do science. And scientists are human—undergrads, postdocs, senior researchers, and everyone else.

Doing science—and attempting to negate Hamiltonians—we get knocked down. But we can get up again.

 

Around the time Brian and I released “Resilience,” two other groups proposed related renormalizations. Check out their schemes here and here.

1Thanks to Sean Carroll for alerting me to this gem of Dennett’s.

2A system equilibrates as its large-scale properties, like energy, flatline.

3Angular-momentum-like quantum properties

4Certain space-times different from ours

5Correlations, shareable by quantum systems, stronger than any achievable by classical systems

6The cancellation (as by a crest of one wave and a trough of another) of components of a quantum state, or the addition of components (as two waves’ crests)

7Appreciation of specific qualities. “Nice job” can reflect a speaker’s belief but often reflects a desire to buoy a receiver whose work has few merits to elaborate on. I applaud that desire and recommend reinvesting it. “Nice job” carries little content, which evaporates under repetition. Specificity provides content: “Your idea is alluringly simple but could reverberate across multiple fields” has gristle.

The Curious Behavior of Topological Insulators

IQIM hosts a Summer Research Institute that invites high school Physics teachers to work directly with staff, students, and researchers in the lab.  Last summer I worked with Marcus Teague, a highly intelligent and very patient Caltech Staff Scientist in the Yeh Group, to help set up an experiment for studying exotic material samples under circularly polarized light.  I had researched, ordered, and assembled parts for the optics and vacuum chamber.  As I returned to Caltech this summer, I was eager to learn how the Yeh Group had proceeded with the study.


Yeh group (2017): I am the one on the front-left of the picture, next to Dr. Yeh and in front of Kyle Chen. Benjamin Fackrell, another physics teacher interning at the Yeh lab, is all the way to the right.

The optics equipment I had researched, ordered, and helped to set up last summer is being used currently to study topological insulator (TI) samples that Kyle Chien-Chang Chen, a doctoral candidate, has worked on in the Yeh Lab.  Yes, a high school Physics teacher played a small role in their real research! It is exciting and humbling to have a connection to real-time research.


Quartz quarter-wave plates are important elements in many experiments involving light. They convert linearly polarized light to circularly polarized light.

Kyle receives a variety of TI samples from UCLA; the current sample up for review is Bismuth Antimony Telluride \mathrm{(BiSb)}_2\mathrm{Te}_3.  Depending on the particular sample and the type of testing, Kyle has a variety of procedures to prep the samples for study.  And this summer, Kyle has help from visiting Canadian student Adrian Llanos. Below are figures of some of the monolayer and bilayer structures for topological insulators studied in the lab.


Pictures of samples from UCLA

Under normal conditions, a topological insulator (TI) is conductive only on the surface; the center of a TI sample is an insulator. But when the surface states open an energy gap, the surface of the TI becomes insulating too. The energy gap is the energy needed to free an electron from the top of the valence band so that it can move about. This gap results from the interaction between the conduction-band and valence-band surface states from the opposing surfaces of a thin film. The resistance of the conducting surface actually increases. The Yeh group is hoping that the circularly polarized light can help align the spins of the chromium electrons in part of the TI’s bilayer. At the same time, light has other effects, like photo-doping, which excites more electrons into the conduction bands and thus reduces the resistance. The conductivity of the TI’s surface changes as the preferentially chosen spin-up or spin-down population is manipulated by the circularly polarized light or by the changing magnetic field.


A physical property measurement system.

This interesting experiment on TI samples is taking place within a device called a Physical Property Measurement System (PPMS).  The PPMS is able to house the TI sample and the optics equipment to generate circularly polarized light, while allowing the researchers to vary the temperature and magnetic field.  The Yeh Group is able to artificially turn up the magnetic field or the circularly polarized light in order to control the resistance and current signal within the sample.  The properties of surface conductivity are studied up to 8 Tesla (over one-hundred thousand times the Earth’s magnetic field), and from room temperature (just under 300 Kelvin) to just below 2 Kelvin (colder than outer space).


Right-Hand-Rule used to determine the direction of the magnetic (Lorentz) force.

In the presence of a magnetic field, when a current is applied to a conductor, the electrons experience a force at a right angle to both their motion and the magnetic field, following the right-hand rule (or the Physics gang sign, as we affectionately call it in my classroom). This causes the electrons to curve perpendicular to their original path and perpendicular to the magnetic field. The buildup of electrons on one end of the conductor creates a potential difference. This potential difference perpendicular to the original current is known as the ordinary Hall Effect. The ratio of the induced voltage to the applied current is known as the Hall Resistance.

Under very low temperatures, the Quantum Hall Effect is observed. As the magnetic field is changed, the Hall Voltage increases in set quantum amounts, as opposed to gradually. Likewise, the Hall Resistance is quantized. It is such an interesting phenomenon!
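(For a sense of scale—my addition—the quantized plateaus sit at integer fractions of the von Klitzing constant R_K = h/e², about 25.8 kilohms:)

```python
# The quantum of Hall resistance, from exact SI-2019 constants.
h = 6.62607015e-34       # Planck constant, J*s
e = 1.602176634e-19      # elementary charge, C

R_K = h / e**2           # von Klitzing constant, ~25.813 kOhm
for nu in (1, 2, 3, 4):  # plateaus sit at R_K / nu for integer filling nu
    print(f"nu = {nu}: R_xy = {R_K / nu / 1000:.3f} kOhm")
```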

For a transport measurement of the TI samples, Kyle usually uses a Hall Bar Geometry in order to measure the Hall Effect accurately. Since the sample is sufficiently large, he can simply solder it for measurement.


Transport Measurements of TI Samples follow the same setup as Quantum Hall measurements on graphene: Current runs through electrodes attached to the North/South ends of the sample, while electron flow is measured longitudinally, as well as along the East/West ends (Hall conductance).

What is really curious is that the Bismuth Antimony Telluride samples are exhibiting the Hall Effect even when no external magnetic field is applied!  When the sample is measured, there is a Hall Resistance despite no external magnetic field. Hence the sample itself must be magnetic.  This phenomenon is called the Anomalous Hall Effect.

According to Kyle, there is no fancy way to measure the magnetization directly; it is only a matter of measuring a sample’s Hall Resistance. The Hall Resistance should be zero when there is no Anomalous Hall Effect, and when there is ferromagnetism (spins want to align in the direction of their neighbors), you see a non-zero value.  What is really interesting is that they assume ferromagnetism would break the time-reversal symmetry and thus open a gap at the surface states.  A very strange behavior that is also observed is that the longitudinal resistance increases gradually.  


Running PPMS

Typically the quantum Hall Resistance increases in quantum increments.  Even if the surface gap is open, the sample is not insulating because the gap is small (<0.3 eV); hence, under these conditions this TI is behaving much more like a semiconductor!

Next, the group will examine these samples using the Scanning Tunneling Microscope (STM).  The STM will be able to provide local topological information by examining 1 micron by 1 micron areas.  In comparison, the PPMS research with these samples is telling the story of the global behavior of the sample.  The combination of information from the PPMS and STM research will provide a more holistic story of the behavior of these unique samples.

I am thrilled to see how the group has used what we started with last summer to find interesting new results.  I am fascinated to see what they learn in the coming months with the different samples and STM testing. And I am quite excited to share these applications with my students in the upcoming new school year.  Another summer packed with learning!

The Quantum Wave in Computing

Summer is a great time for academics. Imagine: three full months off! Hit the beach. Tune that golf pitch. Hike the Sierras. Go on a cruise. Watch soccer with the brasileños (there have been better years for that one). Catch the sunset by the Sydney opera house. Take a nap.


A visiting researcher taking full advantage of the Simons Institute’s world-class relaxation facilities. And yes, I bet you he’s proving a theorem at the same time.

Think that’s outrageous? We have it even better. Not only do we get to travel the globe worry-free, but we prove theorems while doing it. For some of us summer is the only time of year when we manage to prove theorems. Ideas accumulate during the year, blossom during the conferences and workshops that mark the start of the summer, and hatch during the few weeks that many of us set aside as “quiet time” to finally “wrap things up”.

I recently had the pleasure of contributing to the general well-being of my academic colleagues by helping to co-organize (with Andrew Childs, Ignacio Cirac, and Umesh Vazirani) a 2-month long program on “Challenges in Quantum Computation” at the Simons Institute in Berkeley. In this post I report on the program and describe one of the highlights discussed during it: Mahadev’s very recent breakthrough on classical verification of quantum computation.

Challenges in Quantum Computation

The Simons Institute has been in place on the UC Berkeley campus since the Fall of 2013, and in fact one of their first programs was on “Quantum Hamiltonian Complexity”, in Spring 2014 (see my account of one of the semester’s workshops here). Since then the institute has been hosting a pair of semester-long programs at a time, in all areas of theoretical computer science and neighboring fields. Our “summer cluster” had a slightly different flavor: shorter, smaller, it doubled up as the prelude to a full semester-long program scheduled for Spring 2020 (provisional title: The Quantum Wave in Computing, a title inspired by Umesh Vazirani’s recent tutorial at STOC’18 in Los Angeles) — (my interpretation of) the idea being that the ongoing surge in experimental capabilities supports a much broader overhaul of some of the central questions of computer science, from the more applied (such as programming languages and compilers) to the most theoretical (such as which complexity classes play the most central role).

This summer’s program hosted a couple dozen participants at a time. Some stayed for the full 2 months, while others visited for shorter times. The Simons Institute is a fantastic place for collaborative research. The three-story building is entirely devoted to us. There are pleasant yet not-too-comfortable shared offices, but the highlight is the two large communal rooms meant for organized and spontaneous discussion. Filled with whiteboards, bright daylight, comfy couches, a constant supply of tea, coffee, and cookies, and eager theorists!

After a couple weeks of settling down the program kicked off with an invigorating workshop. Our goal for the workshop was to frame the theoretical questions raised by the sudden jump in the capabilities of experimental quantum devices that we are all witnessing. There were talks describing progress in experiments (superconducting qubits, ion traps, and cold atoms were represented), suggesting applications for the new devices (from quantum simulation & quantum chemistry to quantum optimization and machine learning through “quantum supremacy” and randomness generation), and laying the theoretical framework for trustworthy interaction with the quantum devices (interactive proofs, testing, and verifiable delegation). We had an outstanding line-up of speakers. All talks (except the panel discussions, unfortunately) were recorded, and you can watch them here.

The workshop was followed by five additional weeks of “residency”, that allowed long-term participants to digest and develop the ideas presented during the workshop. In my experience these few additional weeks, right after the workshop, make all the difference. It is the same difference as between a quick conference call and a leisurely afternoon at the whiteboard: while the former may feel productive and bring the adrenaline levels up, the latter is more suited to in-depth exploration and unexpected discoveries.

There would be much to say about the ideas discussed during the workshop and following weeks. I will describe a single one of these ideas — in my opinion, one of the most outstanding ideas to have emerged at the interface of quantum computing and theoretical computer science in recent years! The result, “Classical Verification of Quantum Computations”, is by Urmila Mahadev, a Ph.D. student at UC Berkeley (I think she just graduated). Urmila gave a wonderful talk on her result at the workshop, and I highly recommend watching the recorded video. In the remainder of this post I’ll provide an overview of the result. I also wrote a slightly more technical introduction that eager readers will find here.

A cryptographic leash on quantum systems

Mahadev’s result is already famous: announced on the blog of Scott Aaronson, it has earned her a long-standing $25 prize, awarded for “solving the problem of proving the results of an arbitrary quantum computation to a classical skeptic”. Or, in complexity-theoretic lingo, for showing that “every language in the class BQP admits an interactive protocol where the prover is in BQP and the verifier is in BPP”. What does this mean?

Verifying quantum computations in the high complexity regime

On his blog Scott Aaronson traces the question back to a talk given by Daniel Gottesman in 2004. An eloquent formulation appears in a subsequent paper by Dorit Aharonov and Umesh Vazirani, aptly titled “Is Quantum Mechanics Falsifiable? A computational perspective on the foundations of Quantum Mechanics”.

Here is the problem. As readers of this blog are well aware, Feynman’s idea of a quantum computer, and the subsequent formalization by Bernstein and Vazirani of the Quantum Turing Machine, laid the theoretical foundation for the construction of computing devices whose inner functioning is based on the laws of quantum physics. Most readers also probably realize that we currently believe that these quantum devices will have the ability to efficiently solve computational problems (the class of which is denoted BQP) that are thought to be beyond the reach of classical computers (represented by the class BPP). A prominent example is factoring, but there are many others. The most elementary example is arguably Feynman’s original proposal: a quantum computer can be used to simulate the evolution of any quantum mechanical system “in real time”. In contrast, the best classical simulations available can take exponential time to converge even on concrete examples of practical interest. This places a computational impediment to scientific progress: the work of many physicists, chemists, and biologists would be greatly sped up if only they could perform simulations at will.

So this hypothetical quantum device claims (or will likely claim) that it has the ability to efficiently solve computational problems for which there is no known efficient classical algorithm. Not only this but, as is widely believed in complexity-theoretic circles (a belief recently strengthened by the proof of an oracle separation between BQP and PH by Tal and Raz), for some of these problems, even given the answer, there does not exist a classical proof that the answer is correct. The quantum device’s claim cannot be verified! This seems to place the future of science at the mercy of an ingenious charlatan, with good enough design & marketing skills, that would convince us that it is providing the solution to exponentially complex problems by throwing stardust in our eyes. (Wait, did this happen already?)

Today is the most exciting time in quantum computing since the discovery of Shor’s algorithm for factoring: while we’re not quite ready to run that particular algorithm yet, experimental capabilities have ramped up to the point where we are just about to probe the “high-complexity” regime of quantum mechanics, by making predictions that cannot be emulated, or even verified, using the most powerful classical supercomputers available. What confidence will we have that the predictions have been obtained correctly? Note that this question is different from the question of testing the validity of the theory of quantum mechanics itself. The result presented here assumes the validity of quantum mechanics. What it offers is a method to test, assuming the correctness of quantum mechanics, that a device performs the calculation that it claims to have performed. If the device has supra-quantum powers, all bets are off. Even assuming the correctness of quantum mechanics, however, the device may, intentionally or not (e.g. due to faulty hardware), mislead the experimentalist. This is the scenario that Mahadev’s result aims to counter.

Interactive proofs

The first key idea is to use the power of interaction. The situation can be framed as follows: given a certain computation, such that a device (henceforth called “prover”) has the ability to perform the computation, but another entity, the classical physicist (henceforth called “verifier”) does not, is there a way for the verifier to extract the right answer from the prover with high confidence — given that the prover may not be trusted, and may attempt to use its superior computing power to mislead the verifier instead of performing the required computation?

The simplest scenario would be one where the verifier can execute the computation herself, and check the prover’s outcome. The second simplest scenario is one where the verifier cannot execute the computation, but there is a short proof that the prover can provide that allows her to fully certify the outcome. These two scenarios correspond to problems in BPP and NP respectively; an example of the latter is factoring. As argued earlier, not all quantum computations (BQP) are believed to fall within these two classes. Both direct computation and proof verification are ruled out. What can we do? Use interaction!

The framework of interactive proofs originates in complexity theory in the 1980s. An interactive proof is a protocol through which a verifier (typically a computationally bounded entity, such as the physicist and her classical laptop) interacts with a more powerful, but generally untrusted, prover (such as the experimental quantum device). The goal of the protocol is for the verifier to certify the validity of a certain computational statement.

Here is a classical example (the expert — or impatient — reader may safely skip this). The example is for a problem that lies in co-NP, but is not believed to lie in NP. Suppose that both the verifier and prover have access to two graphs, {G} and {H}, such that the verifier wishes to raise an “ACCEPT” flag if and only if the two graphs are not isomorphic. In general this is a hard decision to make, because the verifier would have to check all possible mappings from one graph to the other, of which there are exponentially many. Here is how the verifier can extract the correct answer by interacting with a powerful, untrusted prover. First, the verifier flips a fair coin. If the coin comes up heads, she selects a random relabeling of the vertices of {G}. If the coin comes up tails, she selects a random relabeling of the vertices of {H}. The verifier then sends the relabeled graph to the prover, and asks the prover to guess which graph the verifier has hidden. If the prover provides the correct answer (easy to check), the verifier concludes that the graphs were not isomorphic. Otherwise, she concludes that they were isomorphic. It is not hard to see that, if the graphs are indeed not isomorphic, the prover always has a means to correctly identify the hidden graph, and convince the verifier to make the right decision. But if the graphs are isomorphic, then there is no way for the prover to distinguish the random relabelings (since the distributions obtained by randomly relabeling each graph are identical), and so the verifier makes the right decision with probability 1/2. Repeating the protocol a few times, with a different choice of relabeling each time, quickly drives the probability of making an error to {0}.
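(For fun, here is one round of this protocol in toy code — my sketch, with an honest prover that brute-forces the isomorphism check. Brute force is feasible only at toy sizes, which is precisely the point about the prover’s power.)

```python
import itertools, random

def relabel(edges, perm):
    return frozenset(frozenset((perm[u], perm[v])) for u, v in edges)

def isomorphic(g, h, n):
    # Brute force over all n! vertex relabelings: fine for toys,
    # exponential in general.
    return any(relabel(g, p) == h for p in itertools.permutations(range(n)))

def one_round(G, H, n):
    coin = random.randrange(2)                 # verifier's secret coin flip
    perm = list(range(n))
    random.shuffle(perm)                       # random relabeling
    challenge = relabel([G, H][coin], perm)    # the hidden graph, relabeled
    # Honest prover: identify which graph the challenge is isomorphic to.
    guess = 0 if isomorphic(challenge, relabel(G, range(n)), n) else 1
    return guess == coin                       # verifier checks the guess

n = 4
G = {(0, 1), (1, 2), (2, 3)}  # a path
H = {(0, 1), (0, 2), (0, 3)}  # a star: not isomorphic to the path
print(sum(one_round(G, H, n) for _ in range(20)), "of 20 rounds passed")
```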

A deep result from the 1990s exactly characterizes the class of computational problems (languages) that a classical polynomial-time verifier can decide in this model: IP = PSPACE. In words, any problem whose solution can be found in polynomial space has an interactive proof in which the verifier only needs polynomial time. Now observe that PSPACE contains NP, and much more: in fact PSPACE contains BQP as well (and even QMA)! (See this nice recent article in Quanta for a gentle introduction to these complexity classes, and more.) Thus any problem that can be decided on a quantum computer can also be decided without a quantum computer, by interacting with a powerful entity, the prover, that can convince the verifier of the right answer without being able to induce her in error (in spite of the prover’s greater power).

Are we not done? We’re not! The problem is that the result PSPACE = IP, even when specialized to BQP, requires (for all we know) a prover whose power matches that of PSPACE (almost: see e.g. this recent result for a slightly more efficient prover). And as much as our experimental quantum device inches towards the power of BQP, we certainly wouldn’t dare ask it to perform a PSPACE-hard computation. So even though in principle there do exist interactive proofs for BQP-complete languages, these interactive proofs require a prover whose computational power goes much beyond what we believe is physically achievable. But that’s useless (for us): back to square zero.

Interactive proofs with quantum provers

Prior to Mahadev’s result, a sequence of beautiful results in the late 2000s introduced a clever extension of the model of interactive proofs by allowing the verifier to make use of a very limited quantum computer. For example, the verifier may have the ability to prepare single qubits in two possible bases of her choice, one qubit at a time, and send them to the prover. Or the verifier may have the ability to receive single qubits from the prover, one at a time, and measure them in one of two bases of her choice. In both cases it was shown that the verifier could combine such limited quantum capacity with the possibility to interact with a quantum polynomial-time prover to verify arbitrary polynomial-time quantum computation. The idea for the protocols crucially relied on the ability of the verifier to prepare qubits in a way that any deviation by the prover from the prescribed honest behavior would be detected (e.g. by encoding information in mutually unbiased bases unknown to the prover). For a decade the question remained open: can a completely classical verifier certify the computation performed by a quantum prover?

Mahadev’s result brings a positive resolution to this question. Mahadev describes a protocol with the following properties. First, as expected, for any quantum computation, there is a quantum prover that will convince the classical verifier of the right outcome for the computation. This property is called completeness of the protocol. Second, no prover can convince the classical verifier to accept a wrong outcome. This property is called soundness of the protocol. In Mahadev’s result the latter property comes with a twist: soundness holds provided the prover cannot break post-quantum cryptography. In contrast, the earlier results mentioned in the previous paragraph obtained protocols that were sound against an arbitrarily powerful prover. The additional cryptographic assumption gives Mahadev’s result a “win-win” flavor: either the protocol is sound, or someone in the quantum cloud has figured out how to break an increasingly standard cryptographic assumption (namely, post-quantum security of the Learning With Errors problem) — in all cases, a verified quantum feat!

In the remainder of this post I will give a high-level overview of Mahadev’s protocol and its analysis. For more detail, see the accompanying blog post.

The protocol is constructed in two steps. The first step builds on insights from works preceding this one. This step reduces the problem of verifying the outcome of an arbitrary quantum computation to a seemingly much simpler problem, that nevertheless encapsulates all the subtlety of the verification task. The problem is the following — in keeping with the terminology employed by Mahadev, I’ll call it the qubit commitment problem. Suppose that a prover claims to have prepared a single-qubit state of its choice; call it {| \psi \rangle} ({| \psi \rangle} is not known to the verifier). Suppose the verifier challenges the prover for the outcome of a measurement performed on {| \psi \rangle}, either in the computational basis (the eigenbasis of the Pauli Z), or in the Hadamard basis (the eigenbasis of the Pauli X). Which basis to use is the verifier’s choice, but of course only one basis can be asked. Does there exist a protocol that guarantees that, at the end of the protocol, the verifier will be able to produce a bit that matches the true outcome of a measurement of {| \psi \rangle} in the chosen basis? (More precisely, it should be that the verifier’s final bit has the same distribution as the outcome of a measurement of {| \psi \rangle} in the chosen basis.)

The reduction that accomplishes this first step combines Kitaev’s circuit-to-Hamiltonian construction with some gadgetry from perturbation theory, and I will not describe it here. An important property of the reduction is that it is ultimately sufficient that the verifier has the guarantee that the measurement outcomes she obtains in either case, computational or Hadamard, are consistent with measurement outcomes for the correct measurements performed on some quantum state. In principle the state does not need to be related to anything the prover does (though of course the analysis will eventually define that state from the prover), it only needs to exist. Specifically, we wish to rule out situations where e.g. the prover claims that both outcomes are deterministically “0”, a claim that would violate the uncertainty principle. (For the sake of the argument, let’s ignore that in the case of a single qubit the space of outcomes allowed by quantum mechanics can be explicitly mapped out — in the actual protocol, the prover commits to {n} qubits, not just one.)

Committing to a qubit

The second step of the protocol construction introduces a key idea. In order to accomplish the sought-after commitment, the verifier is going to engage in an initial commitment phase with the prover. In this phase, the prover is required to provide classical information to the verifier, that “commits” it to a specific qubit. This committed qubit is the state on which the prover will later perform the measurement asked by the verifier. The classical information obtained in the commitment phase will bind the prover to reporting the correct outcome, for both of the verifier’s basis choices — or risk being caught cheating.

How does this work? Commitments to bits, or even qubits, are an old story in cryptography. The standard method for committing to a bit {b} is based on the use of a one-way permutation {f}, together with a hardcore predicate {h} for {f} (i.e. an efficiently computable function {h: \{0,1\}^n\rightarrow \{0,1\}} such that given {f(x)}, it is hard to predict {h(x)}). The construction goes as follows. The committer selects a uniformly random string {r} and sends {(y,m)=(f(r),h(r)\oplus b)}. To unveil the commitment {b}, it is enough to reveal a string {r} such that {f(r)=y}; the receiver can then compute {h(r)} and decode {b=h(r)\oplus m}. The point is that since {f} is a permutation, the value {y} uniquely “commits” the sender to an {r}, and thus to a {b}; however, given {y=f(r)} for an unknown {r} the hardcore predicate {h(r)} looks uniformly random, thus {(y,m)} reveals no information about {b} to the receiver.
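(In toy code, the scheme looks as follows — a sketch in which SHA-256 stands in for both the one-way permutation and the hardcore predicate. SHA-256 is neither a permutation nor provably hardcore, so this illustrates the interface, not a secure instantiation.)

```python
import hashlib, secrets

def f(r: bytes) -> bytes:
    # Stand-in for the one-way permutation f.
    return hashlib.sha256(b"f" + r).digest()

def h(r: bytes) -> int:
    # Stand-in for the hardcore predicate h: one hard-to-predict bit.
    return hashlib.sha256(b"h" + r).digest()[0] & 1

def commit(b: int):
    r = secrets.token_bytes(16)      # uniformly random string r
    return (f(r), h(r) ^ b), r       # send (y, m) = (f(r), h(r) XOR b)

def unveil(commitment, r) -> int:
    y, m = commitment
    assert f(r) == y, "r is not a preimage of y"
    return h(r) ^ m                  # decode b = h(r) XOR m

c, r = commit(1)
print(unveil(c, r))  # 1
```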

What is new in Mahadev’s scheme is not only that the commitment is to a qubit, instead of a bit, but even more importantly that the commitment is provided by classical information, which is necessary to obtain a classical protocol. (Commitments to qubits, using qubits, can be obtained by combining the quantum one-time pad with the commitment scheme described above.) To explain how this is achieved we’ll need a slightly more advanced cryptographic primitive: a pair of injective trapdoor one-way functions {f_0,f_1:\{0,1\}^n\rightarrow\{0,1\}^n}. This means that it is easy to evaluate both functions on any input, but that given a value {y} in their common range, it is hard to find a preimage of {y} under either function — except if one is given the trapdoor information. (Note that this is an over-simplification of the actual primitive used by Mahadev, which has additional properties, including that of being “claw-free”.)

The commitment phase of the protocol works as follows. Starting from a state {| \psi \rangle=\alpha| 0 \rangle+\beta| 1 \rangle} of its choice, the prover is supposed to perform the following steps. First, the prover creates a uniform superposition over the common domain of {f_0} and {f_1}. Then it evaluates either function, {f_0} or {f_1}, in an additional register, by controlling on the qubit of {| \psi \rangle}. Finally, the prover measures the register that contains the image of {f_0} or {f_1}. This achieves the following sequence of transformations:

\displaystyle \begin{array}{rcl} \alpha| 0 \rangle+\beta| 1 \rangle &\mapsto& (\alpha| 0 \rangle + \beta| 1 \rangle) \otimes \Big(2^{-n/2} \sum_{x\in\{0,1\}^n} | x \rangle\Big) \\ &\mapsto & 2^{-n/2} \sum_x \alpha | 0 \rangle| x \rangle| f_0(x) \rangle + \beta | 1 \rangle| x \rangle| f_1(x) \rangle\\ &\mapsto & \big(\alpha| 0 \rangle| x_0 \rangle+\beta| 1 \rangle| x_1 \rangle\big)| y \rangle\;, \end{array}

where {y\in\{0,1\}^n} is the measured image. The string {y} is called the prover’s commitment string. It is required to report it to the verifier.

In what sense is {y} a commitment to the state {| \psi \rangle}? The key point is that, once it has measured {y}, the prover has “lost control” over its qubit — it has effectively handed over that control to the verifier. For example, the prover no longer has the ability to perform an arbitrary rotation on its qubit. Why? The prover knows {y} (it had to report it to the verifier) but not {x_0} and {x_1} (this is the claw-free assumption on the pair {(f_0,f_1)}). What this means — though of course it has to be shown — is that the prover can no longer recover the state {| \psi \rangle}! It does not have the ability to “uncompute” {x_0} and {x_1}. Thus the qubit has been “set in cryptographic stone”. In contrast, the verifier can use the trapdoor information to recover {x_0} and {x_1}. This gives her extra leverage on the prover. This asymmetry, introduced by cryptography, is what eventually allows the verifier to extract a truthful measurement outcome from the prover (or detect lying).
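(Here is a toy numerical rendition of the commitment phase — an illustration only. The pair {f_0,f_1} below is built from a random permutation and a secret shift, so it has claws but no cryptographic hardness whatsoever; the point is just to watch the state collapse to {\alpha| 0 \rangle| x_0 \rangle+\beta| 1 \rangle| x_1 \rangle} upon measurement of {y}.)

```python
import numpy as np

n = 3                    # toy domain {0,1}^3; real protocols use crypto sizes
N = 2 ** n
rng = np.random.default_rng(7)

# Toy claw-free-ish pair: f1(x) = f0(x XOR s), so each image y = f0(x0)
# has the claw (x0, x0 XOR s). Injective, common range, zero hardness.
s = 0b101
perm = rng.permutation(N)
f0 = lambda x: int(perm[x])
f1 = lambda x: int(perm[x ^ s])

alpha, beta = 0.6, 0.8   # the prover's qubit alpha|0> + beta|1>

# Amplitudes over registers (qubit b, domain x, image y).
amp = {}
for x in range(N):
    amp[(0, x, f0(x))] = alpha / np.sqrt(N)
    amp[(1, x, f1(x))] = beta / np.sqrt(N)

# Measure the image register (Born rule).
ys = sorted({k[2] for k in amp})
probs = [sum(abs(v) ** 2 for k, v in amp.items() if k[2] == y) for y in ys]
y = int(rng.choice(ys, p=probs))

# What survives: alpha|0, x0> + beta|1, x1> with f0(x0) = f1(x1) = y.
collapsed = {k[:2]: round(v * np.sqrt(N), 6) for k, v in amp.items() if k[2] == y}
print("commitment string y =", y)
print("collapsed (b, x) amplitudes:", collapsed)
```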

It is such a wonderful idea! It stuns me every time Urmila explains it. Proving it is of course rather delicate. In this post I make an attempt at going into the idea in a little more depth. The best resource remains Urmila’s paper, as well as her talk at the Simons Institute.

Open questions

What is great about this result is not that it closes a decades-old open question, but that by introducing a truly novel idea it opens up a whole new field of investigation. Some of the ideas that led to the result were already fleshed out by Mahadev in her work on homomorphic encryption for quantum circuits, and I expect many more results to continue building on these ideas.

An obvious outstanding question is whether the cryptography is needed at all: could there be a scheme achieving the same result as Mahadev’s, but without computational assumptions on the prover? It is known that if such a scheme exists, it is unlikely to have the property of being blind, meaning that the prover learns nothing about the computation that the verifier wishes it to execute (aside from an upper bound on its length); see this paper for “implausibility” results in this direction. Mahadev’s protocol relies on “post-hoc” verification, and is not blind. Urmila points out that it is likely the protocol could be made blind by composing it with her protocol for homomorphic encryption. Could there be a different protocol, that would not go through post-hoc verification, but instead directly guide the prover through the evaluation of a universal circuit on an encrypted input, gate by gate, as did some previous works?