About Nicole Yunger Halpern

I'm a theoretical physicist and a Postdoctoral Fellow at the Harvard-Smithsonian Institute for Theoretical Atomic, Molecular, and Optical Physics (ITAMP). Catch me at ITAMP, in Harvard's physics department, or at MIT. Before moving here, I completed a physics PhD at Caltech's Institute for Quantum Information and Matter, under John Preskill's auspices. I write one article per month for Quantum Frontiers. My research consists of what I call "quantum steampunk" (https://quantumfrontiers.com/2018/07/29/so-long-and-thanks-for-all-the-fourier-transforms/): I re-envision 19th-century thermodynamics with 21st-century quantum information theory, and I use the combination as a new lens through which to view various fields of science.

If the (quantum-metrology) key fits…

My maternal grandfather gave me an antique key when I was in middle school. I loved the workmanship: The handle consisted of intertwined loops. I loved the key’s gold color and how the key weighed on my palm. Even more, I loved the thought that the key opened something. I accompanied my mother to antique shops, where I tried unlocking chests, boxes, and drawers.


My grandfather’s antique key

I found myself holding another such key, metaphorically, during the autumn of 2018. MIT’s string theorists had requested a seminar, so I presented about quasiprobabilities. Quasiprobabilities represent quantum states similarly to how probabilities represent a swarm of classical particles. Consider the steam rising from asphalt on a summer day. Calculating every steam particle’s position and momentum would require too much computation for you or me to perform. But we can predict the probability that, if we measure every particle’s position and momentum, we’ll obtain such-and-such outcomes. Probabilities are real numbers between zero and one. Quasiprobabilities can assume negative and nonreal values. We call these values “nonclassical,” because they’re verboten to the probabilities that describe classical systems, such as steam. I’d defined a quasiprobability, with collaborators, to describe quantum chaos. 
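For the curious, a quasiprobability's strange behavior fits in a few lines of numpy. The sketch below uses a Kirkwood-Dirac-style distribution for a single qubit; the state and bases are my own toy choices, not the construction in the chaos paper:

```python
import numpy as np

# Kirkwood-Dirac-style quasiprobability for one qubit:
# Q(a, f) = <f|a><a|psi><psi|f>, with a and f ranging over Z and X eigenstates.
psi = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])  # an arbitrary pure state

z_states = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]      # Z eigenstates
x_states = [np.array([1.0, 1.0]) / np.sqrt(2),
            np.array([1.0, -1.0]) / np.sqrt(2)]              # X eigenstates

Q = np.zeros((2, 2))
for i, a in enumerate(z_states):
    for j, f in enumerate(x_states):
        Q[i, j] = np.real((f @ a) * (a @ psi) * (psi @ f))

print(Q.sum())  # 1.0: the entries sum to one, like probabilities...
print(Q.min())  # ...yet one entry is negative, a nonclassical value
```

In general a Kirkwood-Dirac distribution is complex-valued; even this real-valued example exhibits the negativity that flags nonclassicality.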


David Arvidsson-Shukur was sitting in the audience. David is a postdoctoral fellow at the University of Cambridge and a visiting scholar in the other Cambridge (at MIT). He has a Swedish-and-southern-English accent that I’ve heard only once before and, I learned over the next two years, an academic intensity matched by his kindliness.1 Also, David has a name even longer than mine: David Roland Miran Arvidsson-Shukur. We didn’t know then, but we were destined to journey together, as postdoctoral knights-errant, on a quest for quantum truth.

David studies the foundations of quantum theory: What distinguishes quantum theory from classical? David suspected that a variation on my quasiprobability could unlock a problem in metrology, the study of measurements.


Suppose that you’ve built a quantum computer. It consists of gates—uses of, e.g., magnets or lasers to implement logical operations. A classical gate implements operations such as “add 11.” A quantum gate can implement an operation that involves some number \theta more general than 11. You can try to build your gate correctly, but it might effect the wrong \theta value. You need to measure \theta.

How? You prepare some quantum state | \psi \rangle and operate on it with the gate. \theta imprints itself on the state, which becomes | \psi (\theta) \rangle. Measure some observable \hat{O}. You repeat this protocol in each of many trials. The measurement yields different outcomes in different trials, according to quantum theory. The average amount of information that you learn about \theta per trial is called the Fisher information.
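To make the Fisher information concrete, here's a toy numerical estimate, with a state and measurement of my own choosing (a qubit rotated through \theta, then measured in the computational basis):

```python
import numpy as np

# Toy protocol: |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>, measured in
# the computational basis. The (classical) Fisher information of the outcomes:
# F(theta) = sum over outcomes o of (d p(o|theta) / d theta)^2 / p(o|theta)

def probs(theta):
    return np.array([np.cos(theta / 2) ** 2, np.sin(theta / 2) ** 2])

def fisher_info(theta, eps=1e-6):
    dp = (probs(theta + eps) - probs(theta - eps)) / (2 * eps)  # numerical derivative
    return np.sum(dp ** 2 / probs(theta))

print(fisher_info(0.7))  # ≈ 1.0: for this protocol, every trial is equally informative
```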


Let’s change this protocol. After operating with the gate, measure another observable, \hat{F}, and postselect: If the \hat{F} measurement yields a desirable outcome f, measure \hat{O}. If the \hat{F}-measurement doesn’t yield the desirable outcome, abort the trial, and begin again. If you choose \hat{F} and f adroitly, you’ll measure \hat{O} only when the trial will provide oodles of information about \theta. You’ll save yourself many \hat{O} measurements that would have benefited you little.2
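Here's a toy illustration of the punchline, in a qutrit model of my own devising (not the paper's exact setup). A phase gate imprints \theta; without postselection, the Fisher information per trial can't exceed 1 for this gate. Postselecting onto a two-dimensional subspace leaves rare trials that each carry far more information:

```python
import numpy as np

# Gate: U(theta) = diag(1, e^{i theta}, 1). Without postselection, the Fisher
# information per trial is bounded by the generator's squared spectral range: 1.
eps = 0.1
psi = np.array([eps, 1.0, -1.0], dtype=complex) / np.sqrt(2 + eps**2)
v = np.array([0.0, 1.0, -1.0], dtype=complex) / np.sqrt(2)
P = np.eye(3) - np.outer(v, v.conj())  # postselection keeps states orthogonal to v

def postselected_state(theta):
    U = np.diag([1.0, np.exp(1j * theta), 1.0])
    phi = P @ (U @ psi)
    return phi / np.linalg.norm(phi)

def fisher_per_postselected_trial(theta, d=1e-4):
    # Quantum Fisher information of the postselected state, from the overlap
    # of neighboring states: |<phi(t)|phi(t+d)>| ≈ 1 - F d^2 / 8
    overlap = abs(np.vdot(postselected_state(theta), postselected_state(theta + d)))
    return 8 * (1 - overlap) / d**2

print(fisher_per_postselected_trial(0.2))  # ≈ 22 per postselected trial, far above 1
```

Most trials fail the postselection, so the total information per prepared state stays bounded; the gain is information per \hat{O} measurement, which matters when measurements are the expensive resource.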


Why does postselection help us? We could understand easily if the system were classical: The postselection would effectively improve the input state. To illustrate, let’s suppose that (i) a magnetic field implemented the gate and (ii) the input were metal or rubber. The magnetic field wouldn’t affect the rubber; measuring \hat{O} in rubber trials would provide no information about the field. So you could spare yourself \hat{O} measurements by postselecting on the system’s consisting of metal.


Postselection on a quantum system can defy this explanation. Consider optimizing your input state | \psi \rangle, beginning each trial with the quantum equivalent of metal. Postselection could still increase the average amount of information provided about \theta per trial. Postselection can enhance quantum metrology even when postselection can’t enhance the classical analogue.

David suspected that he could prove this result, using, as a mathematical tool, the quasiprobability that collaborators and I had defined. We fulfilled his prediction, with Hugo Lepage, Aleks Lasek, Seth Lloyd, and Crispin Barnes. Nature Communications published our paper last month. The work bridges the foundations of quantum theory with quantum metrology and quantum information theory—and, through that quasiprobability, string theory. David’s and my quantum quest continues, so keep an eye out for more theory from us, as well as a photonic experiment based on our first paper.


I still have my grandfather’s antique key. I never found a drawer, chest, or box that it opened. But I don’t mind. I have other mysteries to help unlock.

 

1The morning after my wedding this June, my husband and I found a bouquet ordered by David on our doorstep.

2Postselection has a catch: The \hat{F} measurement has a tiny probability of yielding the desirable outcome. But, sometimes, measuring \hat{O} costs more than preparing | \psi \rangle, performing the gate, and postselecting. For example, suppose that the system is a photon. A photodetector will measure \hat{O}. Some photodetectors have a dead time: After firing, they take a while to reset, to be able to fire again. The dead time can outweigh the cost of the beginning of the experiment.

A quantum walk down memory lane

In elementary and middle school, I felt an affinity for the class three years above mine. Five of my peers had siblings in that year. I carpooled with a student in that class, which partnered with mine in holiday activities and Grandparents’ Day revues. Two students in that class stood out. They won academic-achievement awards, represented our school in science fairs and speech competitions, and enrolled in rigorous high-school programs.

Those students came to mind as I grew to know David Limmer. David is an assistant professor of chemistry at the University of California, Berkeley. He studies statistical mechanics far from equilibrium, using information theory. Though a theorist ardent about mathematics, he partners with experimentalists. He can pass as a physicist and keeps an eye on topics as far afield as black holes. According to his faculty page, I discovered while writing this article, he’s even three years older than I. 

I met David in the final year of my PhD. I was looking ahead to postdocking, as his postdoc fellowship was fading into memory. The more we talked, the more I thought, I’d like to be like him.


I had the good fortune to collaborate with David on a paper published by Physical Review A this spring (as an Editors’ Suggestion!). The project has featured in Quantum Frontiers as the inspiration for a rewriting of “I’m a little teapot.” 

We studied a molecule prevalent across nature and technologies. Such molecules feature in your eyes, solar-fuel-storage devices, and more. The molecule has two clumps of atoms. One clump may rotate relative to the other if the molecule absorbs light. The rotation switches the molecule from a “closed” configuration to an “open” configuration.


These molecular switches are small, quantum, and far from equilibrium; so modeling them is difficult. Making assumptions offers traction, but many of the assumptions disagreed with David. He wanted general, thermodynamic-style bounds on the probability that one of these molecular switches would switch. Then, he ran into me.

I traffic in mathematical models, developed in quantum information theory, called resource theories. We use resource theories to calculate which states can transform into which in thermodynamics, as a dime can transform into ten pennies at a bank. David and I modeled his molecule in a resource theory, then bounded the molecule’s probability of switching from “closed” to “open.” I accidentally composed a theme song for the molecule; you can sing along with this post.

That post didn’t mention what David and I discovered about quantum clocks. But what better backdrop for a mental trip to elementary school or to three years into the future?

I’ve blogged about autonomous quantum clocks (and ancient Assyria) before. Autonomous quantum clocks differ from quantum clocks of another type—the most precise clocks in the world. Scientists operate the latter clocks with lasers; autonomous quantum clocks need no operators. Autonomy benefits you if you want a machine, such as a computer or a drone, to operate independently. An autonomous clock in the machine ensures that, say, the computer applies the right logical gate at the right time.

What’s an autonomous quantum clock? First, what’s a clock? A clock has a degree of freedom (e.g., a pair of hands) that represents the time and that moves steadily. When the clock’s hands point to 12 PM, you’re preparing lunch; when the clock’s hands point to 6 PM, you’re reading Quantum Frontiers. An autonomous quantum clock has a degree of freedom that represents the time fairly accurately and moves fairly steadily. (The quantum uncertainty principle prevents a perfect quantum clock from existing.)

Suppose that the autonomous quantum clock constitutes one part of a machine, such as a quantum computer, that the clock guides. When the clock is in one quantum state, the rest of the machine undergoes one operation, such as one quantum logical gate. (Experts: The rest of the machine evolves under one Hamiltonian.) When the clock is in another state, the rest of the machine undergoes another operation (evolves under another Hamiltonian).
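For concreteness, here's a tiny numerical sketch (my own toy model, not any published clock) of that clock-conditioned evolution: a two-state clock whose reading selects which Hamiltonian a one-qubit "machine" evolves under:

```python
import numpy as np

# H = |0><0|_clock ⊗ H_A + |1><1|_clock ⊗ H_B : the clock's state selects
# which Hamiltonian drives the rest of the machine.
X = np.array([[0, 1], [1, 0]], dtype=complex)   # H_A: one drive for the machine qubit
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # H_B: a different drive
P0 = np.diag([1.0 + 0j, 0.0])                   # clock reads "12 PM"
P1 = np.diag([0.0 + 0j, 1.0])                   # clock reads "6 PM"
H = np.kron(P0, X) + np.kron(P1, Z)

def evolve(Hmat, t, state):
    # e^{-i H t} |state>, via eigendecomposition of the Hermitian Hmat
    evals, evecs = np.linalg.eigh(Hmat)
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ state))

machine = np.array([1, 0], dtype=complex)  # machine qubit in |0>
t = 0.3
agreements = []
for clock, H_local in [(np.array([1, 0 + 0j]), X), (np.array([0 + 0j, 1]), Z)]:
    joint = evolve(H, t, np.kron(clock, machine))
    local = np.kron(clock, evolve(H_local, t, machine))
    agreements.append(np.allclose(joint, local))
print(agreements)  # [True, True]: each clock reading enacts its own evolution
```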


Physicists have been modeling quantum clocks using the resource theory with which David and I modeled our molecule. The math with which we represented our molecule, I realized, coincided with the math that represents an autonomous quantum clock.

Think of the molecular switch as a machine that operates (mostly) independently and that contains an autonomous quantum clock. The rotating clump of atoms constitutes the clock hand. As a hand rotates down a clock face, so do the nuclei rotate downward. The hand effectively points to 12 PM when the switch occupies its “closed” position. The hand effectively points to 6 PM when the switch occupies its “open” position.

The nuclei account for most of the molecule’s weight; electrons account for little. The electrons flit about the landscape shaped by the atomic clumps’ positions. The landscape governs the electrons’ behavior. So the electrons form the rest of the quantum machine controlled by the nuclear clock.


Experimentalists can create and manipulate these molecular switches easily. For instance, experimentalists can set the atomic clump moving—can “wind up” the clock—with ultrafast lasers. In contrast, the only other autonomous quantum clocks that I’d read about live in theory land. Can these molecules bridge theory to experiment? Reach out if you have ideas!

And check out David’s theory lab on Berkeley’s website and on Twitter. We all need older siblings to look up to.

Eleven risks of marrying a quantum information scientist

Some of you may have wondered whether I have a life. I do. He’s a computer scientist, and we got married earlier this month. 

Marrying a quantum information scientist comes with dangers not advertised in any Brides magazine (I assume; I’ve never opened a copy of Brides magazine). Never mind the perils of gathering together Auntie So-and-so and Cousin Such-and-such, who’ve quarreled since you were six; or spending tens of thousands of dollars on one day; or assembling two handfuls of humans during a pandemic. Beware the risks of marrying someone who unconsciously types “entropy” when trying to type “entry,” twice in a row.

1) She’ll introduce you to friends as “a classical computer scientist.” They’d assume, otherwise, that he does quantum computer science. Of course. Wouldn’t you?


2) The quantum punning will commence months before the wedding. One colleague wrote, “Many congratulations! Now you know the true meaning of entanglement.” Quantum particles can share entanglement. If you measure entangled particles, your outcomes can exhibit correlations stronger than any producible by classical particles. As a card from another colleague read, “May you stay forever entangled, with no decoherence.”

I’d rather not dedicate much of a wedding article to decoherence, but suppose that two particles are maximally entangled (can generate the strongest correlations possible). Suppose that particle 2 heats up or suffers bombardment by other particles. The state of particle 2 decoheres as the entanglement between 1 and 2 frays. Equivalently, particle 2 entangles with its environment, and particle 2 can entangle only so much: The more entanglement 2 shares with the environment, the less entanglement 2 can share with 1. Physicists call entanglement—ba-duh-bum—monogamous.
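The claim that entangled particles generate stronger-than-classical correlations can be made quantitative with the standard CHSH quantity: classical (local hidden-variable) correlations obey |S| ≤ 2, whereas a maximally entangled pair attains 2√2. A short numpy check:

```python
import numpy as np

# CHSH: classical correlations obey |S| <= 2; the Bell state
# |Phi+> = (|00> + |11>)/sqrt(2) attains S = 2*sqrt(2).
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

def corr(A, B):
    # Expectation value <A ⊗ B> in the Bell state
    return phi_plus @ np.kron(A, B) @ phi_plus

A0, A1 = Z, X
B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)
S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print(S)  # 2.828... = 2*sqrt(2), beyond any classical value
```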

The matron-of-honor toast featured another entanglement joke, as well as five more physics puns.1 (She isn’t a scientist, but she did her research.) She’ll be on Zoom till Thursday; try the virtual veal.


3) When you ask what sort of engagement ring she’d like, she’ll mention black diamonds. Experimentalists and engineers are building quantum computers from systems of many types, including diamond. Diamond consists of carbon atoms arranged in a lattice. Imagine expelling two neighboring carbon atoms and replacing one with a nitrogen atom. You’ll create a nitrogen-vacancy center whose electrons you can control with light. Such centers color the diamond black but let you process quantum information.

If I’d asked my fiancé for a quantum computer, we’d have had to wait 20 years to marry. He gave me an heirloom stone instead.


4) When a wedding-gown shopkeeper asks which sort of train she’d prefer, she’ll inquire about Maglevs. I dislike shopping, as the best man knows better than most people. In middle school, while our classmates spent their weekends at the mall, we stayed home and read books. But I filled out gown shops’ questionnaires. 

“They want to know what kinds of material I like,” I told the best man over the phone, “and what styles, and what type of train. I had to pick from four types of train. I didn’t even know there were four types of train!”

“Steam?” guessed the best man. “Diesel?”

His suggestions appealed to me as a quantum thermodynamicist. Thermodynamics is the physics of energy, which engines process. Quantum thermodynamicists study how quantum phenomena, such as entanglement, can improve engines. 

“Get the Maglev train,” the best man added. “Low emissions.”

“Ooh,” I said, “that’s superconducting.” Superconductors are quantum systems in which charge can flow forever, without dissipating. Labs at Yale, at IBM, and elsewhere are building quantum computers from superconductors. A superconductor consists of electrons that pair up with help from their positively charged surroundings—Cooper pairs. Separating Cooper-paired electrons requires an enormous amount of energy. What other type of train would better suit a wedding?

I set down my phone more at ease. Later, pandemic-era business closures constrained me to wearing a knee-length dress that I’d worn at graduations. I didn’t mind dodging the train.


5) When you ask what style of wedding dress she’ll wear, she’ll say that she likes her clothing as she likes her equations. Elegant in their simplicity.

6) You’ll plan your wedding for wedding season only because the rest of the year conflicts with more seminars, conferences, and colloquia. The quantum-information-theory conference of the year takes place in January. We wanted to visit Australia in late summer, and Germany in autumn, for conferences. A quantum-thermodynamics conference takes place early in the spring, and the academic year ends in May. Happy is the June bride; happier is the June bride who isn’t preparing a talk.

7) An MIT chaplain will marry you. Who else would sanctify the union of a physicist and a computer scientist?

8) You’ll acquire more in-laws than you bargained for. Biological parents more than suffice for most spouses. My husband has to contend with academic in-laws, as my PhD supervisor is called my “academic father.”


Academic in-laws of my husband’s attending the wedding via Zoom.

9) Your wedding can double as a conference. Had our wedding taken place in person, collaborations would have flourished during the cocktail hour. Papers would have followed; their acknowledgements sections would have nodded at the wedding; and I’d have requested copies of all manuscripts for our records—which might have included our wedding album.

10) You’ll have trouble identifying a honeymoon destination where she won’t be tempted to give a seminar. I thought that my then-fiancé would enjoy Vienna, but it boasts a quantum institute. So do Innsbruck and Delft. A colleague-friend works in Budapest, and I owe Berlin a professional visit. The list grew—or, rather, our options shrank. But he turned out not to mind my giving a seminar. The pandemic then cancelled our trip, so we’ll stay abroad for a week after some postpandemic European conference (hint hint).

11) Your wedding will feature on the blog of Caltech’s Institute for Quantum Information and Matter. Never mind The New York Times. Where else would you expect to find a quantum information physicist? I feel fortunate to have found someone with whom I wouldn’t rather be anywhere else.


 

1“I know that if Nicole picked him to stand by her side, he must be a FEYNMAN and not a BOZON.”

Up we go! or From abstract theory to experimental proposal

Mr. Mole is trapped indoors, alone. Spring is awakening outside, but he’s confined to his burrow. Birds are twittering, and rabbits are chattering, but he has only himself for company.

Sound familiar? 

Spring—crocuses, daffodils, and hyacinths budding; leaves unfurling; and birds warbling—burst upon Cambridge, Massachusetts last month. The city’s shutdown vied with the season’s vivaciousness. I relieved the tension by rereading The Wind in the Willows, which I’ve read every spring since 2017. 

Project Gutenberg offers free access to Kenneth Grahame’s 1908 novel. He wrote the book for children, but never mind that. Many masterpieces of literature happen to have been written for children.


One line in the novel demanded, last year, that I memorize it. On page one, Mole is cleaning his house beneath the Earth’s surface. He’s been dusting and whitewashing for hours when the spring calls to him. Life is pulsating on the ground and in the air above him, and he can’t resist joining the party. Mole throws down his cleaning supplies and tunnels upward through the soil: “he scraped and scratched and scrabbled and scrooged, and then he scrooged again and scrabbled and scratched and scraped.”

The quotation appealed to me not only because of its alliteration and chiasmus. Mole’s journey reminded me of research. 

Take a paper that I published last month with Michael Beverland of Microsoft Research and Amir Kalev of the Joint Center for Quantum Information and Computer Science (now of the Information Sciences Institute at the University of Southern California). We translated a discovery from the abstract, mathematical language of quantum-information-theoretic thermodynamics into an experimental proposal. We had to scrabble, but we kept on scrooging.


Over four years ago, other collaborators and I uncovered a thermodynamics problem, as did two other groups at the same time. Thermodynamicists often consider small systems that interact with large environments, like a magnolia flower releasing its perfume into the air. The two systems—magnolia flower and air—exchange things, such as energy and scent particles. The total amount of energy in the flower and the air remains constant, as does the total number of perfume particles. So we call the energy and the perfume-particle number conserved quantities. 

We represent quantum conserved quantities with matrices Q_1 and Q_2. We nearly always assume that, in this thermodynamic problem, those matrices commute with each other: Q_1 Q_2 = Q_2 Q_1. Almost no one mentions this assumption; we make it without realizing. Eliminating this assumption invalidates a derivation of the state reached by the small system after a long time. But why assume that the matrices commute? Noncommutation typifies quantum physics and underlies quantum error correction and quantum cryptography.

What if the little system exchanges with the large system thermodynamic quantities represented by matrices that don’t commute with each other?


Colleagues and I began answering this question, four years ago. The small system, we argued, thermalizes to near a quantum state that contains noncommuting matrices. We termed that state, e^{ - \sum_\alpha \beta_\alpha Q_\alpha } / Z, the non-Abelian thermal state. The Q_\alpha’s represent conserved quantities, and the \beta_\alpha’s resemble temperatures. The real number Z ensures that, if you measure any property of the state, you’ll obtain some outcome. Our arguments relied on abstract mathematics, resource theories, and more quantum information theory.
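In matrix form, for a single qubit with the three spin components as charges, the non-Abelian thermal state can be built in a few lines. This is a minimal sketch; the \beta_\alpha values are arbitrary placeholders:

```python
import numpy as np

# Non-Abelian thermal state e^{-sum_a beta_a Q_a} / Z for one qubit,
# with noncommuting charges Q_a = S_x, S_y, S_z (spin components).
Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sy = np.array([[0, -1j], [1j, 0]]) / 2
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
print(np.allclose(Sx @ Sy, Sy @ Sx))  # False: the charges don't commute

betas = [0.5, 0.3, 0.8]               # placeholder inverse-temperature-like values
G = -sum(b * Q for b, Q in zip(betas, (Sx, Sy, Sz)))

evals, evecs = np.linalg.eigh(G)      # G is Hermitian, so eigendecompose it
rho = evecs @ np.diag(np.exp(evals)) @ evecs.conj().T
rho /= np.trace(rho).real             # dividing by Z normalizes the state

print(np.trace(rho).real)             # 1.0: outcome probabilities sum to one
```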

Over the past four years, noncommuting conserved quantities have propagated across quantum-information-theoretic thermodynamics.1 Watching the idea take root has been exhilarating, but the quantum information theory didn’t satisfy me. I wanted to see a real physical system thermalize to near the non-Abelian thermal state.

Michael and Amir joined the mission to propose an experiment. We kept nosing toward a solution, then dislodging a rock that would shower dirt on us and block our path. But we scrabbled onward.


Imagine a line of ions trapped by lasers. Each ion contains the physical manifestation of a qubit—a quantum two-level system, the basic unit of quantum information. You can think of a qubit as having a quantum analogue of angular momentum, called spin. The spin has three components, one per direction of space. These spin components are represented by matrices Q_x = S_x, Q_y = S_y, and Q_z = S_z that don’t commute with each other. 

A couple of qubits can form the small system, analogous to the magnolia flower. The rest of the qubits form the large system, analogous to the air. I constructed a Hamiltonian—a matrix that dictates how the qubits evolve—that transfers quanta of all the spin’s components between the small system and the large. (Experts: The Heisenberg Hamiltonian transfers quanta of all the spin components between two qubits while conserving S_{x, y, z}^{\rm tot}.)
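A small-scale version of this construction fits in a few lines of numpy. The sketch below builds the nearest-neighbor Heisenberg Hamiltonian on a four-qubit chain (the chain length is my choice) and verifies that each total spin component is conserved:

```python
import numpy as np

# Nearest-neighbor Heisenberg chain:
# H = sum_j (Sx_j Sx_{j+1} + Sy_j Sy_{j+1} + Sz_j Sz_{j+1})
n = 4
Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sy = np.array([[0, -1j], [1j, 0]]) / 2
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def embed(op, site):
    # Tensor a single-qubit operator into slot `site` of the n-qubit chain
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

H = sum(embed(S, j) @ embed(S, j + 1)
        for j in range(n - 1) for S in (Sx, Sy, Sz))

conserved = []
for S in (Sx, Sy, Sz):
    S_tot = sum(embed(S, j) for j in range(n))
    conserved.append(np.allclose(H @ S_tot, S_tot @ H))
print(conserved)  # [True, True, True]: H conserves each S^tot component
```

(This nearest-neighbor version turns out to be integrable, as recounted below; coupling next-nearest neighbors as well would break the integrability.)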

The Hamiltonian led to our first scrape: I constructed an integrable Hamiltonian, by accident. Integrable Hamiltonians can’t thermalize systems. A system thermalizes by losing information about its initial conditions, evolving to a state with an exponential form, such as e^{ - \sum_\alpha \beta_\alpha Q_\alpha } / Z. We clawed at the dirt and uncovered a solution: My Hamiltonian coupled together nearest-neighbor qubits. If the Hamiltonian coupled also next-nearest-neighbor qubits, or if the ions formed a 2D or 3D array, the Hamiltonian would be nonintegrable.


We had to scratch at every stage—while formulating the setup, preparation procedure, evolution, measurement, and prediction. But we managed; Physical Review E published our paper last month. We showed how a quantum system can evolve to the non-Abelian thermal state. Trapped ions, ultracold atoms, and quantum dots can realize our experimental proposal. We imported noncommuting conserved quantities in thermodynamics from quantum information theory to condensed matter and atomic, molecular, and optical physics.

As Grahame wrote, the Mole kept “working busily with his little paws and muttering to himself, ‘Up we go! Up we go!’ till at last, pop! his snout came out into the sunlight and he found himself rolling in the warm grass of a great meadow.”


1See our latest paper’s introduction for references. https://journals.aps.org/pre/abstract/10.1103/PhysRevE.101.042117

Quantum steampunk invades Scientific American

London, at an hour that made Rosalind glad she’d nicked her brother’s black cloak instead of wearing her scarlet one. The factory alongside her had quit belching smoke for the night, but it would start again soon. A noise caused her to draw back against the brick wall. Glancing up, she gasped. An oblong hulk was drifting across the sky. The darkness obscured the details, but she didn’t need to see; a brass-colored lock would be painted across the side. Mellator had launched his dirigible.

A variation on the paragraph above began the article that I sent to Scientific American last year. Clara Moskowitz, an editor, asked which novel I’d quoted the paragraph from. I’d made the text up, I confessed. 


Most of my publications, which wind up in physics journals, don’t read like novels. But I couldn’t resist when Clara invited me to write a feature about quantum steampunk, the confluence of quantum information and thermodynamics. Quantum Frontiers regulars will anticipate paragraphs two and three of the article:

Welcome to steampunk. This genre has expanded across literature, art and film over the past several decades. Its stories tend to take place near nascent factories and in grimy cities, in Industrial Age England and the Wild West—in real-life settings where technologies were burgeoning. Yet steampunk characters extend these inventions into futuristic technologies, including automata and time machines. The juxtaposition of old and new creates an atmosphere of romanticism and adventure. Little wonder that steampunk fans buy top hats and petticoats, adorn themselves in brass and glass, and flock to steampunk conventions. 

These fans dream the adventure. But physicists today who work at the intersection of three fields—quantum physics, information theory and thermodynamics—live it. Just as steampunk blends science-fiction technology with Victorian style, a modern field of physics that I call “quantum steampunk” unites 21st-century technology with 19th-century scientific principles. 

The Scientific American graphics team dazzled me. For years, I’ve been hankering to work with artists on visualizing quantum steampunk. I had an opportunity after describing an example of quantum steampunk in the article. The example consists of a quantum many-body engine that I designed with members Christopher White, Sarang Gopalakrishnan, and Gil Refael of Caltech’s Institute for Quantum Information and Matter. Our engine is a many-particle system ratcheted between two phases accessible to quantum matter, analogous to liquid and solid. The engine can be realized with, e.g., ultracold atoms or trapped ions. Lasers would trap and control the particles. Clara, the artists, and I drew the engine, traded comments, and revised the figure tens of times. In early drafts, the lasers resembled the sketches in atomic physicists’ Powerpoints. Before the final draft, the lasers transformed into brass-and-glass beauties. They evoke the scientific instruments crafted through the early 1900s, before chunky gray aesthetics dulled labs.


Scientific American published the feature this month; you can read it in print or, here, online. Many thanks to Clara for the invitation, for shepherding the article into print, and for her enthusiasm. To repurpose the end of the article, “You’re reading about this confluence of old and new on Quantum Frontiers. But you might as well be holding a novel by H. G. Wells or Jules Verne.”

 

Figures courtesy of the Scientific American graphics team.

In the hour of darkness and peril and need

I recited the poem “Paul Revere’s Ride” to myself while walking across campus last week. 

A few hours earlier, I’d cancelled the seminar that I’d been slated to cohost two days later. In a few hours, I’d cancel the rest of the seminars in the series. Undergraduates would begin vacating their dorms within a day. Labs would shut down, and postdocs would receive instructions to work from home.

I memorized “Paul Revere’s Ride” after moving to Cambridge, following tradition: As a research assistant at Lancaster University in the UK, I memorized e. e. cummings’s “anyone lived in a pretty how town.” At Caltech, I memorized “Kubla Khan.” Another home called for another poem. “Paul Revere’s Ride” brooked no competition: Campus’s red bricks run into Boston, where Revere’s story began during the 1700s. 

Henry Wadsworth Longfellow, who lived a few blocks from Harvard, composed the poem. It centers on the British assault against the American colonies, at Lexington and Concord, on the eve of the Revolutionary War. A patriot learned of the British troops’ movements one night. He communicated the information to faraway colleagues by hanging lamps in a church’s belfry. His colleagues rode throughout the night, to “spread the alarm / through every Middlesex village and farm.” The riders included Paul Revere, a Boston silversmith.

The Boston-area bricks share their color with Harvard’s crest, crimson. So do the protrusions on the coronavirus’s surface in colored pictures. 


I couldn’t have designed a virus to suit Harvard’s website better.

The yard that I was crossing was about to “de-densify,” the red-brick buildings were about to empty, and my home was about to lock its doors. I’d watch regulations multiply, emails keep pace, and masks appear. Revere’s messenger friend, too, stood back and observed his home:

he climbed to the tower of the church,
Up the wooden stairs, with stealthy tread,
To the belfry-chamber overhead, [ . . . ]
By the trembling ladder, steep and tall,
To the highest window in the wall,
Where he paused to listen and look down
A moment on the roofs of the town,
And the moonlight flowing over all.

I commiserated also with Revere, waiting on tenterhooks for his message:

Meanwhile, impatient to mount and ride,
Booted and spurred, with a heavy stride,
On the opposite shore walked Paul Revere.
Now he patted his horse’s side,
Now gazed on the landscape far and near,
Then impetuous stamped the earth,
And turned and tightened his saddle-girth…

The lamps ended the wait, and Revere rode off. His mission carried a sense of urgency, yet led him to serenity that I hadn’t expected:

He has left the village and mounted the steep,
And beneath him, tranquil and broad and deep,
Is the Mystic, meeting the ocean tides…

The poem’s final stanza kicks. Its message carries as much relevance to the 21st century as Longfellow, writing about the 1700s during the 1800s, could have dreamed:

So through the night rode Paul Revere;
And so through the night went his cry of alarm
To every Middlesex village and farm,—
A cry of defiance, and not of fear,
A voice in the darkness, a knock at the door,
And a word that shall echo forevermore!
For, borne on the night-wind of the Past,
Through all our history, to the last,
In the hour of darkness and peril and need,
The people will waken and listen to hear
The hurrying hoof-beats of that steed,
And the midnight message of Paul Revere.

Reciting poetry clears my head. I can recite on autopilot, while processing other information or admiring my surroundings. But the poem usually wins my attention at last. The rhythm and rhyme sweep me along, narrowing my focus. Reciting “Paul Revere’s Ride” takes me five to ten minutes. After finishing that morning, I recited the poem a second time, and began a third, until arriving at my institute on the edge of Harvard’s campus.

Isolation can benefit theorists. Many of us need quiet to study, capture proofs, and disentangle ideas. Many of us need collaboration; but email, Skype, Google Hangouts, and Zoom connect us. Many of us share and gain ideas through travel; but I can forgo a little car sickness, air turbulence, and waiting in lines. Many of us need results from experimentalist collaborators, but experimental results often take a long time to gather even in the absence of pandemics. Many of us are introverts who enjoy a little self-isolation.

 

April is National Poetry Month in the United States. I often celebrate by intertwining physics with poetry in my April blog post. Next month, though, I’ll have other news to report. Besides, as my walk demonstrated, we need poetry now.

Paul Revere found tranquility on the eve of a storm. Maybe, when the night clears and doors reopen, science born of the quiet will flood journals. Aren’t we fortunate, as physicists, to lead lives steeped in a kind of poetry?

Sense, sensibility, and superconductors

Jonathan Monroe disagreed with his PhD supervisor, Kater Murch—with respect. They needed to measure a superconducting qubit, a tiny circuit in which current can flow forever. The qubit emits light, which carries information about the qubit’s state. Jonathan and Kater intensify the light using an amplifier. They’d fabricated many amplifiers, but none had worked. Jonathan suggested changing their strategy—with a politeness to which Emily Post couldn’t have objected. Kater suggested repeating the protocol they’d performed many times.

“That’s the definition of insanity,” Kater admitted, “but I think experiment needs to involve some of that.”

I watched the exchange via Skype, with more interest than I’d have watched the Oscars with. Someday, I hope, I’ll be able to weigh in on such a debate, despite working as a theorist. Someday, I’ll have partnered with enough experimentalists to develop insight.

I’m partnering with Jonathan and Kater on an experiment that coauthors and I proposed in a paper blogged about here. The experiment centers on an uncertainty relation, an inequality of the sort immortalized by Werner Heisenberg in 1927. Uncertainty relations imply that, if you measure a quantum particle’s position, the particle’s momentum ceases to have a well-defined value. If you measure the momentum, the particle ceases to have a well-defined position. Our uncertainty relation involves weak measurements. Weakly measuring a particle’s position doesn’t disturb the momentum much and vice versa. We can interpret the uncertainty in information-processing terms, because we cast the inequality in terms of entropies. Entropies, described here, are functions that quantify how efficiently we can process information, such as by compressing data. Jonathan and Kater are checking our inequality, and exploring its implications, with a superconducting qubit.
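For flavor, here is the textbook entropic uncertainty relation of Maassen and Uffink. It is not the weak-measurement relation from our paper, just a standard example of how one casts uncertainty in terms of entropies:

```latex
% Maassen-Uffink entropic uncertainty relation (a standard textbook
% example, not the weak-measurement inequality from the paper).
% H denotes the Shannon entropy of the outcome distribution obtained
% from measuring observable X, or observable Z, on the same state:
H(X) + H(Z) \;\geq\; \log_2 \frac{1}{c},
\qquad
c = \max_{j,k} \, \bigl| \langle x_j | z_k \rangle \bigr|^2 ,
% where |x_j> and |z_k> denote the eigenstates of X and of Z.
```

The smaller the overlap c between the two observables’ eigenbases, the larger the entropies must sum to: the less jointly predictable the two measurements can be.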

With chip

I had too little experience to side with Jonathan or with Kater. So I watched, and I contemplated how their opinions would sound if expressed about theory. Do I try one strategy again and again, hoping to change my results without changing my approach? 

At the Perimeter Institute for Theoretical Physics, master’s students had to swallow half a year of course material in weeks. I questioned whether I’d ever understand some of the material. But some of that material resurfaced during my PhD. Again, I attended lectures about Einstein’s theory of general relativity. Again, I worked problems about observers in free-fall. Again, I calculated covariant derivatives. The material sank in. I decided never to question, again, whether I could understand a concept. I might not understand a concept today, or tomorrow, or next week. But if I dedicate enough time and effort, I chose to believe, I’ll learn.

My decision rested on experience and on classes, taught by educational psychologists, that I’d taken in college. I’d studied how brains change during learning and how breaks enhance the changes. Sense, I thought, underlay my decision—though expecting outcomes to change, while strategies remain static, sounds insane.

Old cover

Does sense underlie Kater’s suggestion, likened to insanity, to keep fabricating amplifiers as before? He’s expressed cynicism many times during our collaboration: Experiment needs to involve some insanity. The experiment probably won’t work for a long time. Plenty more things will likely break. 

Jonathan and I agree with him. Experiments have a reputation for breaking, and Kater has a reputation for knowing experiments. Yet Jonathan—with professionalism and politeness—remains optimistic that other methods will prevail, that we’ll meet our goals early. I hope that Jonathan remains optimistic, and I fancy that Kater hopes, too. He prophesies gloom with a quarter of a smile, and his record speaks against him: A few months ago, I met a theorist who’d collaborated with Kater years before. The theorist marveled at the speed with which Kater had operated. A theorist would propose an experiment, and boom—the proposal would work.

Sea monsters

Perhaps luck smiled upon the implementation. But luck dovetails with the sense that underlies Kater’s opinion: Experiments involve factors that you can’t control. Implement a protocol once, and it might fail because the temperature has risen too high. Implement the protocol again, and it might fail because a truck drove by your building, vibrating the tabletop. Implement the protocol again, and it might fail because you bumped into a knob. Implement the protocol a fourth time, and it might succeed. If you repeat a protocol many times, your environment might change, changing your results.

Sense underlies also Jonathan’s objections to Kater’s opinions. We boost our chances of succeeding if we keep trying. We derive energy to keep trying from creativity and optimism. So rebelling against our PhD supervisors’ sense is sensible. I wondered, watching the Skype conversation, whether Kater the student had objected to prophesies of doom as Jonathan did. Kater exudes the soberness of a tenured professor but the irreverence of a Californian who wears his hair slightly long and who tattooed his wedding band on. Science thrives on the soberness and the irreverence.

Green cover

Who won Jonathan and Kater’s argument? Both, I think. Last week, they reported having fabricated amplifiers that work. The lab followed a protocol similar to their old one, but with more conscientiousness. 

I’m looking forward to watching who wins the debate about how long the rest of the experiment takes. Either way, check out Jonathan’s talk about our experiment if you attend the American Physical Society’s March Meeting. Jonathan will speak on Thursday, March 5, at 12:03, in room 106. Also, keep an eye out for our paper—which will debut once Jonathan coaxes the amplifier into synching with his qubit.

On the merits of flatworm reproduction

On my right sat a quantum engineer. She was facing a melanoma specialist who works at a medical school. Leftward of us sat a networks expert, a flatworm enthusiast, and a condensed-matter theorist.

Farther down sat a woman who slices up mouse brains. 

Welcome to “Coherent Spins in Biology,” a conference that took place at the University of California, Los Angeles (UCLA) this past December. Two southern Californians organized the workshop: Clarice Aiello heads UCLA’s Quantum Biology Tech lab. Thorsten Ritz, of the University of California, Irvine, cofounded a branch of quantum biology.

Clarice logo

Quantum biology served as the conference’s backdrop. According to conventional wisdom, quantum phenomena can’t influence biology significantly: Biological systems have high temperatures, many particles, and fluids. Quantum phenomena, such as entanglement (a relationship that quantum particles can share), die quickly under such conditions.

Yet perhaps some survive. Quantum biologists search for biological systems that might use quantum resources. Then, they model and measure the uses and resources. Three settings (at least) have held out promise during the past few decades: avian navigation, photosynthesis, and olfaction. You can read about them in this book, cowritten by a conference participant for the general public. I’ll give you a taste (or a possibly quantum smell?) by sketching the avian-navigation proposal, developed by Thorsten and colleagues.

Bird + flower

Birds migrate southward during the autumn and northward during the spring. How do they know where to fly? At least partially by sensing the Earth’s magnetic field, which leads compass needles to point northward. How do birds sense the field?

Possibly with a protein called “cryptochrome.” A photon (a particle of light) could knock an electron out of one part of the protein and into another part. Each part would have one electron that lacked a partner. The electrons would share entanglement. One electron would interact with the Earth’s magnetic field differently than its partner, because its surroundings would differ. (Experts: The electrons would form a radical pair. One electron would neighbor different atoms than the other, so the electron would experience a different local magnetic field. The discrepancy would change the relative phase between the electrons’ spins.) The discrepancy could affect the rate at which the chemical system undergoes certain reactions. Which reactions occur could snowball into larger and larger effects, eventually signaling to the brain where the bird should fly.

Angry bird

Quantum mechanics and life rank amongst the universe’s mysteries. How could a young researcher resist the combination? A postdoc warned me away, one lunchtime at the start of my PhD. Quantum biology had enjoyed attention several years earlier, he said, but noise had obscured the experimental data. Controversy marred the field.

I ate lunch with that postdoc in 2013. Interest in quantum biology is reviving, as evidenced by the conference. Two reasons suggested themselves: new technologies and new research avenues. For example, Thorsten described the disabling and deletion of genes that code for cryptochrome. Such studies require years’ more work but might illuminate whether cryptochrome affects navigation.

Open door

The keynote speaker, Harvard’s Misha Lukin, illustrated new technologies and new research avenues. Misha’s lab has diamonds that contain quantum defects, which serve as artificial atoms. The defects sense tiny magnetic fields and temperatures. Misha’s group applies these quantum sensors to biology problems.

For example, different cells in an embryo divide at different times. Imagine reversing the order in which the cells divide. Would the reversal harm the organism? You could find out by manipulating the temperatures in different parts of the embryo: Temperature controls the rate at which cells divide.

Misha’s team injected nanoscale diamonds into a worm embryo. (See this paper for a related study.) The diamonds reported the temperature at various points in the worm. This information guided experimentalists who heated the embryo with lasers.

The manipulated embryos grew into fairly normal adults. But their cells, and their descendants’ cells, cycled through the stages of life slowly. This study exemplified, to me, one of the most meaningful opportunities for quantum physicists interested in biology: to develop technologies and analyses that can answer biology questions.

Thermometer

I mentioned, in an earlier blog post, another avenue emerging in quantum biology: Physicist Matthew Fisher proposed a mechanism by which entanglement might enhance coordinated neuron firing. My collaborator Elizabeth Crosson and I analyzed how the molecules in Matthew’s proposal—Posner clusters—could process quantum information. The field of Posner quantum biology had a population of about two, when Elizabeth and I entered, and I wondered whether anyone would join us.

The conference helped resolve my uncertainty. Three speakers (including me) presented work based on Matthew’s; two other participants were tilling the Posner soil; and another speaker mentioned Matthew’s proposal. The other two Posner talks related data from three experiments. The experimentalists haven’t finished their papers, so I won’t share details. But stay tuned.

Posner 2

Posner molecule (image by Swift et al.)

Clarice and Thorsten’s conference reminded me of a conference I’d participated in at the end of my PhD: Last month, I moonlighted as a quantum biologist. In 2017, I moonlighted as a quantum-gravity theorist. Two years earlier, I’d been dreaming about black holes and space-time. At UCLA, I was finishing the first paper I’ve coauthored with biophysicists. What a toolkit quantum information theory and thermodynamics provide, that they can unite such disparate fields.

The contrast—on top of what I learned at UCLA—filled my mind for weeks. And it reminded me of the description of asexual reproduction that we heard from the conference’s flatworm enthusiast. According to Western Michigan University’s Wendy Beane, a flatworm “glues its butt down, pops its head off, and grows a new one. Y’know. As one does.”

I hope I never flinch from popping my head off and growing a new one—on my quantum-information-thermodynamics spine—whenever new science calls for figuring out.

 

With thanks to Clarice, Thorsten, and UCLA for their invitation and hospitality.

An equation fit for a novel

Archana Kamal was hunting for an apartment in Cambridge, Massachusetts. She was moving to MIT, to work as a postdoc in physics. The first apartment she toured had housed John Updike, during his undergraduate career at Harvard. No other apartment could compete; Archana signed the lease.

The apartment occupied the basement of a red-brick building covered in vines. The rooms spanned no more than 350 square feet. Yet her window opened onto the neighbors’ garden, whose leaves she tracked across the seasons. And Archana cohabited with history.

Apartment photos

She’s now studying the universe’s history, as an assistant professor of physics at the University of Massachusetts Lowell. The cosmic microwave background (CMB) pervades the universe. The CMB consists of electromagnetic radiation, or light. Light has particle-like properties and wavelike properties. The wavelike properties include wavelength, the distance between successive peaks. Long-wavelength light includes red light, infrared light, and radio waves. Short-wavelength light includes blue light, ultraviolet light, and X-rays. Light of one wavelength and light of another wavelength are said to belong to different modes.

Wavelength

Does the CMB have nonclassical properties, impossible to predict with classical physics but (perhaps) predictable with quantum theory? It does, according to the theory of inflation. During a short interval after the Big Bang, the theory says, the universe expanded very quickly: Spacetime stretched. Inflation explains features of our universe, though we don’t know what mechanism would have effected the expansion.

According to inflation, around the Big Bang time, all the light in the universe crowded together. The photons (particles of light) interacted, entangling (developing strong quantum correlations). Spacetime then expanded, and the photons separated. But they might retain entanglement.

Detecting that putative entanglement poses challenges. For instance, the particles that you’d need to measure could produce a signal too weak to observe. Cosmologists have been scratching their heads about how to observe nonclassicality in the CMB. One team—Nishant Agarwal at UMass Lowell and Sarah Shandera at Pennsylvania State University—turned to Archana for help.

A sky full of stars

Archana studies the theory of open quantum systems, quantum systems that interact with their environments. She thinks most about systems such as superconducting qubits, tiny circuits with which labs are building quantum computers. But the visible universe constitutes an open quantum system.

We can see only part of the universe—or, rather, only part of what we believe is the whole universe. Why? We can see only stuff that’s emitted light that has reached us, and light has had only so long to travel. But the visible universe interacts (we believe) with stuff we haven’t seen. For instance, according to the theory of inflation, that rapid expansion stretched some light modes’ wavelengths. Those wavelengths grew longer than the visible universe. We can’t see those modes’ peak-to-peak variations or otherwise observe the modes, often called “frozen.” But the frozen modes act as an environment that exchanges information and energy with the visible universe.

We describe an open quantum system’s evolution with a quantum master equation, which I blogged about four-and-a-half years ago. Archana and collaborators constructed a quantum master equation for the visible universe. The frozen modes, they found, retain memories of the visible universe. (Experts: the bath is non-Markovian.) Next, they need to solve the equation. Then, they’ll try to use their solution to identify quantum observables that could reveal nonclassicality in the CMB.
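To make “quantum master equation” concrete, here is a minimal numerical sketch. It is not Archana’s equation (hers involves a non-Markovian bath); it is the textbook Markovian, Lindblad-type case of a single qubit undergoing pure dephasing, integrated with a naive Euler step:

```python
import numpy as np

# Minimal sketch (an illustration, not the paper's equation): the
# Markovian Lindblad master equation for qubit dephasing,
#     drho/dt = gamma * (Z rho Z - rho),
# where Z is the Pauli-z matrix and rho is the qubit's density matrix.

Z = np.diag([1.0, -1.0])  # Pauli-z

def dephase(rho, gamma=1.0, dt=1e-3, steps=2000):
    """Evolve a 2x2 density matrix under pure dephasing (Euler steps)."""
    for _ in range(steps):
        rho = rho + dt * gamma * (Z @ rho @ Z - rho)
    return rho

plus = np.full((2, 2), 0.5)  # the state |+><+|, maximal coherence
rho_t = dephase(plus)

print(abs(rho_t[0, 1]))  # off-diagonal coherence decays toward zero
print(rho_t[0, 0])       # populations remain at 1/2
```

The off-diagonal “coherence” decays exponentially while the populations stay fixed, the signature of pure dephasing. A non-Markovian environment, like the frozen modes, can instead feed information back into the system, which is what makes Archana’s equation harder to solve.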

Frozen modes

Archana’s project caught my fancy for two reasons. First, when I visited her in October, I was collaborating on a related project. My coauthors and I were concocting a scheme for detecting nonclassical correlations in many-particle systems by measuring large-scale properties. Our paper debuted last month. It might—with thought and a dash of craziness—be applied to detect nonclassicality in the CMB. Archana’s explanation improved my understanding of our scheme’s potential. 

Second, Archana and collaborators formulated a quantum master equation for the visible universe. A quantum master equation for the visible universe. The phrase sounded romantic to me.[1] It merited a coauthor who’d seized on an apartment lived in by a Pulitzer Prize-winning novelist.

Archana’s cosmology and Updike stories reminded me of one reason why I appreciate living in the Boston area: History envelops us here. Last month, while walking to a grocery, I found a sign that marks the building in which the poet e. e. cummings was born. My walking partner then generously tolerated a recitation of cummings’s “anyone lived in a pretty how town.” History enriches our lives—and some of it might contain entanglement.

 

[1] It might sound like gobbledygook to you, if I’ve botched my explanations of the terminology.

With thanks to Archana and the UMass Lowell Department of Physics and Applied Physics for their hospitality and seminar invitation.

The paper that begged for a theme song

A year ago, the “I’m a little teapot” song kept playing in my head.

I was finishing a collaboration with David Limmer, a theoretical chemist at the University of California Berkeley. David studies quantum and classical systems far from equilibrium, including how these systems exchange energy and information with their environments. Example systems include photoisomers.

A photoisomer is a molecular switch. These switches appear across nature and technologies. We have photoisomers in our eyes, and experimentalists have used photoisomers to boost solar-fuel storage. A photoisomer has two functional groups, or collections of bonded atoms, attached to a central axis. 

Photoisomer

Your average-Joe photoisomer spends much of its life in equilibrium, exchanging heat with room-temperature surroundings. The molecule has the shape above, called the cis configuration. Imagine shining a laser or sunlight on the photoisomer. The molecule can absorb a photon, or particle of light, gaining energy. The energized molecule has the opportunity to switch: One chemical group can rotate downward. The molecule will then occupy its trans configuration.

Switch

The molecule now has more energy than it had while in equilibrium, albeit less energy than it had right after absorbing the photon. The molecule can remain in this condition for a decent amount of time. (Experts: The molecule occupies a metastable state.) That is, the molecule can store sunlight. For that reason, experimentalists at Harvard and MIT attached photoisomers to graphene nanotubules, improving the nanotubules’ storage of solar fuel.

Teapot 1

With what probability does a photoisomer switch upon absorbing a photon? This question has resisted easy answering, because photoisomers prove difficult to model: They’re small, quantum, and far from equilibrium. People have progressed by making assumptions, but such assumptions can lack justifications or violate physical principles. David wanted to derive a simple, general bound—of the sort in which thermodynamicists specialize—on a photoisomer’s switching probability.

He had a hunch as to how he could derive such a bound. I’ve blogged, many times, about thermodynamic resource theories. Thermodynamic resource theories are simple models, developed in quantum information theory, for exchanges of heat, particles, information, and more. These models involve few assumptions: the conservation of energy, quantum theory, and, to some extent, the existence of a large environment (Markovianity). With such a model, David suspected, he might derive his bound.

Teapot 2

I knew nothing about photoisomers when I met David, but I knew about thermodynamic resource theories. I’d contributed to their development, to the theorems that have piled up in the resource-theory corner of quantum information theory. Then, the corner had given me claustrophobia. Those theorems felt so formal, abstract, and idealized. Formal, abstract theory has drawn me ever since I started studying physics in college. But did resource theories model physical reality? Could they impact science beyond our corner of quantum information theory? Did resource theories matter?

I called for connecting thermodynamic resource theories to physical reality four years ago, in a paper that begins with an embarrassing story about me. Resource theorists began designing experiments whose results should agree with our theorems. Theorists also tried to improve the accuracy with which resource theories model experimentalists’ limitations. See David’s and my paper for a list of these achievements. They delighted me, as a step toward the broadening of resource theories’ usefulness. 

Like any first step, this step pointed toward opportunities. Experiments designed to test our theorems essentially test quantum mechanics. Scientists have tested quantum mechanics for decades; we needn’t test it much more. Such experimental proposals can push experimentalists to hone their abilities, but I hoped that the community could accomplish more. We should be able to apply resource theories to answer questions cultivated in other fields, such as condensed matter and chemistry. We should be useful to scientists outside our corner of quantum information.

Teapot 3

David’s idea lit me up like photons on a solar-fuel-storage device. He taught me about photoisomers, I taught him about resource theories, and we derived his bound. Our proof relies on the “second laws of thermodynamics.” These abstract resource-theory results generalize the second law of thermodynamics, which helps us understand why time flows in only one direction. We checked our bound against numerical simulations (experts: of Lindbladian evolution). Our bound is fairly tight if the photoisomer has a low probability of absorbing a photon, as in the Harvard-MIT experiment. 
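For the curious, here is a sketch of what those “second laws” say, as I remember the standard resource-theory statement (this is the general result, not the specific photoisomer bound from our paper):

```latex
% "Second laws" of thermodynamics, sketched for states diagonal in the
% energy eigenbasis. gamma denotes the thermal state e^{-H / k_B T} / Z,
% and p_j, q_j denote the eigenvalues of rho and of gamma:
F_\alpha(\rho) := k_B T \, D_\alpha(\rho \,\|\, \gamma),
\qquad
D_\alpha(\rho \,\|\, \gamma)
  = \frac{1}{\alpha - 1} \log \sum_j p_j^{\alpha} \, q_j^{1-\alpha} .
% A transformation rho -> rho' is possible under thermal operations only
% if F_alpha(rho') <= F_alpha(rho) for every alpha >= 0. The limit
% alpha -> 1 recovers the ordinary free energy and ordinary second law.
```

Whereas ordinary thermodynamics imposes one constraint (the free energy must not increase), these small quantum systems must satisfy a whole family of constraints, one for each value of α.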

Experts: We also quantified the photoisomer’s coherences relative to the energy eigenbasis. Coherences can’t boost the switching probability, we concluded. But, en route to this conclusion, we found that the molecule is a natural realization of a quantum clock. Our quantum-clock modeling extends to general dissipative Landau-Zener transitions, prevalent across condensed matter and chemistry.

Teapot 4

As I worked on our paper one day, a jingle unfolded in my head. I recognized the tune first: “I’m a little teapot.” I hadn’t sung that much since kindergarten, I realized. Lyrics suggested themselves: 

I’m a little isomer
with two hands.
Here is my cis pose;
here is my trans.

Stand me in the sunlight;
watch me spin.
I’ll keep solar
energy in!

The song lodged itself in my head for weeks. But if you have to pay an earworm to collaborate with David, do.