Standing back at Stanford

T-shirt 1

This T-shirt came to mind last September. I was standing in front of a large silver-colored table littered with wires, cylinders, and tubes. Greg Bentsen was pointing at components and explaining their functions. He works as a PhD student in Monika Schleier-Smith’s lab at Stanford.

Monika’s group manipulates rubidium atoms. A few thousand atoms sit in one of the cylinders. That cylinder contains another cylinder, an optical cavity, that contains the atoms. A mirror caps each of the cavity’s ends. Light in the cavity bounces off the mirrors.

Light bounces off your bathroom mirror similarly. But we can describe your bathroom’s light accurately with Maxwellian electrodynamics, a theory developed during the 1800s. We describe the cavity’s light with quantum electrodynamics (QED). Hence we call the lab’s set-up cavity QED.

The light interacts with the atoms, entangling with them. The entanglement imprints information about the atoms on the light. Suppose that light escaped from the cavity. Greg and friends could measure the light, then infer about the atoms’ quantum state.

A little light leaks through the mirrors, though most light bounces off. From leaked light, you can infer about the ensemble of atoms. You can’t infer about individual atoms. For example, consider an atom’s electrons. Each electron has a quantum property called a spin. We sometimes imagine the spin as an arrow that points upward or downward. Together, the electrons’ spins form the atom’s joint spin. You can tell, from leaked light, whether one atom’s spin points upward. But you can’t tell which atom’s spin points upward. You can’t see the atoms for the ensemble.

Monika’s team can. They’ve cut a hole in their cylinder. Light escapes the cavity through the hole. The light from the hole’s left-hand edge carries information about the leftmost atom, and so on. The team develops a photograph of the line of atoms. Imagine holding a photograph of a line of people. You can point to one person, and say, “Aha! She’s the xkcd fan.” Similarly, Greg and friends can point to one atom in their photograph and say, “Aha! That atom has an upward-pointing spin.” Monika’s team is developing single-site imaging.

Solvay

Aha! She’s the xkcd fan.

Monika’s team plans to image atoms in such detail that they won’t need light to leak through the mirrors. Light leakage creates problems, including by entangling the atoms with the world outside the cavity. Suppose you had to diminish the amount of light that leaks from a rubidium cavity. How should you proceed?

Tell the mirrors,

T-shirt 2

You should lengthen the cavity. Why? Imagine a photon, a particle of light, in the cavity. It zooms down the cavity’s length, hits a mirror, bounces off, retreats up the cavity’s length, hits the other mirror, and bounces off. The photon repeats this process until a mirror hit fails to generate a bounce. The mirror transmits the photon to the exterior; the photon leaks out. How can you reduce leaks? By preventing photons from hitting mirrors so often, by forcing the photons to zoom longer, by lengthening the cavity, by shifting the mirrors outward.
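For concreteness, here’s a back-of-envelope sketch of that reasoning in Python. The mirror transmission and the cavity lengths below are made-up, order-of-magnitude numbers, not the parameters of Monika’s cavity.

```python
# Back-of-envelope estimate of how cavity length affects photon leakage.
c = 3.0e8   # speed of light (m/s)
T = 1e-5    # assumed probability that a mirror transmits the photon per hit

def photon_lifetime(L):
    """Rough photon lifetime in a two-mirror cavity of length L (meters).

    One round trip takes 2L/c and involves two mirror hits, so the photon
    escapes with probability ~2T per round trip. It survives ~1/(2T) round
    trips on average, so its lifetime is ~(2L/c) / (2T) = L / (c T).
    """
    return L / (c * T)

for L in (0.01, 0.02, 0.04):   # 1 cm, 2 cm, 4 cm
    print(f"L = {100 * L:4.1f} cm : photon lifetime ~ {photon_lifetime(L) * 1e6:.1f} microseconds")
```

Doubling the length doubles the photon’s expected lifetime inside the cavity, which is to say it halves the leak rate.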

So Greg hinted, beside that silver-colored table in Monika’s lab. The hint struck a chord: I recognized the impulse to

T-shirt 3

The impulse had led me to Stanford.

Weeks earlier, I’d written my first paper about quantum chaos and information scrambling. I’d sat and read and calculated and read and sat and emailed and written. I needed to stand up, leave my cavity, and image my work from other perspectives.

Stanford physicists had written quantum-chaos papers I admired. So I visited, presented about my work, and talked. Patrick Hayden introduced me to a result that might help me apply my result to another problem. His group helped me simplify a mathematical expression. Monika reflected that a measurement scheme I’d proposed sounded not unreasonable for cavity QED.

And Greg led me to recognize the principle behind my visit: Sometimes, you have to

T-shirt 4

to move forward.

With gratitude to Greg, Monika, Patrick, and the rest of Monika’s and Patrick’s groups for their time, consideration, explanations, and feedback. With thanks to Patrick and Stanford’s Institute for Theoretical Physics for their hospitality.

The power of information

Sara Imari Walker studies ants. Her entomologist colleague Gabriele Valentini cultivates ant swarms. Gabriele coaxes a swarm from its nest, hides the nest, and offers two alternative nests. Gabriele observes the ants’ responses, then analyzes their data with Sara.

Sara doesn’t usually study ants. She trained in physics, information theory, and astrobiology. (Astrobiology is the study of life; life’s origins; and conditions amenable to life, on Earth and anywhere else life may exist.) Sara analyzes how information reaches, propagates through, and manifests in the swarm.

Some ants inspect one nest; some, the other. Few ants encounter both choices. Yet most of the ants choose simultaneously. (How does Gabriele know when an ant chooses? Decided ants carry other ants toward the chosen nest. Undecided ants don’t.)

Gabriele and Sara plotted each ant’s status (decided or undecided) at each instant. All the ants’ lines start in the “undecided” region, high up in the graph. Most lines drop to the “decided” region together. Physicists call such dramatic, large-scale changes in many-particle systems “phase transitions.” The swarm transitions from the “undecided” phase to the “decided,” as moisture transitions from vapor to downpour.

Sara presentation

Sara versus the ants

Look from afar, and you’ll see evidence of a hive mind: The lines clump and slump together. Look more closely, and you’ll find lags between ants’ decisions. Gabriele and Sara grouped the ants according to their behaviors. Sara explained the grouping at a workshop this spring.

The green lines, she said, are undecided ants.

My stomach dropped like Gabriele and Sara’s ant lines.

People call data “cold” and “hard.” Critics lambast scientists for not appealing to emotions. Politicians weave anecdotes into their numbers, to convince audiences to care.

But when Sara spoke, I looked at her green lines and thought, “That’s me.”

I’ve blogged about my indecisiveness. Postdoc Ning Bao and I formulated a quantum voting scheme in which voters can superpose—form quantum combinations of—options. Usually, when John Preskill polls our research group, I abstain from voting. Politics, and questions like “Does building a quantum computer require only engineering or also science?”,1 have many facets. I want to view such questions from many angles, to pace around the questions as around a sculpture, to hear other onlookers, to test my impressions on them, and to cogitate before choosing.2 However many perspectives I’ve gathered, I’m missing others worth seeing. I commiserated with the green-line ants.

Sculpture-question.001

I first met Sara in the building behind the statue. Sara earned her PhD in Dartmouth College’s physics department, with Professor Marcelo Gleiser.

Sara presented about ants at a workshop hosted by the Beyond Center for Fundamental Concepts in Science at Arizona State University (ASU). The organizers, Paul Davies of Beyond and Andrew Briggs of Oxford, entitled the workshop “The Power of Information.” Participants represented information theory, thermodynamics and statistical mechanics, biology, and philosophy.

Paul and Andrew posed questions to guide us: What status does information have? Is information “a real thing” “out there in the world”? Or is information only a mental construct? What roles can information play in causation?

We paced around these questions as around a Chinese viewing stone. We sat on a bench in front of those questions, stared, debated, and cogitated. We taught each other about ants, artificial atoms, nanoscale machines, and models for information processing.

Stone.001

Chinese viewing stone in Yuyuan Garden in Shanghai

I wonder if I’ll acquire opinions about Paul and Andrew’s questions. Maybe I’ll meander from “undecided” to “decided” over a career. Maybe I’ll phase-transition like Sara’s ants. Maybe I’ll remain near the top of her diagram, a green holdout.

I know little about information’s power. But Sara’s plot revealed one power of information: Information can move us—from homeless to belonging, from ambivalent to decided, from a plot’s top to its bottom, from passive listener to finding yourself in a green curve.

 

With thanks to Sara Imari Walker, Paul Davies, Andrew Briggs, Katherine Smith, and the Beyond Center for their hospitality and thoughts.

 

1By “only engineering,” I mean not “merely engineering” pejoratively, but “engineering and no other discipline.”

2I feel compelled to perform these activities before choosing. I try to. Psychological experiments, however, suggest that I might decide before realizing that I’ve decided.

Glass beads and weak-measurement schemes

Richard Feynman fiddled with electronics in a home laboratory, growing up. I fiddled with arts and crafts.1 I glued popsicle sticks, painted plaques, braided yarn, and designed greeting cards. Of the supplies in my family’s crafts box, I adored the beads most. Of the beads, I favored the glass ones.

I would pour them on the carpet, some weekend afternoons. I’d inherited a hodgepodge: The beads’ sizes, colors, shapes, trimmings, and craftsmanship varied. No property divided the beads into families whose members looked like they belonged together. But divide the beads I tried. I might classify them by color, then subdivide classes by shape. The color and shape groupings precluded me from grouping by size. But, by loosening my original classification and combining members from two classes, I might incorporate trimmings into the categorization. I’d push my classification scheme as far as I could. Then, I’d rake the beads together and reorganize them according to different principles.

Why have I pursued theoretical physics? many people ask. I have many answers. They include “Because I adored organizing craft supplies at age eight.” I craft and organize ideas.

Beads

I’ve blogged about the out-of-time-ordered correlator (OTOC), a signature of how quantum information spreads throughout a many-particle system. Experimentalists want to measure the OTOC, to learn how information spreads. But measuring the OTOC requires tight control over many quantum particles.
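To make the object concrete, here is a brute-force numerical illustration of the standard definition, F(t) = Tr[W(t)† V† W(t) V] / 2^n, for a toy spin chain at infinite temperature. This is not the measurement scheme discussed below; exact diagonalization works only for a handful of spins, and the chain, couplings, and operators are illustrative choices.

```python
# Brute-force OTOC for a small, nonintegrable Ising chain (illustrative toy model).
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, site, n):
    """Embed a single-qubit operator at position `site` in an n-qubit chain."""
    return reduce(np.kron, [op if i == site else I2 for i in range(n)])

n = 6
# Transverse- plus longitudinal-field Ising chain, a standard toy model for chaos.
H = sum(site_op(Z, i, n) @ site_op(Z, i + 1, n) for i in range(n - 1))
H = H + sum(1.05 * site_op(X, i, n) + 0.5 * site_op(Z, i, n) for i in range(n))

W = site_op(Z, 0, n)       # "butterfly" operator on the first spin
V = site_op(Z, n - 1, n)   # probe operator on the last spin
evals, evecs = np.linalg.eigh(H)

def otoc(t):
    """F(t) = Tr[W(t)^dag V^dag W(t) V] / 2^n at infinite temperature."""
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    Wt = U.conj().T @ W @ U
    return np.trace(Wt.conj().T @ V.conj().T @ Wt @ V).real / 2**n

for t in (0.0, 2.0, 4.0, 8.0):
    print(f"t = {t:3.1f} : F(t) = {otoc(t):+.3f}")
```

F(0) equals one, because W and V act on different spins and so commute. As information about the first spin spreads down the chain, F(t) decays.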

I proposed a scheme for measuring the OTOC, with help from Chapman University physicist Justin Dressel. The scheme involves weak measurements. Weak measurements barely disturb the systems measured. (Most measurements of quantum systems disturb the measured systems. So intuited Werner Heisenberg when formulating his uncertainty principle.)

I had little hope for the weak-measurement scheme’s practicality. Consider the stereotypical experimentalist’s response to a stereotypical experimental proposal by a theorist: Oh, sure, we can implement that—in thirty years. Maybe. If the pace of technological development doubles. I expected to file the weak-measurement proposal in the “unfeasible” category.

But experimentalists started collaring me. The scheme sounds reasonable, they said. How many trials would one have to perform? Did the proposal require ancillas, helper systems used to control the measured system? Must each ancilla influence the whole measured system, or could an ancilla interact with just one particle? How did this proposal compare with alternatives?

I met with a cavity-QED2 experimentalist and a cold-atoms expert. I talked with postdocs over Skype, with heads of labs at Caltech, with grad students in Taiwan, and with John Preskill in his office. I questioned an NMR3 experimentalist over lunch and fielded superconducting-qubit4 questions in the sunshine. I read papers, reread papers, and powwowed with Justin.

I wouldn’t have managed half so well without Justin and without Brian Swingle. Brian and coauthors proposed the first OTOC-measurement scheme. He reached out after finding my first OTOC paper.

According to that paper, the OTOC is a moment of a quasiprobability.5 How does that quasiprobability look, we wondered? How does it behave? What properties does it have? Our answers appear in a paper we released with Justin this month. We calculate the quasiprobability in two examples, prove properties of the quasiprobability, and argue that the OTOC motivates generalizations of quasiprobability theory. We also enhance the weak-measurement scheme and analyze it.

Amidst that analysis, in a 10 x 6 table, we classify glass beads.

Table

We inventoried our experimental conversations and distilled them. We culled measurement-scheme features analogous to bead size, color, and shape. Each property labels a row in the table. Each measurement scheme labels a column. Each scheme has, I learned, gold flecks and dents, hues and mottling, an angle at which it catches the light.

I’ve kept most of the glass beads that fascinated me at age eight. Some of the beads have dispersed to necklaces, picture frames, and eyeglass leashes. I moved the remnants, a few years ago, to a compartmentalized box. Doesn’t it resemble the table?

Box

That’s why I work at the IQIM.

 

1I fiddled in a home laboratory, too, in a garage. But I lived across the street from that garage. I lived two rooms from an arts-and-crafts box.

2Cavity QED consists of light interacting with atoms in a box.

3Lots of nuclei manipulated with magnetic fields. “NMR” stands for “nuclear magnetic resonance.” MRI machines, used to scan brains, rely on NMR.

4Superconducting circuits are tiny, cold quantum circuits.

5A quasiprobability resembles a probability but behaves more oddly: Probabilities range between zero and one; quasiprobabilities can dip below zero. Think of a moment as like an average.

With thanks to all who questioned me; to all who answered questions of mine; to my wonderful coauthors; and to my parents, who stocked the crafts box.

The weak shall inherit the quasiprobability.

Justin Dressel’s office could understudy for the archetype of a physicist’s office. A long, rectangular table resembles a lab bench. Atop the table perches a tesla coil. A larger tesla coil perches on Justin’s desk. Rubik’s cubes and other puzzles surround a computer and papers. In front of the desk hangs a whiteboard.

A puzzle filled the whiteboard in August. Justin had written a model for a measurement of a quasiprobability. I introduced quasiprobabilities here last Halloween. Quasiprobabilities are to probabilities as ebooks are to books: Ebooks resemble books but can respond to touchscreen interactions through sounds and animation. Quasiprobabilities resemble probabilities but behave in ways that probabilities don’t.

tesla-coil-2

A tesla coil of Justin Dressel’s

 

Let p denote the probability that any given physicist keeps a tesla coil in his or her office. p ranges between zero and one. Quasiprobabilities can dip below zero. They can assume nonreal values, dependent on the imaginary number i = \sqrt{-1}. Probabilities describe nonquantum phenomena, like tesla-coil collectors,1 and quantum phenomena, like photons. Quasiprobabilities appear nonclassical.2,3

We can infer the tesla-coil probability by observing many physicists’ offices:

\text{Prob(any given physicist keeps a tesla coil in his/her office)}  =  \frac{ \text{\# physicists who keep tesla coils in their offices} }{ \text{\# physicists} } \, .

We can infer quasiprobabilities from weak measurements, Justin explained. You can measure the number of tesla coils in an office by shining light on the office, correlating the light’s state with the tesla-coil number, and capturing the light on photographic paper. The correlation needn’t affect the tesla coils. Observing a quantum state, in contrast, changes the state, by the Uncertainty Principle heralded by Heisenberg.

We could observe a quantum system weakly. We’d correlate our measurement device (the analogue of light) with the quantum state (the analogue of the tesla-coil number) unreliably. Imagine shining a dull light on an office for a brief duration. Shadows would obscure our photo. We’d have trouble inferring the number of tesla coils. But the dull, brief light burst would affect the office less than a strong, long burst would.

Justin explained how to infer a quasiprobability from weak measurements. He’d explained on account of an action that others might regard as weak: I’d asked for help.

whiteboard

Chaos had seized my attention a few weeks earlier. Chaos is a branch of math and physics that involves phenomena we can’t predict, like weather. I had forayed into quantum chaos for reasons I’ll explain in later posts. I was studying a function F(t) that can flag chaos in cold atoms, black holes, and superconductors.

I’d derived a theorem about F(t). The theorem involved a UFO of a mathematical object: a probability amplitude that resembled a probability but could assume nonreal values. I presented the theorem to my research group, which was kind enough to provide feedback.

“Is this amplitude physical?” John Preskill asked. “Can you measure it?”

“I don’t know,” I admitted. “I can tell a story about what it signifies.”

“If you could measure it,” he said, “I might be more excited.”

You needn’t study chaos to predict that private clouds drizzled on me that evening. I was grateful to receive feedback from thinkers I respected, to learn of a weakness in my argument. Still, scientific works are creative works. Creative works carry fragments of their creators. A weakness in my argument felt like a weakness in me. So I took the step that some might regard as weak—by seeking help.

 

drizzle

Some problems, one should solve alone. If you wake me at 3 AM and demand that I solve the Schrödinger equation that governs a particle in a box, I should be able to comply4 (if you comply with my demand for justification for the need to solve the Schrödinger equation at 3 AM). One should struggle far into problems before seeking help.

Some scientists extend this principle into a ban on assistance. Some students avoid asking questions for fear of revealing that they don’t understand. Some boast about passing exams and finishing homework without the need to attend office hours. I call their attitude “scientific machismo.”

I’ve all but lived in office hours. I’ve interrupted lectures with questions every few minutes. I didn’t know if I could measure that probability amplitude. But I knew three people who might know. Twenty-five minutes after I emailed them, Justin replied: “The short answer is yes!”

sun

I visited Justin the following week, at Chapman University’s Institute for Quantum Studies. I sat at his bench-like table, eyeing the nearest tesla coil, as he explained. Justin had recognized my probability amplitude from studies of the Kirkwood-Dirac quasiprobability. Experimentalists infer the Kirkwood-Dirac quasiprobability from weak measurements. We could borrow these experimentalists’ techniques, Justin showed, to measure my probability amplitude.
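For concreteness, here is a minimal numpy sketch of the Kirkwood-Dirac quasiprobability in one common convention, Q(a, b) = ⟨b|a⟩⟨a|ρ|b⟩, evaluated for a single qubit. The state and the two bases are illustrative choices, not the objects from my theorem.

```python
# Kirkwood-Dirac quasiprobability of a qubit state, in the convention
# Q(a, b) = <b|a><a|rho|b> for two measurement bases {|a>} and {|b>}.
import numpy as np

z_basis = [np.array([1, 0], dtype=complex),                  # |0>, |1>
           np.array([0, 1], dtype=complex)]
x_basis = [np.array([1, 1], dtype=complex) / np.sqrt(2),     # |+>, |->
           np.array([1, -1], dtype=complex) / np.sqrt(2)]

psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)          # a Y eigenstate
rho = np.outer(psi, psi.conj())

Q = np.array([[np.vdot(b, a) * (a.conj() @ rho @ b) for b in x_basis]
              for a in z_basis])

print(np.round(Q, 3))                                        # nonreal entries appear
print("sum of entries:", np.round(Q.sum(), 3))               # still sums to 1
print("Z-basis marginal:", np.round(Q.sum(axis=1).real, 3))  # Born probabilities
print("X-basis marginal:", np.round(Q.sum(axis=0).real, 3))  # Born probabilities
```

The entries dip into the complex plane, yet they sum to one, and summing over either index returns ordinary Born probabilities. That mix of familiar and strange behavior is what makes quasiprobabilities quasi.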

The borrowing grew into a measurement protocol. The theorem grew into a paper. I plunged into quasiprobabilities and weak measurements, following Justin’s advice. John grew more excited.

The meek might inherit the Earth. But the weak shall measure the quasiprobability.

With gratitude to Justin for sharing his expertise and time; and to Justin, Matt Leifer, and Chapman University’s Institute for Quantum Studies for their hospitality.

Chapman’s community was gracious enough to tolerate a seminar from me about thermal states of quantum systems. You can watch the seminar here.

1Tesla-coil collectors consist of atoms described by quantum theory. But we can describe tesla-coil collectors without quantum theory.

2Readers foreign to quantum theory can interpret “nonclassical” roughly as “quantum.”

3Debate has raged about whether quasiprobabilities govern classical phenomena.

4I should be able also to recite the solutions from memory.

Happy Halloween from…the discrete Wigner function?

Do you hope to feel a breath of cold air on the back of your neck this Halloween? I’ve felt one literally: I earned my master’s in the icebox called “Ontario,” at the Perimeter Institute for Theoretical Physics. Perimeter’s colloquia1 take place in an auditorium blacker than a Quentin Tarantino film. Aephraim Steinberg presented a colloquium one air-conditioned May.

Steinberg experiments on ultracold atoms and quantum optics2 at the University of Toronto. He introduced an idea that reminds me of biting into an apple whose coating you’d thought consisted of caramel, then tasting blood: a negative (quasi)probability.

Probabilities usually range from zero upward. Consider Shirley Jackson’s short story The Lottery. Villagers in a 20th-century American village prepare slips of paper. The number of slips equals the number of families in the village. One slip bears a black spot. Each family receives a slip. Each family has a probability p > 0  of receiving the marked slip. What happens to the family that receives the black spot? Read Jackson’s story—if you can stomach more than a Tarantino film.

Jackson peeled off skin to reveal the offal of human nature. Steinberg’s experiments reveal the offal of Nature. I’d expect humaneness of Jackson’s villagers and nonnegativity of probabilities. But what looks like a probability and smells like a probability might be hiding its odor with Special-Edition Autumn-Harvest Febreeze.

febreeze

A quantum state resembles a set of classical3 probabilities. Consider a classical system that has too many components for us to track them all. Consider, for example, the cold breath on the back of your neck. The breath consists of air molecules at some temperature T. Suppose we measured the molecules’ positions and momenta. We’d have some probability p_1 of finding this particle here with this momentum, that particle there with that momentum, and so on. We’d have a probability p_2 of finding this particle there with that momentum, that particle here with this momentum, and so on. These probabilities form the air’s state.

We can tell a similar story about a quantum system. Consider the quantum light prepared in a Toronto lab. The light has properties analogous to position and momentum. We can represent the light’s state with a mathematical object similar to the air’s probability density.4 But this probability-like object can sink below zero. We call the object a quasiprobability, denoted by \mu.

If a \mu sinks below zero, the quantum state it represents is nonclassical, in a way kin to entanglement. Entanglement is a correlation stronger than any achievable with nonquantum systems. Quantum information scientists use entanglement to teleport information, encrypt messages, and probe the nature of space-time. I usually avoid this cliché, but since Halloween is approaching: Einstein called entanglement “spooky action at a distance.”

too-cute

Eugene Wigner and others defined quasiprobabilities shortly before Shirley Jackson wrote The Lottery. Quantum opticians use these \mu’s, because quantum optics and quasiprobabilities involve continuous variables. Examples of continuous variables include position: An air molecule can sit at this point (e.g., x = 0) or at that point (e.g., x = 1) or anywhere between the two (e.g., x = 0.001). The possible positions form a continuous set. Continuous variables model quantum optics as they model air molecules’ positions.

Information scientists use continuous variables less than we use discrete variables. A discrete variable assumes one of just a few possible values, such as 0 or 1, or trick or treat.

discrete

How a quantum-information theorist views Halloween.

Quantum-information scientists study discrete systems, such as electron spins. Can we represent discrete quantum systems with quasiprobabilities \mu as we represent continuous quantum systems? You bet your barmbrack.

Bill Wootters and others have designed quasiprobabilities for discrete systems. Wootters stipulated that his \mu have certain properties. The properties appear in this review.  Most physicists label properties “1,” “2,” etc. or “Prop. 1,” “Prop. 2,” etc. The Wootters properties in this review have labels suited to Halloween.

woo
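To see such negativity concretely, here is a minimal numpy sketch of a single-qubit discrete Wigner function. It uses one common choice of phase-point operators; conventions differ across the literature, so treat it as an illustration rather than the review’s exact definition.

```python
# Discrete Wigner function of a qubit, with phase-point operators
#   A(q, p) = (1/2) [ I + (-1)^q Z + (-1)^p X + (-1)^(q+p) Y ]
# and W(q, p) = (1/2) Tr[ rho A(q, p) ].
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def A(q, p):
    return 0.5 * (I2 + (-1)**q * Z + (-1)**p * X + (-1)**(q + p) * Y)

def discrete_wigner(rho):
    return np.array([[0.5 * np.trace(rho @ A(q, p)).real for p in (0, 1)]
                     for q in (0, 1)])

rho_up = np.array([[1, 0], [0, 0]], dtype=complex)   # the state |0><0|
b = -1 / np.sqrt(3)                                  # Bloch vector -(1,1,1)/sqrt(3)
rho_neg = 0.5 * (I2 + b * X + b * Y + b * Z)

for name, rho in [("|0>", rho_up), ("Bloch -(1,1,1)/sqrt(3)", rho_neg)]:
    W = discrete_wigner(rho)
    print(name, ":\n", np.round(W, 3), "  sum =", np.round(W.sum(), 3))
```

Both tables of numbers sum to one, like probability distributions. But the second state’s table contains a negative entry: a quasiprobability caught hiding its odor.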

Seeing (quasi)probabilities sink below zero feels like biting into an apple that you think has a caramel coating, then tasting blood. Did you eat caramel apples around age six? Caramel apples dislodge baby teeth. When baby teeth fall out, so does blood. Tasting blood can mark growth—as does the squeamishness induced by a colloquium that spooks a student. Who needs haunted mansions when you have negative quasiprobabilities?

 

For nonexperts:

1Weekly research presentations attended by a department.

2Light.

3Nonquantum (basically).

4Think “set of probabilities.”

What matters to me, and why?

Students at my college asked every Tuesday. They gathered in a white, windowed room near the center of campus. “We serve,” read advertisements, “soup, bread, and food for thought.” One professor or visitor would discuss human rights, family,  religion, or another pepper in the chili of life.

I joined occasionally. I listened by the window, in the circle of chairs that ringed the speaker. Then I ventured from college into physics.

The questions “What matters to you, and why?” have chased me through physics. I ask experimentalists and theorists, professors and students: Why do you do science? Which papers catch your eye? Why have you devoted to quantum information more years than many spouses devote to marriages?

One physicist answered with another question. Chris Jarzynski works as a professor at the University of Maryland. He studies statistical mechanics—how particles typically act and how often particles act atypically; how materials shine, how gases push back when we compress them, and more.

“How,” Chris asked, “should we quantify precision?”

Chris had in mind nonequilibrium fluctuation theorems. Out-of-equilibrium systems have large-scale properties, like temperature, that change significantly.1 Examples include white-bean soup cooling at a “What matters” lunch. The soup’s temperature drops to room temperature as the system approaches equilibrium.

Steaming soup

Nonequilibrium. Tasty, tasty nonequilibrium.

Some out-of-equilibrium systems obey fluctuation theorems. Fluctuation theorems are equations derived in statistical mechanics. Imagine a DNA molecule floating in a watery solution. Water molecules buffet the strand, which twitches. But the strand’s shape doesn’t change much. The DNA is in equilibrium.

You can grab the strand’s ends and stretch them apart. The strand will leave equilibrium as its length changes. Imagine pulling the strand to some predetermined length. You’ll have exerted energy.

How much? The amount will vary if you repeat the experiment. Why? This trial began with the DNA curled this way; that trial began with the DNA curled that way. During this trial, the water batters the molecule more; during that trial, less. These discrepancies block us from predicting how much energy you’ll exert. But suppose you pick a number W. We can form predictions about the probability that you’ll have to exert an amount W of energy.

How do we predict? Using nonequilibrium fluctuation theorems.

Fluctuation theorems matter to me, as Quantum Frontiers regulars know. Why? Because I’ve written enough fluctuation-theorem articles to test even a statistical mechanic’s patience. More seriously, why do fluctuation theorems matter to me?

Fluctuation theorems fill a gap in the theory of statistical mechanics. Fluctuation theorems relate nonequilibrium processes (like the cooling of soup) to equilibrium systems (like room-temperature soup). Physicists can model equilibrium. But we know little about nonequilibrium. Fluctuation theorems bridge from the known (equilibrium) to the unknown (nonequilibrium).

Bridge - theory

Experiments take place out of equilibrium. (Stretching a DNA molecule changes the molecule’s length.) So we can measure properties of nonequilibrium processes. We can’t directly measure properties of equilibrium processes, which we can’t perform experimentally. But we can measure an equilibrium property indirectly: We perform nonequilibrium experiments, then plug our data into fluctuation theorems.

Bridge - exprmt

Which equilibrium property can we infer about? A free-energy difference, denoted by ΔF. Every equilibrated system (every room-temperature soup) has a free energy F. F represents the energy that the system can exert, such as the energy available to stretch a DNA molecule. Imagine subtracting one system’s free energy, F1, from another system’s free energy, F2. The subtraction yields a free-energy difference, ΔF = F2 – F1. We can infer the value of a ΔF from experiments.
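The best-known such theorem bears Chris’s name. In one common sign convention (counting the work W that you exert on the DNA as positive), with T the solution’s temperature and k_{\rm B} Boltzmann’s constant, the Jarzynski equality reads

\langle e^{ - W / (k_{\rm B} T) } \rangle  =  e^{ - \Delta F / (k_{\rm B} T) } \, .

The angle brackets denote an average over infinitely many stretching trials. Measure W in trial after trial, average the exponentials, and the equality hands you ΔF.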

How should we evaluate those experiments? Which experiments can we trust, and which need repeating?

Those questions mattered little to me, before I met Chris Jarzynski. Bridging equilibrium with nonequilibrium mattered to me, and bridging theory with experiment. Not experimental nitty-gritty.

I deserved a dunking in white-bean soup.

Dunk 2

Suppose you performed infinitely many trials—stretched a DNA molecule infinitely many times. In each trial, you measured the energy exerted. You processed your data, then substituted into a fluctuation theorem. You could infer the exact value of ΔF.

But we can’t perform infinitely many trials. Imprecision mars our inference about ΔF. How does the imprecision relate to the number of trials performed?2

Chris and I adopted an information-theoretic approach. We quantified precision with a parameter \delta. Suppose you want to estimate ΔF with some precision. How many trials should you expect to need to perform? We bounded the number N_\delta of trials, using an entropy. The bound tightens an earlier estimate of Chris’s. If you perform N_\delta trials, you can estimate ΔF with a percent error that we estimated. We illustrated our results by modeling a gas.
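Here is a toy numerical illustration of the issue, not the bound from our paper: work values drawn from a made-up Gaussian distribution, for which the exact ΔF is known, plugged into a Jarzynski-style estimator at various numbers of trials.

```python
# Toy illustration: how a fluctuation-theorem estimate of Delta F
# sharpens as the number of trials N grows. Work values are Gaussian,
# so the exact answer is <W> - beta * var(W) / 2 (units with k_B T = 1).
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
mean_W, sigma_W = 5.0, 2.0                   # illustrative work statistics
exact_dF = mean_W - beta * sigma_W**2 / 2    # = 3.0 for these numbers

def estimate_dF(N):
    """Estimate Delta F from N trials via -(1/beta) ln < exp(-beta W) >."""
    W = rng.normal(mean_W, sigma_W, size=N)
    return -np.log(np.mean(np.exp(-beta * W))) / beta

for N in (10, 100, 1_000, 100_000):
    print(f"N = {N:>7} : Delta F estimate = {estimate_dF(N):5.2f}   (exact = {exact_dF})")
```

With few trials, the estimate tends to overshoot ΔF: the rare low-work trials that dominate the exponential average haven’t been sampled yet. How quickly that error shrinks with the number of trials is the kind of question Chris posed.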

I’d never appreciated the texture and richness of precision. But richness precision has: A few decimal places distinguish Albert Einstein’s general theory of relativity from Isaac Newton’s 17th-century mechanics. Particle physicists calculate constants of nature to many decimal places. Such a calculation earned a nod on physicist Julian Schwinger’s headstone. Precision serves as the bread and soup of much physics. I’d sniffed the importance of precision, but not tasted it, until questioned by Chris Jarzynski.

Schwinger headstone

The questioning continues. My college has discontinued its “What matters” series. But I ask scientist after scientist—thoughtful human being after thoughtful human being—“What matters to you, and why?” Asking, listening, reading, calculating, and self-regulating sharpen my answers to those questions. My answers often squish beneath the bread knife in my cutlery drawer of criticism. Thank goodness that repeating trials can reduce our errors.

Bread knife

1Or large-scale properties that will change. Imagine connecting the ends of a charged battery with a wire. Charge will flow from terminal to terminal, producing a current. You can measure, every minute, how quickly charge is flowing: You can measure how much current is flowing. The current won’t change much, for a while. But the current will die off as the battery nears depletion. A large-scale property (the current) appears constant but will change. Such a capacity to change characterizes nonequilibrium steady states (NESSes). NESSes form our second example of nonequilibrium states. Many-body localization forms a third, quantum example.

2Readers might object that scientists have tools for quantifying imprecision. Why not apply those tools? Because ΔF equals a logarithm, which is nonlinear. Other authors’ proposals appear in references 1-13 of our paper. Charlie Bennett addressed a related problem with his “acceptance ratio.” (Bennett also blogged about evil on Quantum Frontiers last month.)

Carbon copy

The anticipatory excitement of summer vacation endures in the teaching profession like no place outside childhood schooldays. Undoubtedly, it ranks high on the list of reasons that keep teachers teaching. Excitement ran high as the summer of 2015 started out the same way it had the three previous years at Caltech: I would show up, find a place to set up, and wait for orders from scientist David Boyd. Upon arrival in Dr. Yeh’s lab, I was surprised to find all the equipment and my work space very much untouched from last year. I was happy to find it this way, because it likely meant I could continue exactly where I had left off last summer. Later, I realized that David’s time since I left had been devoted to the development of a revolutionary new process for making graphene in large sheets at low temperatures. He did not have time to mess with my stuff, including the stepper motor I had been working on last summer.

dboyd-ncyeh

So, I place my glorified man purse in a bottom drawer, log into my computer, and wait. After maybe a half hour I hear footsteps set to a rhythm defined only by someone with purpose, and I’m sure it’s David. He peeks into the little office where I’m seated and, with a brief welcoming phrase, informs me that the goal for the summer is to wrap graphene around a thin copper wire using what he refers to as “your motor.” The motor is a stepper motor from an experiment David ran several years back. I wired and set up the track and motor last year for a proposed experiment, never realized, involving the growth of graphene strips. Due to the limited time I spend each summer at Caltech (8 weeks), that experiment came to a halt when I left and was to be continued this year. Instead, the focus veered from growing graphene strips to growing a two- to three-layer coating of graphene around a copper wire. The procedure remains the same; however, the substrate onto which the graphene grows changes. When growing graphene strips, the substrate is a 25-micron-thick copper foil, and after growth the graphene needs to be removed from the copper substrate. In our experiment we used a copper wire with an average thickness of 154 microns, and since the goal is to acquire a copper wire with graphene wrapped around it, there’s no need to remove the graphene.

Noteworthy is the great research effort devoted to the removal and transfer of graphene from copper to more useful substrates. After graphene growth, the challenge shifts to separating the graphene sheet from the copper substrate without damaging the graphene. Next, the graphene is transferred to various substrates for fabrication and other purposes. Current techniques for removing graphene from copper often damage the graphene, degrading the amazing electrical properties that have warranted great attention from R&D groups globally. A surprisingly simple new technique employs water to harmlessly remove graphene from copper. This technique has been shown to be effective on graphene grown by plasma-enhanced chemical vapor deposition (PECVD). PECVD is the technique employed by scientist David Boyd, and it is the focus of his paper published in Nature Communications in March of 2015.

So, David wants me to do something that has never been done before: grow graphene around a copper wire using a translation stage. The technique is to attach an Evenson cavity to the stage of a stepper-motor/threaded-rod apparatus and very slowly move the plasma along a length of copper wire. If successful, this could have far-reaching implications for uses of copper wire including, but certainly not limited to, corrosion prevention and thermal dissipation, thanks to the high thermal conductivity exhibited by graphene. With David granting me free rein in his lab, and Ph.D. candidate Chen-Chih Hsu agreeing to help, I felt I had all the tools to give it a go.

Setting up this experiment is similar to growing graphene on copper foil using PECVD, with a couple of modifications. First, prior to pumping the quartz tube down to a near vacuum, we place a single copper wire into the tube instead of thin copper foil. Also, special care is taken when setting up the translation stage, ensuring that the Evenson cavity, attached to the stage, travels perfectly parallel to the quartz tube so as not to bind against the tube during travel. For the first trial we decide to grow along a 5-cm-long section of copper wire at a translation speed of 25 microns per second, a very slow speed made possible by the stepper-motor apparatus. Per usual, after growth we check the sample using Raman spectroscopy. The graph shown here is the actual Raman spectrum taken in the lab immediately after growth. As the sample is scanned, the graph develops from right to left. We’re not expecting to see anything of much interest; however, hope and excitement steadily rise as the computer monitor shows a well-defined 2D peak (right peak), a G peak (middle peak), and a D peak (left peak) with a height indicative of many defects. Not the greatest of Raman spectra if we were shooting for defect-free monolayer graphene, but this is a very strong indication that we have 2-3 layer graphene on the copper wire. How could this be? Chen-Chih and I looked at each other incredulously. We quickly checked several locations along the wire and found the same result. We did it! Not only did we do it, but we did it on our first try! OK, now we can party. Streamers popped up into the air, a DJ with a turntable slid out from one of the walls, a perfectly synchronized kick line of cabaret dancers pranced about… okay, back to reality: we had a high-five and a back-and-forth “wow, that’s so cool!”
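As a rough aside on how such spectra get read: the code below encodes common rules of thumb for graphene Raman peaks. The thresholds and the example intensities are assumptions for illustration only; real analysis depends on the excitation wavelength, the substrate, and the peak widths, not just the heights.

```python
# Illustrative-only reading of graphene Raman peak heights.
# Thresholds below are rough rules of thumb, not calibrated values.

def read_raman(i_d, i_g, i_2d):
    """Crudely interpret D, G, and 2D peak intensities."""
    notes = []
    if i_d / i_g > 0.5:
        notes.append("sizable D/G ratio: many defects")
    if i_2d / i_g > 2:
        notes.append("large 2D/G ratio: consistent with monolayer")
    elif i_2d / i_g > 0.5:
        notes.append("moderate 2D/G ratio: consistent with few-layer (2-3) graphene")
    else:
        notes.append("weak 2D peak: likely many layers or graphitic carbon")
    return "; ".join(notes)

# Hypothetical peak heights loosely resembling the first-trial spectrum above
print(read_raman(i_d=0.8, i_g=1.0, i_2d=1.0))
```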

We knew, before we even reported our success to David and eventually Professor Yeh, that they would both immediately ask for the exact parameters of the experiment and whether the results were reproducible. So we set off to try to grow again. Unfortunately, the second run did not yield a copper wire coated with graphene. The third trial did not yield graphene, and neither did the fourth or fifth. We were, however, finding that multi-layer graphene was growing at the tips of the copper wire, but not in the middle sections. Our hypothesis at that point was that the existence of three edges at the tips of the wire aided the growth of graphene, compared to only two edges in the wire’s midsection (we are still not sure whether this is the whole story).

To repeat the experiment and attain the parameters for growth, we needed to address an issue with the experimental setup: we lacked control over the exact mixture of gases employed for CVD (chemical vapor deposition). In the initial setup of the experiment, that lack of control was acceptable, because the goal was only to discover whether growing graphene around a copper wire was possible. Now that we knew it was possible, attaining reproducible results required a deeper understanding of the process and, therefore, more precise control in our setup. Dr. Boyd agreed and ordered two leak valves, providing greater control over the exact recipe for the mixture of gases used for CVD. With this improved control, the hope is to be able to control, and therefore determine, the exact gas mixture yielding the much-needed parameters for reliable graphene growth on a copper wire.

Unfortunately, my last day at Caltech before returning to my regular teaching gig and the delivery of the leak valves fell on the same day. Fortunately, I will be returning this summer (2016) to continue the search for the elusive parameters. If we succeed, David Boyd’s and Chen-Chih’s names will, once again, show up in a prestigious journal (Nature, Science, one of those…) and, just maybe, mine will make it there too. For the first time ever.