# The weak shall inherit the quasiprobability.

Justin Dressel’s office could understudy for the archetype of a physicist’s office. A long, rectangular table resembles a lab bench. Atop the table perches a tesla coil. A larger tesla coil perches on Justin’s desk. Rubik’s cubes and other puzzles surround a computer and papers. In front of the desk hangs a whiteboard.

A puzzle filled the whiteboard in August. Justin had written a model for a measurement of a quasiprobability. I introduced quasiprobabilities here last Halloween. Quasiprobabilities are to probabilities as ebooks are to books: Ebooks resemble books but can respond to touchscreen interactions through sounds and animation. Quasiprobabilities resemble probabilities but behave in ways that probabilities don’t.

A tesla coil of Justin Dressel’s

Let $p$ denote the probability that any given physicist keeps a tesla coil in his or her office. $p$ ranges between zero and one. Quasiprobabilities can dip below zero. They can assume nonreal values, dependent on the imaginary number $i = \sqrt{-1}$. Probabilities describe nonquantum phenomena, like tesla-coil collectors,1 and quantum phenomena, like photons. Quasiprobabilities appear nonclassical.2,3

We can infer the tesla-coil probability by observing many physicists’ offices:

$\text{Prob(any given physicist keeps a tesla coil in his/her office)} = \frac{ \text{\# physicists who keep tesla coils in their offices} }{ \text{\# physicists} } \, .$

We can infer quasiprobabilities from weak measurements, Justin explained. You can measure the number of tesla coils in an office by shining light on the office, correlating the light’s state with the tesla-coil number, and capturing the light on photographic paper. The correlation needn’t affect the tesla coils. But observing a quantum state changes the state, by the Uncertainty Principle heralded by Heisenberg.

We could observe a quantum system weakly. We’d correlate our measurement device (the analogue of light) with the quantum state (the analogue of the tesla-coil number) unreliably. Imagine shining a dull light on an office for a brief duration. Shadows would obscure our photo. We’d have trouble inferring the number of tesla coils. But the dull, brief light burst would affect the office less than a strong, long burst would.
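For readers who like to tinker, here is a toy numerical sketch of the averaging idea (a deliberately crude model, not the full quantum treatment: each weak outcome equals the true spin value plus large Gaussian pointer noise, and measurement back-action is ignored). A single outcome reveals almost nothing, like a shadowy photo, but the average of many outcomes recovers the expectation value:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Qubit prepared so that <Z> = cos(theta).
theta = np.pi / 3
p_up = np.cos(theta / 2) ** 2          # Born probability of outcome z = +1
n_trials = 200_000

# Toy weak measurement: each recorded outcome is the spin value plus
# large Gaussian pointer noise. One outcome reveals almost nothing,
# but it also disturbs the system little.
z = rng.choice([1.0, -1.0], size=n_trials, p=[p_up, 1 - p_up])
sigma = 10.0                           # pointer spread >> signal: weak coupling
outcomes = z + rng.normal(0.0, sigma, size=n_trials)

print("true <Z>:", np.cos(theta))
print("weak-measurement estimate:", outcomes.mean())
```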

Justin explained how to infer a quasiprobability from weak measurements. He’d explained on account of an action that others might regard as weak: I’d asked for help.

Chaos had seized my attention a few weeks earlier. Chaos is a branch of math and physics that involves phenomena we can’t predict, like weather. I had forayed into quantum chaos for reasons I’ll explain in later posts. I was studying a function $F(t)$ that can flag chaos in cold atoms, black holes, and superconductors.

I’d derived a theorem about $F(t)$. The theorem involved a UFO of a mathematical object: a probability amplitude that resembled a probability but could assume nonreal values. I presented the theorem to my research group, which was kind enough to provide feedback.

“Is this amplitude physical?” John Preskill asked. “Can you measure it?”

“I don’t know,” I admitted. “I can tell a story about what it signifies.”

“If you could measure it,” he said, “I might be more excited.”

You needn’t study chaos to predict that private clouds drizzled on me that evening. I was grateful to receive feedback from thinkers I respected, to learn of a weakness in my argument. Still, scientific works are creative works. Creative works carry fragments of their creators. A weakness in my argument felt like a weakness in me. So I took the step that some might regard as weak—by seeking help.

Some problems, one should solve alone. If you wake me at 3 AM and demand that I solve the Schrödinger equation that governs a particle in a box, I should be able to comply (if you comply with my demand for justification for the need to solve the Schrödinger equation at 3 AM).4 One should struggle far into problems before seeking help.

Some scientists extend this principle into a ban on assistance. Some students avoid asking questions for fear of revealing that they don’t understand. Some boast about passing exams and finishing homework without the need to attend office hours. I call their attitude “scientific machismo.”

I’ve all but lived in office hours. I’ve interrupted lectures with questions every few minutes. I didn’t know if I could measure that probability amplitude. But I knew three people who might know. Twenty-five minutes after I emailed them, Justin replied: “The short answer is yes!”

I visited Justin the following week, at Chapman University’s Institute for Quantum Studies. I sat at his bench-like table, eyeing the nearest tesla coil, as he explained. Justin had recognized my probability amplitude from studies of the Kirkwood-Dirac quasiprobability. Experimentalists infer the Kirkwood-Dirac quasiprobability from weak measurements. We could borrow these experimentalists’ techniques, Justin showed, to measure my probability amplitude.
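For a taste of the object Justin recognized, here is a minimal sketch of a Kirkwood-Dirac quasiprobability for a single qubit (the state and the two bases are chosen purely for illustration, not taken from the paper): $Q(a,b) = \langle b | a \rangle \langle a | \psi \rangle \langle \psi | b \rangle$. Its entries can be complex, yet they still sum to one:

```python
import numpy as np

# Kirkwood-Dirac quasiprobability Q(a, b) = <b|a><a|psi><psi|b>,
# with {|a>} the Z eigenbasis and {|b>} the X eigenbasis.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)            # (|0> + i|1>)/sqrt(2)
z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2),
           np.array([1.0, -1.0]) / np.sqrt(2)]

# np.vdot conjugates its first argument, so np.vdot(b, a) = <b|a>.
Q = np.array([[np.vdot(b, a) * np.vdot(a, psi) * np.vdot(psi, b)
               for b in x_basis] for a in z_basis])

print(Q)          # complex entries, e.g. Q[0, 0] = (1 - 1j)/4
print(Q.sum())    # the quasiprobabilities still sum to 1
```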

The borrowing grew into a measurement protocol. The theorem grew into a paper. I plunged into quasiprobabilities and weak measurements, following Justin’s advice. John grew more excited.

The meek might inherit the Earth. But the weak shall measure the quasiprobability.

With gratitude to Justin for sharing his expertise and time; and to Justin, Matt Leifer, and Chapman University’s Institute for Quantum Studies for their hospitality.

Chapman’s community was gracious enough to tolerate a seminar from me about thermal states of quantum systems. You can watch the seminar here.

1Tesla-coil collectors consist of atoms described by quantum theory. But we can describe tesla-coil collectors without quantum theory.

2Readers foreign to quantum theory can interpret “nonclassical” roughly as “quantum.”

3Debate has raged about whether quasiprobabilities govern classical phenomena.

4I should be able also to recite the solutions from memory.

# Happy Halloween from…the discrete Wigner function?

Do you hope to feel a breath of cold air on the back of your neck this Halloween? I’ve felt one literally: I earned my Masters in the icebox called “Ontario,” at the Perimeter Institute for Theoretical Physics. Perimeter’s colloquia1 take place in an auditorium blacker than a Quentin Tarantino film. Aephraim Steinberg presented a colloquium one air-conditioned May.

Steinberg experiments on ultracold atoms and quantum optics2 at the University of Toronto. He introduced an idea that reminds me of biting into an apple whose coating you’d thought consisted of caramel, then tasting blood: a negative (quasi)probability.

Probabilities usually range from zero upward. Consider Shirley Jackson’s short story The Lottery. Villagers in a 20th-century American village prepare slips of paper. The number of slips equals the number of families in the village. One slip bears a black spot. Each family receives a slip. Each family has a probability $p > 0$  of receiving the marked slip. What happens to the family that receives the black spot? Read Jackson’s story—if you can stomach more than a Tarantino film.

Jackson peeled off skin to reveal the offal of human nature. Steinberg’s experiments reveal the offal of Nature. I’d expect humaneness of Jackson’s villagers and nonnegativity of probabilities. But what looks like a probability and smells like a probability might be hiding its odor with Special-Edition Autumn-Harvest Febreeze.

A quantum state resembles a set of classical3 probabilities. Consider a classical system that has too many components for us to track them all. Consider, for example, the cold breath on the back of your neck. The breath consists of air molecules at some temperature $T$. Suppose we measured the molecules’ positions and momenta. We’d have some probability $p_1$ of finding this particle here with this momentum, that particle there with that momentum, and so on. We’d have a probability $p_2$ of finding this particle there with that momentum, that particle here with this momentum, and so on. These probabilities form the air’s state.

We can tell a similar story about a quantum system. Consider the quantum light prepared in a Toronto lab. The light has properties analogous to position and momentum. We can represent the light’s state with a mathematical object similar to the air’s probability density.4 But this probability-like object can sink below zero. We call the object a quasiprobability, denoted by $\mu$.

If a $\mu$ sinks below zero, the quantum state it represents encodes entanglement. Entanglement is a correlation stronger than any achievable with nonquantum systems. Quantum information scientists use entanglement to teleport information, encrypt messages, and probe the nature of space-time. I usually avoid this cliché, but since Halloween is approaching: Einstein called entanglement “spooky action at a distance.”

Eugene Wigner and others defined quasiprobabilities shortly before Shirley Jackson wrote The Lottery. Quantum opticians use these $\mu$’s, because quantum optics and quasiprobabilities involve continuous variables. Examples of continuous variables include position: An air molecule can sit at this point (e.g., $x = 0$) or at that point (e.g., $x = 1$) or anywhere between the two (e.g., $x = 0.001$). The possible positions form a continuous set. Continuous variables model quantum optics as they model air molecules’ positions.
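A minimal numerical illustration of a $\mu$ sinking below zero (assuming one common convention, $W(\alpha) = \tfrac{2}{\pi}\,\mathrm{Tr}[\rho\, D(\alpha)\, \Pi\, D^\dagger(\alpha)]$, evaluated at the phase-space origin, where the displacement $D$ drops out): the Wigner function of the single-photon Fock state $|1\rangle$ is negative at the origin.

```python
import numpy as np

# Wigner function of the Fock state |1> at the phase-space origin:
# W(0) = (2/pi) * Tr(rho * Parity), with the parity operator diagonal
# in the Fock basis, eigenvalues (-1)^n.
dim = 20                                       # truncated Fock space
rho = np.zeros((dim, dim)); rho[1, 1] = 1.0    # the state |1><1|
parity = np.diag([(-1.0) ** n for n in range(dim)])

w_origin = (2 / np.pi) * np.trace(rho @ parity)
print(w_origin)    # -2/pi ≈ -0.637: a genuinely negative quasiprobability
```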

Information scientists use continuous variables less than we use discrete variables. A discrete variable assumes one of just a few possible values, such as $0$ or $1$, or trick or treat.

How a quantum-information theorist views Halloween.

Quantum-information scientists study discrete systems, such as electron spins. Can we represent discrete quantum systems with quasiprobabilities $\mu$ as we represent continuous quantum systems? You bet your barmbrack.

Bill Wootters and others have designed quasiprobabilities for discrete systems. Wootters stipulated that his $\mu$ have certain properties. The properties appear in this review.  Most physicists label properties “1,” “2,” etc. or “Prop. 1,” “Prop. 2,” etc. The Wootters properties in this review have labels suited to Halloween.
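The review spells out the full construction; what follows is only a rough single-qubit sketch (the phase-point operators and the "magic" test state below are standard textbook choices, not drawn from the review). Define $W(q,p) = \tfrac{1}{2}\,\mathrm{Tr}(\rho A_{qp})$ with $A_{qp} = \tfrac{1}{2}[I + (-1)^q Z + (-1)^p X + (-1)^{q+p} Y]$; a suitable state pushes one entry below zero:

```python
import numpy as np

# Pauli matrices.
I = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])

def A(q, p):
    """Phase-point operator A_qp for a single qubit."""
    return 0.5 * (I + (-1) ** q * Z + (-1) ** p * X + (-1) ** (q + p) * Y)

# A "magic" state, rho = (I + (X + Z)/sqrt(2)) / 2, yields negativity.
rho = 0.5 * (I + (X + Z) / np.sqrt(2))

W = np.real([[0.5 * np.trace(rho @ A(q, p)) for p in (0, 1)]
             for q in (0, 1)])
print(W)          # W(1, 1) ≈ -0.104: below zero
print(W.sum())    # entries still sum to 1
```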

Seeing (quasi)probabilities sink below zero feels like biting into an apple that you think has a caramel coating, then tasting blood. Did you eat caramel apples around age six? Caramel apples dislodge baby teeth. When baby teeth fall out, so does blood. Tasting blood can mark growth—as does the squeamishness induced by a colloquium that spooks a student. Who needs haunted mansions when you have negative quasiprobabilities?

For nonexperts:

1Weekly research presentations attended by a department.

2Light.

3Nonquantum (basically).

4Think “set of probabilities.”

# What matters to me, and why?

Students at my college asked every Tuesday. They gathered in a white, windowed room near the center of campus. “We serve,” read advertisements, “soup, bread, and food for thought.” One professor or visitor would discuss human rights, family,  religion, or another pepper in the chili of life.

I joined occasionally. I listened by the window, in the circle of chairs that ringed the speaker. Then I ventured from college into physics.

The questions “What matters to you, and why?” have chased me through physics. I ask experimentalists and theorists, professors and students: Why do you do science? Which papers catch your eye? Why have you devoted to quantum information more years than many spouses devote to marriages?

One physicist answered with another question. Chris Jarzynski works as a professor at the University of Maryland. He studies statistical mechanics—how particles typically act and how often particles act atypically; how materials shine, how gases push back when we compress them, and more.

“How,” Chris asked, “should we quantify precision?”

Chris had in mind nonequilibrium fluctuation theorems. Out-of-equilibrium systems have large-scale properties, like temperature, that change significantly.1 Examples include white-bean soup cooling at a “What matters” lunch. The soup’s temperature drops to room temperature as the system approaches equilibrium.

Nonequilibrium. Tasty, tasty nonequilibrium.

Some out-of-equilibrium systems obey fluctuation theorems. Fluctuation theorems are equations derived in statistical mechanics. Imagine a DNA molecule floating in a watery solution. Water molecules buffet the strand, which twitches. But the strand’s shape doesn’t change much. The DNA is in equilibrium.

You can grab the strand’s ends and stretch them apart. The strand will leave equilibrium as its length changes. Imagine pulling the strand to some predetermined length. You’ll have exerted energy.

How much? The amount will vary if you repeat the experiment. Why? This trial began with the DNA curled this way; that trial began with the DNA curled that way. During this trial, the water batters the molecule more; during that trial, less. These discrepancies block us from predicting how much energy you’ll exert. But suppose you pick a number W. We can form predictions about the probability that you’ll have to exert an amount W of energy.

How do we predict? Using nonequilibrium fluctuation theorems.

Fluctuation theorems matter to me, as Quantum Frontiers regulars know. Why? Because I’ve written enough fluctuation-theorem articles to test even a statistical mechanic’s patience. More seriously, why do fluctuation theorems matter to me?

Fluctuation theorems fill a gap in the theory of statistical mechanics. Fluctuation theorems relate nonequilibrium processes (like the cooling of soup) to equilibrium systems (like room-temperature soup). Physicists can model equilibrium. But we know little about nonequilibrium. Fluctuation theorems bridge from the known (equilibrium) to the unknown (nonequilibrium).

Experiments take place out of equilibrium. (Stretching a DNA molecule changes the molecule’s length.) So we can measure properties of nonequilibrium processes. We can’t directly measure equilibrium properties, because we can’t perform equilibrium processes experimentally. But we can measure an equilibrium property indirectly: We perform nonequilibrium experiments, then plug our data into fluctuation theorems.

Which equilibrium property can we infer about? A free-energy difference, denoted by ΔF. Every equilibrated system (every room-temperature soup) has a free energy F. F represents the energy that the system can exert, such as the energy available to stretch a DNA molecule. Imagine subtracting one system’s free energy, F1, from another system’s free energy, F2. The subtraction yields a free-energy difference, ΔF = F2 – F1. We can infer the value of a ΔF from experiments.
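To make the inference concrete, here is a minimal sketch based on Jarzynski’s equality, $\langle e^{-W/k_B T} \rangle = e^{-\Delta F/k_B T}$, which turns sampled work values into an estimate of ΔF. The Gaussian work distribution below is an assumption chosen for illustration only; for a Gaussian, ΔF = ⟨W⟩ − var(W)/(2kT) exactly, so we can check the estimate:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

kT = 1.0
mean_w, sigma_w = 5.0, 1.0
# Exact answer for Gaussian work statistics (illustrative assumption):
true_dF = mean_w - sigma_w ** 2 / (2 * kT)

for n_trials in (100, 10_000, 1_000_000):
    W = rng.normal(mean_w, sigma_w, size=n_trials)        # sampled work values
    # Jarzynski estimator: dF = -kT * ln <exp(-W/kT)>.
    dF_est = -kT * np.log(np.mean(np.exp(-W / kT)))
    print(n_trials, dF_est)   # approaches true_dF = 4.5 as trials accumulate
```

Note how slowly the estimate converges: the exponential average is dominated by rare, small-W trials, which is one reason quantifying precision matters.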

How should we evaluate those experiments? Which experiments can we trust, and which need repeating?

Those questions mattered little to me, before I met Chris Jarzynski. Bridging equilibrium with nonequilibrium mattered to me, and bridging theory with experiment. Not experimental nitty-gritty.

I deserved a dunking in white-bean soup.

Suppose you performed infinitely many trials—stretched a DNA molecule infinitely many times. In each trial, you measured the energy exerted. You processed your data, then substituted into a fluctuation theorem. You could infer the exact value of ΔF.

But we can’t perform infinitely many trials. Imprecision mars our inference about ΔF. How does the imprecision relate to the number of trials performed?2

Chris and I adopted an information-theoretic approach. We quantified precision with a parameter $\delta$. Suppose you want to estimate ΔF with some precision. How many trials should you expect to need to perform? We bounded the number $N_\delta$ of trials, using an entropy. The bound tightens an earlier estimate of Chris’s. If you perform $N_\delta$ trials, you can estimate ΔF with a percent error that we estimated. We illustrated our results by modeling a gas.

I’d never appreciated the texture and richness of precision. But richness precision has: A few decimal places distinguish Albert Einstein’s general theory of relativity from Isaac Newton’s 17th-century mechanics. Particle physicists calculate constants of nature to many decimal places. Such a calculation earned a nod on physicist Julian Schwinger’s headstone. Precision serves as the bread and soup of much physics. I’d sniffed the importance of precision, but not tasted it, until questioned by Chris Jarzynski.

The questioning continues. My college has discontinued its “What matters” series. But I ask scientist after scientist—thoughtful human being after thoughtful human being—“What matters to you, and why?” Asking, listening, reading, calculating, and self-regulating sharpen my answers to those questions. My answers often squish beneath the bread knife in my cutlery drawer of criticism. Thank goodness that repeating trials can reduce our errors.

1Or large-scale properties that will change. Imagine connecting the ends of a charged battery with a wire. Charge will flow from terminal to terminal, producing a current. You can measure, every minute, how quickly charge is flowing: You can measure how much current is flowing. The current won’t change much, for a while. But the current will die off as the battery nears depletion. A large-scale property (the current) appears constant but will change. Such a capacity to change characterizes nonequilibrium steady states (NESSes). NESSes form our second example of nonequilibrium states. Many-body localization forms a third, quantum example.

2Readers might object that scientists have tools for quantifying imprecision. Why not apply those tools? Because ΔF equals a logarithm, which is nonlinear. Other authors’ proposals appear in references 1-13 of our paper. Charlie Bennett addressed a related problem with his “acceptance ratio.” (Bennett also blogged about evil on Quantum Frontiers last month.)

# Carbon copy

The anticipatory excitement of summer vacation endures in the teaching profession like no place outside childhood schooldays. Undoubtedly, it ranks high on the list of reasons that keep teachers teaching. Excitement was high as the summer of 2015 started out the same as the three previous years at Caltech. I would show up, find a place to set up, and wait for orders from scientist David Boyd. Upon arrival in Dr. Yeh’s lab, I was surprised to find all the equipment and my work space very much untouched from last year. I was happy to find it this way, because it likely meant I could continue exactly where I left off last summer. Later, I realized David’s time since I left had been devoted to the development of a revolutionary new process for making graphene in large sheets at low temperatures. He had not had time to mess with my stuff, including the stepper motor I had been working on last summer.

So, I place my glorified man purse in a bottom drawer, log into my computer, and wait. After maybe a half hour I hear footsteps set to a rhythm defined only by someone with purpose, and I’m sure it’s David. He peeks into the little office where I’m seated and, with a brief welcoming phrase, informs me that the goal for the summer is to wrap graphene around a thin copper wire using what he refers to as “your motor.” The motor is a stepper motor from an experiment David ran several years back. I wired and set up the track and motor last year for a proposed experiment, never realized, involving the growth of graphene strips. Due to the limited time I spend each summer at Caltech (8 weeks), that experiment came to a halt when I left and was to be continued this year. Instead, the focus veered from growing graphene strips to growing a two- to three-layer coating of graphene around a copper wire. The procedure remains the same; however, the substrate onto which the graphene grows changes. When growing graphene strips, the substrate is a 25-micron-thick copper foil, and after growth the graphene needs to be removed from the copper substrate. In our experiment we used a copper wire with an average thickness of 154 microns, and since the goal is to acquire a copper wire with graphene wrapped around it, there’s no need to remove the graphene.

Worth noting is the great research effort concerning the removal and transfer of graphene from copper to more useful substrates. After graphene growth, the challenge shifts to separating the graphene sheet from the copper substrate without damaging the graphene. Next, the graphene is transferred to various substrates for fabrication and other purposes. Current techniques to remove graphene from copper often damage the graphene, degrading the remarkable electrical properties that have warranted great attention from R&D groups globally. A surprisingly simple new technique employs water to harmlessly remove graphene from copper. This technique has been shown to be effective on graphene grown by plasma-enhanced chemical vapor deposition (PECVD). PECVD is the technique employed by scientist David Boyd, and is the focus of his paper published in Nature Communications in March of 2015.

So, David wants me to do something that has never been done before: grow graphene around a copper wire using a translation stage. The technique is to attach an Evenson cavity to the stage of a stepper-motor/threaded-rod apparatus, and very slowly move the plasma along a strip of copper wire. If successful, this could have far-reaching implications for copper wire, including, but certainly not limited to, corrosion prevention and thermal dissipation, thanks to the high thermal conductivity exhibited by graphene. With David granting me free rein in his lab, and Ph.D. candidate Chen-Chih Hsu agreeing to help, I felt I had all the tools to give it a go.

Setting up this experiment is similar to growing graphene on copper foil using PECVD, with a couple of modifications. First, prior to pumping the quartz tube down to a near vacuum, we place a single copper wire into the tube instead of thin copper foil. Also, special care is taken when setting up the translation stage, ensuring that the Evenson cavity attached to the stage travels perfectly parallel to the quartz tube so as not to create a bind between cavity and tube during travel. For the first trial we decide to grow along a 5-cm-long section of copper wire at a translation speed of 25 microns per second, a very slow speed made possible by the stepper-motor apparatus. As usual, after growth we check the sample using Raman spectroscopy. The graph shown here is the actual Raman spectrum taken in the lab immediately after growth. As the sample is scanned, the graph develops from right to left. We’re not expecting to see anything of much interest; however, hope and excitement steadily rise as the computer monitor shows a well-defined 2D peak (right peak), a G peak (middle peak), and a D peak (left peak) with a height indicative of high defects. Not the greatest of Raman spectra if we were shooting for defect-free monolayer graphene, but this is a very strong indication that we have 2-3-layer graphene on the copper wire. How could this be? Chen-Chih and I looked at each other incredulously. We quickly checked several locations along the wire and found the same result. We did it! Not only did we do it, but we did it on our first try! OK, now we can party. Streamers popped up into the air, a DJ with a turntable slid out from one of the walls, a perfectly synchronized kick line of cabaret dancers pranced about… okay, back to reality: we had a high-five and a back-and-forth “wow, that’s so cool!”

We knew before we even reported our success to David, and eventually Professor Yeh, that they would both, immediately, ask for the exact parameters of the experiment and if the results were reproducible. So, we set off to try and grow again. Unfortunately, the second run did not yield a copper wire coated with graphene. The third trial did not yield graphene, and neither did the fourth or fifth. We were, however, finding that multi-layer graphene was growing at the tips of the copper wire, but not in the middle sections.  Our hypothesis at that point was that the existence of three edges at the tips of the wire aided the growth of graphene, compared to only two edges in the wire’s midsection (we are still not sure if this is the whole story).

In an effort to repeat the experiment and pin down the parameters for growth, an issue with the experimental setup needed to be addressed: we lacked control over the exact mixture of the gases employed for CVD (chemical vapor deposition). In the initial setup, a lack of control was acceptable, because the goal was only to discover whether growing graphene around a copper wire was possible. Now that we knew it was possible, attaining reproducible results required a deeper understanding of the process, and therefore more precise control in our setup. Dr. Boyd agreed, and ordered two leak valves, providing greater control over the exact recipe of gases used for CVD. With this improved control, the hope is to be able to control, and therefore determine, the exact gas mixture yielding the much-needed parameters for reliable graphene growth on a copper wire.

Unfortunately, my last day at Caltech before returning to my regular teaching gig, and the delivery of the leak valves occurred on the same day. Fortunately, I will be returning this summer (2016) to continue the search for the elusive parameters. If we succeed, David Boyd’s and Chen-Chih’s names will, once again, show up in a prestigious journal (Nature, Science, one of those…) and, just maybe, mine will make it there too. For the first time ever.

# LIGO: Playing the long game, and winning big!

Wow. What a day! And what a story!

Kip Thorne in 1972, around the time MTW was completed.

It is hard for me to believe, but I have been on the Caltech faculty for nearly a third of a century. And when I arrived in 1983, interferometric detection of gravitational waves was already a hot topic of discussion here. At Kip Thorne’s urging, Ron Drever had been recruited to Caltech and was building the 40-meter prototype interferometer (which is still operating as a testbed for future detection technologies). Kip and his colleagues, spurred by Vladimir Braginsky’s insights, had for several years been actively studying the fundamental limits of quantum measurement precision, and how these might impact the search for gravitational waves.

I decided to bone up a bit on the subject, so naturally I pulled down from my shelf the “telephone book” — Misner, Thorne, and Wheeler’s mammoth Gravitation — and browsed Chapter 37 (“Detection of Gravitational Waves”), for which Kip had been the lead author. The chapter brimmed over with enthusiasm for the subject, but to my surprise interferometers were hardly mentioned. Instead the emphasis was on mechanical bar detectors. These had been pioneered by Joseph Weber, whose efforts in the 1960s had first aroused Kip’s interest in detecting gravitational waves, and by Braginsky.

I sought Kip out for an explanation, and with characteristic clarity and patience he told how his views had evolved. He had realized in the 1970s that a strain sensitivity of order $10^{-21}$ would be needed for a good chance at detection, and after many discussions with colleagues like Drever, Braginsky, and Rai Weiss, he had decided that kind of sensitivity would not be achievable with foreseeable technology using bars.

Ron Drever, who built Caltech’s 40-meter prototype interferometer in the 1980s.

We talked about what would be needed — a kilometer scale detector capable of sensing displacements of $10^{-18}$ meters. I laughed. As he had many times by then, Kip told why this goal was not completely crazy, if there is enough light in an interferometer, which bounces back and forth many times as a waveform passes. Immediately after the discussion ended I went to my desk and did some crude calculations. The numbers kind of worked, but I shook my head, unconvinced. This was going to be a huge undertaking. Success seemed unlikely. Poor Kip!

I’ve never been involved in LIGO, but Kip and I remained friends, and every now and then he would give me the inside scoop on the latest developments (most memorably while walking the streets of London for hours on a beautiful spring evening in 1991). From afar I followed the forced partnership between Caltech and MIT that was forged in the 1980s, and the painful transition from a small project under the leadership of Drever-Thorne-Weiss (great scientists but lacking much needed management expertise) to a large collaboration under a succession of strong leaders, all based at Caltech.

Vladimir Braginsky, who realized that quantum effects limit the sensitivity of  gravitational wave detectors.

During 1994-95, I co-chaired a committee formulating a long-range plan for Caltech physics, and we spent more time talking about LIGO than any other issue. Part of our concern was whether a small institution like Caltech could absorb such a large project, which was growing explosively and straining Institute resources. And we also worried about whether LIGO would ultimately succeed. But our biggest worry of all was different — could Caltech remain at the forefront of gravitational wave research so that if and when LIGO hit paydirt we would reap the scientific benefits?

A lot has changed since then. After searching for years we made two crucial new faculty appointments: theorist Yanbei Chen (2007), who provided seminal ideas for improving sensitivity, and experimentalist Rana Adhikari (2006), a magician at the black art of making an interferometer really work. Alan Weinstein transitioned from high energy physics to become a leader of LIGO data analysis. We established a world-class numerical relativity group, now led by Mark Scheel. Staff scientists like Stan Whitcomb also had an essential role, as did longtime Project Manager Gary Sanders. LIGO Directors Robbie Vogt, Barry Barish, Jay Marx, and now Dave Reitze have provided effective and much needed leadership.

Rai Weiss, around the time he conceived LIGO in an amazing 1972 paper.

My closest connection to LIGO arose during the 1998-99 academic year, when Kip asked me to participate in a “QND reading group” he organized. (QND stands for Quantum Non-Demolition, Braginsky’s term for measurements that surpass the naïve quantum limits on measurement precision.) At that time we envisioned that Advanced LIGO would turn on in 2008, yet there were still many questions about how it would achieve the sensitivity required to ensure detection. I took part enthusiastically, and learned a lot, but never contributed any ideas of enduring value. The discussions that year did have positive outcomes, however, leading for example to a seminal paper by Kimble, Levin, Matsko, Thorne, and Vyatchanin on improving precision through squeezing of light. By the end of the year I had gained a much better appreciation of the strength of the LIGO team, and had accepted that Advanced LIGO might actually work!

I once asked Vladimir Braginsky why he spent years working on bar detectors for gravitational waves, while at the same time realizing that fundamental limits on quantum measurement would make successful detection very unlikely. Why wasn’t he trying to build an interferometer already in the 1970s? Braginsky loved to be asked questions like this, and his answer was a long story, told with many dramatic flourishes. The short answer is that he viewed interferometric detection of gravitational waves as too ambitious. A bar detector was something he could build in his lab, while an interferometer of the appropriate scale would be a long-term project involving a much larger, technically diverse team.

Joe Weber, whose audacious belief that gravitational waves are detectable on earth inspired Kip Thorne and many others.

Kip’s chapter in MTW ends with section 37.10 (“Looking toward the future”) which concludes with this juicy quote (written almost 45 years ago):

“The technical difficulties to be surmounted in constructing such detectors are enormous. But physicists are ingenious; and with the impetus provided by Joseph Weber’s pioneering work, and with the support of a broad lay public sincerely interested in pioneering in science, all obstacles will surely be overcome.”

That’s what we call vision, folks. You might also call it cockeyed optimism, but without optimism great things would never happen.

Optimism alone is not enough. For something like the detection of gravitational waves, we needed technical ingenuity, wise leadership, lots and lots of persistence, the will to overcome adversity, and ultimately the efforts of hundreds of hard working, talented scientists and engineers. Not to mention the courage displayed by the National Science Foundation in supporting such a risky project for decades.

I have never been prouder than I am today to be part of the Caltech family.

# Some like it cold.

When I reached IBM’s Watson research center, I’d barely seen Aaron in three weeks. Aaron is an experimentalist pursuing a physics PhD at Caltech. I eat dinner with him and other friends, most Fridays. The group would gather on a sidewalk in the November dusk, those three weeks. Light would spill from a lamppost, and we’d tuck our hands into our pockets against the chill. Aaron’s wife would shake her head.

“The fridge is running,” she’d explain.

Aaron cools down mechanical devices to near absolute zero. Absolute zero is the lowest temperature possible,1 lower than outer space’s temperature. Cold magnifies certain quantum behaviors. Researchers observe those behaviors in small systems, such as nanoscale devices (devices about 10⁻⁹ meters long). Aaron studies few-centimeter-long devices. Offsetting the devices’ size with cold might coax them into exhibiting quantum behaviors.
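Why cold magnifies quantum behavior can be made quantitative with a back-of-the-envelope estimate: a vibrational mode of frequency $f$ at temperature $T$ carries, on average, $\bar{n} = 1/(e^{hf/k_B T} - 1)$ quanta (phonons), and quantum behavior surfaces once $\bar{n}$ approaches zero. The sketch below uses an illustrative 1-GHz mode, not Aaron’s actual device:

```python
import math

def mean_phonon_number(freq_hz, temp_k):
    """Bose-Einstein mean occupation of a vibrational mode."""
    h = 6.62607015e-34   # Planck constant, J*s
    kB = 1.380649e-23    # Boltzmann constant, J/K
    x = h * freq_hz / (kB * temp_k)
    return 1.0 / math.expm1(x)   # 1 / (e^x - 1)

# Illustrative 1 GHz mode (assumed for the example, not Aaron's device)
for T in (300.0, 0.010):
    print(f"T = {T} K: mean phonon number ~ {mean_phonon_number(1e9, T):.3g}")
```

At room temperature the mode rattles with thousands of thermal phonons; at 10 millikelvins the average occupation drops below one hundredth of a phonon, close to the quantum ground state.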

The cooling sounds as effortless as teaching a cat to play fetch. Aaron lowers his fridge’s temperature in steps. Each step involves checking for leaks: A mix of two fluids—two types of helium—cools the fridge. One type of helium costs about \$800 per liter. Lose too much helium, and you’ve lost your shot at graduating. Each leak requires Aaron to warm the fridge, then re-cool it. He hauled helium and pampered the fridge for ten days, before the temperature reached 10 millikelvins (0.01 kelvins above absolute zero). He then worked like…well, like a grad student to check for quantum behaviors.

Aaron came to mind at IBM.

Nick works at Watson, IBM’s research center in Yorktown Heights, New York. Watson has sweeping architecture frosted with glass and stone. The building reminded me of Fred Astaire: decades-old, yet classy. I found Nick outside the cafeteria, nursing a coffee. He had sandy hair, more piercings than I, and a mandate to build a quantum computer.

IBM Watson

“Definitely!” Nick fished out an ID badge; grabbed his coffee cup; and whisked me down a wide, window-paneled hall.

Different researchers, across the world, are building quantum computers from different materials. IBMers use superconducting circuits: tiny circuits that carry current without resistance. The circuits function only at low temperatures, so IBM has seven closet-sized fridges. Different teams use different fridges to tackle different challenges to computing.

Nick found a fridge that wasn’t running. He climbed half-inside, pointed at metallic wires and canisters, and explained how they work. I wondered how his cooling process compared to Aaron’s.

“You push a button.” Nick shrugged. “The fridge cools in two days.”

IBM, I learned, has dry fridges. Aaron uses a wet fridge. Dry and wet fridges operate differently, though both require helium. Aaron’s wet fridge vibrates less, jiggling his experiment less. Jiggling relates to transferring heat. Heat suppresses the quantum behaviors Aaron hopes to observe.

Heat and warmth manifest in many ways, in physics. Count Rumford, an 18th-century American-Brit, conjectured the relationship between heat and jiggling. He noticed that drilling holes into cannons immersed in water boiled the water. The drill bits rotated, moving in circles and transferring energy of movement to the cannons, which heated up. Heat enraptures me because it relates to entropy, a measure of disorderliness and ignorance. The flow of heat helps explain why time flows in just one direction.
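Rumford’s observation can be put into rough numbers with the modern relation $Q = mc\,\Delta T$: the work done by the drill becomes heat $Q$, which raises the water’s temperature. All figures below are invented for illustration, and the estimate ignores heat losses and the latent heat of boiling:

```python
# Back-of-the-envelope Rumford: mechanical work dumped into water raises its
# temperature. All numbers are assumed for illustration, not Rumford's setup.
c_water = 4186.0          # specific heat of water, J/(kg*K)
mass = 10.0               # kg of water surrounding the cannon (assumed)
power = 500.0             # mechanical drilling power in watts (assumed)
hours = 2.5               # drilling time (assumed)

work = power * hours * 3600.0        # J of work delivered as heat
delta_T = work / (mass * c_water)    # temperature rise in kelvins
print(round(delta_T, 1))             # ~107.5 K: plenty to boil room-temperature water
```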

A physicist friend of mine writes papers, he says, when catalyzed by “blinding rage.” He reads a paper by someone else, whose misunderstandings anger him. His wrath boils over into a research project.

Warmth manifests as the welcoming of a visitor into one’s lab. Nick didn’t know me from Fred Astaire, but he gave me the benefit of the doubt. He let me pepper him with questions and invited more questions.

Warmth manifests as a 500-word disquisition on fridges. I asked Aaron, via email, about how his cooling compares to IBM’s. I expected two sentences and a link to Wikipedia, since Aaron works 12-hour shifts. But he took pity on his theorist friend. He also warmed to his subject. Can’t you sense the zeal in “Helium is the only substance in the world that will naturally isotopically separate (neat!)”? No knowledge of isotopic separation required.

Many quantum scientists like it cold. But understanding, curiosity, and teamwork fire us up. Anyone under the sway of those elements of science likes it hot.

With thanks to Aaron and Nick. Thanks also to John Smolin and IBM Watson’s quantum-computing-theory team for their hospitality.

1In many situations. Some systems, like small magnets, can access negative temperatures.

# Surprise Happens in Experiments

The discovery of high temperature superconductivity in copper-oxide-based ceramics (cuprates) in 1986 created tremendous excitement in the scientific community. For the first time, superconductivity, the ability of a material to conduct electricity with zero energy loss to heat, appeared at temperatures an order of magnitude higher than previously thought possible. Thus began the dream of room temperature superconductivity, a dream that has been heavily pursued but remains unfulfilled to this day.

The difficulty in creating a room temperature superconductor is that we still do not understand exactly how cuprate high temperature superconductors work. We know that the superconductivity is born from removing or adding a proper amount of electrons to an insulating antiferromagnet. What is more, the material passes through a mysterious region, usually called the pseudogap, when transitioning from insulating antiferromagnet to superconductor. For decades, scientists have debated whether the pseudogap in cuprates is a continuous evolution into superconductivity or a competing phase of matter with distinct symmetry properties. Some believe that a better understanding of its nature, and of its relationship to superconductivity, could help pave a path toward room temperature superconductivity.

The compound that we are studying, strontium iridium oxide (Sr2IrO4), is a promising candidate for a new family of high temperature superconductors. Recent experimental findings reveal great similarities between Sr2IrO4 and the cuprates. Sr2IrO4 is a novel insulator at room temperature and turns into an antiferromagnet below a critical temperature called the Néel temperature (TN). With a certain number of electrons added or removed, by introducing foreign atoms into the material, Sr2IrO4 enters the pseudogap regime. At an even higher charge carrier concentration and a lower temperature, Sr2IrO4 exhibits strong signatures of unconventional superconductivity. A summary of the evolution of Sr2IrO4 as a function of charge carrier density and temperature, usually referred to as a phase diagram, is depicted in the cartoon below; it mimics the cuprates’ phase diagram.

A cartoon showing similarities between Sr2IrO4 and cuprates.

Our experimental results on the multipolar order in Sr2IrO4 further strengthen the connection between Sr2IrO4 and the cuprates. On one hand, experimental evidence has been growing in recent years for symmetry-breaking phases of matter in the pseudogap regime of the cuprates. On the other hand, the discovery of multipolar order in Sr2IrO4, where the pseudogap phenomenon has also been observed, suggests a possible connection between the two. To establish the relationship between the multipolar order and the pseudogap in Sr2IrO4, one needs to compare the temperature scales at which each appears. So far, we have traced a boundary line in the Sr2IrO4 phase diagram for the multipolar-ordered phase, which breaks the 90° rotational symmetry of its high-temperature state. However, the onset temperature of the pseudogap in Sr2IrO4 remains unknown.

An artistic rendition of rotational anisotropy patterns above and below the transition temperature T_Ω at which the multipolar order appears, showing the 90° rotational symmetry breaking across T_Ω.

Told retrospectively, as above, the scientific story makes it seem as though our experiment fit perfectly into a void in the connections between Sr2IrO4 and the cuprates. In reality, this experiment was my first encounter with serendipity in scientific research. When we started, there were no experimental indications of a pseudogap or superconductivity in Sr2IrO4; we were simply planning to refine its antiferromagnetic structure, building on its recently refined crystallographic structure. This joyful surprise made me aware of the importance of sensitivity to unexpected results, especially in a developing field. Another surprise was the technique we used, rotational anisotropy optical second harmonic generation. The technique is as simple as shining light of frequency ω at the sample from a series of angles and collecting the light of frequency 2ω reflected from the sample. The novelty of our setup is that we move the light around the sample, rather than the other way around as in the traditional version of the technique. Thanks to exactly this seemingly trivial change, we are able to probe a multipolar order that remains challenging for other, more sophisticated symmetry-sensitive techniques. To me, this experience is the most valuable part of the work, and it is what I feel happiest to share.
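The heart of second harmonic generation is the identity $\cos^2(\omega t) = \tfrac{1}{2}[1 + \cos(2\omega t)]$: a material response quadratic in the incident field contains a component at twice the drive frequency. This toy numerical sketch (arbitrary units, made-up coefficient, no connection to the actual experiment) shows the 2ω peak appearing in the spectrum of a squared wave:

```python
import numpy as np

# Toy model: a nonlinear polarization ~ chi2 * E(t)^2. Squaring a wave at
# frequency w generates a component at 2w -- the essence of SHG.
fs = 1000.0                        # samples per unit time
t = np.arange(0, 10, 1 / fs)       # 10 time units of signal
w = 5.0                            # drive frequency (arbitrary units)
E = np.cos(2 * np.pi * w * t)      # incident field at frequency w
P = 0.1 * E**2                     # quadratic response; coefficient is arbitrary

spectrum = np.abs(np.fft.rfft(P))
freqs = np.fft.rfftfreq(len(P), 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC term
print(peak)  # strongest non-DC component sits at 2*w = 10
```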

Although the dream of room temperature superconductivity remains unfulfilled, cross comparisons between Sr2IrO4 and the cuprates could offer insight into the factors important for superconductivity, and eventually advance the journey toward that dream.

Please find more details in our paper and in Caltech’s media coverage.

Artist’s rendition of spatially segregated domains of multipolar order in the Sr2IrO4 crystal.