Bringing the heat to Cal State LA

John Baez is a tough act to follow.

The mathematical physicist presented a colloquium at Cal State LA this May.1 The talk’s title: “My Favorite Number.” The advertisement image: A purple “24” superimposed atop two egg cartons.

The colloquium concerned string theory. String theorists attempt to reconcile Einstein’s general relativity with quantum mechanics. Relativity concerns the large and the fast, like the sun and light. Quantum mechanics concerns the small, like atoms. Relativity and quantum mechanics individually suggest that space-time consists of four dimensions: up-down, left-right, forward-backward, and time. String theory suggests that space-time has more than four dimensions. Counting dimensions leads theorists to John Baez’s favorite number.

His topic struck me as bold, simple, and deep. As an otherworldly window onto the pedestrian. John Baez became, when I saw the colloquium ad, a hero of mine.

And a tough act to follow.

I presented Cal State LA’s physics colloquium the week after John Baez. My title: “Quantum steampunk: Quantum information applied to thermodynamics.” Steampunk is a literary, artistic, and film genre. Stories take place during the 1800s—the Victorian era; the Industrial era; an age of soot, grime, innovation, and adventure. Into the 1800s, steampunkers transplant modern and beyond-modern technologies: automata, airships, time machines, etc. Example steampunk works include Will Smith’s 1999 film Wild Wild West. Steampunk weds the new with the old.

So does quantum information applied to thermodynamics. Thermodynamics budded off from the Industrial Revolution: The steam engine crowned industrial technology. Thinkers wondered how efficiently engines could run. Thinkers continue to wonder. But the steam engine no longer crowns technology; quantum physics (with other discoveries) does. Quantum information scientists study the roles of information, measurement, and correlations in heat, energy, entropy, and time. We wed the new with the old.

What image could encapsulate my talk? I couldn’t lean on egg cartons. I proposed a steampunk warrior—cravatted, begoggled, and spouting electricity. The proposal met with a polite cough of an email. Not all department members, Milan Mijic pointed out, had heard of steampunk.

Milan is a Cal State LA professor and my erstwhile host. We toured the palm-speckled campus around colloquium time. What, he asked, can quantum information contribute to thermodynamics?

Heat offers an example. Imagine a classical (nonquantum) system of particles. The particles carry kinetic energy, or energy of motion: They jiggle. Particles that bump into each other can exchange energy. We call that energy heat. Heat vexes engineers, breaking transistors and lowering engines’ efficiencies.

Like heat, work consists of energy. Work has more “orderliness” than the heat transferred by random jiggles. Examples of work exertion include the compression of a gas: A piston forces the particles to move in one direction, in concert. Consider, as another example, driving electrons around a circuit with an electric field. The field forces the electrons to move in the same direction. Work and heat account for all the changes in a system’s energy. So states the First Law of Thermodynamics.

Suppose that the system is quantum. It doesn’t necessarily have a well-defined energy. But we can stick the system in an electric field, and the system can exchange motional-type energy with other systems. How should we define “work” and “heat”?

Quantum information offers insights, such as via entropies. Entropies quantify how “mixed” or “disordered” states are. Disorder grows as heat suffuses a system. Entropies help us extend the First Law to quantum theory.
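To make “mixed” concrete: the von Neumann entropy of a state’s density matrix quantifies its disorder, vanishing for pure states and maxing out for maximally mixed ones. A minimal numpy sketch (my illustration, not material from the talk):

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy S(rho) = -Tr(rho log2 rho), in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # drop (numerically) zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # a pure qubit state: no disorder
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])  # maximally mixed qubit: one full bit

print(von_neumann_entropy(pure), von_neumann_entropy(mixed))  # ~0 and ~1
```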

So I explained during the colloquium. Rarely have I relished engaging with an audience as much as I relished engaging with Cal State LA’s. Attendees made eye contact, posed questions, commented after the talk, and wrote notes. A student in a corner appeared to be writing homework solutions. But a presenter couldn’t have asked for more from the rest. One exclamation arrested me like a coin in the cogs of a grandfather clock.

I’d peppered my slides with steampunk art: paintings, drawings, stills from movies. The peppering had staved off boredom as I’d created the talk. I hoped that the peppering would stave off my audience’s boredom. I apologized about the trimmings.

“No!” cried a woman near the front. “It’s lovely!”

I was about to discuss experiments by Jukka Pekola’s group. Pekola’s group probes quantum thermodynamics using electronic circuits. The group measures heat by counting the electrons that hop from one part of the circuit to another. Single-electron transistors track tunneling (quantum movements) of single particles.

Heat complicates engineering, calculations, and California living. Heat scrambles signals, breaks devices, and lowers efficiencies. Quantum heat can evade definition. Thermodynamicists grind their teeth over heat.

“No!” the woman near the front had cried. “It’s lovely!”

She was referring to steampunk art. But her exclamation applied to my subject. Heat has not only practical importance, but also fundamental: Heat influences every law of thermodynamics. Thermodynamic law underpins much of physics as 24 underpins much of string theory. Lovely, I thought, indeed.

Cal State LA offered a new view of my subfield, an otherworldly window onto the pedestrian. The more pedestrian an idea—the more often the idea surfaces, the more of our world the idea accounts for—the deeper the physics. Heat seems as pedestrian as a Pokémon Go player. But maybe, someday, I’ll present an idea as simple, bold, and deep as the number 24.

A window onto Cal State LA.

With gratitude to Milan Mijic, and to Cal State LA’s Department of Physics and Astronomy, for their hospitality.

1For nonacademics: A typical physics department hosts several presentations per week. A seminar relates research that the speaker has undertaken. The audience consists of department members who specialize in the speaker’s subfield. A department’s astrophysicists might host a Monday seminar; its quantum theorists, a Wednesday seminar; etc. One colloquium happens per week. Listeners gather from across the department. The speaker introduces a subfield, like the correction of errors made by quantum computers. Course lectures target students. Endowed lectures, often named after donors, target researchers.

What matters to me, and why?

Students at my college asked every Tuesday. They gathered in a white, windowed room near the center of campus. “We serve,” read advertisements, “soup, bread, and food for thought.” One professor or visitor would discuss human rights, family,  religion, or another pepper in the chili of life.

I joined occasionally. I listened by the window, in the circle of chairs that ringed the speaker. Then I ventured from college into physics.

The questions “What matters to you, and why?” have chased me through physics. I ask experimentalists and theorists, professors and students: Why do you do science? Which papers catch your eye? Why have you devoted to quantum information more years than many spouses devote to marriages?

One physicist answered with another question. Chris Jarzynski works as a professor at the University of Maryland. He studies statistical mechanics—how particles typically act and how often particles act atypically; how materials shine, how gases push back when we compress them, and more.

“How,” Chris asked, “should we quantify precision?”

Chris had in mind nonequilibrium fluctuation theorems. Out-of-equilibrium systems have large-scale properties, like temperature, that change significantly.1 Examples include white-bean soup cooling at a “What matters” lunch. The soup’s temperature drops to room temperature as the system approaches equilibrium.

Nonequilibrium. Tasty, tasty nonequilibrium.

Some out-of-equilibrium systems obey fluctuation theorems. Fluctuation theorems are equations derived in statistical mechanics. Imagine a DNA molecule floating in a watery solution. Water molecules buffet the strand, which twitches. But the strand’s shape doesn’t change much. The DNA is in equilibrium.

You can grab the strand’s ends and stretch them apart. The strand will leave equilibrium as its length changes. Imagine pulling the strand to some predetermined length. You’ll have exerted energy.

How much? The amount will vary if you repeat the experiment. Why? This trial began with the DNA curled this way; that trial began with the DNA curled that way. During this trial, the water batters the molecule more; during that trial, less. These discrepancies block us from predicting how much energy you’ll exert. But suppose you pick a number W. We can form predictions about the probability that you’ll have to exert an amount W of energy.

How do we predict? Using nonequilibrium fluctuation theorems.

Fluctuation theorems matter to me, as Quantum Frontiers regulars know. Why? Because I’ve written enough fluctuation-theorem articles to test even a statistical mechanic’s patience. More seriously, why do fluctuation theorems matter to me?

Fluctuation theorems fill a gap in the theory of statistical mechanics. Fluctuation theorems relate nonequilibrium processes (like the cooling of soup) to equilibrium systems (like room-temperature soup). Physicists can model equilibrium. But we know little about nonequilibrium. Fluctuation theorems bridge from the known (equilibrium) to the unknown (nonequilibrium).

Experiments take place out of equilibrium. (Stretching a DNA molecule changes the molecule’s length.) So we can measure properties of nonequilibrium processes. We can’t directly measure equilibrium properties, because we can’t perform equilibrium processes experimentally. But we can measure an equilibrium property indirectly: We perform nonequilibrium experiments, then plug our data into fluctuation theorems.

Which equilibrium property can we infer about? A free-energy difference, denoted by ΔF. Every equilibrated system (every room-temperature soup) has a free energy F. F represents the energy that the system can exert, such as the energy available to stretch a DNA molecule. Imagine subtracting one system’s free energy, F1, from another system’s free energy, F2. The subtraction yields a free-energy difference, ΔF = F2 – F1. We can infer the value of a ΔF from experiments.
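The most famous such theorem is Jarzynski’s equality, ⟨e^{−W/kT}⟩ = e^{−ΔF/kT}: average the exponentiated work over many trials, and the free-energy difference pops out. Here is a minimal sketch of the inference, using simulated work data. The Gaussian work distribution is an assumption of mine, chosen because the Gaussian case has a closed-form answer, ΔF = ⟨W⟩ − var(W)/(2kT), to check against:

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0

# Simulated work values from many pulling trials (assumed Gaussian, units of kT).
W = rng.normal(loc=5.0, scale=np.sqrt(2.0), size=200_000)

# Infer ΔF via Jarzynski's equality: <exp(-W/kT)> = exp(-ΔF/kT).
dF_jarzynski = -kT * np.log(np.mean(np.exp(-W / kT)))

# For a Gaussian work distribution, the exact answer is mean - variance/(2 kT).
dF_exact = 5.0 - 2.0 / (2 * kT)

print(dF_jarzynski, dF_exact)  # both close to 4
```

Note that the average weights rare low-work trials heavily, which is why finite numbers of trials leave the imprecision discussed below.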

How should we evaluate those experiments? Which experiments can we trust, and which need repeating?

Those questions mattered little to me, before I met Chris Jarzynski. Bridging equilibrium with nonequilibrium mattered to me, and bridging theory with experiment. Not experimental nitty-gritty.

I deserved a dunking in white-bean soup.

Suppose you performed infinitely many trials—stretched a DNA molecule infinitely many times. In each trial, you measured the energy exerted. You processed your data, then substituted into a fluctuation theorem. You could infer the exact value of ΔF.

But we can’t perform infinitely many trials. Imprecision mars our inference about ΔF. How does the imprecision relate to the number of trials performed?2

Chris and I adopted an information-theoretic approach. We quantified precision with a parameter δ. Suppose you want to estimate ΔF with some precision. How many trials should you expect to need to perform? We bounded the number N_δ of trials, using an entropy. The bound tightens an earlier estimate of Chris’s. If you perform N_δ trials, you can estimate ΔF with a percent error that we estimated. We illustrated our results by modeling a gas.

I’d never appreciated the texture and richness of precision. But richness precision has: A few decimal places distinguish Albert Einstein’s general theory of relativity from Isaac Newton’s 17th-century mechanics. Particle physicists calculate constants of nature to many decimal places. Such a calculation earned a nod on physicist Julian Schwinger’s headstone. Precision serves as the bread and soup of much physics. I’d sniffed the importance of precision, but not tasted it, until questioned by Chris Jarzynski.

The questioning continues. My college has discontinued its “What matters” series. But I ask scientist after scientist—thoughtful human being after thoughtful human being—“What matters to you, and why?” Asking, listening, reading, calculating, and self-regulating sharpen my answers to those questions. My answers often squish beneath the bread knife in my cutlery drawer of criticism. Thank goodness that repeating trials can reduce our errors.

1Or large-scale properties that will change. Imagine connecting the ends of a charged battery with a wire. Charge will flow from terminal to terminal, producing a current. You can measure, every minute, how quickly charge is flowing: You can measure how much current is flowing. The current won’t change much, for a while. But the current will die off as the battery nears depletion. A large-scale property (the current) appears constant but will change. Such a capacity to change characterizes nonequilibrium steady states (NESSes). NESSes form our second example of nonequilibrium states. Many-body localization forms a third, quantum example.

2Readers might object that scientists have tools for quantifying imprecision. Why not apply those tools? Because ΔF equals a logarithm, which is nonlinear. Other authors’ proposals appear in references 1-13 of our paper. Charlie Bennett addressed a related problem with his “acceptance ratio.” (Bennett also blogged about evil on Quantum Frontiers last month.)

Schopenhauer and the Geometry of Evil

At the beginning of the 18th century, Gottfried Leibniz took a break from quarreling with Isaac Newton over which of them had invented calculus to confront a more formidable adversary, Evil.  His landmark 1710 book Théodicée argued that, as creatures of an omnipotent and benevolent God, we live in the best of all possible worlds.  Earthquakes and wars, he said, are compatible with God’s benevolence because they may lead to beneficial consequences in ways we don’t understand.  Moreover, for us as individuals, having the freedom to make bad decisions challenges us to learn from our mistakes and improve our moral characters.

In 1844 another philosopher, Arthur Schopenhauer, came to the opposite conclusion: that we live in the worst of all possible worlds.  By this he meant not just that the world is full of calamity and suffering, but that in many respects, both human and natural, it functions so badly that if it were only a little worse it could not continue to exist at all.   An atheist, Schopenhauer felt no need to defend God’s benevolence, and could turn his full attention to the mechanics and indeed (though not a mathematician) the geometry of badness.  He argued that if the world’s continued existence depends on many continuous variables such as temperature, composition of the atmosphere, etc., each of which must be within a narrow range, then almost all possible worlds will be just barely possible, lying near the periphery of the possible region.  Here, in his own words, is his refutation of Leibniz’ optimism.

Writing at a time when diseases were thought to be caused by poisonous vapors, and when “germ” meant not a pathogen but a seed or embryo, Schopenhauer hints at Darwin and Wallace’s natural selection.  But more importantly, as Alejandro Jenkins pointed out,  Schopenhauer’s distinction between possible and impossible worlds may be the first adequate statement of what in the 20th century came to be called the weak anthropic principle, the thesis that our perspective on the universe is unavoidably biased toward conditions hospitable to the existence and maintenance of complex structures. His examples of orbital instability and lethal atmospheric changes show that by an “impossible” world he meant one that might continue to exist physically, but would extinguish beings able to witness its existence.  At that time only seven planets were known, so, given all the ways things might go wrong, and barring divine assistance, it would have required incredible good luck for even one of them to be habitable.  Thus Schopenhauer’s principle, as it might better be called, was  less satisfactory as an answer to the problem of existence than to the problem of evil.

Returning to Schopenhauer’s  refutation of  Leibniz’s optimism, his  qualitative verbal reasoning can easily be recast in terms of high-dimensional geometry.  Let the goodness g  of a possible world   X   be approximated to lowest order as

g(X) = 1-q(X),

where q is a positive definite quadratic form in the d-dimensional real variable X. Possible worlds correspond to X values where g is positive, lying under a paraboloidal cap centered on the optimum, g(0)=1, with negative values of g representing impossible worlds.  Leaving out the impossible worlds, simple integration, of the sort Leibniz invented, shows that the average of g over possible worlds is 1-d/(d+2).  So if there is one variable, the average world is 2/3 as good as the best possible, while if there are 198 variables the average world is only 1% as good.  Thus, in the limit of many dimensions, the average world approaches g=0, the worst possible.  More general versions of this idea can be developed using post-18th-century mathematical tools like Lipschitz continuity.
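Schopenhauer’s geometry is easy to verify numerically. A sketch of my own: take q(X) = |X|², sample worlds uniformly from the unit ball where q < 1, average g = 1 − |X|², and compare with the exact value 1 − d/(d+2) = 2/(d+2):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_goodness(d, n=100_000):
    """Monte Carlo average of g(X) = 1 - |X|^2 over the d-dimensional unit ball."""
    # Uniform points in the ball: random direction times radius u^(1/d).
    directions = rng.normal(size=(n, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = rng.random(n) ** (1.0 / d)
    X = directions * radii[:, None]
    return float(np.mean(1.0 - np.sum(X**2, axis=1)))

for d in (1, 10, 198):
    print(d, mean_goodness(d), 2.0 / (d + 2))  # Monte Carlo vs exact
```

With d = 1 the average is about 2/3; with d = 198 it is about 0.01, the “1% as good” of the text.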

Earthquakes are an oft-cited  example of senseless evil, hard to fit into a beneficent divine plan, but today we understand them as impersonal consequences of slow convection in the Earth’s mantle, which in turn is driven by the heat of its molten iron core.  Another consequence of the Earth’s molten core is its magnetic field, which deflects solar wind particles and keeps them from blowing away our atmosphere.   Lacking this protection, Mars lost most of its formerly dense atmosphere long ago.

One of my adult children, a surgeon, went to Haiti in 2010 to treat victims of the great earthquake and has returned regularly since. Opiate painkillers, he says, are in short supply there even in normal times, so patients routinely deal with post-operative pain by singing hymns until the pain abates naturally.  When I told him of the connection between earthquakes and atmospheres, he said, “So I’m supposed to tell this guy who just had his leg amputated that he should be grateful for earthquakes because otherwise there wouldn’t be any air to breathe?   No wonder people find scientific explanations less than comforting.”

*From R.B. Haldane and J. Kemp’s translation of Schopenhauer’s “Die Welt als Wille und Vorstellung,” supplement to the 4th book, pp. 395–397, “On the vanity and suffering of life.” Cf. the German original, pp. 2222–2227, “Von der Nichtigkeit und dem Leiden des Lebens.”

Quantum braiding: It’s all in (and on) your head.

Morning sunlight illuminated John Preskill’s lecture notes. The notes concern Caltech’s quantum-computation course, Ph 219. I’m TAing (the teaching assistant for) Ph 219. I previewed lecture material one sun-kissed Sunday.

Pasadena sunlight spilled through my window. So did the howling of a dog that’s deepened my appreciation for Billy Collins’s poem “Another reason why I don’t keep a gun in the house.” My desk space warmed up, and I unbuttoned my jacket. I underlined a phrase, braided my hair so my neck could cool, and flipped a page.

I flipped back. The phrase concerned a mathematical statement called “the Yang-Baxter relation.” A sunbeam had winked on in my mind: The Yang-Baxter relation described my hair.

The Yang-Baxter relation belongs to a branch of math called “topology.” Topology resembles geometry in its focus on shapes. Topologists study spheres, doughnuts, knots, and braids.

Topology describes some quantum physics. Scientists are harnessing this physics to build quantum computers. Alexei Kitaev largely dreamed up the harness. Alexei, a Caltech professor, is teaching Ph 219 this spring.1 His computational scheme works like this.

We can encode information in radio signals, in letters printed on a page, in the pursing of one’s lips as one passes a howling dog’s owner, and in quantum particles. Imagine three particles on a tabletop.

Consider pushing the particles around like peas on a dinner plate. You could push peas 1 and 2 until they swapped places. The swap represents a computation, in Alexei’s scheme.2

The diagram below shows how the peas move. Imagine slicing the figure into horizontal strips. Each strip would show one instant in time. Letting time run amounts to following the diagram from bottom to top.

Arrows copied from John Preskill’s lecture notes. Peas added by the author.

Imagine swapping peas 1 and 3.

Humor me with one more swap, an interchange of 2 and 3.

Congratulations! You’ve modeled a significant quantum computation. You’ve also braided particles.

The author models a quantum computation.

Let’s recap: You began with peas 1, 2, and 3. You swapped 1 with 2, then 1 with 3, and then 2 with 3. The peas end up ordered oppositely to the way they began—ordered as 3, 2, 1.

You could, instead, morph 1-2-3 into 3-2-1 via a different sequence of swaps. That sequence, or braid, appears below.

Congratulations! You’ve begun proving the Yang-Baxter relation. You’ve shown that each braid turns 1-2-3 into 3-2-1.

The relation states also that 1-2-3 is topologically equivalent to 3-2-1: Imagine standing atop pea 2 during the 1-2-3 braiding. You’d see peas 1 and 3 circle around you counterclockwise. You’d see the same circling if you stood atop pea 2 during the 3-2-1 braiding.
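The bookkeeping half of the relation can be checked in a few lines. A sketch of my own (the swaps name pea labels, not table positions, and the second sequence is my guess at the alternative braid): both sequences carry 1-2-3 to 3-2-1. The full Yang-Baxter relation says more—the two braids are topologically equivalent—but the permutations at least agree:

```python
def apply_swaps(labels, swaps):
    """Apply a sequence of swaps; each swap names the two pea labels to exchange."""
    labels = list(labels)
    for a, b in swaps:
        i, j = labels.index(a), labels.index(b)
        labels[i], labels[j] = labels[j], labels[i]
    return labels

braid_one = [(1, 2), (1, 3), (2, 3)]  # the sequence from the text
braid_two = [(2, 3), (1, 3), (1, 2)]  # an alternative sequence of swaps

print(apply_swaps([1, 2, 3], braid_one))  # [3, 2, 1]
print(apply_swaps([1, 2, 3], braid_two))  # [3, 2, 1]
```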

That Sunday morning, I looked at John’s swap diagrams. I looked at the hair draped over my left shoulder. I looked at John’s swap diagrams.

“Yang-Baxter relation” might sound, to nonspecialists, like a mouthful of tweed. It might sound like a sneeze in a musty library. But an eight-year-old could grasp half the relation. When I braid my hair, I pass my left hand over the back of my neck. Then, I pass my right hand over. But I could have passed the right hand first, then the left. The braid would have ended the same way. The braidings would look identical to a beetle hiding atop what had begun as the middle hunk of hair.

The Yang-Baxter relation.

I tried to keep reading John’s lecture notes, but the analogy mushroomed. Imagine spinning one pea atop the table.

A 360° rotation returns the pea to its initial orientation. You can’t distinguish the pea’s final state from its first. But a quantum particle’s state can change during a 360° rotation. Physicists illustrate such rotations with corkscrews.
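For a spin-1/2 particle, the rotation operator about the z-axis is R(θ) = exp(−iθσ_z/2), which is diagonal in the standard basis. A quick check of my own that a 360° turn flips the state’s sign while 720° restores it:

```python
import numpy as np

def rotation(theta):
    """Spin-1/2 rotation about z: R(theta) = exp(-i theta sigma_z / 2)."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

up = np.array([1, 0], dtype=complex)  # spin-up state

after_360 = rotation(2 * np.pi) @ up  # one full turn: state picks up a minus sign
after_720 = rotation(4 * np.pi) @ up  # two full turns: back to the start

print(after_360, after_720)
```

The minus sign is unobservable on a lone particle but shows up in interference, which is why the corkscrew picture matters.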

A quantum corkscrew (“twisted worldribbon,” in technical jargon)

Like the corkscrews formed as I twirled my hair around a finger. I hadn’t realized that I was fidgeting till I found John’s analysis.

I gave up on his lecture notes as the analogy sprouted legs.

I’ve never mastered the fishtail braid. What computation might it represent? What about the French braid? You begin French-braiding by selecting a clump of hair. You add strands to the clump while braiding. The addition brings to mind particles created (and annihilated) during a topological quantum computation.

Ancient Greek statues wear elaborate hairstyles, replete with braids and twists.  Could you decode a Greek hairdo? Might it represent the first 18 digits in pi? How long an algorithm could you run on Rapunzel’s hair?

Call me one bobby pin short of a bun. But shouldn’t a scientist find inspiration in every fiber of nature? The sunlight spilling through a window illuminates no less than the hair spilling over a shoulder. What grows on a quantum physicist’s head informs what grows in it.

1Alexei and John trade off on teaching Ph 219. Alexei recommends the notes that John wrote while teaching in previous years.

2When your mother ordered you to quit playing with your food, you could have objected, “I’m modeling computations!”

little by little and gate by gate

Washington state was drizzling on me. I was dashing from a shuttle to Building 112 on Microsoft’s campus. Microsoft has headquarters near Seattle. The state’s fir trees refreshed me. The campus’s vastness awed me. The conversations planned for the day enthused me. The drizzle dampened me.

Building 112 houses QuArC, one of Microsoft’s research teams. “QuArC” stands for “Quantum Architectures and Computation.” Team members develop quantum algorithms and codes. QuArC members write, as their leader Dr. Krysta Svore says, “software for computers that don’t exist.”

Small quantum computers exist. Large ones have eluded us like gold at the end of a Washington rainbow. Large quantum computers could revolutionize cybersecurity, materials engineering, and fundamental physics. Quantum computers are growing, in labs across the world. When they mature, the computers will need software.

Software consists of instructions. Computers follow instructions as we do. Suppose you want to find and read the poem “anyone lived in a pretty how town,” by 20th-century American poet e e cummings. You follow steps—for example:

1) Wake up your computer.
2) Type your password.
3) Hit “Enter.”
4) Kick yourself for entering the wrong password.
5) Type the right password.
6) Hit “Enter.”
7) Open a web browser.
8) Navigate to Google.
9) Type “anyone lived in a pretty how town e e cummings” into the search bar.
10) Hit “Enter.”
11) Click the Academy of American Poets’ link.
12) Exclaim, “Really? April is National Poetry Month?”
13) Read about National Poetry Month for four-and-a-half minutes.
14) Remember that you intended to look up a poem.
15) Return to the Academy of American Poets’ “anyone lived” webpage.
16) Read the poem.

We break tasks into chunks executed sequentially. So do software writers. Microsoft researchers break into chunks the tasks that quantum computers will perform.

Your computer completes tasks by sending electrons through circuits. Quantum computers will have circuits. A circuit contains wires, which carry information. The wires run through circuit components called gates. Gates manipulate the information in the wires. A gate can, for instance, add the number carried by this wire to the number carried by that wire.

Running a circuit amounts to completing a task, like hunting a poem. Computer engineers break each circuit into wires and gates, as we broke poem-hunting into steps 1-16.1
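As a toy illustration (my sketch, not QuArC’s software), here is a two-wire quantum circuit represented exactly as described: a list of gates that the wires’ state passes through in sequence. A Hadamard gate followed by a CNOT entangles the two wires:

```python
import numpy as np

# Gates on two wires (qubits), as matrices acting on the 4-dimensional state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard on one wire
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

circuit = [np.kron(H, I), CNOT]  # the circuit is just a list of gates

state = np.array([1, 0, 0, 0], dtype=float)  # both wires start in 0
for gate in circuit:
    state = gate @ state  # each gate manipulates the information in the wires

print(state)  # equal amplitudes on 00 and 11: an entangled (Bell) state
```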

Circuits hearten me, because decomposing tasks heartens me. Suppose I demanded that you read a textbook in a week, or create a seminar in a day, or crack a cybersecurity system. You’d gape like a visitor to Washington who’s realized that she’s forgotten her umbrella.

Suppose I demanded instead that you read five pages, or create one PowerPoint slide, or design one element of a quantum circuit. You might gape. But you’d have more hope.2 Life looks more manageable when broken into circuit elements.

Circuit decomposition—and life decomposition—brings to mind “anyone lived in a pretty how town.” The poem concerns two characters who revel in everyday events. Laughter, rain, and stars mark their time. The more the characters attune to nature’s rhythm, the more vibrantly they live:3

little by little and was by was

all by all and deep by deep
and more by more they dream their sleep

Those lines play in my mind when a seminar looms, or a trip to Washington coincident with a paper deadline, or a quantum circuit I’ve no idea how to parse. Break down the task, I tell myself. Inch by inch, we advance. Little by little and drop by drop, step by step and gate by gate.

Not what e e cummings imagined when composing “anyone lived in a pretty how town”

Unless you’re dashing through raindrops to gate designers at Microsoft. I don’t recommend inching through Washington’s rain. But I would have dashed in a drought. What sees us through everyday struggles—the inching of science—if not enthusiasm? We tackle circuits and struggles because, beyond the drizzle, lie ideas and conversations that energize us to run.

e e cummings

With thanks to QuArC members for their time and hospitality.

1One might object that Steps 4 and 14 don’t belong in the instructions. But software involves error correction.

2Of course you can design a quantum-circuit element. Anyone can quantum.

3Even after the characters die.

Remember to take it slow

“Spiros, can you explain to me this whole business about time being an illusion?”

These were William Shatner’s words to me, minutes after I walked into the green room at Silicon Valley’s Comic Con. The iconic Star Trek actor, best known for his portrayal of James Tiberius Kirk, captain of the starship Enterprise, was chatting with Andy Weir, author of The Martian, when I showed up at the door. I was obviously in the wrong room. I had been looking for the room reserved for science panelists, but had been sent up an elevator to the celebrity green room instead (a special room reserved for VIPs during their appearance at the convention). Realizing quickly that something was off, I did what anyone else would do in my position. I sat down. To my right was Mr. Weir and to my left was Mr. Shatner and his agent, Mr. Gary Hasson. For the first few minutes I was invisible, listening in casually as Mr. Weir revealed juicy details about his upcoming novel. And then, it happened. Mr. Shatner turned to me and asked: “And who are you?” Keep calm young man. You can outrun him if you have to. You are as entitled to the free croissants as any of them. “I am Spiros,” I replied. “And what do you do, Spiros?” he continued. “I am a quantum physicist at Caltech.” Drop the mic. Boom. Now I will see myself out before security…

“Spiros, can you explain to me this whole business about time being an illusion?”

“Spiros, where do you think you are going? Come here, sit right next to me. You promised to explain how time works. You can’t leave me hanging now!” Mr. Shatner was adamant.

I looked to Mr. Hasson and Mr. Weir, who were caught in the middle of this. “I… I can come back and we can talk more after Andy’s panel… My panel isn’t until 2 o’ clock,” I pleaded. Mr. Shatner did not think so. Science could not wait another second. He was actually interested in what I had to say, so I turned to Mr. Weir apologetically and he nodded with understanding and a “good luck, kid” kind-of-smile. Mr. Hasson seemed pleased with my choice and made some room for me to sit next to the captain.

“Now, where were we? Ah yes, you were going to explain to me how time itself is an illusion. Something about time in quantum evolution being emergent. What do you mean?” asked Mr. Shatner, cutting right to the chase. It was time for me to go all in: “Well, you see, there is this equation in quantum mechanics – Erwin Schrödinger came up with it – that tells us how the state of the universe at the quantum level changes with time. But where does time come from? Is it a fundamental concept, or is there something out there without which time itself cannot exist?” I waited for a second, as Mr. Shatner contemplated my question. He was stumped. What could possibly be more fundamental than time? Hmm… “Change,” I said. “Without change, there is no time and, thus, no quantum evolution. And without quantum evolution there is no classical evolution, no arrow of time. So everything hinges on the ability of the quantum state of the visible universe to change.” I paused to make sure he was following, then continued, “But if there is change, then where does it come from? Wherever it comes from, unless we end up with a timeless, unchanging and featureless entity, we will always be on the hook for explaining why it is changing, how it is changing and why it looks the way it does and not some other way,” I said and waited a second to let this sink in. “Spiros, if you are right, then how the heck can you get something out of nothing? If the whole thing is static, how come we are not frozen in time?” Mr. Shatner asked pointedly. “We are not the whole thing,” I said, maybe a bit too abruptly. “What do you mean we are not the whole thing? What else is there?” questioned Mr. Shatner. At this point I could see a large smile forming on Mr. Hasson’s face. His old friend, Bill Shatner, was having fun. A different kind of fun. A different kind of Comic Con.
Sure, Bill still had to sit at a table in the main Exhibit Hall to greet thousands of fans, sign their favorite pictures of him and, for a premium, stand next to them for a picture that they would frame and display in their homes for decades to come. “Spiros, do you have a card?” interjected Mr. Hasson. Hmm, how do I say that this is not a thing among scientists… “I ran out. Sorry, everyone wants one these days, so… Here, I can type my email and number in your phone. Would that work?” I said, stretching the truth slightly. “That would be great, thanks,” replied Mr. Hasson.

With Mr. Stan Lee at the Silicon Valley Comic Con. At 93, Mr. Lee spent the whole weekend with fans, not once showing up at the green room to take a break. So I hunted him down with help from Mr. Hasson.

“Hey, stop distracting him! We are so close to the good stuff!” blasted Mr. Shatner. “Go on, now, Spiros. How does anything ever change?” he asked with some urgency in his voice.

“Dynamic equilibrium,” I replied. “Like a chemical reaction that is in equilibrium. You look from afar and see nothing happening. No bubbles, nothing. But zoom in a little and you see products and reactants dissolving and recombining like crazy, but always in perfect balance. The whole remains static, while the parts experience dramatic change.” I let this simmer for a moment. “We are not the whole. We are just a part of the whole. We are too big to see the quantum evolution as it happens in all its glory. But we are also too small to remain unchanged. Our visible universe is in dynamic equilibrium with a clock universe with which we are maximally entangled. We change only because the state of the clock universe changes randomly and we have no control over it, except to change along with it so that the whole remains unchanged,” I concluded, hoping that he would be convinced by a theory that had not seen the light of day until that fateful afternoon.

He was not convinced yet. “Wait a minute, why would that clock universe change in the first place?” he asked suspiciously.

“It doesn’t have to,” I replied, anticipating this excellent question, and went on, “It could remain in the same state for a million years. But we wouldn’t know it, because the state of our visible universe would have to remain in the same state also for a million years. We wouldn’t be able to tell that a million years had passed between every microsecond of change, just like a person under anesthesia can’t tell that they have been undergoing surgery for hours, only to wake up thinking it was just a moment ago that they were counting down to zero.”

He fell silent for a moment, and then a big smile appeared on his face. “Spiros, you have an accent,” he said, as if stating the obvious.
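The picture I was sketching for Mr. Shatner is close in spirit to what physicists call the Page–Wootters mechanism: a globally static state of clock-plus-system that nonetheless contains ordinary time evolution, recovered by conditioning on the clock. The notation below is my own summary of that standard construction, not something we wrote down that afternoon:

```latex
% The whole (clock + visible universe) is static and entangled:
|\Psi\rangle \;=\; \sum_{t} |t\rangle_{\mathrm{clock}} \otimes |\psi(t)\rangle_{\mathrm{system}},
\qquad H_{\mathrm{total}}\,|\Psi\rangle \;=\; 0 .

% Conditioning on the clock reading t recovers the changing state of the part:
|\psi(t)\rangle \;=\; {}_{\mathrm{clock}}\langle t\,|\,\Psi\rangle .
```

The whole never changes; the correlations between the parts encode what we experience as change.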
“Can I offer you a piece of advice?” he asked, in a calm voice. I nodded. “One day you will be in front of a large crowd talking about this stuff. When you are up there, make sure you talk slow so people can keep up. When you get excited, you start speaking faster and faster. Take breaks in between,” he offered. I smiled and thanked him for the advice. By then, it was almost one o’clock and Mr. Weir’s panel was about to end. I needed to go down there for real this time and meet up with my co-panelists, Shaun Maguire and Laetitia Garriott de Cayeux, since our panel was coming up next. I got up and as I was leaving the room, I heard from behind,

“Remember to take it slow, Spiros. When you are back, you will tell me all about how space is also an illusion.”

Aye aye captain!

March madness and quantum memory

Madness seized me this March. It pounced before newspapers and Facebook feeds began buzzing about basketball.1 I haven’t bought tickets or bet on teams. I don’t obsess over jump-shot statistics. But madness infected me two weeks ago. I began talking with condensed-matter physicists.

Condensed-matter physicists study collections of many particles. Example collections include magnets and crystals. And the semiconductors in the iPhones that report NCAA updates.

Caltech professor Gil Refael studies condensed matter. He specializes in many-body localization. By “many-body,” I mean “involving lots of quantum particles.” By “localization,” I mean “each particle anchors itself to one spot.” We’d expect these particles to spread out, like the eau de hotdog that wafts across a basketball court. But Gil’s particles stay put.

How many-body-localized particles don’t behave.

Experts call many-body localization “MBL.” I’ve accidentally been calling many-body localization “MLB.” Hence the madness. You try injecting baseball into quantum discussions without sounding one out short of an inning.2

I wouldn’t have minded if the madness had erupted in October. The World Series began in October. The World Series involves Major League Baseball, what normal people call “the MLB.” The MLB dominates October; the NCAA dominates March. Preoccupation with the MLB during basketball season embarrasses me. I feel like I’ve bet on the last team that I could remember winning the championship, then realized that that team had last won in 2002.

March madness has been infecting my thoughts about many-body localization. I keep envisioning a localized particle as dribbling a basketball in place, opponents circling, fans screaming, “Go for it!” Then I recall that I’m pondering MBL…I mean, MLB…or the other way around. The dribbler gives way to a baseball player who refuses to abandon first base for second. Then I recall that I should be pondering particles, not playbooks.

Localized particles.

Recollection holds the key to MBL’s importance. Colleagues of Gil’s want to build quantum computers. Computers store information in memories. Memories must retain their contents; information mustn’t dribble away.

Consider recording halftime scores. You could encode the scores in the locations of the particles that form eau de hotdog. (Imagine you have advanced technology that manipulates scent particles.) If Duke had scored one point, you’d put this particle here; if Florida had scored two, you’d put that particle there. The particles—as smells too often do—would drift. You’d lose the information you’d encoded. Better to paint the scores onto scorecards. Dry paint stays put, preserving information.
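The drifting-scent-versus-dry-paint contrast can be caricatured in a few lines of code. This is a toy classical sketch of my own, not the quantum MBL model: a bit is encoded in a particle’s starting site, the particle hops randomly with some probability per step, and we try to read the bit back by asking which starting site the particle ended up nearer to.

```python
import random

def recovery_rate(hop_prob, trials=200, steps=300, seed=0):
    """Fraction of trials in which a bit, encoded in a particle's
    starting site (0 or 1), is read back correctly after the particle
    hops left/right with probability hop_prob per step."""
    rng = random.Random(seed)
    correct = 0
    for t in range(trials):
        bit = t % 2          # encode the bit in the initial position
        site = bit
        for _ in range(steps):
            if rng.random() < hop_prob:
                site += rng.choice((-1, 1))
        # decode: guess whichever starting site is closer to the final one
        guess = 0 if abs(site) < abs(site - 1) else 1
        if guess == bit:
            correct += 1
    return correct / trials

# A pinned ("localized") particle keeps the bit forever;
# a freely hopping one loses it to diffusion.
print(recovery_rate(hop_prob=0.0))  # perfect recovery
print(recovery_rate(hop_prob=0.5))  # recovery near coin-flip chance
```

A particle that never hops is the dry paint; a particle that hops half the time is the eau de hotdog, and the encoded score washes out toward a fifty-fifty guess.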

The quantum particles studied by Gil stay put. They inspire scientists who develop memories for quantum computers. Quantum computation is gunning for a Most Valuable Player plaque in the technology hall of fame. Many-body localized systems could contain Most Valuable Particles.

Remembering the past, some say, can help one read the future. I don’t memorize teams’ records. I can’t advise you about whom to root for. But prospects for quantum memories are brightening. Bet on quantum information science.

1Non-American readers: University basketball teams compete in a tournament each March. The National Collegiate Athletic Association (NCAA) hosts the tournament. Fans glue themselves to TVs, tweet exaltations and frustrations, and excommunicate friends who support opposing teams.

2Without being John Preskill.