The Ground Space of Babel

Librarians are committing suicide.

So relates the narrator of the short story “The Library of Babel.” The Argentine magical realist Jorge Luis Borges wrote the story in 1941.

Librarians are committing suicide partially because they can’t find the books they seek. The librarians are born in, and curate, a library called “infinite” by the narrator. The library consists of hexagonal cells, of staircases, of air shafts, and of closets for answering nature’s call. The narrator has never heard of anyone’s finding an edge of the library. Each hexagon houses 20 shelves, each of which houses 32 books, each of which contains 410 pages, each of which contains 40 lines, each of which consists of about 80 symbols. Every symbol comes from a set of 25: 22 letters, the period, the comma, and the space.

The library, a sage posited, contains every combination of the 25 symbols that satisfies the 410-40-and-80-ish requirement. His compatriots rejoiced:

All men felt themselves to be the masters of an intact and secret treasure. There was no personal or world problem whose eloquent solution did not exist in some hexagon. [ . . . ] a great deal was said about the Vindications: books of apology and prophecy which vindicated for all time the acts of every man in the universe and retained prodigious arcana for his future. Thousands of the greedy abandoned their sweet native hexagons and rushed up the stairways, urged on by the vain intention of finding their Vindication.

Probability punctured their joy: “the possibility of a man’s finding his Vindication, or some treacherous variation thereof, can be computed as zero.”

Many-body quantum physicists can empathize with Borges’s librarian.

A handful of us will huddle over a table or cluster in front of a chalkboard.

“Has anyone found this Hamiltonian’s ground space?” someone will ask.1


A Hamiltonian is an observable, a measurable property. Consider a quantum system S, such as a set of particles hopping between atoms. We denote the system’s Hamiltonian by H. H determines how the system’s state changes in time. A musical about H swept Broadway last year.

A quantum system’s energy, E, might assume any of many possible values. H encodes the possible values. The least possible value, E0, we call the ground-state energy.

Under what condition does S have an amount E0 of energy? S must occupy a ground state. Consider Olympic snowboarder Shaun White in a half-pipe. He has kinetic energy, or energy of motion, when sliding along the pipe. He gains gravitational energy upon leaving the ground. He has little energy when sitting still on the snow. A quantum analog of that sitting constitutes a ground state.2

Consider, for example, electrons in a magnetic field. Each electron has a property called spin, illustrated with an arrow. The arrow’s direction represents the spin’s state. The system occupies a ground state when every arrow points in the same direction as the magnetic field.

Shaun White has as much energy, sitting on the ground in the half-pipe’s center, as he has sitting at the bottom of an edge of the half-pipe. Similarly, a quantum system might have multiple ground states. These states form the ground space.

“Has anyone found this Hamiltonian’s ground space?”


“Find” means, here, “identify the form of.” We want to derive a mathematical expression for the quantum analog of “sitting still, at the bottom of the half-pipe.”

“Find” often means “locate.” How do we locate an object such as a library? By identifying its spatial coordinates. We specify coordinates relative to directions, such as north, east, and up. We specify coordinates also when “finding” ground states.

Libraries occupy the physical space we live in. Ground states occupy an abstract mathematical space, a Hilbert space. The Hilbert space consists of the (pure) quantum states accessible to the system—loosely speaking, how the spins can orient themselves.

Libraries occupy a three-dimensional space. An N-spin system corresponds to a 2^N-dimensional Hilbert space. Finding a ground state amounts to identifying 2^N coordinates. The problem’s size grows exponentially with the number of particles.

An exponential quantifies also the size of the librarian’s problem. Imagine trying to locate some book in the Library of Babel. How many books should you expect to have to check? How many books does the library hold? Would you have more hope of finding the book, wandering the Library of Babel, or finding a ground state, wandering the Hilbert space? (Please take this question with a grain of whimsy, not as instructions for calculating ground states.)

A book’s first symbol has one of 25 possible values. So does the second symbol. The pair of symbols has one of 25 \times 25 = 25^2 possible values. A trio has one of 25^3 possible values, and so on.

How many symbols does a book contain? About \frac{ 410 \text{ pages} }{ 1 \text{ book} }  \:  \frac{ 40 \text{ lines} }{ 1 \text{ page} }  \:  \frac{ 80 \text{ characters} }{ 1 \text{ line} }  \approx  10^6 \, , or a million. The number of books grows exponentially with the number of symbols per book: The library contains about 25^{ 10^6 } books. You contain only about 10^{24} atoms. No wonder librarians are committing suicide.

Do quantum physicists deserve more hope? Physicists want to find ground states of chemical systems. Example systems are discussed here and here. The second paper refers to 65 electrons distributed across 57 orbitals (spatial regions). How large a Hilbert space does this system have? Each electron has a spin that, loosely speaking, can point upward or downward (that corresponds to a two-dimensional Hilbert space). One might expect each electron to correspond to a Hilbert space of dimensionality (57 \text{ orbitals}) \frac{ 2 \text{ spin states} }{ 1 \text{ orbital} } = 114. The 65 electrons would correspond to a Hilbert space \mathcal{H}_{\rm tot} of dimensionality 114^{65}.

But no two electrons can occupy the same one-electron state, due to Pauli’s exclusion principle. Hence \mathcal{H}_{\rm tot} has dimensionality {114 \choose 65} (“114 choose 65”), the number of ways in which you can select 65 states from a set of 114 states.

{114 \choose 65} equals approximately 10^{33}. Mathematica (a fancy calculator) can print a one followed by 33 zeroes. Mathematica refuses to print the number 25^{ 10^6 } of Babel’s books. Pity the librarians more than the physicists.
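If you’d like to check those counts yourself, here is a back-of-the-envelope script of mine (not part of the original comparison):

```python
# Back-of-the-envelope check of the two counts above (my own sketch).
from math import comb, log10

symbols_per_book = 410 * 40 * 80              # pages x lines x characters per line
print(symbols_per_book)                       # 1,312,000 -- roughly a million

# The library holds 25**symbols_per_book books; print only how many digits that is.
digits = int(symbols_per_book * log10(25)) + 1
print(f"Number of books: a {digits:,}-digit number")        # ~1.8 million digits

# The chemists' problem: choose 65 occupied spin-orbitals out of 114.
hilbert_dim = comb(114, 65)
print(f"Hilbert-space dimension: {hilbert_dim:.2e}")        # ~5e32, i.e. roughly 10^33
```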


Pity us less when we have quantum computers (QCs). They could find ground states far more quickly than today’s supercomputers. But building QCs is taking about as long as Borges’s narrator wandered the library, searching for “the catalogue of catalogues.”

What would Borges and his librarians make of QCs? QCs will be able to search unstructured databases quickly, via Grover’s algorithm. Babel’s library lacks structure. Grover’s algorithm outperforms classical algorithms just when fed large databases. 25^{ 10^6 } books constitute a large database. Researchers seek a “killer app” for QCs. Maybe Babel’s librarians could vindicate quantum computing and quantum computing could rescue the librarians. If taken with a grain of magical realism.

 

1Such questions remind me of an Uncle Alfred who’s misplaced his glasses. I half-expect an Auntie Muriel to march up to us physicists. She, sensible in plaid, will cross her arms.

“Where did you last see your ground space?” she’ll ask. “Did you put it on your dresser before going to bed last night? Did you use it at breakfast, to read the newspaper?”

We’ll bow our heads and shuffle off to double-check the kitchen.

2More accurately, a ground state parallels Shaun White’s lying on the ground, stone-cold.

Fractons, for real?

“Fractons” is my favorite new toy (toy as in quantum many-body toy model). It has amazing features that my old toys do not have; it is so new that there are tons of questions waiting to be addressed; it is perfectly situated at the interface between quantum information and condensed matter and has attracted a lot of interest and effort from both sides; and it gives me excuses and new incentives to learn more math. I have been having a lot of fun playing with it over the last couple of years, and in the process I have had the great opportunity to work with some amazing collaborators: Han Ma and Mike Hermele at Boulder, Ethan Lake at MIT, Wilbur Shirley at Caltech, Kevin Slagle at U Toronto and Zhenghan Wang at Station Q. Together we have written a few papers on this subject, but I always felt there were more interesting stories, and more excitement in me, than could properly be contained in scientific papers. Hence this blog post.

How I first learned about Fractons


Back in the early 2000s, a question that kept attracting and frustrating people in quantum information was how to build a quantum hard drive to store quantum information. This is of course a natural question to ask, as quantum computing had been demonstrated to be possible, at least in theory, and experimental progress had shown great potential. It turned out, however, that the question is one of those deceptively enticing ones that are super easy to state but extremely hard to answer. In a classical hard drive, information is stored using magnetism. Quantum information, instead of being just 0 and 1, is represented using superpositions of 0 and 1, and can be probed in non-commutative ways (that is, measuring along different directions can alter previous answers). To store quantum information, we need “magnetism” in all such non-commutative channels. But how can we do that?

At that time, some proposals had been made, but they either involved actively looking for and correcting errors throughout the time during which the information is stored (something we never have to do with our classical hard drives) or required going into four spatial dimensions. Reliable passive storage of quantum information seemed out of reach in the three-dimensional world we live in, even at the level of a proof-of-principle toy model!

Given all the previously failed attempts and without a clue about where else to look, this problem probably looked like a dead-end to many. But not to Jeongwan Haah, a fearless graduate student in Preskill’s group at IQIM at that time, who turned the problem from guesswork into a systematic computer search (over a constrained set of models). The result of the search surprised everyone. Jeongwan found a three-dimensional quantum spin model with physical properties that had never been seen before, making it a better quantum hard drive than any other model that we know of!

The model looks surprising not only to the quantum information community, but even more so to the condensed matter community. It is a strongly interacting quantum many-body model, a subject that has been under intense study in condensed matter physics. Yet it exhibits some very strange behaviors whose existence had not even been suspected. It is a condensed matter discovery made not from real materials in real experiments, but through computer search!


Excitations (stars) in Haah’s code live at the corner of a fractal.

In familiar condensed matter systems, elementary excitations come in the form of point particles – usually called quasi-particles – which can then move around and interact with other excitations. In Jeongwan’s model, now commonly referred to as Haah’s code, elementary excitations still come in the form of point particles, but they cannot freely move around. Instead, if they want to move, four of them have to coordinate with each other and move together, so that they stay at the vertices of a fractal-shaped structure! The restricted motion of the quasi-particles leads to slower dynamics at low energy, making the model much better suited for the purpose of storing quantum information.

But how can something like this happen? This is the question that I want to yell out loud every time I read Jeongwan’s papers or listen to his talks. Leaving aside the motivation of building a quantum hard drive, this model presents a grand challenge to the theoretical framework we now have in condensed matter. All of our intuitions break down in predicting the behavior of this model; even some of the most basic assumptions and definitions do not apply.


The interactions in Haah’s code involve eight spins at a time (the eight Z’s and eight X’s in each cube).

I felt so uncomfortable and so excited at the same time because there was something out there that should be related to things I know, yet I totally did not understand how. And there was an even bigger problem. I was like a sick person going to a doctor but unable to pinpoint what was wrong. Something must have been wrong, but I didn’t know what that was and I didn’t know how to even begin to look for it. The model looked so weird. The interactions involved eight spins at a time; there was no obvious symmetry other than translation. Jeongwan, with his magic math power, worked out explicitly many of the amazing properties of the model, but to me that only added to the mystery. Where were all these strange properties coming from?

From the unfathomable to the seemingly approachable

I remained in this superposition of excited state and powerless state for several years, until Jeongwan moved to MIT and posted some papers with Sagar Vijay and Liang Fu in 2015 and 2016.


Interaction terms in a nicer looking fracton model.

In these papers, they listed several other models which, similar to Haah’s code, contain quasi-particle excitations whose motion is constrained. The constraints are weaker, and these models do not make good quantum hard drives, but they still represent new condensed matter phenomena. What’s nice about these models is that their interactions are more symmetric, take a simpler form, or resemble other models we are familiar with. The quasi-particles do not need a fractal-shaped structure to move around; instead they move along a line, within a plane, or at the corners of a rectangle. In fact, as early as 2005 – six years before Haah’s code – Claudio Chamon at Boston University had already proposed a model of this kind. Together with the fractal examples above, these models are what is now referred to as the fracton models. If the original Haah’s code looks like an ET from beyond the Milky Way, these models at least seem to live somewhere in the solar system. So there must be something that we can do to understand them better!

Obviously, I was not the only one who felt this way. A flurry of papers appeared on these “fracton” models. People came at these models armed with their favorite tools in condensed matter, looking for an entry point to crack them open. The two approaches that I found most attractive were the coupled layer construction and the higher rank gauge theory, and I worked on these ideas together with Han Ma, Ethan Lake and Michael Hermele. Each approach comes from a different perspective and establishes a connection between fractons and physical models that we are familiar with. In the coupled layer construction, the connection is to the 2D discrete gauge theories, while in the higher rank approach it is to the 3D gauge theory of electromagnetism.

I was excited about these results. They each point to simple physical mechanisms underlying the existence of fractons in some particular models. By relating these models to things I already know, I feel a bit relieved. But deep down, I know that this is far from the complete story. Our understanding barely goes beyond the particular models discussed in the paper. In condensed matter, we spend a lot of time studying toy models; but toy models are not the end goal. Toy models are only meaningful if they represent some generic feature in a whole class of models. It is not clear at all to what extent this is the case for fractons.

Step zero: define “order”, define “topological order”

I gave a talk about these results at KITP last fall under the title “Fracton Topological Order”. It was actually too big a title because all we did was to study specific realizations of individual models and analyze their properties. To claim topological order, one needs to show much more. The word “order” refers to the common properties of a continuous range of models within the same phase. For example, crystalline order refers to the regular lattice organization of atoms in the solid phase within a continuous range of temperature and pressure. When the word “topological” is added in front of “order”, it signifies that such properties are usually directly related to the topology of the system. A prototypical example is the fractional quantum Hall system, whose ground state degeneracy is directly determined by the topology of the manifold the system lives in. For fractons, we are far from an understanding at this level. We cannot answer basic questions like what range of models form a phase, what is the order (the common properties of this whole range of models) characterizing each phase, and in what sense is the order topological. So, the title was more about what I hope will happen than what has already happened.

But it did lead to discussions that could make things happen. After my talk, Zhenghan Wang, a mathematician at Microsoft Station Q, said to me, “I would agree these fracton models are topological if you can show me how to define them on different three-manifolds”. Of course! How can I claim anything related to topology if all I know is one model on a cubic lattice with periodic boundary conditions? It is like claiming a linear relation between two quantities with only one data point.

But how to get more data points? Well, from the paper by Haah, Vijay and Fu, we knew how to define the model on cubic lattices. With periodic boundary conditions, the underlying manifold is a three-torus. Is it possible to have a cubic lattice, or something similar, on other three-manifolds as well? Usually, this kind of request would be too much to ask. But it turns out that if you whisper your wish to the right mathematician, even the craziest ones can come true. With insightful suggestions from Michael Freedman (the Fields medalist leading Station Q) and Zhenghan, and through the amazing work of Kevin Slagle (U Toronto) and Wilbur Shirley (Caltech), we found that if we make use of a structure called a total foliation, one of the fracton models can be generalized to different kinds of three-manifolds, and we can see how the properties of the model are related to certain topological features of the manifold!


Foliation is the process of dividing a manifold into parallel planes. A total foliation is a set of three foliations that intersect each other transversally. The xy, yz, and zx planes in a cubic lattice form a total foliation, and similar constructions can be made for other three-manifolds as well.

Things start to get technical from here, but the basic lesson we learned about some of the fracton models is that, structure-wise, they pretty much look like an onion. Even though an onion looks like a three-dimensional object from the outside, it actually grows in a layered structure. Some of the properties of the fracton models are simply determined by the layers, and related to the topology of the layers. Once we peel off all the layers, we find that for some models there is nothing left, while for others there is a nontrivial core. This observation allows us to better address the previous questions: we defined a fracton phase (one type of it) as the set of models smoothly related to each other after adding or removing layers; the topological nature of the order is manifested in how the properties of the model are determined by the topology of the layers.

The onion structure is nice, because it allows us to reduce much of the story from 3D to 2D, where we understand things much better. It clarifies many of the weirdnesses of the fracton model we studied, and there are indications that it may apply to a much wider range of fracton models, so we have an exciting road ahead of us. On the other hand, it is also clear that the onion structure does not cover everything. In particular, it does not cover Haah’s code! Haah’s code cannot be built in a layered way, and its properties are in a sense intrinsically three-dimensional. So, after finishing this whole journey through the onion field, I will be back to staring at Haah’s code again and wondering what to do with it, as I have been doing in the eight years since Jeongwan’s paper first came out. But maybe this time I will have some better ideas.

What makes extraordinary science extraordinary?

My article for this month appears on Sean Carroll’s blog, Preposterous Universe. Sean is a theoretical physicist who practices cosmology at Caltech. He interfaces with philosophy, which tinges the question I confront: What distinguishes extraordinary science from good science? The topic seemed an opportunity to take Sean up on an invitation to guest-post on Preposterous Universe. Head there for my article. Thanks to Sean for hosting!


Panza’s paradox

I finished reading a translation of Don Quixote this past spring. Miguel de Cervantes wrote the novel during the 1600s. The hero, a Spanish gentleman, believes the tales of chivalry’s golden days. He determines to outdo his heroes as a knight. Don Quixote enlists a peasant, Sancho Panza, to serve as his squire. Bony Don Quixote quotes classical texts; tubby Sancho Panza can’t sign his name. The pair roams the countryside, seeking adventures.

Don Quixote might have sold more copies than any other novel in history. Historians have dubbed Don Quixote “the first modern novel”; “quixotic” appears in English dictionaries; and artists and writers still spin off the story. Don Quixote reverberates throughout the four centuries that have followed it.

Cervantes, I discovered, had grasped a paradox that mathematicians would not expose until last century.


Artists continue to spin off Don Quixote.

Don Quixote will vanquish so many villains, the pair expects, that rulers will shower gifts on him. Someone will bequeath him a kingdom or an empire. Don Quixote promises to transfer part of his land to Sancho. Sancho expects to govern an ínsula, or island.

Sancho’s expectation amuses a duke and duchess. They pretend to grant Sancho an ínsula as a joke. How would such a simpleton rule? they wonder. They send servants and villagers to Sancho with fabricated problems. Sancho arbitrates the actors’ cases. Grossman translates one case as follows:

the first [case] was an enigma presented to him by a stranger, who said:
“Señor, a very large river divided a lord’s lands into two parts [ . . . ] a bridge crossed this river, and at the end of it was a gallows and a kind of tribunal hall in which there were ordinarily four judges who applied the law set down by the owner of the river, the bridge, and the lands, which was as follows: ‘If anyone crosses this bridge from one side to the other, he must first take an oath as to where he is going and why; and if he swears the truth, let him pass; and if he tells a lie, let him die by hanging on the gallows displayed there, with no chance of pardon.’ Knowing this law and its rigorous conditions, many people crossed the bridge, and then, when it was clear that what they swore was true, the judges let them pass freely. It so happened, then, that a man once took the oath, and he swore and said that because of the oath he was going to die on the gallows, and he swore to nothing else. The judges studied the oath and said: ‘If we allow this man to pass freely, he lied in his oath, and according to the law he must die; and if we hang him, he swore that he was going to die on this gallows, and having sworn the truth, according to the same law he must go free.’ Señor Governor, the question for your grace is what should the judges do with the man.”

Cervantes formulated a paradox that looks, to me, equivalent to Russell’s barber paradox. Bertrand Russell contributed to philosophy during the early 1900s. He concocted an argument called the Russell-Zermelo paradox, which I’ll describe later. An acquaintance tried to encapsulate the paradox as follows: Consider an adult male barber who shaves all men who do not shave themselves. Does the barber shave himself?

Suppose that the barber doesn’t. (Analogously, suppose that the smart aleck in Panza’s paradox doesn’t lie.) The barber is then a man who doesn’t shave himself, and so a man whom the barber shaves. (The smart aleck, having sworn the truth, must pass and survive.) Hence the barber must shave himself. (Hence the oath that he would die on the gallows proves false, and the traveler lies.) But we assumed that the barber doesn’t shave himself. (But we assumed that the traveler doesn’t lie.) Stalemate.


A barber plays a role in Don Quixote as in the Russell-Zermelo-like paradox. But the former barber has a wash basin that Don Quixote mistakes for a helmet.

Philosophers and mathematicians have debated to what extent the barber paradox illustrates the Russell-Zermelo paradox. Russell formulated the paradox in 1902. The mathematician Ernst Zermelo formulated the paradox around the same time. Mathematicians had just developed the field of set theory. A set is a collection of objects. Examples include the set of positive numbers, the set of polygons, and the set of readers who’ve looked at a Quantum Frontiers post.

Russell and Zermelo described a certain set \mathcal{S} of sets, a certain collection of sets. Let’s label the second-tier sets S_j, for j = 1, 2, 3, etc.


Each second-tier set S_j can contain elements. The elements can include third-tier sets s^{(j)}_k, for k = 1, 2, 3, etc.


But no third-tier set s^{(j)}_k equals the second-tier set S_j. That is, no second-tier set S_j is an element of itself.


Let \mathcal{S} contain every set that does not contain itself. Does the first-tier set \mathcal{S} contain itself?


Suppose that it does: \mathcal{S} = S_j for some j. \mathcal{S} is an element of itself. But, we said, “no second-tier set S_j is an element of itself.” So \mathcal{S} must not be an element of itself. But \mathcal{S} “contain[s] every set that does not contain itself.” So \mathcal{S} must contain itself. But we just concluded that \mathcal{S} doesn’t. Stalemate.
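In symbols, the whole argument compresses to one line: define \mathcal{S}  =  \{ S \, : \, S \notin S \}; then \mathcal{S} \in \mathcal{S} if and only if \mathcal{S} \notin \mathcal{S}, which no set can satisfy.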

The Stanford Encyclopedia of Philosophy synopsizes the Russell-Zermelo paradox: “the paradox arises within naïve set theory by considering the set of all sets that are not members of themselves. Such a set appears to be a member of itself if and only if it is not a member of itself.”

One might resolve the Russell-Zermelo paradox by concluding that no set \mathcal{S} exists. One might resolve the barber paradox by concluding that no such barber exists. How does Sancho resolve what I’ll dub Panza’s paradox?

He initially decrees, “‘let the part of the man that swore the truth pass freely, and hang the part that told a lie.’”1 The petitioner protests: Dividing the smart aleck will kill him. But the law suggests that the smart aleck should live.

Sancho revises his decree:

“since the reasons for condemning him or sparing him are balanced perfectly, they should let him pass freely, for doing good is always more praiseworthy than doing evil, and I’d sign this with my own name if I knew how to write, and in this case I haven’t said my own idea but a precept that came to mind, one of many that was given to me by my master, Don Quixote [ . . . ] when the law is in doubt, I should favor and embrace mercy.”

One can resolve the barber’s paradox by concluding that such a barber cannot exist. Sancho resolves Panza’s paradox by concluding that the landowner’s law cannot govern all bridge-crossings. The law fails to handle what computer scientists would call an “edge case.” An edge case falls outside the jurisdiction of the most-often-used part of a rule. One must specify explicitly how to process edge cases when writing computer programs. Sancho codes the edge case, supplementing the law.
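To make the programming metaphor concrete, here is a toy rendering of my own (not Cervantes’s, of course): the landowner’s law as a function, with Sancho’s precept bolted on to handle the edge case.

```python
# A toy rendering of the bridge law (an illustration, not a serious model).
def landowners_law(oath_will_prove_true):
    """Truth-tellers pass freely; liars hang."""
    return "pass freely" if oath_will_prove_true else "hang"

def law_with_sanchos_amendment(oath_will_prove_true):
    # Edge case: an oath whose truth depends on the verdict itself, so its
    # truth value is undecidable. The original law is silent here; Sancho's
    # precept supplies the rule: when the law is in doubt, favor mercy.
    if oath_will_prove_true is None:
        return "pass freely"
    return landowners_law(oath_will_prove_true)

print(law_with_sanchos_amendment(True))    # an honest traveler: pass freely
print(law_with_sanchos_amendment(False))   # a liar: hang
print(law_with_sanchos_amendment(None))    # the smart aleck: pass freely
```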


Upon starting to read about Sancho’s trial, I sat bolt upright. I ran to my laptop upon finishing. Miguel de Cervantes had intuited, during the 1600s, a paradox not articulated by mathematicians until the 1900s. Surely, the literati had pounced on Cervantes’s foresight?

Mathematics writer Martin Gardner had. I found also two relevant slides in a PowerPoint and three relevant pages in an article. But more critics have classified Panza’s paradox as an incarnation of the liar’s paradox than have invoked Russell.

Scholars have credited Cervantes with anticipating, and initiating, literary waves that have propagated for four centuries. Perhaps we should credit him with anticipating mathematics not formalized for three.

 

1This decree evokes the story of King Solomon and the baby. Two women bring a baby to King Solomon. Each woman claims the baby as hers. “Cut the baby in two,” Solomon rules, “and give half to each woman.” One woman assents. The other cries, “No! Let her have the child; just don’t kill the baby.” The baby, Solomon discerns, belongs to the second woman. A mother would rather see her child in someone else’s home than see her child killed. Sancho, like Solomon, rules that someone be divided in two. But Sancho, lacking Solomon’s wisdom, misapplies Solomon’s tactic.

The light show


A strontium magneto-optical trap.

How did a quantum physics experiment end up looking like a night club? Due to a fortunate coincidence of nature, my lab mates and I at Endres Lab get to use three primary colors of laser light – red, blue, and green – to trap strontium atoms.  Let’s take a closer look at the physics behind this visually entrancing combination.

The spectrum


The electronic spectrum of strontium near the ground state.

The trick to research is finding a problem that is challenging enough to be interesting, but accessible enough to not be impossible. Strontium embodies this maxim in its electronic spectrum. While at first glance it may seem daunting, it’s not too bad once you get to know each other. Two valence electrons divide the spectrum into a spin-singlet sector and a spin-triplet sector – a designation that roughly describes whether the electron spins point in opposite directions or in the same direction. Certain transitions between these sectors are extremely precisely defined, and currently offer the best clock standards in the world. Although navigating this spectrum requires more lasers, it offers opportunities for quantum physics that singly-valent spectra do not. In the end, the experimental complexity is still very much manageable, and produces some great visuals to boot. Here are some of the lasers we use in our lab:

The blue

At the center of the .gif above is a pulsating cloud of strontium atoms, shining brightly blue.  This is a magneto-optical trap, produced chiefly by strontium’s blue transition at 461nm.


461nm blue laser light being routed through various paths.

The blue transition is exceptionally strong, scattering about 100 million photons per atom per second. It is the transition we use to slow strontium atoms from a hot thermal beam traveling at hundreds of meters per second down to a cold cloud at about 1 millikelvin. In less than a second, this procedure gives us a couple hundred million atoms to work with. As the experiment repeats, we get to watch this cloud pulse in and out of existence.

The red(s)


689nm red light.  Bonus: Fabry-Perot interference fringes on my camera!

While the blue transition is a strong workhorse, the red transition at 689nm trades off strength for precision. It couples strontium’s spin-singlet ground state to an excited spin-triplet state, a much weaker but more precisely defined transition. Although it does not scatter as fast as the blue (only about 23,000 photons per atom per second), it allows us to cool our atoms to much colder temperatures, on the order of 1 microkelvin.
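As a rough cross-check, the textbook Doppler-limit formula T_D = \hbar \Gamma / (2 k_B), fed nothing but the two scattering rates quoted above, lands in the right ballpark. Here is my back-of-the-envelope version (order-of-magnitude only, not the lab’s numbers):

```python
# Doppler-limit estimate from the scattering rates quoted above
# (a back-of-the-envelope sketch, not the lab's numbers).
hbar = 1.0546e-34   # J s
kB   = 1.381e-23    # J / K

# The maximum scattering rate is Gamma/2, so Gamma ~ 2 x (photons per atom per second).
gamma_blue = 2 * 100e6   # 461nm transition
gamma_red  = 2 * 23e3    # 689nm transition

for name, gamma in [("blue (461nm)", gamma_blue), ("red (689nm)", gamma_red)]:
    T_doppler = hbar * gamma / (2 * kB)
    print(f"{name}: Doppler limit ~ {T_doppler:.1e} K")
# blue: ~8e-4 K, a fraction of a millikelvin; red: ~2e-7 K, below a microkelvin
```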

In addition to our red laser at 689nm, we have two other reds at 679nm and 707nm. These are necessary to plug “holes” in the blue transition, which eventually cause an atom to fall into long-lived states other than the ground state. It is generally true that the more complicated an atomic spectrum gets, the more “holes” there are to plug, and this is often the reason why certain atoms and molecules are harder to trap than others.

The green

After we have established a cold magneto-optical trap, it is time to pick out individual atoms from this cloud and load them into very tightly focused optical traps that we call tweezers.  Here, our green laser comes into play.  This laser’s wavelength is far away from any particular transition, as we do not want it to scatter any photons at all.  However, its large intensity creates a conservative trapping potential for the atom, allowing us to hold onto it and even move it around.  Furthermore, its wavelength is what we call “magic”, which means it is chosen such that the ground and excited state experience the same trapping potential.


The quite powerful green laser.  So powerful that you can see the beam in the air, like in the movies.

The invisible

Yet to be implemented are two more lasers slightly off the visible spectrum, on the ultraviolet and infrared sides. Our ultraviolet laser will be crucial to elevating our experiment from single-body to many-body quantum physics, as it will allow us to drive our atoms to very highly excited Rydberg states, which interact at long range. Our infrared laser will allow us to trap atoms in the extremely precise clock state under “magic” conditions.

 

The combination of strontium’s various optical pathways allows for a lot of new tricks beyond just cooling and trapping.  Having Rydberg states alongside narrow-line transitions, for example, has yet unexplored potential for quantum simulation.  It is a playground that is very exciting without being utterly overwhelming.  Stay tuned as we continue our exploration – maybe we’ll have a yellow laser next time too.

 

Machine learning the arXiv

Over the last year or so, the machine learning wave has really been sweeping through the field of condensed matter physics. Machine learning techniques have been applied to condensed matter physics before, but very sparsely and with little recognition. These days, I guess (partially) due to the general machine learning and AI hype, the number of such studies has skyrocketed (I admit to contributing to that…). I’ve been keeping track of this using the arXiv and Twitter (@Evert_v_N), but you should know about this website for getting an overview of the physics & machine learning papers: https://physicsml.github.io/pages/papers.html.

This effort of applying machine learning to physics is a serious attempt at trying to understand how such tools could be useful in a variety of ways. It isn’t very hard to get a neural network to learn ‘something’ from physics data, but it is really hard to find out what – and especially how – the network does that. That’s why toy cases such as the Ising model or the Kosterlitz-Thouless transition have been so important!

When you’re keeping track of machine learning and AI developments, you soon realize that there are examples out there of amazing feats. Being able to generate photo-realistic pictures given just a sentence, e.g. “a brown bird with golden speckles and red wings is sitting on a yellow flower with pointy petals”, is (I think) pretty cool. I can’t help but wonder if we’ll get to a point where we can ask it to generate “the ground state of the Heisenberg model on a Kagome lattice of 100×100”…

Another feat I want to mention, and the main motivation for this post, is that of being able to encode words as vectors. That doesn’t immediately seem like a big achievement, but it is once you want ‘similar’ words to have ‘similar’ vectors. That is, you intuitively understand that Queen and King are very similar, but differ basically only in gender. Can we teach that to a computer (read: neural network) by just having it read some text? Turns out we can. The general encoding of words to vectors is aptly named ‘Word2Vec’, and some of the top algorithms that do that were introduced here (https://arxiv.org/abs/1301.3781) and here (https://arxiv.org/abs/1310.4546). The neat thing is that we can actually do arithmetic with these words encoded as vectors, so that the network learns (with no other input than text!):

  • King – Man + Woman = Queen
  • Paris – France + Italy = Rome

In that spirit, I wondered if we can achieve the same thing with physics jargon. Everyone knows, namely, that “electrons + two dimensions + magnetic field = Landau levels”. But is that clear from condensed matter titles?

Try it yourself

If you decide at this point that the rest of the blog is too long, at least have a look here: everthemore.pythonanywhere.com or skip to the last section. That website demonstrates the main point of this post. If that sparks your curiosity, read on!

This post is mainly for entertainment, and so a small disclaimer is in order: in all of the results below, I am sure things can be improved upon. Consider this a ‘proof of principle’. However, I would be thrilled to see what kind of trained models you can come up with yourself! So for that purpose, all of the code (plus some bonus content!) can be found on this github repository: https://github.com/everthemore/physics2vec.

Harvesting the arXiv

The perfect dataset for our endeavor can be found in the form of the arXiv. I’ve written a small script (see github repository) that harvests the titles of a given section from the arXiv. It also has options for getting the abstracts, but I’ll leave that for a separate investigation. Note that in principle we could also get the source-files of all of these papers, but doing that in bulk requires a payment; and getting them one by one will 1) take forever and 2) probably get us banned.
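For a flavor of what such a harvester looks like, here is a stripped-down sketch querying the public arXiv API (the real script is in the github repository above; the category and batch sizes below are just placeholders):

```python
# Minimal arXiv-title harvester (an illustrative sketch; the real script is in
# the github repository linked above).
import time
import feedparser   # pip install feedparser

BASE = "http://export.arxiv.org/api/query"

def fetch_titles(category="cond-mat.str-el", batch=100, total=500):
    titles = []
    for start in range(0, total, batch):
        url = (f"{BASE}?search_query=cat:{category}"
               f"&start={start}&max_results={batch}")
        feed = feedparser.parse(url)                      # fetch and parse the Atom feed
        titles += [e.title.replace("\n ", " ") for e in feed.entries]
        time.sleep(3)                                     # be polite to the API
    return titles

print(len(fetch_titles()), "titles harvested")
```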

Collecting all this data of the physics:cond-mat subsection took right about 1.5 hours and resulted in 240737 titles and abstracts (I last ran this script on November 20th, 2017). I’ve filtered them by year and month, and you can see the result in Fig.1 below. Seems like we have some catching up to do in 2017 still (although as the inset shows, we have nothing to fear. November is almost over, but we still have the ‘getting things done before x-mas’ rush coming up!).


Figure 1: The number of papers in the cond-mat arXiv section over the years. We’re behind, but the year isn’t over yet! (Data up to Nov 20th 2017)

Analyzing n-grams

After tidying up the titles (removing LaTeX, converting everything to lowercase, etc.), the next thing to do is to train a language model on finding n-grams. N-grams are basically fixed n-word expressions such as ‘cooper pair’ (bigram) or ‘metal insulator transition’ (trigram). This makes it easier to train a Word2Vec encoding, since these phrases are fixed and can be considered a single word. The python module we’ll use for Word2Vec is gensim (https://radimrehurek.com/gensim/), and it conveniently has phrase-detection built-in. The language model it builds reports back to us the n-grams it finds, and assigns them a score indicating how certain it is about them. Notice that this is not the same as how frequently it appears in the dataset. Hence an n-gram can appear fewer times than another, but have a higher certainty because it always appears in the same combination. For example, ‘de-haas-van-alphen’ appears less than, but is more certain than, ‘cooper-pair’, because ‘pair’ does not always come paired (pun intended) with ‘cooper’. I’ve analyzed up to 4-grams in the analysis below.
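Schematically, that phrase-detection step looks something like this in gensim (an illustrative sketch; the thresholds below are placeholders rather than the settings behind the results that follow):

```python
# Schematic phrase detection with gensim (illustrative settings).
from gensim.models.phrases import Phrases, Phraser

# In reality: one tokenized, lowercased, LaTeX-stripped title per entry.
titles = [["metal", "insulator", "transition", "in", "thin", "films"],
          ["superconducting", "coplanar", "waveguide", "resonator", "design"]]

bigrams  = Phraser(Phrases(titles, min_count=5, threshold=10.0))
trigrams = Phraser(Phrases(bigrams[titles], min_count=5, threshold=10.0))

phrased_titles = [trigrams[bigrams[t]] for t in titles]
# On the real corpus, tokens like "metal_insulator_transition" come out merged into
# single "words", each with a score for how certain the model is about the phrase.
```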

I can tell you’re curious by now to find out what some of the most certain n-grams in cond-mat are (again, these are not necessarily the most frequent), so here are some interesting findings:

  • The most certain n-grams are all surname combos, Affleck-Kennedy-Lieb-Tasaki being the number 1. Kugel-Khomskii is the most certain 2-name combo and Einstein-Podolsky-Rosen the most certain 3-name combo.
  • The first certain non-name n-gram is a ‘quartz tuning fork’, followed by a ‘superconducting coplanar waveguide resonator’. Who knew.
  • The bigram ‘phys. rev.’ and trigram ‘phys. rev. lett.’ are relatively high up in the confidence lists. These seem to come from the “Comment on […]”-titles on the arXiv.
  • I learned that there is such a thing as a Lefschetz thimble. I also learned that those things are called thimbles in English (we (in Holland) call them ‘finger-hats’!).

In terms of frequency however, which is probably more of interest to us, the most dominant n-grams are Two-dimensional, Quantum dot, Phase transition, Magnetic field, One dimensional and Bose-Einstein (in descending order). It seems 2D is still more popular than 1D, and all in all the top n-grams do a good job at ‘defining’ condensed matter physics. I’ll refer you to the github repository code if you want to see a full list! You’ll find there a piece of code that produces wordclouds from the dominant words and n-grams too, such as this one:

A word cloud built from the dominant cond-mat words and n-grams.

For fun though, before we finally get to the Word2Vec encoding, I’ve also kept track of all of these as a function of year, so that we can now turn to finding out which bigrams have been gaining the most popularity. The table below shows the top 5 n-grams for the period 2010 – 2016 (not including 2017) and for the period 2015 – Nov 20th 2017.

2010 – 2016                        | 2015 – November 20th 2017
-----------------------------------|----------------------------------
Spin liquids                       | Topological phases & transitions
Weyl semimetals                    | Spin chains
Topological phases & transitions   | Machine learning
Surface states                     | Transition metal dichalcogenides
Transition metal dichalcogenides   | Thermal transport
Many-body localization             | Open quantum systems

Actually, the real number 5 in the left column was ‘Topological insulators’, but given number 3 I skipped it. Also, this top 5 includes a number 6 (!), which I just could not leave off given that everyone seems to have been working on MBL. If we really want to be early adopters though, taking only the last 1.8 years (2015 – now, Nov 20th 2017)  in the right column of the table shows some interesting newcomers. Surprisingly, many-body localization is not even in the top 20 anymore. Suffice it to say, if you have been working on anything topology-related, you have nothing to worry about. Machine learning is indeed gaining lots of attention, but we’ve yet to see if it doesn’t go the MBL-route (I certainly don’t hope so!). Quantum computing does not seem to be on the cond-mat radar, but I’m certain we would find that high up in the quant-ph arXiv section.

CondMat2Vec

Alright, finally time to use some actual neural networks for machine learning. As I said at the start of this post, what we’re about to do is try to train a network to encode/decode words into vectors, while simultaneously making sure that similar words (by meaning!) have similar vectors. Now that we have the n-grams, we want the Word2Vec algorithm to treat these as words by themselves (they are, after all, fixed combinations).

In the Word2Vec algorithm, we get to decide the length of the vectors that encode words ourselves. Larger vectors mean more freedom in encoding words, but also make it harder to learn similarity. In addition, we get to choose a window size, indicating how many words the algorithm will look ahead to analyze relations between words. Both of these parameters are free for you to play with if you have a look at the source code repository. For the website everthemore.pythonanywhere.com, I’ve uploaded a size 100 with window-size 10 model, which I found to produce sensible results. Sensible here means “based on my expectations”, such as the previous example of “2D + electrons + magnetic field = Landau levels”. Let’s ask our network some questions.
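Schematically, the training and querying steps look like this (a sketch; the size and window values are the ones behind the uploaded model, the remaining settings are placeholders):

```python
# Training and querying a Word2Vec model with gensim (an illustrative sketch).
from gensim.models import Word2Vec

def train_condmat2vec(phrased_titles):
    """phrased_titles: the list of tokenized titles with n-grams merged (previous step)."""
    return Word2Vec(phrased_titles,
                    size=100,      # vector length ("vector_size" in newer gensim versions)
                    window=10,     # how many surrounding words count as context
                    min_count=5,   # ignore words rarer than this
                    workers=4)

# Usage, once the model is trained on the real corpus (token names are guesses
# at how the n-grams end up keyed):
#   model.wv.most_similar("superconductor", topn=5)
#   model.wv.most_similar(positive=["two_dimensional", "electrons", "magnetic_field"], topn=2)
```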

First, as a simple check, let’s see what our encoding thinks some jargon is similar to:

  • Superconductor ~ Superconducting, Cuprate superconductor, Superconductivity, Layered superconductor, Unconventional superconductor, Superconducting gap, Cuprate, Weyl semimetal, …
  • Majorana ~ Majorana fermion, Majorana mode, Non-abelian, Zero-energy, braiding, topologically protected, …

It seems we could start to cluster words based on this. But the real test comes now, in the form of arithmetic. According to our network (I am listing the top two choices in some cases; the encoder outputs a list of similar vectors, ordered by similarity):

  • Majorana + Braiding = Non-Abelian
  • Electron + Hole = Exciton, Carrier
  • Spin + Magnetic field = Magnetization, Antiferromagnetic
  • Particle + Charge = Electron, Charged particle

And, sure enough:

  • 2D + electrons + magnetic field = Landau level, Magnetoresistance oscillation

The above is just a small sample of the things I’ve tried. See the link in the try it yourself section above if you want to have a go. Not all of the examples work nicely. For example, neither lattice + wave nor lattice + excitation nor lattice + force seems to result in anything related to the word ‘phonon’. I would guess that increasing the window size will help remedy this problem. Even better probably would be to include abstracts!

Outlook

I could play with this for hours, and I’m sure that by including the abstracts and tweaking the vector size (plus some more parameters I haven’t even mentioned) one could optimize this more. Once we have an optimized model, we could start to cluster the vectors to define research fields, visualizing the relations between n-grams (both suggestions thanks to Thomas Vidick and John Preskill!), and many other things. This post has become rather long already however, and I will leave further investigation to a possible future post. I’d be very happy to incorporate anything cool you find yourselves though, so please let me know!

Gently yoking yin to yang

The architecture at the University of California, Berkeley mystified me. California Hall evokes a Spanish mission. The main library consists of white stone pillared by Ionic columns. A sea-green building scintillates in the sunlight like a scarab. The buildings straddle the map of styles.


So do Berkeley’s quantum scientists, information-theory users, and statistical mechanics.

The chemists rove from abstract quantum information (QI) theory to experiments. Physicists experiment with superconducting qubits, trapped ions, and numerical simulations. Computer scientists invent algorithms for quantum computers to perform.

Few activities light me up more than bouncing from quantum group to info-theory group to stat-mech group, hunting commonalities. I was honored to bounce from group to group at Berkeley this September.

What a trampoline Berkeley has.

The groups fan out across campus and science, but I found compatibility. Including a collaboration that illuminated quantum incompatibility.

Quantum incompatibility originated in studies by Werner Heisenberg. He and colleagues cofounded quantum mechanics during the early 20th century. Measuring one property of a quantum system, Heisenberg intuited, can affect another property.

The most famous example involves position and momentum. Say that I hand you an electron. The electron occupies some quantum state represented by | \Psi \rangle. Suppose that you measure the electron’s position. The measurement outputs one of many possible values x (unless | \Psi \rangle has an unusual form, the form of a Dirac delta function).

But we can’t say that the electron occupies any particular point x = x_0 in space. Measurement devices have limited precision. You can measure the position only to within some error \varepsilon: x = x_0 \pm \varepsilon.

Suppose that, immediately afterward, you measure the electron’s momentum. This measurement, too, outputs one of many possible values. What probability q(p) dp does the measurement have of outputting some value p? We can calculate q(p) dp, knowing the mathematical form of | \Psi \rangle and knowing the values of x_0 and \varepsilon.

q(p) is a probability density, which you can think of as a set of probabilities. The density can vary with p. Suppose that q(p) varies little: The probabilities spread evenly across the possible p values. You have no idea which value your momentum measurement will output. Suppose, instead, that q(p) peaks sharply at some value p = p_0. You can likely predict the momentum measurement’s outcome.

The certainty about the momentum measurement trades off with the precision \varepsilon of the position measurement. The smaller the \varepsilon (the more precisely you measured the position), the greater the momentum’s unpredictability. We call position and momentum complementary, or incompatible.
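In symbols, this tradeoff is the textbook Heisenberg relation \Delta x \, \Delta p  \geq  \hbar / 2, with the position uncertainty \Delta x of order \varepsilon.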

You can’t measure incompatible properties, with high precision, simultaneously. Imagine trying to do so. Upon measuring the momentum, you ascribe a tiny range of momentum values p to the electron. If you measured the momentum again, an instant later, you could likely predict that measurement’s outcome: The second measurement’s q(p) would peak sharply (encode high predictability). But, in the first instant, you measure also the position. Hence, by the discussion above, q(p) would spread out widely. But we just concluded that q(p) would peak sharply. This contradiction illustrates that you can’t measure position and momentum, precisely, at the same time.

But you can simultaneously measure incompatible properties weakly. A weak measurement has an enormous \varepsilon. A weak position measurement barely spreads out q(p). If you want more details, ask a Quantum Frontiers regular; I’ve been harping on weak measurements for months.

Blame Berkeley for my harping this month. Irfan Siddiqi’s and Birgitta Whaley’s groups collaborated on weak measurements of incompatible observables. They tracked how the measured quantum state | \Psi (t) \rangle evolved in time (represented by t).

Irfan’s group manipulates superconducting qubits.1 The qubits sit in the physics building, a white-stone specimen stamped with an egg-and-dart motif. Across the street sit chemists, including members of Birgitta’s group. The experimental physicists and theoretical chemists teamed up to study a quantum lack of teaming up.


The experiment involved one superconducting qubit. The qubit has properties analogous to position and momentum: A ball, called the Bloch ball, represents the set of states that the qubit can occupy. Imagine an arrow pointing from the sphere’s center to any point in the ball. This Bloch vector represents the qubit’s state. Consider an arrow that points upward from the center to the surface. This arrow represents the qubit state | 0 \rangle. | 0 \rangle is the quantum analog of the possible value 0 of a bit, or unit of information. The analogous downward-pointing arrow represents the qubit state | 1 \rangle, analogous to 1.

Infinitely many axes intersect the sphere. Different axes represent different observables that Irfan’s group can measure. Nonparallel axes represent incompatible observables. For example, the x-axis represents an observable \sigma_x analogous to position. The y-axis represents an observable \sigma_y analogous to momentum.


Siddiqi lab, decorated with the trademark for the paper’s tug-of-war between incompatible observables. Photo credit: Leigh Martin, one of the paper’s leading authors.

Irfan’s group stuck their superconducting qubit in a cavity, or box. The cavity contained light that interacted with the qubit. The interactions transferred information from the qubit to the light: The light measured the qubit’s state. The experimentalists controlled the interactions, controlling the axes “along which” the light was measured. The experimentalists weakly measured along two axes simultaneously.

Suppose that the axes coincided—say, at the x-axis \hat{x}. The qubit would collapse to the state | \Psi \rangle = \frac{1}{ \sqrt{2} } ( | 0 \rangle + | 1 \rangle ), represented by the arrow that points along \hat{x} to the sphere’s surface, or to the state | \Psi \rangle = \frac{1}{ \sqrt{2} } ( | 0 \rangle - | 1 \rangle ), represented by the opposite arrow.


(Projection of) the Bloch Ball after the measurement. The system can access the colored points. The lighter a point, the greater the late-time state’s weight on the point.
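For readers who like to see the linear algebra, here is a small numerical aside of my own (not part of the experiment): the observables attached to the x- and y-axes fail to commute, and an x-axis measurement collapses the qubit onto exactly the two arrows just described.

```python
# Incompatibility in a few lines of linear algebra (an aside, not from the paper).
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

commutator = sigma_x @ sigma_y - sigma_y @ sigma_x
print(np.allclose(commutator, 2j * sigma_z))     # True: [sigma_x, sigma_y] = 2i sigma_z != 0

plus_x  = np.array([1,  1]) / np.sqrt(2)         # (|0> + |1>)/sqrt(2)
minus_x = np.array([1, -1]) / np.sqrt(2)         # (|0> - |1>)/sqrt(2)
print(np.allclose(sigma_x @ plus_x,  plus_x))    # True: eigenvector with eigenvalue +1
print(np.allclose(sigma_x @ minus_x, -minus_x))  # True: eigenvector with eigenvalue -1
```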

Let \hat{x}' denote an axis near \hat{x}—say, 18° away. Suppose that the group weakly measured along \hat{x} and \hat{x}'. The state would partially collapse. The system would access points in the region straddled by \hat{x} and \hat{x}', as well as points straddled by - \hat{x} and - \hat{x}'.


Finally, suppose that the group weakly measured along \hat{x} and \hat{y}. These axes stand in for position and momentum. The state would, loosely speaking, swirl around the Bloch ball.


The Berkeley experiment illuminates foundations of quantum theory. Incompatible observables, physics students learn, can’t be measured simultaneously. This experiment blasts our expectations, using weak measurements. But the experiment doesn’t just destroy. It rebuilds the blast zone, by showing how | \Psi (t) \rangle evolves.

“Position” and “momentum” can hang together. So can experimentalists and theorists, physicists and chemists. So, somehow, can the California mission and the Ionic columns. Maybe I’ll understand the scarab building when we understand quantum theory.2

With thanks to Birgitta’s group, Irfan’s group, and the rest of Berkeley’s quantum/stat-mech/info-theory community for its hospitality. The Bloch-sphere figures come from http://www.nature.com/articles/nature19762.

1The qubit is the quantum analog of a bit. The bit is the basic unit of information. A bit can be in one of two possible states, which we can label as 0 and 1. Qubits can manifest in many physical systems, including superconducting circuits. Such circuits are tiny quantum circuits through which current can flow, without resistance, forever.

2Soda Hall dazzled but startled me.