Rock-paper-scissors, granite-clock-idea

I have a soft spot for lamassu. Ten-foot-tall statues of these winged bull-men guarded the entrances to ancient Assyrian palaces. Show me lamassu, or apkallu—human-shaped winged deities—or other reliefs from the Neo-Assyrian capital of Nineveh, and you’ll have trouble showing me the door.

Assyrian art fills a gallery in London’s British Museum. Lamassu flank the gallery’s entrance. Carvings fill the interior: depictions of soldiers attacking, captives trudging, and kings hunting lions. The artwork’s vastness, its endurance, and the contact with a three-thousand-year-old civilization floor me. I tore myself away as the museum closed one Sunday night.


I visited the British Museum the night before visiting Jonathan Oppenheim’s research group at University College London (UCL). Jonathan combines quantum information theory with thermodynamics. He and others co-invented thermodynamic resource theories (TRTs), which Quantum Frontiers regulars will know of. TRTs are quantum-information-theoretic models for systems that exchange energy with their environments.

Energy is conjugate to time: Hamiltonians, mathematical objects that represent energy, represent also translations through time. We measure time with clocks. Little wonder that one can study quantum clocks using a model for energy exchanges.
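In symbols (the standard statement from an introductory quantum-mechanics course): a system with Hamiltonian \hat{H}, prepared in the state | \psi(0) \rangle, evolves as

| \psi(t) \rangle = U(t) \, | \psi(0) \rangle \, , \qquad U(t) = e^{ - i \hat{H} t / \hbar } \, .

The Hamiltonian generates the translation through time; energy and time enter the evolution as a conjugate pair.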

Mischa Woods, Ralph Silva, and Jonathan used a resource theory to design an autonomous quantum clock. “Autonomous” means that the clock contains all the parts it needs to operate, needs no periodic winding-up, etc. When might we want an autonomous clock? When building quantum devices that operate independently of classical engineers. Or when performing a quantum computation: Computers must perform logical gates at specific times.


Wolfgang Pauli and others studied quantum clocks, the authors recall. How, Pauli asked, would an ideal clock look? Its Hamiltonian, \hat{H}_{\rm C}, would have eigenstates | E \rangle. The labels E denote possible amounts of energy.

The Hamiltonian would be conjugate to a “time operator” \hat{t}. Let | \theta \rangle denote an eigenstate of \hat{t}. This “time state” would equal an even superposition over the | E \rangle’s. The clock would occupy the state | \theta \rangle at time t_\theta.
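Schematically (up to normalization, and with a sum standing in for an integral in the truly ideal case), such a time state has the form

| \theta \rangle \; \propto \; \sum_E e^{ - i E t_\theta / \hbar } \, | E \rangle \, .

Every | E \rangle carries a weight of the same magnitude; only the phases differ. That is the "even superposition" described above.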

Imagine measuring the clock, to learn the time, or controlling another system with the clock. The interaction would disturb the clock, changing the clock’s state. The disturbance wouldn’t mar the clock’s timekeeping, if the clock were ideal. What would enable an ideal clock to withstand the disturbances? The ability to have any amount of energy: E must stretch from - \infty to \infty. Such clocks can’t exist.

Approximations to them can. Mischa, Ralph, and Jonathan designed a finite-size clock, then characterized how accurately the clock mimics the ideal. (Experts: The clock corresponds to a Hilbert space of finite dimensionality d. The clock begins in a Gaussian state that peaks at one time state | \theta \rangle. The finite-width Gaussian offers more stability than a single time state would.)

Disturbances degrade our ability to distinguish instants by measuring the clock. Imagine gazing at a kitchen clock through blurry lenses: You couldn’t distinguish 6:00 from 5:59 or 6:01. Disturbances also hinder the clock’s ability to implement processes, such as gates in a computation, at desired instants.

Mischa & co. quantified these degradations. The errors made by the clock, they found, decay inverse-exponentially with the clock’s size: Grow the clock a little, and the errors shrink a lot.


Time has degraded the lamassu, but only a little. You can distinguish feathers in their wings and strands in their beards. People portray such artifacts as having “withstood the flow of time,” or “evaded,” or “resisted.” Such portrayals have never appealed to me. I prefer to think of the lamassu as surviving not because they clash with time, but because they harmonize with it. The prospect of harmonizing with time—whatever that means—has enticed me throughout my life. The prospect partially underlies my research into time—childishly and foolishly, perhaps, as I recognize if I remove my blurry lenses before gazing in the mirror.

The creation of lasting works, like lamassu, has enticed me throughout my life. I’ve scrapbooked, archived, and recorded, and tended memories as though they were Great-Grandma’s cookbook. Ancient civilizations began alluring me at age six, partially due to artifacts’ longevity. No wonder I study the second law of thermodynamics.

Yet doing theoretical physics makes no sense from another perspective. The ancient Egyptians sculpted granite, when they could afford it. Gudea, king of the ancient city-state of Lagash, immortalized himself in diorite. I fashion ideas, which lack substance. Imagine playing, rather than rock-paper-scissors, granite-diorite-idea. The idea wouldn’t stand a chance.

Would it? Because an idea lacks substance, it can manifest in many forms. Plato’s cave allegory has manifested as a story, as classroom lectures, on handwritten pages, on word processors and websites, in cartloads of novels, in the film The Matrix, in one of the four most memorable advertisements I received from colleges as a high-school junior, and elsewhere. Plato’s allegory has survived since about the fourth century BCE. King Ashurbanipal’s lion-hunt reliefs have survived for only about 200 years longer.

The lion-hunt reliefs—and lamassu—exude a grandness, a majesty that’s attracted me as their longevity has. The nature of time and the perfect clock have as much grandness. Leaving the British Museum’s Assyrian gallery at 6 PM one Sunday, I couldn’t have asked to find myself, 24 hours later, anywhere more fitting than in a theoretical-physics conversation.


With thanks to Jonathan, to Álvaro Martín-Alhambra, and to Mischa for their hospitality at UCL; to Ada Cohen for the “Art history of ancient Egypt and the ancient Near East” course for which I’d been hankering for years; to my brother, for transmitting the ancient-civilizations bug; and to my parents, who fed the infection with museum visits.

Click here for a follow-up to the quantum-clock paper.

The Ground Space of Babel

Librarians are committing suicide.

So relates the narrator of the short story “The Library of Babel.” The Argentine magical realist Jorge Luis Borges wrote the story in 1941.

Librarians are committing suicide partially because they can’t find the books they seek. The librarians are born in, and curate, a library called “infinite” by the narrator. The library consists of hexagonal cells, of staircases, of air shafts, and of closets for answering nature’s call. The narrator has never heard of anyone’s finding an edge of the library. Each hexagon houses 20 shelves, each of which houses 32 books, each of which contains 410 pages, each of which contains 40 lines, each of which consists of about 80 symbols. Every symbol comes from a set of 25: 22 letters, the period, the comma, and the space.

The library, a sage posited, contains every combination of the 25 symbols that satisfies the 410-40-and-80-ish requirement. His compatriots rejoiced:

All men felt themselves to be the masters of an intact and secret treasure. There was no personal or world problem whose eloquent solution did not exist in some hexagon. [ . . . ] a great deal was said about the Vindications: books of apology and prophecy which vindicated for all time the acts of every man in the universe and retained prodigious arcana for his future. Thousands of the greedy abandoned their sweet native hexagons and rushed up the stairways, urged on by the vain intention of finding their Vindication.

Probability punctured their joy: “the possibility of a man’s finding his Vindication, or some treacherous variation thereof, can be computed as zero.”

Many-body quantum physicists can empathize with Borges’s librarian.

A handful of us will huddle over a table or cluster in front of a chalkboard.

“Has anyone found this Hamiltonian’s ground space?” someone will ask.1


A Hamiltonian is an observable, a measurable property. Consider a quantum system S, such as a set of particles hopping between atoms. We denote the system’s Hamiltonian by H. H determines how the system’s state changes in time. A musical about H swept Broadway last year.

A quantum system’s energy, E, might assume any of many possible values. H encodes the possible values. The least possible value, E0, we call the ground-state energy.

Under what condition does S have an amount E0 of energy? S must occupy a ground state. Consider Olympic snowboarder Shaun White in a half-pipe. He has kinetic energy, or energy of motion, when sliding along the pipe. He gains gravitational energy upon leaving the ground. He has little energy when sitting still on the snow. A quantum analog of that sitting constitutes a ground state.2

Consider, for example, electrons in a magnetic field. Each electron has a property called spin, illustrated with an arrow. The arrow’s direction represents the spin’s state. The system occupies a ground state when every arrow points in the same direction as the magnetic field.
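For concreteness, a minimal sketch of such a model (the notation is mine: N spins, a field of strength B > 0 along the z-axis):

\hat{H} = - B \sum_{i = 1}^N \hat{\sigma}_z^{(i)} \, , \qquad E_0 = - N B \, , \qquad | \text{ground} \rangle = | \uparrow \uparrow \cdots \uparrow \rangle \, .

Flipping any one spin against the field costs an energy 2B, so the all-aligned state sits at the bottom.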

Shaun White has as much energy, sitting on the ground in the half-pipe’s center, as he has sitting at the bottom of an edge of the half-pipe. Similarly, a quantum system might have multiple ground states. These states form the ground space.

“Has anyone found this Hamiltonian’s ground space?”


“Find” means, here, “identify the form of.” We want to derive a mathematical expression for the quantum analog of “sitting still, at the bottom of the half-pipe.”

“Find” often means “locate.” How do we locate an object such as a library? By identifying its spatial coordinates. We specify coordinates relative to directions, such as north, east, and up. We specify coordinates also when “finding” ground states.

Libraries occupy the physical space we live in. Ground states occupy an abstract mathematical space, a Hilbert space. The Hilbert space consists of the (pure) quantum states accessible to the system—loosely speaking, how the spins can orient themselves.

Libraries occupy a three-dimensional space. An N-spin system corresponds to a 2^N-dimensional Hilbert space. Finding a ground state amounts to identifying 2^N coordinates. The problem’s size grows exponentially with the number of particles.

An exponential quantifies also the size of the librarian’s problem. Imagine trying to locate some book in the Library of Babel. How many books should you expect to have to check? How many books does the library hold? Would you have more hope of finding the book, wandering the Library of Babel, or finding a ground state, wandering the Hilbert space? (Please take this question with a grain of whimsy, not as instructions for calculating ground states.)

A book’s first symbol has one of 25 possible values. So does the second symbol. The pair of symbols has one of 25 \times 25 = 25^2 possible values. A trio has one of 25^3 possible values, and so on.

How many symbols does a book contain? About \frac{ 410 \text{ pages} }{ 1 \text{ book} }  \:  \frac{ 40 \text{ lines} }{ 1 \text{ page} }  \:  \frac{ 80 \text{ characters} }{ 1 \text{ line} }  \approx  10^6 \, , or a million. The number of books grows exponentially with the number of symbols per book: The library contains about 25^{ 10^6 } books. You contain only about 10^{28} atoms. No wonder librarians are committing suicide.

Do quantum physicists deserve more hope? Physicists want to find ground states of chemical systems. Example systems are discussed here and here. The second paper refers to 65 electrons distributed across 57 orbitals (spatial regions). How large a Hilbert space does this system have? Each electron has a spin that, loosely speaking, can point upward or downward (that corresponds to a two-dimensional Hilbert space). One might expect each electron to correspond to a Hilbert space of dimensionality (57 \text{ orbitals}) \frac{ 2 \text{ spin states} }{ 1 \text{ orbital} } = 114. The 65 electrons would correspond to a Hilbert space \mathcal{H}_{\rm tot} of dimensionality 114^{65}.

But no two electrons can occupy the same one-electron state, due to Pauli’s exclusion principle. Hence \mathcal{H}_{\rm tot} has dimensionality {114 \choose 65} (“114 choose 65”), the number of ways in which you can select 65 states from a set of 114 states.

{114 \choose 65} equals approximately 10^{33}. Mathematica (a fancy calculator) can print a one followed by 33 zeroes. Mathematica refuses to print the number 25^{ 10^6 } of Babel’s books. Pity the librarians more than the physicists.
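You can check both counts with a few lines of Python (a back-of-the-envelope sketch of mine; none of it comes from the papers linked above):

```python
import math

# Hilbert-space dimension for 65 electrons in 114 spin-orbitals,
# with Pauli exclusion: "114 choose 65".
dim = math.comb(114, 65)                      # requires Python 3.8+
print(f"114 choose 65 ~ 10^{math.log10(dim):.1f}")

# The Library of Babel: 25 symbols per slot, about 10^6 slots per book,
# so about 25^(10^6) books.
digits = int(10**6 * math.log10(25)) + 1
print(f"25^(10^6) is a number with about {digits:,} digits")
```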


Pity us less when we have quantum computers (QCs). They could find ground states far more quickly than today’s supercomputers. But building QCs is taking about as long as Borges’s narrator spent wandering the library, searching for “the catalogue of catalogues.”

What would Borges and his librarians make of QCs? QCs will be able to search unstructured databases quickly, via Grover’s algorithm. Babel’s library lacks structure. Grover’s algorithm outperforms classical algorithms just when fed large databases. 25^{ 10^6 } books constitute a large database. Researchers seek a “killer app” for QCs. Maybe Babel’s librarians could vindicate quantum computing and quantum computing could rescue the librarians. If taken with a grain of magical realism.
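Concretely (the textbook query counts for searching an unstructured database of N items that contains one marked item):

\text{classical search: } \sim N/2 \text{ lookups on average} \, ; \qquad \text{Grover: } \sim \frac{ \pi }{ 4 } \sqrt{N} \text{ queries} \, .

With N \approx 25^{ 10^6 }, the square root is 25^{ 5 \times 10^5 }: a genuine quadratic speedup, though the search would still outlast any librarian.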


1Such questions remind me of an Uncle Alfred who’s misplaced his glasses. I half-expect an Auntie Muriel to march up to us physicists. She, sensible in plaid, will cross her arms.

“Where did you last see your ground space?” she’ll ask. “Did you put it on your dresser before going to bed last night? Did you use it at breakfast, to read the newspaper?”

We’ll bow our heads and shuffle off to double-check the kitchen.

2More accurately, a ground state parallels Shaun White’s lying on the ground, stone-cold.

What makes extraordinary science extraordinary?

My article for this month appears on Sean Carroll’s blog, Preposterous Universe. Sean is a theoretical physicist who practices cosmology at Caltech. He interfaces with philosophy, which tinges the question I confront: What distinguishes extraordinary science from good science? The topic seemed an opportunity to take Sean up on an invitation to guest-post on Preposterous Universe. Head there for my article. Thanks to Sean for hosting!


Panza’s paradox

I finished reading a translation of Don Quixote this past spring. Miguel de Cervantes wrote the novel during the 1600s. The hero, a Spanish gentleman, believes the tales of chivalry’s golden days. He determines to outdo his heroes as a knight. Don Quixote enlists a peasant, Sancho Panza, to serve as his squire. Bony Don Quixote quotes classical texts; tubby Sancho Panza can’t sign his name. The pair roams the countryside, seeking adventures.

Don Quixote might have sold more copies than any other novel in history. Historians have dubbed Don Quixote “the first modern novel”; “quixotic” appears in English dictionaries; and artists and writers still spin off the story. Don Quixote reverberates throughout the four centuries that have followed it.

Cervantes, I discovered, had grasped a paradox that mathematicians wouldn’t expose until three centuries later.


Artists continue to spin off Don Quixote.

Don Quixote will vanquish so many villains, the pair expects, that rulers will shower gifts on him. Someone will bequeath him a kingdom or an empire. Don Quixote promises to transfer part of his land to Sancho. Sancho expects to govern an ínsula, or island.

Sancho’s expectation amuses a duke and duchess. They pretend to grant Sancho an ínsula as a joke. How would such a simpleton rule? they wonder. They send servants and villagers to Sancho with fabricated problems. Sancho arbitrates the actors’ cases. Grossman translates one case as follows:

the first [case] was an enigma presented to him by a stranger, who said:
“Señor, a very large river divided a lord’s lands into two parts [ . . . ] a bridge crossed this river, and at the end of it was a gallows and a kind of tribunal hall in which there were ordinarily four judges who applied the law set down by the owner of the river, the bridge, and the lands, which was as follows: ‘If anyone crosses this bridge from one side to the other, he must first take an oath as to where he is going and why; and if he swears the truth, let him pass; and if he tells a lie, let him die by hanging on the gallows displayed there, with no chance of pardon.’ Knowing this law and its rigorous conditions, many people crossed the bridge, and then, when it was clear that what they swore was true, the judges let them pass freely. It so happened, then, that a man once took the oath, and he swore and said that because of the oath he was going to die on the gallows, and he swore to nothing else. The judges studied the oath and said: ‘If we allow this man to pass freely, he lied in his oath, and according to the law he must die; and if we hang him, he swore that he was going to die on this gallows, and having sworn the truth, according to the same law he must go free.’ Señor Governor, the question for your grace is what should the judges do with the man.”

Cervantes formulated a paradox that looks, to me, equivalent to Russell’s barber paradox. Bertrand Russell contributed to philosophy during the early 1900s. He concocted an argument called the Russell-Zermelo paradox, which I’ll describe later. An acquaintance tried to encapsulate the paradox as follows: Consider an adult male barber who shaves all the men, and only the men, who do not shave themselves. Does the barber shave himself?

Suppose that the barber doesn’t. (Analogously, suppose that the smart aleck in Panza’s paradox doesn’t lie.) The barber is then a man who doesn’t shave himself, so he is a man whom the barber shaves. (The smart aleck, having sworn truly, must go free.) Hence the barber must shave himself. (Hence the smart aleck’s oath that he would hang proves false; he lies.) But we assumed that the barber doesn’t shave himself. (But we assumed that the smart aleck doesn’t lie.) Stalemate.


A barber plays a role in Don Quixote as in the Russell-Zermelo-like paradox. But the former barber has a wash basin that Don Quixote mistakes for a helmet.

Philosophers and mathematicians have debated to what extent the barber paradox illustrates the Russell-Zermelo paradox. Russell formulated the paradox in 1902. The mathematician Ernst Zermelo formulated the paradox around the same time. Mathematicians had just developed the field of set theory. A set is a collection of objects. Examples include the set of positive numbers, the set of polygons, and the set of readers who’ve looked at a Quantum Frontiers post.

Russell and Zermelo described a certain set \mathcal{S} of sets, a certain collection of sets. Let’s label the second-tier sets S_j  =  S_1, S_2, S_3, etc.


Each second-tier set S_j can contain elements. The elements can include third-tier sets s^{(j)}_k = s^{(j)}_1 ,  s^{(j)}_2, s^{(j)}_3, etc.


But no third-tier set s^{(j)}_k equals the second-tier set S_j. That is, no second-tier set S_j is an element of itself.


Let \mathcal{S} contain every set that does not contain itself. Does the first-tier set \mathcal{S} contain itself?


Suppose that it does: \mathcal{S} = S_j for some j. \mathcal{S} is an element of itself. But, we said, “no second-tier set S_j is an element of itself.” So \mathcal{S} must not be an element of itself. But \mathcal{S} “contain[s] every set that does not contain itself.” So \mathcal{S} must contain itself. But we just concluded that \mathcal{S} doesn’t. Stalemate.
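In symbols, the whole argument compresses into one line:

\mathcal{S} \; := \; \{ \, S : S \notin S \, \} \qquad \Rightarrow \qquad \mathcal{S} \in \mathcal{S} \iff \mathcal{S} \notin \mathcal{S} \, .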

The Stanford Encyclopedia of Philosophy synopsizes the Russell-Zermelo paradox: “the paradox arises within naïve set theory by considering the set of all sets that are not members of themselves. Such a set appears to be a member of itself if and only if it is not a member of itself.”

One might resolve the Russell-Zermelo paradox by concluding that no set \mathcal{S} exists. One might resolve the barber paradox by concluding that no such barber exists. How does Sancho resolve what I’ll dub Panza’s paradox?

He initially decrees, “‘let the part of the man that swore the truth pass freely, and hang the part that told a lie.’”1 The petitioner protests: Dividing the smart aleck will kill him. But the law suggests that the smart aleck should live.

Sancho revises his decree:

“since the reasons for condemning him or sparing him are balanced perfectly, they should let him pass freely, for doing good is always more praiseworthy than doing evil, and I’d sign this with my own name if I knew how to write, and in this case I haven’t said my own idea but a precept that came to mind, one of many that was given to me by my master, Don Quixote [ . . . ] when the law is in doubt, I should favor and embrace mercy.”

One can resolve the barber’s paradox by concluding that such a barber cannot exist. Sancho resolves Panza’s paradox by concluding that the landowner’s law cannot govern all bridge-crossings. The smart aleck’s crossing is what computer scientists would call an “edge case”: a case that falls outside the jurisdiction of the most-often-used part of a rule. One must specify explicitly how to process edge cases, when writing computer programs. Sancho codes the edge case, supplementing the law.
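Here is a toy rendering of that supplement in Python (entirely my own sketch: the function, its argument, and the string verdicts are invented for illustration). The law gets applied wherever it yields a consistent verdict; the smart aleck’s self-referential oath falls through to the edge case, mercy.

```python
def judge(oath_true_given_verdict):
    """Apply the bridge law; default to mercy on the edge case.

    `oath_true_given_verdict` maps a candidate verdict ('pass' or 'hang')
    to whether the traveler's oath would then be true.
    The law: a true oath earns 'pass'; a false oath earns 'hang'.
    """
    consistent = [verdict for verdict in ('pass', 'hang')
                  if (verdict == 'pass') == oath_true_given_verdict(verdict)]
    if consistent:
        return consistent[0]
    return 'mercy'   # Sancho's supplement: the case the law never covered

# An honest traveler ("I'm going to the market"): true whatever the verdict.
print(judge(lambda verdict: True))               # -> 'pass'
# A liar: false whatever the verdict.
print(judge(lambda verdict: False))              # -> 'hang'
# The smart aleck ("I will die on that gallows"): true only if hanged.
print(judge(lambda verdict: verdict == 'hang'))  # -> 'mercy'
```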


Upon starting to read about Sancho’s trial, I sat bolt upright. I ran to my laptop upon finishing. Miguel de Cervantes had intuited, during the 1600s, a paradox not articulated by mathematicians until the 1900s. Surely, the literati had pounced on Cervantes’s foresight?

Mathematics writer Martin Gardner had. I found also two relevant slides in a Powerpoint and three relevant pages in an article. But more critics classified Panza’s paradox as an incarnation of the liar’s paradox than invoked Russell.

Scholars have credited Cervantes with anticipating, and initiating, literary waves that have propagated for four centuries. Perhaps we should credit him with anticipating mathematics not formalized for three.


1This decree evokes the story of King Solomon and the baby. Two women bring a baby to King Solomon. Each woman claims the baby as hers. “Cut the baby in two,” Solomon rules, “and give half to each woman.” One woman assents. The other cries, “No! Let her have the child; just don’t kill the baby.” The baby, Solomon discerns, belongs to the second woman. A mother would rather see her child in someone else’s home than see her child killed. Sancho, like Solomon, rules that someone be divided in two. But Sancho, lacking Solomon’s wisdom, misapplies Solomon’s tactic.

Machine learning the arXiv

Over the last year or so, the machine learning wave has really been sweeping through the field of condensed matter physics. Machine learning techniques have been applied to condensed matter physics before, but very sparsely and with little recognition. These days, I guess (partially) due to the general machine learning and AI hype, the number of such studies has skyrocketed (I admit to contributing to that..). I’ve been keeping track of this using the arXiv and Twitter (@Evert_v_N), but you should know about this website for getting an overview of the physics & machine learning papers:

This effort of applying machine learning to physics is a serious attempt to understand how such tools could be useful in a variety of ways. It isn’t very hard to get a neural network to learn ‘something’ from physics data, but it is really hard to find out what – and especially how – the network does that. That’s why toy cases such as the Ising model or the Kosterlitz-Thouless transition have been so important!

When you’re keeping track of machine learning and AI developments, you soon realize that there are examples out there of amazing feats. Being able to generate photo-realistic pictures given just a sentence, e.g. “a brown bird with golden speckles and red wings is sitting on a yellow flower with pointy petals”, is (I think..) pretty cool. I can’t help but wonder if we’ll get to a point where we can ask it to generate “the ground state of the Heisenberg model on a Kagome lattice of 100×100”…

Another feat I want to mention, and the main motivation for this post, is that of being able to encode words as vectors. That doesn’t immediately seem like a big achievement, but it is once you want to have ‘similar’ words have ‘similar’ vectors. That is, you intuitively understand that Queen and King are very similar, but differ basically only in gender. Can we teach that to a computer (read: neural network) by just having it read some text? Turns out we can. The general encoding of words to vectors is aptly named ‘Word2Vec’, and some of the top algorithms that do that were introduced here and here. The neat thing is that we can actually do arithmetic with these words encoded as vectors, so that the network learns (with no other input than text!):

  • King – Man + Woman = Queen
  • Paris – France + Italy = Rome

In that spirit, I wondered if we can achieve the same thing with physics jargon. After all, everyone knows that “electrons + two dimensions + magnetic field = Landau levels”. But is that clear from condensed matter titles?

Try it yourself

If you decide at this point that the rest of the blog is too long, at least have a look here, or skip to the last section. That website demonstrates the main point of this post. If that sparks your curiosity, read on!

This post is mainly for entertainment, and so a small disclaimer is in order: in all of the results below, I am sure things can be improved upon. Consider this a ‘proof of principle’. However, I would be thrilled to see what kind of trained models you can come up with yourself! So for that purpose, all of the code (plus some bonus content!) can be found on this github repository.

Harvesting the arXiv

The perfect dataset for our endeavor can be found in the form of the arXiv. I’ve written a small script (see github repository) that harvests the titles of a given section from the arXiv. It also has options for getting the abstracts, but I’ll leave that for a separate investigation. Note that in principle we could also get the source-files of all of these papers, but doing that in bulk requires a payment; and getting them one by one will 1) take forever and 2) probably get us banned.
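For a flavor of what harvesting can look like, here is a minimal sketch using arXiv’s public API (this is not the script in the repository, and the category is just an example; serious bulk harvesting should go through arXiv’s OAI-PMH interface instead):

```python
import time
import urllib.parse
import feedparser  # pip install feedparser

API = "http://export.arxiv.org/api/query"

def fetch(category="cond-mat.str-el", batch=100, total=300):
    """Yield (published date, title, abstract) for recent papers in `category`."""
    for start in range(0, total, batch):
        query = urllib.parse.urlencode({
            "search_query": f"cat:{category}",
            "start": start,
            "max_results": min(batch, total - start),
            "sortBy": "submittedDate",
            "sortOrder": "descending",
        })
        feed = feedparser.parse(f"{API}?{query}")
        for entry in feed.entries:
            yield entry.published, entry.title, entry.summary
        time.sleep(3)  # be polite to the API between requests

if __name__ == "__main__":
    for published, title, _ in fetch(total=10):
        print(published[:10], title.replace("\n", " "))
```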

Collecting all this data of the physics:cond-mat subsection took right about 1.5 hours and resulted in 240737 titles and abstracts (I last ran this script on November 20th, 2017). I’ve filtered them by year and month, and you can see the result in Fig.1 below. Seems like we have some catching up to do in 2017 still (although as the inset shows, we have nothing to fear. November is almost over, but we still have the ‘getting things done before x-mas’ rush coming up!).


Figure 1: The number of papers in the cond-mat arXiv section over the years. We’re behind, but the year isn’t over yet! (Data up to Nov 20th 2017)

Analyzing n-grams

After tidying up the titles (removing LaTeX, converting everything to lowercase, etc.), the next thing to do is to train a language model on finding n-grams. N-grams are basically fixed n-word expressions such as ‘cooper pair’ (bigram) or ‘metal insulator transition’ (trigram). This makes it easier to train a Word2Vec encoding, since these phrases are fixed and can be considered a single word. The python module we’ll use for Word2Vec is gensim, and it conveniently has phrase-detection built in. The language model it builds reports back to us the n-grams it finds, and assigns them a score indicating how certain it is about them. Notice that this is not the same as how frequently it appears in the dataset. Hence an n-gram can appear fewer times than another, but have a higher certainty because it always appears in the same combination. For example, ‘de-haas-van-alphen’ appears fewer times than, but is more certain than, ‘cooper-pair’, because ‘pair’ does not always come paired (pun intended) with ‘cooper’. I’ve analyzed up to 4-grams in the analysis below.
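In gensim, the phrase detection takes only a few lines (a sketch; parameter names and defaults have shifted between gensim versions, so treat the numbers as placeholders):

```python
from gensim.models.phrases import Phrases, Phraser

# `titles` is assumed to be a list of cleaned, lowercased, tokenized titles,
# e.g. [["metal", "insulator", "transition", "in", ...], ...].
pass1 = Phraser(Phrases(titles, min_count=5, threshold=10.0))          # bigrams
pass2 = Phraser(Phrases(pass1[titles], min_count=5, threshold=10.0))   # up to 4-grams

phrased_titles = [pass2[pass1[t]] for t in titles]
# Tokens such as "cooper_pair" or "metal_insulator_transition" now behave
# as single words in everything downstream.
```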

I can tell you’re curious by now to find out what some of the most certain n-grams in cond-mat are (again, these are not necessarily the most frequent), so here are some interesting findings:

  • The most certain n-grams are all surname combos, Affleck-Kennedy-Lieb-Tasaki being the number 1. Kugel-Khomskii is the most certain 2-name combo and Einstein-Podolsky-Rosen the most certain 3-name combo.
  • The first certain non-name n-gram is a ‘quartz tuning fork’, followed by a ‘superconducting coplanar waveguide resonator’. Who knew.
  • The bigram ‘phys. rev.’ and trigram ‘phys. rev. lett.’ are relatively high up in the confidence lists. These seem to come from the “Comment on […]”-titles on the arXiv.
  • I learned that there is such a thing as a Lefschetz thimble. I also learned that those things are called thimbles in English (we (in Holland) call them ‘finger-hats’!).

In terms of frequency however, which is probably more of interest to us, the most dominant n-grams are Two-dimensional, Quantum dot, Phase transition, Magnetic field, One dimensional and Bose-Einstein (in descending order). It seems 2D is still more popular than 1D, and all in all the top n-grams do a good job at ‘defining’ condensed matter physics. I’ll refer you to the github repository code if you want to see a full list! You’ll find there a piece of code that produces wordclouds from the dominant words and n-grams too, such as this one:


For fun though, before we finally get to the Word2Vec encoding, I’ve also kept track of all of these as a function of year, so that we can now turn to finding out which bigrams have been gaining the most popularity. The table below shows the top 5 n-grams for the period 2010 – 2016 (not including 2017) and for the period 2015 – Nov 20th 2017.


2010 – 2016                             2015 – November 20th 2017

Spin liquids                            Topological phases & transitions
Weyl semimetals                         Spin chains
Topological phases & transitions        Machine learning
Surface states                          Transition metal dichalcogenides
Transition metal dichalcogenides        Thermal transport
Many-body localization                  Open quantum systems

Actually, the real number 5 in the left column was ‘Topological insulators’, but given number 3 I skipped it. Also, this top 5 includes a number 6 (!), which I just could not leave off given that everyone seems to have been working on MBL. If we really want to be early adopters though, taking only the more recent period (2015 – now, Nov 20th 2017) in the right column of the table shows some interesting newcomers. Surprisingly, many-body localization is not even in the top 20 anymore. Suffice it to say, if you have been working on anything topology-related, you have nothing to worry about. Machine learning is indeed gaining lots of attention, but we’ve yet to see whether it goes the MBL route (I certainly hope not!). Quantum computing does not seem to be on the cond-mat radar, but I’m certain we would find that high up in the quant-ph arXiv section.


Alright, finally time to use some actual neural networks for machine learning. As I mentioned at the start of this post, what we’re about to do is try to train a network to encode/decode words into vectors, while simultaneously making sure that similar words (by meaning!) have similar vectors. Now that we have the n-grams, we want the Word2Vec algorithm to treat these as words by themselves (they are, after all, fixed combinations).

In the Word2Vec algorithm, we get to decide the length of the vectors that encode words ourselves. Larger vectors mean more freedom in encoding words, but also make it harder to learn similarity. In addition, we get to choose a window-size, indicating how many words the algorithm will look ahead to analyze relations between words. Both of these parameters are free for you to play with if you have a look at the source code repository. For the website, I’ve uploaded a size-100, window-size-10 model, which I found to produce sensible results. Sensible here means “based on my expectations”, such as the previous example of “2D + electrons + magnetic field = Landau levels”. Let’s ask our network some questions.
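The training call itself is correspondingly short (a sketch; older gensim versions call the vector-length argument size rather than vector_size):

```python
from gensim.models import Word2Vec

# `phrased_titles`: tokenized titles with n-grams joined, as sketched earlier.
model = Word2Vec(
    phrased_titles,
    vector_size=100,  # called `size` in pre-4.0 gensim
    window=10,        # how many words of context to consider
    min_count=5,      # ignore words appearing fewer than 5 times
    workers=4,
)

print(model.wv.most_similar("superconductor", topn=5))
```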

First, as a simple check, let’s see what our encoding thinks some jargon is similar to:

  • Superconductor ~ Superconducting, Cuprate superconductor, Superconductivity, Layered superconductor, Unconventional superconductor, Superconducting gap, Cuprate, Weyl semimetal, …
  • Majorana ~ Majorana fermion, Majorana mode, Non-abelian, Zero-energy, braiding, topologically protected, …

It seems we could start to cluster words based on this. But the real test comes now, in the form of arithmetic. According to our network (I am listing the top two choices in some cases; the encoder outputs a list of similar vectors, ordered by similarity):

  • Majorana + Braiding = Non-Abelian
  • Electron + Hole = Exciton, Carrier
  • Spin + Magnetic field = Magnetization, Antiferromagnetic
  • Particle + Charge = Electron, Charged particle

And, sure enough:

  • 2D + electrons + magnetic field = Landau level, Magnetoresistance oscillation

The above is just a small sample of the things I’ve tried. See the link in the ‘Try it yourself’ section above if you want to have a go. Not all of the examples work nicely. For example, neither lattice + wave nor lattice + excitation nor lattice + force seems to result in anything related to the word ‘phonon’. I would guess that increasing the window size will help remedy this problem. Even better probably would be to include abstracts!
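For concreteness, the analogy queries above boil down to calls like these on the trained model (the exact token strings are guesses; they depend on how the titles were cleaned and how the n-grams were joined):

```python
# "2D + electrons + magnetic field = ?"
print(model.wv.most_similar(
    positive=["two_dimensional", "electrons", "magnetic_field"], topn=3))

# "Electron + hole = ?"
print(model.wv.most_similar(positive=["electron", "hole"], topn=3))
```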


I could play with this for hours, and I’m sure that by including the abstracts and tweaking the vector size (plus some more parameters I haven’t even mentioned) one could optimize this more. Once we have an optimized model, we could start to cluster the vectors to define research fields, visualizing the relations between n-grams (both suggestions thanks to Thomas Vidick and John Preskill!), and many other things. This post has become rather long already however, and I will leave further investigation to a possible future post. I’d be very happy to incorporate anything cool you find yourselves though, so please let me know!

Gently yoking yin to yang

The architecture at the University of California, Berkeley mystified me. California Hall evokes a Spanish mission. The main library consists of white stone pillared by Ionic columns. A sea-green building scintillates in the sunlight like a scarab. The buildings straddle the map of styles.


So do Berkeley’s quantum scientists, information-theory users, and statistical mechanics.

The chemists rove from abstract quantum information (QI) theory to experiments. Physicists experiment with superconducting qubits, trapped ions, and numerical simulations. Computer scientists invent algorithms for quantum computers to perform.

Few activities light me up more than bouncing from quantum group to info-theory group to stat-mech group, hunting commonalities. I was honored to bounce from group to group at Berkeley this September.

What a trampoline Berkeley has.

The groups fan out across campus and science, but I found compatibility. Including a collaboration that illuminated quantum incompatibility.

Quantum incompatibility originated in studies by Werner Heisenberg. He and colleagues cofounded quantum mechanics during the early 20th century. Measuring one property of a quantum system, Heisenberg intuited, can affect another property.

The most famous example involves position and momentum. Say that I hand you an electron. The electron occupies some quantum state represented by | \Psi \rangle. Suppose that you measure the electron’s position. The measurement outputs one of many possible values x (unless | \Psi \rangle has an unusual form, the form of a Dirac delta function).

But we can’t say that the electron occupies any particular point x = x_0 in space. Measurement devices have limited precision. You can measure the position only to within some error \varepsilon: x = x_0 \pm \varepsilon.

Suppose that, immediately afterward, you measure the electron’s momentum. This measurement, too, outputs one of many possible values. What probability q(p) dp does the measurement have of outputting a value within a tiny range dp of some value p? We can calculate q(p) dp, knowing the mathematical form of | \Psi \rangle and knowing the values of x_0 and \varepsilon.

q(p) is a probability density, which you can think of as a set of probabilities. The density can vary with p. Suppose that q(p) varies little: The probabilities spread evenly across the possible p values. You have no idea which value your momentum measurement will output. Suppose, instead, that q(p) peaks sharply at some value p = p_0. You can likely predict the momentum measurement’s outcome.

The certainty about the momentum measurement trades off with the precision \varepsilon of the position measurement. The smaller the \varepsilon (the more precisely you measured the position), the greater the momentum’s unpredictability. We call position and momentum complementary, or incompatible.
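The textbook statement of this tradeoff is Heisenberg’s uncertainty relation,

\sigma_x \, \sigma_p \; \geq \; \frac{ \hbar }{ 2 } \, ,

in which \sigma_x and \sigma_p denote the spreads (standard deviations) of the position and momentum distributions: squeeze one, and the other must balloon.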

You can’t measure incompatible properties, with high precision, simultaneously. Imagine trying to do so. Upon measuring the momentum, you ascribe a tiny range of momentum values p to the electron. If you measured the momentum again, an instant later, you could likely predict that measurement’s outcome: The second measurement’s q(p) would peak sharply (encode high predictability). But, in the first instant, you measure also the position. Hence, by the discussion above, q(p) would spread out widely. But we just concluded that q(p) would peak sharply. This contradiction illustrates that you can’t measure position and momentum, precisely, at the same time.

But you can simultaneously measure incompatible properties weakly. A weak measurement has an enormous \varepsilon. A weak position measurement barely spreads out q(p). If you want more details, ask a Quantum Frontiers regular; I’ve been harping on weak measurements for months.

Blame Berkeley for my harping this month. Irfan Siddiqi’s and Birgitta Whaley’s groups collaborated on weak measurements of incompatible observables. They tracked how the measured quantum state | \Psi (t) \rangle evolved in time (represented by t).

Irfan’s group manipulates superconducting qubits.1 The qubits sit in the physics building, a white-stone specimen stamped with an egg-and-dart motif. Across the street sit chemists, including members of Birgitta’s group. The experimental physicists and theoretical chemists teamed up to study a quantum lack of teaming up.


The experiment involved one superconducting qubit. The qubit has properties analogous to position and momentum: A ball, called the Bloch ball, represents the set of states that the qubit can occupy. Imagine an arrow pointing from the sphere’s center to any point in the ball. This Bloch vector represents the qubit’s state. Consider an arrow that points upward from the center to the surface. This arrow represents the qubit state | 0 \rangle. | 0 \rangle is the quantum analog of the possible value 0 of a bit, or unit of information. The analogous downward-pointing arrow represents the qubit state | 1 \rangle, analogous to 1.
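For concreteness, the standard parametrization: an arrow tilted at polar angle \theta and azimuthal angle \varphi, reaching the ball’s surface, represents the pure state

| \psi \rangle = \cos\left( \tfrac{ \theta }{ 2 } \right) | 0 \rangle + e^{ i \varphi } \sin\left( \tfrac{ \theta }{ 2 } \right) | 1 \rangle \, ,

so that \theta = 0 gives | 0 \rangle at the north pole and \theta = \pi gives | 1 \rangle at the south pole.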

Infinitely many axes intersect the sphere. Different axes represent different observables that Irfan’s group can measure. Nonparallel axes represent incompatible observables. For example, the x-axis represents an observable \sigma_x analogous to position. The y-axis represents an observable \sigma_y analogous to momentum.
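In matrix form (the textbook Pauli operators), the incompatibility shows up as a failure to commute:

\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} , \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} , \qquad [ \sigma_x , \sigma_y ] = 2 i \sigma_z \neq 0 \, .

Just as position and momentum fail to commute, \sigma_x and \sigma_y fail to commute; no measurement can pin both down sharply at once.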


Siddiqi lab, decorated with the trademark for the paper’s tug-of-war between incompatible observables. Photo credit: Leigh Martin, one of the paper’s leading authors.

Irfan’s group stuck their superconducting qubit in a cavity, or box. The cavity contained light that interacted with the qubit. The interactions transferred information from the qubit to the light: The light measured the qubit’s state. The experimentalists controlled the interactions, controlling the axes “along which” the light was measured. The experimentalists weakly measured along two axes simultaneously.

Suppose that the axes coincided—say, at the x-axis \hat{x}. The qubit would collapse to the state | \Psi \rangle = \frac{1}{ \sqrt{2} } ( | 0 \rangle + | 1 \rangle ), represented by the arrow that points along \hat{x} to the sphere’s surface, or to the state | \Psi \rangle = \frac{1}{ \sqrt{2} } ( | 0 \rangle - | 1 \rangle ), represented by the opposite arrow.

0 deg

(Projection of) the Bloch Ball after the measurement. The system can access the colored points. The lighter a point, the greater the late-time state’s weight on the point.

Let \hat{x}' denote an axis near \hat{x}—say, 18° away. Suppose that the group weakly measured along \hat{x} and \hat{x}'. The state would partially collapse. The system would access points in the region straddled by \hat{x} and \hat{x}', as well as points straddled by - \hat{x} and - \hat{x}'.

18 deg

Finally, suppose that the group weakly measured along \hat{x} and \hat{y}. These axes stand in for position and momentum. The state would, loosely speaking, swirl around the Bloch ball.

90 deg

The Berkeley experiment illuminates foundations of quantum theory. Incompatible observables, physics students learn, can’t be measured simultaneously. This experiment blasts our expectations, using weak measurements. But the experiment doesn’t just destroy. It rebuilds the blast zone, by showing how | \Psi (t) \rangle evolves.

“Position” and “momentum” can hang together. So can experimentalists and theorists, physicists and chemists. So, somehow, can the California mission and the Ionic columns. Maybe I’ll understand the scarab building when we understand quantum theory.2

With thanks to Birgitta’s group, Irfan’s group, and the rest of Berkeley’s quantum/stat-mech/info-theory community for its hospitality. The Bloch-sphere figures come from

1The qubit is the quantum analog of a bit. The bit is the basic unit of information. A bit can be in one of two possible states, which we can label as 0 and 1. Qubits can manifest in many physical systems, including superconducting circuits. Such circuits are tiny quantum circuits through which current can flow, without resistance, forever.

2Soda Hall dazzled but startled me.


Paradise

The word dominates chapter one of Richard Holmes’s book The Age of Wonder. Holmes writes biographies of Romantic-Era writers: Mary Wollstonecraft, Percy Shelley, and Samuel Taylor Coleridge populate his bibliography. They have cameos in Age. But their scientific counterparts star.

Their “natural-philosopher” counterparts, I should say. The word “scientist” emerged as the Romantic Era closed. Romanticism, a literary and artistic movement, flourished during the late 1700s and early 1800s. Romantics championed self-expression, individuality, and emotion over convention and artificiality. Romantics wondered at, and drew inspiration from, the natural world. So, Holmes argues, did Romantic-Era natural philosophers. They explored, searched, and innovated with Wollstonecraft’s, Shelley’s, and Coleridge’s zest.


Holmes depicts Wilhelm and Caroline Herschel, a German brother and sister, discovering the planet Uranus. Humphry Davy, an amateur poet from Penzance, inventing a lamp that saved miners’ lives. Michael Faraday, a working-class Londoner, inspired by Davy’s chemistry lectures.

Joseph Banks in paradise.

So Holmes entitled chapter one.

Banks studied natural history as a young English gentleman during the 1760s. He then sailed around the world, a botanist on exploratory expeditions. The second expedition brought Banks aboard the HMS Endeavour. Captain James Cook steered the ship to Brazil, Tahiti, Australia, and New Zealand. Banks brought a few colleagues onboard. They studied the native flora, fauna, skies, and tribes.

Banks, with fellow botanist Daniel Solander, accumulated over 30,000 plant samples. Artist Sydney Parkinson drew the plants during the voyage. Parkinson’s drawings underlay 743 copper engravings that Banks commissioned upon returning to England. Banks planned to publish the engravings as the book Florilegium. He never succeeded. Two institutions executed Banks’s plan more than 200 years later.

Banks’s Florilegium crowns an exhibition at the University of California at Santa Barbara (UCSB). UCSB’s Special Research Collections will host “Botanical Illustrations and Scientific Discovery—Joseph Banks and the Exploration of the South Pacific, 1768–1771” until May 2018. The exhibition features maps of Banks’s journeys, biographical sketches of Banks and Cook, contemporary art inspired by the engravings, and the Florilegium.


The exhibition spotlights “plants that have subsequently become important ornamental plants on the UCSB campus, throughout Santa Barbara, and beyond.” One sees, roaming Santa Barbara, slivers of Banks’s paradise.


In Santa Barbara resides the Kavli Institute for Theoretical Physics (KITP). The KITP is hosting a program about the physics of quantum information (QI). QI scientists are congregating from across the world. Everyone visits for a few weeks or months, meeting some participants and missing others (those who have left or will arrive later). Participants attend and present tutorials, explore beyond their areas of expertise, and initiate research collaborations.

A conference capstoned the program, one week this October. Several speakers had founded subfields of physics: quantum error correction (how to fix errors that dog quantum computers), quantum computational complexity (how quickly quantum computers can solve hard problems), topological quantum computation, AdS/CFT (a parallel between certain gravitational systems and certain quantum systems), and more. Swaths of science exist because of these thinkers.


One evening that week, I visited the Joseph Banks exhibition.

Joseph Banks in paradise.

I’d thought that, by “paradise,” Holmes had meant “physical attractions”: lush flowers, vibrant colors, fresh fish, and warm sand. Another meaning occurred to me, after the conference talks, as I stood before a glass case in the library.

Joseph Banks, disembarking from the Endeavour, didn’t disembark onto just an island. He disembarked onto terra incognita. Never had he or his colleagues seen the blossoms, seed pods, or sprouts before him. Swaths of science awaited. What could the natural philosopher have craved more?

QI scientists of a certain age reminisce about the 1990s, the cowboy days of QI. When impactful theorems, protocols, and experiments abounded. When they dangled, like ripe fruit, just above your head. All you had to do was look up, reach out, and prove a pineapple.


Typical 1990s quantum-information scientist

That generation left mine few simple theorems to prove. But QI hasn’t suffered extinction. Its frontiers have advanced into other fields of science. Researchers are gaining insight into thermodynamics, quantum gravity, condensed matter, and chemistry from QI. The KITP conference highlighted connections with quantum gravity.

…in paradise.

What could a natural philosopher crave more?


Artwork commissioned by the UCSB library: “Sprawling Neobiotic Chimera (After Banks’ Florilegium),” by Rose Briccetti

Most KITP talks are recorded and released online. You can access talks from the conference here. My talk, about quantum chaos and thermalization, appears here. 

With gratitude to the KITP, and to the program organizers and the conference organizers, for the opportunity to participate.