The light show

Atoms 2

A strontium magneto-optical trap.

How did a quantum physics experiment end up looking like a night club? Due to a fortunate coincidence of nature, my lab mates and I at Endres Lab get to use three primary colors of laser light – red, blue, and green – to trap strontium atoms.  Let’s take a closer look at the physics behind this visually entrancing combination.

The spectrum

Sr level structure

The electronic spectrum of strontium near the ground state.

The trick to research is finding a problem that is challenging enough to be interesting, but accessible enough to not be impossible.  Strontium embodies this maxim in its electronic spectrum.  While at first glance it may seem daunting, it’s not too bad once you get to know it.  Two valence electrons divide the spectrum into a spin-singlet sector and a spin-triplet sector – a designation that roughly indicates whether the electron spins point in opposite directions or in the same direction.  Certain transitions between these sectors are extremely precisely defined, and currently offer the best clock standards in the world.  Although navigating this spectrum requires more lasers, it offers opportunities for quantum physics that spectra with a single valence electron do not.  In the end, the experimental complexity is still very much manageable, and produces some great visuals to boot.  Here are some of the lasers we use in our lab:

The blue

At the center of the .gif above is a pulsating cloud of strontium atoms, shining brightly blue.  This is a magneto-optical trap, produced chiefly by strontium’s blue transition at 461nm.

IMG_3379

461nm blue laser light being routed through various paths.

The blue transition is exceptionally strong, scattering about 100 million photons per atom per second.  It is the transition we use to slow strontium atoms from a hot thermal beam traveling at hundreds of meters per second down to a cold cloud at about 1 millikelvin.  In less than a second, this procedure gives us a couple hundred million atoms to work with.  As the experiment repeats, we get to watch this cloud pulse in and out of existence.

The red(s)

IMG_3380

689nm red light.  Bonus: Fabry-Perot interference fringes on my camera!

While the blue transition is a strong workhorse, the red transition at 689nm trades off strength for precision.  It couples strontium’s spin-singlet ground state to an excited spin-triplet state, a much weaker but more precisely defined transition.  While it does not scatter as fast as the blue (only about 23,000 photons per atom per second), it allows us to cool our atoms to much colder temperatures, on the order of 1 microkelvin.
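For the curious, these numbers follow from standard laser-cooling formulas: a saturated two-level atom scatters at most Γ/2 photons per second, and Doppler cooling bottoms out at T_D = ħΓ/(2k_B).  Here is a quick sanity check in Python (the linewidth values are textbook numbers I am assuming, not figures from our lab):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
kB = 1.380649e-23       # Boltzmann constant, J/K

# Natural linewidths of strontium's two cooling transitions
# (standard literature values, assumed here):
gamma_blue = 2 * math.pi * 30.5e6   # 461 nm transition, ~2*pi x 30.5 MHz
gamma_red = 2 * math.pi * 7.4e3     # 689 nm transition, ~2*pi x 7.4 kHz

# Maximum photon scattering rate of a saturated two-level atom is Gamma/2
rate_blue = gamma_blue / 2
rate_red = gamma_red / 2

# Doppler cooling limit T_D = hbar * Gamma / (2 * kB)
T_blue = hbar * gamma_blue / (2 * kB)
T_red = hbar * gamma_red / (2 * kB)

print(f"blue: {rate_blue:.1e} photons/s, Doppler limit {T_blue:.1e} K")
print(f"red:  {rate_red:.1e} photons/s, Doppler limit {T_red:.1e} K")
```

The blue transition comes out at roughly 10^8 photons per second with a Doppler limit near 0.7 millikelvin, while the red gives about 23,000 photons per second and a sub-microkelvin limit (in practice, red cooling bottoms out near the recoil temperature, of order half a microkelvin).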

In addition to our red laser at 689nm, we have two other reds at 679nm and 707nm.  These are needed to plug “holes” in the blue transition – decay channels that eventually cause an atom to fall into long-lived states other than the ground state.  Generally, the more complicated an atomic spectrum gets, the more “holes” there are to plug, and this is often the reason why certain atoms and molecules are harder to trap than others.

The green

After we have established a cold magneto-optical trap, it is time to pick out individual atoms from this cloud and load them into very tightly focused optical traps that we call tweezers.  Here, our green laser comes into play.  This laser’s wavelength is far away from any particular transition, as we do not want it to scatter any photons at all.  However, its high intensity creates a conservative trapping potential for the atom, allowing us to hold onto it and even move it around.  Furthermore, its wavelength is what we call “magic”, which means it is chosen such that the ground and excited states experience the same trapping potential.

IMG_3369

The quite powerful green laser.  So powerful that you can see the beam in the air, like in the movies.

The invisible

Yet to be implemented are two more lasers just outside the visible spectrum, on the ultraviolet and infrared sides.  Our ultraviolet laser will be crucial to elevating our experiment from single-body to many-body quantum physics, as it will allow us to drive our atoms to very highly excited Rydberg states, which interact over long ranges.  Our infrared laser will allow us to trap atoms in the extremely precise clock state under “magic” conditions.

 

The combination of strontium’s various optical pathways allows for a lot of new tricks beyond just cooling and trapping.  Having Rydberg states alongside narrow-line transitions, for example, has as-yet-unexplored potential for quantum simulation.  It is a playground that is very exciting without being utterly overwhelming.  Stay tuned as we continue our exploration – maybe we’ll have a yellow laser next time too.

 

Machine learning the arXiv

Over the last year or so, the machine learning wave has really been sweeping through the field of condensed matter physics. Machine learning techniques have been applied to condensed matter physics before, but very sparsely and with little recognition. These days, I guess (partially) due to the general machine learning and AI hype, the number of such studies has skyrocketed (I admit to contributing to that..). I’ve been keeping track of this using the arXiv and Twitter (@Evert_v_N), but you should know about this website for getting an overview of the physics & machine learning papers: https://physicsml.github.io/pages/papers.html.

This effort of applying machine learning to physics is a serious attempt at trying to understand how such tools could be useful in a variety of ways. It isn’t very hard to get a neural network to learn ‘something’ from physics data, but it is really hard to find out what – and especially how – the network does that. That’s why toy cases such as the Ising model or the Kosterlitz-Thouless transition have been so important!

When you’re keeping track of machine learning and AI developments, you soon realize that there are examples out there of amazing feats. Being able to generate photo-realistic pictures given just a sentence, e.g. “a brown bird with golden speckles and red wings is sitting on a yellow flower with pointy petals”, is (I think..) pretty cool. I can’t help but wonder if we’ll get to a point where we can ask it to generate “the ground state of the Heisenberg model on a Kagome lattice of 100×100”…

Another feat I want to mention, and the main motivation for this post, is that of being able to encode words as vectors. That doesn’t immediately seem like a big achievement, but it is once you want to have ‘similar’ words have ‘similar’ vectors. That is, you intuitively understand that Queen and King are very similar, but differ basically only in gender. Can we teach that to a computer (read: neural network) by just having it read some text? Turns out we can. The general encoding of words to vectors is aptly named ‘Word2Vec’, and some of the top algorithms that do that were introduced here (https://arxiv.org/abs/1301.3781) and here (https://arxiv.org/abs/1310.4546). The neat thing is that we can actually do arithmetic with these words encoded as vectors, so that the network learns (with no other input than text!):

  • King – Man + Woman = Queen
  • Paris – France + Italy = Rome

In that spirit, I wondered if we can achieve the same thing with physics jargon. After all, everyone knows that “electrons + two dimensions + magnetic field = Landau levels”. But is that clear from condensed matter titles?

Try it yourself

If you decide at this point that the rest of the blog is too long, at least have a look here: everthemore.pythonanywhere.com or skip to the last section. That website demonstrates the main point of this post. If that sparks your curiosity, read on!

This post is mainly for entertainment, and so a small disclaimer is in order: in all of the results below, I am sure things can be improved upon. Consider this a ‘proof of principle’. However, I would be thrilled to see what kind of trained models you can come up with yourself! So for that purpose, all of the code (plus some bonus content!) can be found on this github repository: https://github.com/everthemore/physics2vec.

Harvesting the arXiv

The perfect dataset for our endeavor can be found in the form of the arXiv. I’ve written a small script (see github repository) that harvests the titles of a given section from the arXiv. It also has options for getting the abstracts, but I’ll leave that for a separate investigation. Note that in principle we could also get the source-files of all of these papers, but doing that in bulk requires a payment; and getting them one by one will 1) take forever and 2) probably get us banned.
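For a sense of what the harvesting involves: the arXiv exposes its metadata through an OAI-PMH interface at export.arxiv.org/oai2, which serves records in batches and hands out resumption tokens for paging. A stripped-down sketch (these helper functions are my own illustration, not the repository’s actual script):

```python
import urllib.parse
import xml.etree.ElementTree as ET

OAI_BASE = "http://export.arxiv.org/oai2"
DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

def build_listrecords_url(set_spec="physics:cond-mat", token=None):
    """Build an OAI-PMH ListRecords request; a resumption token pages
    through the results batch by batch."""
    params = {"verb": "ListRecords"}
    if token:
        params["resumptionToken"] = token
    else:
        params.update({"set": set_spec, "metadataPrefix": "oai_dc"})
    return OAI_BASE + "?" + urllib.parse.urlencode(params)

def extract_titles(xml_text):
    """Pull all Dublin Core <dc:title> fields out of a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(DC + "title")]

# A heavily trimmed example of what a response looks like:
sample = """<OAI-PMH xmlns:dc="http://purl.org/dc/elements/1.1/">
  <record><dc:title>Many-body localization in a quasiperiodic system</dc:title></record>
  <record><dc:title>Machine learning phases of matter</dc:title></record>
</OAI-PMH>"""
print(extract_titles(sample))
```

A real harvester fetches each URL, extracts the titles and the resumption token, and loops politely (the arXiv asks for delays between requests), which is part of why collecting everything takes a while.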

Collecting all this data of the physics:cond-mat subsection took right about 1.5 hours and resulted in 240,737 titles and abstracts (I last ran this script on November 20th, 2017). I’ve filtered them by year and month, and you can see the result in Fig.1 below. Seems like we have some catching up to do in 2017 still (although as the inset shows, we have nothing to fear. November is almost over, but we still have the ‘getting things done before x-mas’ rush coming up!).

numpapers

Figure 1: The number of papers in the cond-mat arXiv section over the years. We’re behind, but the year isn’t over yet! (Data up to Nov 20th 2017)

Analyzing n-grams

After tidying up the titles (removing LaTeX, converting everything to lowercase, etc.), the next thing to do is to train a language model on finding n-grams. N-grams are basically fixed n-word expressions such as ‘cooper pair’ (bigram) or ‘metal insulator transition’ (trigram). This makes it easier to train a Word2Vec encoding, since these phrases are fixed and can be considered a single word. The python module we’ll use for Word2Vec is gensim (https://radimrehurek.com/gensim/), and it conveniently has phrase-detection built-in. The language model it builds reports back to us the n-grams it finds, and assigns them a score indicating how certain it is about them. Notice that this is not the same as how frequently it appears in the dataset. Hence an n-gram can appear fewer times than another, but have a higher certainty because it always appears in the same combination. For example, ‘de-haas-van-alphen’ appears less than, but is more certain than, ‘cooper-pair’, because ‘pair’ does not always come paired (pun intended) with ‘cooper’. I’ve analyzed up to 4-grams in the analysis below.
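The scoring rule behind this is simple enough to write down: gensim’s default phrase score is essentially the one from the original Word2Vec papers, score(a, b) = (count(ab) − min_count) × N / (count(a) × count(b)) with N the vocabulary size, which rewards pairs that rarely appear apart. A minimal pure-Python version (simplified; gensim’s implementation has more knobs):

```python
from collections import Counter

def bigram_scores(sentences, min_count=1):
    """Score bigrams with the Mikolov-style formula used by gensim's Phrases:
    score = (count(a,b) - min_count) * vocab_size / (count(a) * count(b))."""
    words, pairs = Counter(), Counter()
    for s in sentences:
        words.update(s)
        pairs.update(zip(s, s[1:]))  # adjacent word pairs
    vocab = len(words)
    return {
        pair: (n - min_count) * vocab / (words[pair[0]] * words[pair[1]])
        for pair, n in pairs.items() if n > min_count
    }

# A toy set of tokenized titles (made up for illustration):
titles = [
    ["cooper", "pair", "splitting"],
    ["cooper", "pair", "breaking"],
    ["electron", "pair", "tunneling"],   # "pair" without "cooper"
    ["de-haas-van-alphen", "effect"],
    ["de-haas-van-alphen", "effect"],
]
scores = bigram_scores(titles)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

On this toy corpus, ‘de-haas-van-alphen effect’ outscores ‘cooper pair’ even though both appear twice, because ‘pair’ also shows up without ‘cooper’, exactly the effect described above.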

I can tell you’re curious by now to find out what some of the most certain n-grams in cond-mat are (again, these are not necessarily the most frequent), so here are some interesting findings:

  • The most certain n-grams are all surname combos, Affleck-Kennedy-Lieb-Tasaki being the number 1. Kugel-Khomskii is the most certain 2-name combo and Einstein-Podolsky-Rosen the most certain 3-name combo.
  • The first certain non-name n-gram is a ‘quartz tuning fork’, followed by a ‘superconducting coplanar waveguide resonator’. Who knew.
  • The bigram ‘phys. rev.’ and trigram ‘phys. rev. lett.’ are relatively high up in the confidence lists. These seem to come from the “Comment on […]”-titles on the arXiv.
  • I learned that there is such a thing as a Lefschetz thimble. I also learned that those things are called thimbles in English (we (in Holland) call them ‘finger-hats’!).

In terms of frequency however, which is probably more of interest to us, the most dominant n-grams are Two-dimensional, Quantum dot, Phase transition, Magnetic field, One dimensional and Bose-Einstein (in descending order). It seems 2D is still more popular than 1D, and all in all the top n-grams do a good job at ‘defining’ condensed matter physics. I’ll refer you to the github repository code if you want to see a full list! You’ll find there a piece of code that produces wordclouds from the dominant words and n-grams too, such as this one:

caltechwordcloud.png

For fun though, before we finally get to the Word2Vec encoding, I’ve also kept track of all of these as a function of year, so that we can now turn to finding out which bigrams have been gaining the most popularity. The table below shows the top 5 n-grams for the period 2010 – 2016 (not including 2017) and for the period 2015 – Nov 20th 2017.

| 2010 – 2016 | 2015 – November 20th 2017 |
|---|---|
| Spin liquids | Topological phases & transitions |
| Weyl semimetals | Spin chains |
| Topological phases & transitions | Machine learning |
| Surface states | Transition metal dichalcogenides |
| Transition metal dichalcogenides | Thermal transport |
| Many-body localization | Open quantum systems |

Actually, the real number 5 in the left column was ‘Topological insulators’, but given number 3 I skipped it. Also, this top 5 includes a number 6 (!), which I just could not leave off given that everyone seems to have been working on MBL. If we really want to be early adopters though, taking only the last 1.8 years (2015 – now, Nov 20th 2017) in the right column of the table shows some interesting newcomers. Surprisingly, many-body localization is not even in the top 20 anymore. Suffice it to say, if you have been working on anything topology-related, you have nothing to worry about. Machine learning is indeed gaining lots of attention, but we’ve yet to see whether it avoids going the MBL route (I certainly hope it does!). Quantum computing does not seem to be on the cond-mat radar, but I’m certain we would find it high up in the quant-ph arXiv section.

CondMat2Vec

Alright, finally time to use some actual neural networks for machine learning. As I said at the start of this post, what we’re about to do is try to train a network to encode/decode words into vectors, while simultaneously making sure that similar words (by meaning!) have similar vectors. Now that we have the n-grams, we want the Word2Vec algorithm to treat these as words by themselves (they are, after all, fixed combinations).
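Word2Vec itself is a neural network, but the idea that words sharing contexts should get similar vectors can be illustrated without gensim. Here is a tiny count-based cousin of it: co-occurrence counts weighted by positive pointwise mutual information, then compressed with an SVD. The mini-corpus is entirely made up by me, not the arXiv data:

```python
import numpy as np
from collections import Counter
from itertools import combinations

def embed(sentences, dim=2):
    """Toy count-based embedding: build a word-word co-occurrence matrix,
    weight it by positive PMI, and reduce it with an SVD."""
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for a, b in combinations(s, 2):   # whole sentence as context window
            C[idx[a], idx[b]] += 1
            C[idx[b], idx[a]] += 1
    total = C.sum()
    pmi = np.log((C * total + 1e-12) / np.outer(C.sum(1), C.sum(0)))
    ppmi = np.maximum(pmi, 0)             # keep only positive associations
    U, S, _ = np.linalg.svd(ppmi)
    vecs = U[:, :dim] * S[:dim]
    return {w: vecs[i] for w, i in idx.items()}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

corpus = [
    ["cuprate", "superconductor", "gap"],
    ["cuprate", "superconductivity", "gap"],
    ["kagome", "magnet", "frustration"],
]
v = embed(corpus)
print(cosine(v["superconductor"], v["superconductivity"]))
print(cosine(v["superconductor"], v["magnet"]))
```

On this toy corpus the first similarity comes out essentially 1 and the second essentially 0, because ‘superconductor’ and ‘superconductivity’ share all their contexts while ‘magnet’ shares none.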

In the Word2Vec algorithm, we get to decide the length of the vectors that encode words ourselves. Longer vectors mean more freedom in encoding words, but also make it harder to learn similarity. In addition, we get to choose a window size, indicating how many words the algorithm will look ahead to analyze relations between words. Both of these parameters are free for you to play with if you have a look at the source code repository. For the website everthemore.pythonanywhere.com, I’ve uploaded a model with vector size 100 and window size 10, which I found to produce sensible results. Sensible here means “based on my expectations”, such as the previous example of “2D + electrons + magnetic field = Landau levels”. Let’s ask our network some questions.

First, as a simple check, let’s see what our encoding thinks some jargon is similar to:

  • Superconductor ~ Superconducting, Cuprate superconductor, Superconductivity, Layered superconductor, Unconventional superconductor, Superconducting gap, Cuprate, Weyl semimetal, …
  • Majorana ~ Majorana fermion, Majorana mode, Non-abelian, Zero-energy, braiding, topologically protected, …

It seems we could start to cluster words based on this. But the real test comes now, in the form of arithmetic. According to our network (I am listing the top two choices in some cases; the encoder outputs a list of similar vectors, ordered by similarity):

  • Majorana + Braiding = Non-Abelian
  • Electron + Hole = Exciton, Carrier
  • Spin + Magnetic field = Magnetization, Antiferromagnetic
  • Particle + Charge = Electron, Charged particle

And, sure enough:

  • 2D + electrons + magnetic field = Landau level, Magnetoresistance oscillation

The above is just a small sample of the things I’ve tried. See the link in the try it yourself section above if you want to have a go. Not all of the examples work nicely. For example, neither lattice + wave nor lattice + excitation nor lattice + force seem to result in anything related to the word ‘phonon’. I would guess that increasing the window size will help remedy this problem. Even better probably would be to include abstracts!
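Under the hood, queries like these are nothing more than vector addition followed by a cosine-similarity ranking over the vocabulary, with the query words themselves excluded. Here is a miniature version of that lookup, using hand-crafted toy vectors rather than the trained model:

```python
import numpy as np

# Toy embedding: axis 0 ~ "gender", axis 1 ~ "royalty", axis 2 ~ "fruitiness".
# Hand-crafted for illustration only; a trained model learns these from text.
vocab = {
    "king":  np.array([ 1.0, 1.0, 0.0]),
    "queen": np.array([-1.0, 1.0, 0.0]),
    "man":   np.array([ 1.0, 0.0, 0.0]),
    "woman": np.array([-1.0, 0.0, 0.0]),
    "apple": np.array([ 0.0, 0.0, 1.0]),
}

def most_similar(positive, negative=(), topn=1):
    """Add the 'positive' vectors, subtract the 'negative' ones, and rank
    the rest of the vocabulary by cosine similarity to the result."""
    query = sum(vocab[w] for w in positive) - sum(
        (vocab[w] for w in negative), np.zeros(3))
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    candidates = [w for w in vocab if w not in set(positive) | set(negative)]
    return sorted(candidates, key=lambda w: -cos(vocab[w], query))[:topn]

print(most_similar(positive=["king", "woman"], negative=["man"]))  # -> ['queen']
```

With these toy vectors, king − man + woman lands exactly on queen’s vector; in a real trained model it only lands nearby, which is why the ranking step matters.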

Outlook

I could play with this for hours, and I’m sure that by including the abstracts and tweaking the vector size (plus some more parameters I haven’t even mentioned) one could optimize this further. Once we have an optimized model, we could start to cluster the vectors to define research fields, visualize the relations between n-grams (both suggestions thanks to Thomas Vidick and John Preskill!), and do many other things. This post has become rather long already, however, and I will leave further investigation to a possible future post. I’d be very happy to incorporate anything cool you find yourselves though, so please let me know!

Gently yoking yin to yang

The architecture at the University of California, Berkeley mystified me. California Hall evokes a Spanish mission. The main library consists of white stone pillared by Ionic columns. A sea-green building scintillates in the sunlight like a scarab. The buildings straddle the map of styles.

Architecture.001

So do Berkeley’s quantum scientists, information-theory users, and statistical mechanics.

The chemists rove from abstract quantum information (QI) theory to experiments. Physicists experiment with superconducting qubits, trapped ions, and numerical simulations. Computer scientists invent algorithms for quantum computers to perform.

Few activities light me up more than bouncing from quantum group to info-theory group to stat-mech group, hunting commonalities. I was honored to bounce from group to group at Berkeley this September.

What a trampoline Berkeley has.

The groups fan out across campus and science, but I found compatibility. Including a collaboration that illuminated quantum incompatibility.

Quantum incompatibility originated in studies by Werner Heisenberg. He and colleagues cofounded quantum mechanics during the early 20th century. Measuring one property of a quantum system, Heisenberg intuited, can affect another property.

The most famous example involves position and momentum. Say that I hand you an electron. The electron occupies some quantum state represented by | \Psi \rangle. Suppose that you measure the electron’s position. The measurement outputs one of many possible values x (unless | \Psi \rangle has an unusual form, the form of a Dirac delta function).

But we can’t say that the electron occupies any particular point x = x_0 in space. Measurement devices have limited precision. You can measure the position only to within some error \varepsilon: x = x_0 \pm \varepsilon.

Suppose that, immediately afterward, you measure the electron’s momentum. This measurement, too, outputs one of many possible values. What probability q(p) dp does the measurement have of outputting some value p? We can calculate q(p) dp, knowing the mathematical form of | \Psi \rangle and knowing the values of x_0 and \varepsilon.

q(p) is a probability density, which you can think of as a set of probabilities. The density can vary with p. Suppose that q(p) varies little: The probabilities spread evenly across the possible p values. You have no idea which value your momentum measurement will output. Suppose, instead, that q(p) peaks sharply at some value p = p_0. You can likely predict the momentum measurement’s outcome.

The certainty about the momentum measurement trades off with the precision \varepsilon of the position measurement. The smaller the \varepsilon (the more precisely you measured the position), the greater the momentum’s unpredictability. We call position and momentum complementary, or incompatible.
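You can see this tradeoff numerically: represent the particle by a Gaussian wavepacket of position width σ, Fourier transform it, and watch the momentum distribution broaden as σ shrinks. A small numpy sketch (units with ħ = 1; my own illustration, not the experiment’s analysis):

```python
import numpy as np

def momentum_spread(sigma_x, n=4096, L=200.0):
    """Standard deviation of the momentum distribution |phi(p)|^2 for a
    Gaussian wavepacket psi(x) whose position width is sigma_x (hbar = 1)."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    psi = np.exp(-x**2 / (4 * sigma_x**2))      # |psi|^2 has std sigma_x
    phi = np.fft.fftshift(np.fft.fft(psi))      # momentum-space amplitude
    p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=x[1] - x[0]))
    prob = np.abs(phi)**2
    prob /= prob.sum()
    return np.sqrt(np.sum(prob * p**2) - np.sum(prob * p)**2)

# Halving the position width doubles the momentum spread:
print(momentum_spread(1.0), momentum_spread(0.5))
```

For σ = 1 the momentum spread comes out near 0.5, and halving σ doubles it, saturating the minimum-uncertainty product Δx Δp = 1/2.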

You can’t measure incompatible properties, with high precision, simultaneously. Imagine trying to do so. Upon measuring the momentum, you ascribe a tiny range of momentum values p to the electron. If you measured the momentum again, an instant later, you could likely predict that measurement’s outcome: The second measurement’s q(p) would peak sharply (encode high predictability). But, in the first instant, you measure also the position. Hence, by the discussion above, q(p) would spread out widely. But we just concluded that q(p) would peak sharply. This contradiction illustrates that you can’t measure position and momentum, precisely, at the same time.

But you can simultaneously measure incompatible properties weakly. A weak measurement has an enormous \varepsilon. A weak position measurement barely spreads out q(p). If you want more details, ask a Quantum Frontiers regular; I’ve been harping on weak measurements for months.

Blame Berkeley for my harping this month. Irfan Siddiqi’s and Birgitta Whaley’s groups collaborated on weak measurements of incompatible observables. They tracked how the measured quantum state | \Psi (t) \rangle evolved in time (represented by t).

Irfan’s group manipulates superconducting qubits.1 The qubits sit in the physics building, a white-stone specimen stamped with an egg-and-dart motif. Across the street sit chemists, including members of Birgitta’s group. The experimental physicists and theoretical chemists teamed up to study a quantum lack of teaming up.

Phys. & chem. bldgs

The experiment involved one superconducting qubit. The qubit has properties analogous to position and momentum: A ball, called the Bloch ball, represents the set of states that the qubit can occupy. Imagine an arrow pointing from the sphere’s center to any point in the ball. This Bloch vector represents the qubit’s state. Consider an arrow that points upward from the center to the surface. This arrow represents the qubit state | 0 \rangle. | 0 \rangle is the quantum analog of the possible value 0 of a bit, or unit of information. The analogous downward-pointing arrow represents the qubit state | 1 \rangle, analogous to 1.

Infinitely many axes intersect the sphere. Different axes represent different observables that Irfan’s group can measure. Nonparallel axes represent incompatible observables. For example, the x-axis represents an observable \sigma_x analogous to position. The y-axis represents an observable \sigma_y analogous to momentum.
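If you want to play with this picture yourself, the Bloch-vector components of a state are just the expectation values of the three Pauli operators. A few lines of numpy (my own sketch, not the experiment’s code):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(psi):
    """Return (<sigma_x>, <sigma_y>, <sigma_z>) for a normalized qubit state."""
    return tuple(np.real(psi.conj() @ s @ psi) for s in (sx, sy, sz))

ket0 = np.array([1, 0], dtype=complex)               # |0>: north pole of the ball
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)

print(bloch_vector(ket0))   # approximately (0, 0, 1)
print(bloch_vector(plus))   # approximately (1, 0, 0)
```

The first state sits at the top of the ball, and the second points along the x-axis, matching the arrows described above.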

Tug-of-war

Siddiqi lab, decorated with the trademark for the paper’s tug-of-war between incompatible observables. Photo credit: Leigh Martin, one of the paper’s leading authors.

Irfan’s group stuck their superconducting qubit in a cavity, or box. The cavity contained light that interacted with the qubit. The interactions transferred information from the qubit to the light: The light measured the qubit’s state. The experimentalists controlled the interactions, controlling the axes “along which” the light was measured. The experimentalists weakly measured along two axes simultaneously.

Suppose that the axes coincided—say, at the x-axis \hat{x}. The qubit would collapse to the state | \Psi \rangle = \frac{1}{ \sqrt{2} } ( | 0 \rangle + | 1 \rangle ), represented by the arrow that points along \hat{x} to the sphere’s surface, or to the state | \Psi \rangle = \frac{1}{ \sqrt{2} } ( | 0 \rangle - | 1 \rangle ), represented by the opposite arrow.

0 deg

(Projection of) the Bloch Ball after the measurement. The system can access the colored points. The lighter a point, the greater the late-time state’s weight on the point.

Let \hat{x}' denote an axis near \hat{x}—say, 18° away. Suppose that the group weakly measured along \hat{x} and \hat{x}'. The state would partially collapse. The system would access points in the region straddled by \hat{x} and \hat{x}', as well as points straddled by - \hat{x} and - \hat{x}'.

18 deg

Finally, suppose that the group weakly measured along \hat{x} and \hat{y}. These axes stand in for position and momentum. The state would, loosely speaking, swirl around the Bloch ball.

90 deg

The Berkeley experiment illuminates foundations of quantum theory. Incompatible observables, physics students learn, can’t be measured simultaneously. This experiment blasts our expectations, using weak measurements. But the experiment doesn’t just destroy. It rebuilds the blast zone, by showing how | \Psi (t) \rangle evolves.

“Position” and “momentum” can hang together. So can experimentalists and theorists, physicists and chemists. So, somehow, can the California mission and the Ionic columns. Maybe I’ll understand the scarab building when we understand quantum theory.2

With thanks to Birgitta’s group, Irfan’s group, and the rest of Berkeley’s quantum/stat-mech/info-theory community for its hospitality. The Bloch-sphere figures come from http://www.nature.com/articles/nature19762.

1The qubit is the quantum analog of a bit. The bit is the basic unit of information. A bit can be in one of two possible states, which we can label as 0 and 1. Qubits can manifest in many physical systems, including superconducting circuits. Such circuits are tiny quantum circuits through which current can flow, without resistance, forever.

2Soda Hall dazzled but startled me.

Majorana update

If you are, by any chance, following progress in the field of Majorana bound states, then you are for sure super excited about the ample Majorana results arriving this Fall. On the other hand, if you just heard about these elusive states recently, it is time for an update. For physicists working in the field, this Fall was perhaps the most exciting time since the first experimental reports from 2012. In the last few weeks there were not just one, but at least three interesting manuscripts reporting new insightful data which may finally provide a definitive experimental verification of the existence of these states in condensed matter systems.

But before I dive into these new results, let me give a brief history of Majorana states and their experimental observation. The story starts with the young talented physicist Ettore Majorana, who hypothesized back in 1937 the existence of fermionic particles which were their own antiparticles. These hypothetical particles, now called Majorana fermions, were proposed in the context of elementary particle physics, but never observed. Some 60 years later, in the early 2000s, theoretical work emerged showing that Majorana fermionic states can exist as quasiparticle excitations in certain low-dimensional superconducting systems (not a real particle as originally proposed, but otherwise having exactly the same properties). Since then, theorists have proposed half a dozen possible ways to realize Majorana modes using readily available materials such as superconductors, semiconductors, magnets, and topological insulators (for curious readers, I recommend manuscripts [1, 2, 3] for an overview of the different proposed methods to realize Majorana states in the lab).

The most fascinating thing about Majorana states is that they belong to the class of anyons, which means that they behave neither as bosons nor as fermions upon exchange. For example, if you have two identical fermionic (or bosonic) states and you exchange their positions, the quantum mechanical function describing the two states will acquire a phase factor of -1 (or +1). Anyons, on the other hand, can have an arbitrary phase factor e^{iφ} upon exchange. For this reason, they are considered to be a starting point for topological quantum computation. If you want to learn more about anyons, check out the video below featuring IQIM’s Gil Refael and Jason Alicea.

 

Back in 2012, a group in Delft (led by Prof. Leo Kouwenhoven) announced the observation of zero-energy states in a nanoscale device consisting of a semiconductor nanowire coupled to a superconductor. These states behaved very similarly to the Majoranas that were previously predicted to occur in this system. The key word here is ‘similar’, since the behavior of these modes was not fully consistent with the theoretical predictions. Namely, the electrical conductance carried through the observed zero energy states was only about ~5% of the expected perfect transmission value for Majoranas. This part of the data was very puzzling, and immediately cast some doubts throughout the community. The physicists were quickly divided into what I will call enthusiasts (believers that these initial results indeed originated from Majorana states) and skeptics (who were pointing out that effects, other than Majoranas, can result in similarly looking zero energy peaks). And thus a great debate started.

In the coming years, experimentalists tried to observe zero energy features in improved devices, track how these features evolve with external parameters, such as gate voltages, length of the wires, etc., or focus on completely different platforms for hosting Majorana states, such as magnetic flux vortices in topological superconductors and magnetic atomic chains placed on a superconducting surface.  However, these results were not enough to convince skeptics that the observed states indeed originated from the Majoranas and not some other yet-to-be-discovered phenomenon. And so, the debate continued. With each generation of the experiments some of the alternative proposed scenarios were ruled out, but the final verification was still missing.

Fast forward to the events of this Fall and the exciting recent results. The manuscript I would like to invite you to read was just posted on the arXiv a couple of weeks ago. The main result is the observation of a perfectly quantized 2e²/h conductance at zero energy, the long-sought signature of Majorana states. This quantization implies that in this latest generation of semiconducting-superconducting devices, zero-energy states exhibit perfect electron-hole symmetry and thus allow for perfect Andreev reflection. These remarkable results may finally end the debate and convince most of the skeptics out there.

Fig_blog

Figure 1. (a,b) Comparison between devices and measurements from 2012 and 2017. (a) In 2012, a device made by combining a superconductor (Niobium Titanium Nitride alloy) with an Indium Antimonide nanowire produced the first signature of zero-energy states, but the conductance peak was only about 0.1 × e²/h. Adapted from Mourik et al., Science 2012. (b) A similar device from 2017, made by carefully depositing superconducting Aluminum on Indium Arsenide, showed the fully developed 2e²/h conductance peak. Adapted from Zhang et al., ArXiv 2017. (c) Schematic of Andreev reflection through a Normal (N)/Superconductor (S) interface. (d,e) Alternative view of the Andreev reflection process as tunneling through a double barrier, without and with Majorana modes (shown in yellow).

To fully appreciate these results, it is useful to quickly review the physics of Andreev reflection (Fig. 1c-e) that occurs at the interface between a normal region and a superconductor [4]. As the electron (blue) in the normal region enters the superconductor and pulls an additional electron with it to form a Cooper pair, an extra hole (red) is left behind (Fig. 1(c)). You can also think about this process as transmission through two leads, one connecting the superconductor to the electrons and the other to the holes (Fig. 1d). This allows us to view the problem as transmission through a double barrier, which is generally low. In the presence of a Majorana state, however, there is a resonant level at zero energy which is coupled with the same amplitude to both electrons and holes. This in turn results in resonant Andreev reflection with a perfect quantization of 2e²/h (Fig. 1e). Note that, even in the configuration without Majorana modes, perfect quantization is possible but highly unlikely, as it requires very careful tuning of the barrier potential (the authors did show that their quantization is robust against tuning the voltages on the gates, ruling out this possibility).
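The resonance argument can be checked with the textbook Breit-Wigner formula for transmission through a single level coupled to two leads, T(E) = Γ_L Γ_R / [(E − E_0)² + ((Γ_L + Γ_R)/2)²]. On resonance with symmetric couplings this gives exactly 1, and any asymmetry spoils it (a sketch of the argument only, not a model of the actual device):

```python
def breit_wigner_T(E, E0, gamma_L, gamma_R):
    """Breit-Wigner transmission through a single resonant level at energy E0,
    coupled to two leads with rates gamma_L and gamma_R."""
    return gamma_L * gamma_R / ((E - E0)**2 + ((gamma_L + gamma_R) / 2)**2)

# On resonance with symmetric coupling, transmission is perfect:
print(breit_wigner_T(0.0, 0.0, 1.0, 1.0))   # -> 1.0
# Any coupling asymmetry (electron and hole couplings differing) spoils it:
print(breit_wigner_T(0.0, 0.0, 1.0, 0.2))   # ~ 0.56
```

A Majorana mode guarantees the symmetric case, since it couples identically to electrons and holes, which is why the quantization is robust rather than a fine-tuned accident.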

Going back to the experiments, you may wonder what made this breakthrough possible? It seems to be the combination of various factors, including using epitaxially grown  superconductors and more sophisticated fabrication methods. As often happens in experimental physics, this milestone did not come from one ingenious idea, but rather from numerous technical improvements obtained by several generations of hard-working grad students and postdocs.

If you are up for more Majorana reading, you can find two more recent eye-catching manuscripts here and here. Note that this list of interesting recent Majorana papers is a mere selection by the author and not complete by any means. A few months ago, my IQIM colleagues wrote a nice blog entry about topological qubits arriving in 2018. Although this may sound overly optimistic, the recent results suggest that the field is definitely taking off. While there are certainly many challenges to be solved, we may see the next generation of experiments, designed to probe control over the Majorana states, quite soon. Stay tuned for more!

Paradise

The word dominates chapter one of Richard Holmes’s book The Age of Wonder. Holmes writes biographies of Romantic-Era writers: Mary Wollstonecraft, Percy Shelley, and Samuel Taylor Coleridge populate his bibliography. They have cameos in Age. But their scientific counterparts star.

“Their natural-philosopher counterparts,” I should say. The word “scientist” emerged as the Romantic Era closed. Romanticism, a literary and artistic movement, flourished between the 1700s and the 1800s. Romantics championed self-expression, individuality, and emotion over convention and artificiality. Romantics wondered at, and drew inspiration from, the natural world. So, Holmes argues, did Romantic-Era natural philosophers. They explored, searched, and innovated with Wollstonecraft’s, Shelley’s, and Coleridge’s zest.

Age of Wonder

Holmes depicts Wilhelm and Caroline Herschel, a German brother and sister, discovering the planet Uranus. Humphry Davy, an amateur poet from Penzance, inventing a lamp that saved miners’ lives. Michael Faraday, a working-class Londoner, inspired by Davy’s chemistry lectures.

Joseph Banks in paradise.

So Holmes entitled chapter one.

Banks studied natural history as a young English gentleman during the 1760s. He then sailed around the world, a botanist on exploratory expeditions. The second expedition brought Banks aboard the HMS Endeavour. Captain James Cook steered the ship to Brazil, Tahiti, Australia, and New Zealand. Banks brought a few colleagues onboard. They studied the native flora, fauna, skies, and tribes.

Banks, with fellow botanist Daniel Solander, accumulated over 30,000 plant samples. Artist Sydney Parkinson drew the plants during the voyage. Parkinson’s drawings underlay 743 copper engravings that Banks commissioned upon returning to England. Banks planned to publish the engravings as the book Florilegium. He never succeeded. Two institutions executed Banks’s plan more than 200 years later.

Banks’s Florilegium crowns an exhibition at the University of California at Santa Barbara (UCSB). UCSB’s Special Research Collections will host “Botanical Illustrations and Scientific Discovery—Joseph Banks and the Exploration of the South Pacific, 1768–1771” until May 2018. The exhibition features maps of Banks’s journeys, biographical sketches of Banks and Cook, contemporary art inspired by the engravings, and the Florilegium.

online poster

The exhibition spotlights “plants that have subsequently become important ornamental plants on the UCSB campus, throughout Santa Barbara, and beyond.” One sees, roaming Santa Barbara, slivers of Banks’s paradise.

2 bouganvilleas

In Santa Barbara resides the Kavli Institute for Theoretical Physics (KITP). The KITP is hosting a program about the physics of quantum information (QI). QI scientists are congregating from across the world. Everyone visits for a few weeks or months, meeting some participants and missing others (those who have left or will arrive later). Participants attend and present tutorials, explore beyond their areas of expertise, and initiate research collaborations.

A conference capstoned the program, one week this October. Several speakers had founded subfields of physics: quantum error correction (how to fix errors that dog quantum computers), quantum computational complexity (how quickly quantum computers can solve hard problems), topological quantum computation, AdS/CFT (a parallel between certain gravitational systems and certain quantum systems), and more. Swaths of science exist because of these thinkers.

KITP

One evening that week, I visited the Joseph Banks exhibition.

Joseph Banks in paradise.

I’d thought that, by “paradise,” Holmes had meant “physical attractions”: lush flowers, vibrant colors, fresh fish, and warm sand. Another meaning occurred to me, after the conference talks, as I stood before a glass case in the library.

Joseph Banks, disembarking from the Endeavour, didn’t disembark onto just an island. He disembarked onto terra incognita. Never had he or his colleagues seen the blossoms, seed pods, or sprouts before him. Swaths of science awaited. What could the natural philosopher have craved more?

QI scientists of a certain age reminisce about the 1990s, the cowboy days of QI. When impactful theorems, protocols, and experiments abounded. When they dangled, like ripe fruit, just above your head. All you had to do was look up, reach out, and prove a pineapple.

Cowboy

Typical 1990s quantum-information scientist

That generation left mine few simple theorems to prove. But QI hasn’t suffered extinction. Its frontiers have advanced into other fields of science. Researchers are gaining insight into thermodynamics, quantum gravity, condensed matter, and chemistry from QI. The KITP conference highlighted connections with quantum gravity.

…in paradise.

What could a natural philosopher crave more?

Contemporary

Artwork commissioned by the UCSB library: “Sprawling Neobiotic Chimera (After Banks’ Florilegium),” by Rose Briccetti

Most KITP talks are recorded and released online. You can access talks from the conference here. My talk, about quantum chaos and thermalization, appears here. 

With gratitude to the KITP, and to the program organizers and the conference organizers, for the opportunity to participate. 

A Few Words With Caltech Research Scientist, David Boyd

Twenty years ago, David Boyd began his career at Caltech as a Postdoctoral Scholar with Dave Goodwin, and since 2012 has held the position of Research Scientist in the Division of Physics, Mathematics and Astronomy.  A 20-year career at Caltech is in itself a significant achievement, considering Caltech’s flair for amassing the very best scientists from around the world.  Throughout his career Boyd has secured seven patents, and most recently discovered a revolutionary single-step method for growing graphene.  The method allows for unprecedented continuity in graphene growth, essential to significantly scaling up production capacity.  Boyd worked with a number of great scientists at the outset of his career.  Notably, he gained a passion for science from Professor Thomas Wdowiak (Mars’ Wdowiak Ridge is named in his honor) at the University of Alabama at Birmingham as an undergraduate, and worked as David Goodwin’s (best known for developing methods for growing thin-film high-purity diamonds) postdoc at Caltech.  Currently, Boyd is formulating a way to apply Goodwin’s reaction modeling code to graphene.  Considering Boyd’s accomplishments and extensive scientific knowledge, I feel fortunate to have been afforded the opportunity to work in his lab the past six summers. I have learned much from Boyd, but I still have more questions (not all scientific), so I requested an interview and he graciously accepted.

On the day of the interview, I meet Boyd at his office on campus at Caltech.  We walk a ways down a sunlit hallway and out to a balcony through two glass doors.  There’s a slight breeze in the air, a smell of nearby roses, and the temperature is perfect.  It’s a picturesque day in Pasadena.  We sit at a table and I ask my first question.

How many patents do you own?

I have seven patents.  The graphene patent was really hard to get, but we got it.  We just got it executed in China, so they are allowed to use it.  This is particularly exciting because of all the manufacturing in China.  The patent system has changed a bit, so it’s getting harder and harder.  You can come up with the idea, but if disparate components have already been patented, then you can’t get the patent for combining them in a unique way.  The invention has to provide a result that is unexpected or not obvious, and the patent for growing graphene with a one step process was just that.  The one step process refers to cleaning the copper substrate and growing graphene under the same chemistry in a continuous manner.  What used to be a two step process can be done in one.

You don’t have to anneal the substrate to 1000 degrees before growing.

Exactly.  Annealing the copper first and then growing doesn’t allow for a nice continuous process.  Removing the annealing step means the graphene is growing in an environment with significantly lower temperatures, which is important for CMOS or computer chip manufacturing.

Which patents do you hold most dear?

Usually in the research areas that are really cutting edge.  I have three patents in plasmonics, and that was a fun area 10 years ago.  It was a new area and we were doing something really exciting.  When you patent something, an application may never be realized, sometimes they get used and sometimes they don’t.  The graphene patent has already been licensed, so we’ve received quite a bit of traction.  As far as commercial success, the graphene has been much more successful than the other ones, but plasmonics were a lot of fun.  Water desalinization may be one application, and now there is a whole field of plasmonic chemistry.  A company has not yet licensed it, so it may have been too far ahead of its time for application anytime soon.

When did you realize you wanted to be a scientist?

I liked Physics in high school, and then I had a great mentor in college, Thomas Wdowiak.  Wdowiak showed me how to work in the lab.  Science is one of those things where an initial spark of interest drives you into action.  I became hooked, because of my love for science, the challenge it offers, and the simple fact I have fun with it.  I feel it’s very important to get into the lab and start learning science as early as possible in your education.

Were you identified as a gifted student?

I don’t think that’s a good marker.  I went to a private school early on, but no, I don’t think I was good at what they were looking for, no I wasn’t.  It comes down to what you want to do.  If you want to do something and you’re motivated to do it, you’ll find ways to make it happen.  If you want to code, you start coding, and that’s how you get good at it.  If you want to play music and have a passion for it, at first it may be your parents saying you have to go practice, but in the end it’s the passion that drives everything else.

Did you like high school?

I went to high school in Alabama and I had a good Physics teacher.  It was not the most academic of places, and if you were into academics the big thing there was to go to medical school.  I just hated memorizing things so I didn’t go that route.

Were AP classes offered at your high school, and if so, were you an AP student?

Yeah, I did take AP classes.  My high school only had AP English and AP Math, but it was just coming onboard at that time.  I took AP English because I liked the challenge and I love reading.

Were you involved in any extracurricular activities in school?

I earned the rank of Eagle Scout in the Boy Scouts.  I also raced bicycles in high school, and I was a several time state champion.  I finished high school (in America) and wanted to be a professional cyclist.  So, I got involved in the American Field Service (AFS), and did an extra year of high school in Italy as an exchange student where I ended up racing with some of the best cyclists in the world all through Italy.  It was a fantastic experience.

Did you have a college in mind for your undergraduate studies?  

No, I didn’t have a school in mind.  I had thought about the medical school path, so I considered taking pre-med courses at the local college, University of Alabama at Birmingham (UAB), because they have a good medical school.  Then UAB called me and said I earned an academic scholarship.  My father advised me that it would be a good idea to go there since it’s paid for.  I could take pre-med courses and then go to medical school afterwards if I wanted.  Well, I was in an honors program at the university and met an astronomer by the name Thomas Wdowiak.  I definitely learned from him how to be a scientist.  He also gave me a passion for being a scientist.  So, after working with Wdowiak for a while, I decided I didn’t want to go to medical school, I wanted to study Physics.  They just named a ridge on Mars after him, Wdowiak Ridge.  He was a very smart guy, and a great experimentalist who really grew my interest in science… he was great.

Did you do research while earning your undergraduate degree?  

Yes, Wdowiak had me in the lab working all the time.  We were doing real stuff in the lab.  I did a lot of undergraduate research in Astronomy, and the whole point was to get in the lab and work on science.  Because I worked with Wdowiak I had one or two papers published by the time I graduated.  Wdowiak taught me how to do science.   And that’s the thing, you have to want to do science, have a lab or a place to practice, and then start working.  

So, he was professor and experimentalist.

He was a very hands-on lab guy.  I was in the lab breaking things and fixing things. Astronomers are fun to work with.  He was an experimental astronomer who taught me, among other things, spectroscopy, vacuum technology, and much about the history of science.  In fact, it was Professor Wdowiak who told me about Millikan’s famous “Machine Shop in a Vacuum” experiment that inspired the graphene discovery… it all comes back to Caltech!

Name another scientist, other than Wdowiak, who has influenced you.

Richard Feynman also had a big influence on me.  I did not know him, but I love his books.

Were you focused solely on academics in college, or did you have a social life as well?

I was part of a concert committee that brought bands to the college.  We had some great bands like R.E.M. and the Red Hot Chili Peppers play, and I would work as a stagehand and a roadie for the shows.

So, you weren’t doing keg stands at fraternity parties?

No, it wasn’t like that.  I liked to go out and socialize, but no keg stands.  Though, I have had friends that were very successful that did do keg stands.

What’s your least favorite part of your job?

You’re always having to raise funds for salaries, equipment, and supplies.  It can be difficult, but once you get the funding it is a relief for the moment.  As a scientist, your focus isn’t always on just the science.

What are your responsibilities related to generating revenue for the university?

I raise funds for my projects via grants.  Part of the money goes to Caltech as overhead to pay for the facilities, lab space, and to keep the lights on.

What do you wish you could do more of in your job?

Less raising money.  I like working in the lab, which is fun.  Now that I have worked out the technique to grow graphene, I’m looking for applications.  I’m searching for the next impactful thing, and then I’ll figure out the necessary steps that need to be taken to get there.

Is there an aspect of your job that you believe would surprise people?

You have to be entrepreneurial, you have to sell your ideas to raise money for these projects.  You have to go with what’s hot in research.  There are certain things that get funded and things that don’t.

There may be some things you’re interested in, but other people aren’t, so there’s no funding.

Yeah, there may not be a need, therefore, no funding.  Right now, graphene is a big thing, because there are many applications and problems to be solved.  For example, diamonds were huge back in the ’80s.  But once they solved all the problems, research cooled off and industrial application took over.

Is there something else you’d really rather be researching, or are the trending ideas right now in line with your interests?

There is nothing else I’d rather be researching.  I’m in a good place right now.  We’re trying to commercialize the graphene research.  You try to do research projects that are complementary to one another.  For example, there’s a project underway, where graphene is being used for hydrogen storage in cars, that really interests me.  I do like the graphene work, it’s exciting, we’ll see where that goes.

What are the two most important personality traits essential to being a good scientist?

Creativity.  You have to think outside the box.  Perseverance.  I’m always reading and trying to understand something better.  Curiosity is, of course, a huge part of it as well. You gotta be obsessive too, I guess.  That’s more than two, sorry.

What does it take for someone to become a scientist?

You must have the desire to be a scientist, otherwise you’ll go be a stockbroker or something else.  It’s more of a passion thing, your personality.  You do have to have an aptitude for it though.  If you’re getting D’s in math, physics is probably not the place for you.  There’s an old joke, the medical student in physics class asks the professor, “Why do we have to take physics?  We’ll never use it.”  The Physics professor answers, “Physics saves lives, because it keeps idiots out of medical school.”  If you like science, but you’re not so good at math, then look at less quantitative areas of science where math is not as essential.  Computational physics and experimental physics will require you to be very good at math.  It takes a different temperament, a different set of skills.  Same curiosity, same drive and intelligence, but different temperament.

Do you ever doubt your own abilities?  Do you have insecurities about not being smart enough?

Sure, but there’s always going to be someone out there smarter.  Although, you really don’t want to ask yourself these types of questions.  If you do, you’re looking down the wrong end of the telescope.  Everyone has their doubts, but you need to listen to the feedback from the universe.  If you’re doing something for a long time and not getting results, then that’s telling you something.  Like I said, you must have a passion for what you’re doing.  If people are in doubt they should read biographies of scientists and explore their mindset to discover if science seems to be a good fit for them.  For a lot of people, it’s not the most fun job, it’s not the most social job, and certainly not the most glamorous type of job.  Some people need more social interaction, researchers are usually a little more introverted.  Again, it really depends on the person’s temperament. There are some very brilliant people in business, and it’s definitely not the case that only the brilliant people in a society go into science.  It doesn’t mean you can’t be doing amazing things just because you’re not in a scientific field.  If you like science and building things, then follow that path.  It’s also important not to force yourself to study something you don’t enjoy.

Scientists are often thought to work with giant math problems that are far above the intellectual capabilities of mere mortals.  Have you ever been in a particular situation where the lack of a solution to a math problem was impeding progress in the lab?  If so, what was the problem and did you discover the solution?

I’m attempting to model the process of graphene growth, so I’m facing this situation right now.  That’s why I have this book here.  I’m trying to adapt Professor Dave Goodwin’s Cantera reactor modeling code to model the reaction kinetics in graphene (Goodwin originally developed and wrote the modeling software called Cantera).  Dave was a big pioneer in diamond and he died almost 5 years ago here in Pasadena.  He developed a reaction modeling code for diamond, and I’m trying to apply that to graphene.  So, yeah, it’s a big math problem that I’ve been spending weeks on trying to figure out.  It’s not that I’m worried about the algebra or the coding, it’s trying to figure things out conceptually.

Do you love your job?

I do, I’ve done it for awhile, it’s fun, and I really enjoy it.  When it works, it’s great. Discovering stuff is fun and brings a great sense of satisfaction.  But it’s not always that way, it can be very frustrating.  Like any good love affair, it has its peaks and valleys.  Sometimes you hate it, but that’s part of the relationship, it’s like… aaarrgghh!!

 

Standing back at Stanford

T-shirt 1

This T-shirt came to mind last September. I was standing in front of a large silver-colored table littered with wires, cylinders, and tubes. Greg Bentsen was pointing at components and explaining their functions. He works in Monika Schleier-Smith’s lab, as a PhD student, at Stanford.

Monika’s group manipulates rubidium atoms. A few thousand atoms sit in one of the cylinders. That cylinder contains another cylinder, an optical cavity, that contains the atoms. A mirror caps each of the cavity’s ends. Light in the cavity bounces off the mirrors.

Light bounces off your bathroom mirror similarly. But we can describe your bathroom’s light accurately with Maxwellian electrodynamics, a theory developed during the 1800s. We describe the cavity’s light with quantum electrodynamics (QED). Hence we call the lab’s set-up cavity QED.

The light interacts with the atoms, entangling with them. The entanglement imprints information about the atoms on the light. Suppose that light escaped from the cavity. Greg and friends could measure the light, then infer about the atoms’ quantum state.

A little light leaks through the mirrors, though most light bounces off. From leaked light, you can infer about the ensemble of atoms. You can’t infer about individual atoms. For example, consider an atom’s electrons. Each electron has a quantum property called a spin. We sometimes imagine the spin as an arrow that points upward or downward. Together, the electrons’ spins form the atom’s joint spin. You can tell, from leaked light, whether one atom’s spin points upward. But you can’t tell which atom’s spin points upward. You can’t see the atoms for the ensemble.

Monika’s team can. They’ve cut a hole in their cylinder. Light escapes the cavity through the hole. The light from the hole’s left-hand edge carries information about the leftmost atom, and so on. The team develops a photograph of the line of atoms. Imagine holding a photograph of a line of people. You can point to one person, and say, “Aha! She’s the xkcd fan.” Similarly, Greg and friends can point to one atom in their photograph and say, “Aha! That atom has an upward-pointing spin.” Monika’s team is developing single-site imaging.

Solvay

Aha! She’s the xkcd fan.

Monika’s team plans to image atoms in such detail, they won’t need light to leak through the mirrors. Light leakage creates problems, including by entangling the atoms with the world outside the cavity. Suppose you had to diminish the amount of light that leaks from a rubidium cavity. How should you proceed?

Tell the mirrors,

T-shirt 2

You should lengthen the cavity. Why? Imagine a photon, a particle of light, in the cavity. It zooms down the cavity’s length, hits a mirror, bounces off, retreats up the cavity’s length, hits the other mirror, and bounces off. The photon repeats this process until a mirror hit fails to generate a bounce. The mirror transmits the photon to the exterior; the photon leaks out. How can you reduce leaks? By preventing photons from hitting mirrors so often, by forcing the photons to zoom longer, by lengthening the cavity, by shifting the mirrors outward.
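Greg’s back-of-the-envelope scaling can be sketched numerically. This is my own toy estimate, not the lab’s numbers: assume each round trip takes 2L/c and each of the two mirror hits transmits a small fraction T of the light, so the photon lifetime grows linearly with the cavity length L. The lengths and mirror transmission below are illustrative.

```python
# Toy estimate of cavity photon lifetime vs. cavity length.
# Each round trip takes t_rt = 2L/c; each round trip the photon hits two
# mirrors, each transmitting a small fraction T. So the photon survives
# roughly 1/(2T) round trips:  tau ≈ (2L/c) / (2T) = L/(T*c).
# Longer cavity -> fewer mirror hits per second -> slower leakage.
# The mirror transmission and lengths are illustrative, not the lab's.

C = 299_792_458.0  # speed of light, m/s

def photon_lifetime(length_m, mirror_transmission):
    round_trip_time = 2.0 * length_m / C
    loss_per_round_trip = 2.0 * mirror_transmission  # two mirror hits
    return round_trip_time / loss_per_round_trip

short = photon_lifetime(0.01, 1e-5)  # 1 cm cavity
long_ = photon_lifetime(0.02, 1e-5)  # 2 cm cavity
print(long_ / short)  # doubling the length doubles the lifetime
```

The linear scaling is the point: shifting the mirrors outward buys photon lifetime without touching the mirror coatings.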

So Greg hinted, beside that silver-colored table in Monika’s lab. The hint struck a chord: I recognized the impulse to

T-shirt 3

The impulse had led me to Stanford.

Weeks earlier, I’d written my first paper about quantum chaos and information scrambling. I’d sat and read and calculated and read and sat and emailed and written. I needed to stand up, leave my cavity, and image my work from other perspectives.

Stanford physicists had written quantum-chaos papers I admired. So I visited, presented about my work, and talked. Patrick Hayden introduced me to a result that might help me apply my result to another problem. His group helped me simplify a mathematical expression. Monika reflected that a measurement scheme I’d proposed sounded not unreasonable for cavity QED.

And Greg led me to recognize the principle behind my visit: Sometimes, you have to

T-shirt 4

to move forward.

With gratitude to Greg, Monika, Patrick, and the rest of Monika’s and Patrick’s groups for their time, consideration, explanations, and feedback. With thanks to Patrick and Stanford’s Institute for Theoretical Physics for their hospitality.