The math of multiboundary wormholes

Xi Dong, Alex Maloney, Henry Maxfield and I recently posted a paper to the arXiv with the title: Phase Transitions in 3D Gravity and Fractal Dimension. In other words, we’ll get about ten readers per year for the next few decades. Despite the heady title, there’s deep geometrical beauty underlying this work. In this post I want to outline the origin story and motivation behind this paper.

There are two different branches to the origin story. The first is my personal motivation, and the second is how I came into contact with my collaborators (who had begun working on the same project with a different motivation, namely to explain a phase transition described in this paper by Belin, Keller and Zadeh).

During the first year of my PhD at Caltech I was working in the mathematics department and I had a few brief but highly influential interactions with Nikolai Makarov while I was trying to find a PhD advisor. His previous student, Stanislav Smirnov, had recently won a Fields Medal for his work studying Schramm-Loewner evolution (SLE) and I was captivated by the beauty of these objects.


SLE example from Scott Sheffield’s webpage. SLEs are the fractal curves that form at the interface of many models undergoing phase transitions in 2D, such as the boundary between up and down spins in a 2D magnet (Ising model.)

One afternoon, I went to Professor Makarov’s office for a meeting and while he took a brief phone call I noticed a book on his shelf called Indra’s Pearls, which had a mesmerizing image on its cover. I asked Professor Makarov about it and he spent 30 minutes explaining some of the key results (which I didn’t understand at the time.) When we finished that part of our conversation Professor Makarov described this area of math as “math for the future, ahead of the tools we have right now” and he offered to lend me his copy. With a description like that I was hooked. I spent the next six months devouring this book, which provided a small toehold as I tried to grok the relevant mathematics literature. This year or so of being obsessed with Kleinian groups (the underlying objects in Indra’s Pearls) comes back into the story soon. I also want to mention that during that meeting with Professor Makarov I was exposed to two other ideas that have driven my research as I moved from mathematics to physics: quasiconformal mappings and the simultaneous uniformization theorem, both of which will play central roles in the next paper I release.  In other words, it was a pretty important 90 minutes of my life.


Google image search for “Indra’s Pearls”. The math underlying Indra’s Pearls sits at the intersection of hyperbolic geometry, complex analysis and dynamical systems. Mathematicians oftentimes call this field the study of “Kleinian groups”. Most of these figures were obtained by starting with a small number of Mobius transformations (usually two or three) and then finding the fixed points for all possible combinations of the initial transformations and their inverses. Indra’s Pearls was written by David Mumford, Caroline Series and David Wright. I couldn’t recommend it more highly.

My life path then hit a discontinuity when I was recruited to work on a DARPA project, which led to an 18-month leave of absence from Caltech. It’s an understatement to say that being deployed in Afghanistan led to extreme introspection. While “down range” I had moments of clarity where I knew life was too short to work on anything other than one’s deepest passions. Before math, the thing that got me into science was a childhood obsession with space and black holes. I knew that when I returned to Caltech I wanted to work on quantum gravity with John Preskill. I sent him an e-mail from Afghanistan and luckily he was willing to take me on as a student. But as a student in the mathematics department, I knew it would be tricky to find a project that involved all of: black holes (my interest), quantum information (John’s primary interest at the time) and mathematics (so I could get the degree.)

I returned to Caltech in May of 2012, which was only two months before the Firewall Paradox was introduced by Almheiri, Marolf, Polchinski and Sully. It was obvious that this was where most of the action would be for the next few years so I spent a great deal of time (years) trying to get sharp enough in the underlying concepts to be able to make comments of my own on the matter. Black holes are probably the closest things we have in Nature to the proverbial bottomless pit, which is an apt metaphor for thinking about the Firewall Paradox. After two years I was stuck. I still wasn’t close to confident enough with AdS/CFT to understand a majority of the promising developments. And then at exactly the right moment, in the summer of 2014, Preskill tipped me off to a paper titled Multiboundary Wormholes and Holographic Entanglement by Balasubramanian, Hayden, Maloney, Marolf and Ross. It was immediately obvious to me that the tools of Indra’s Pearls (Kleinian groups) provided exactly the right language to study these “multiboundary wormholes.” But despite knowing a bridge could be built between these fields, I still didn’t have the requisite physics mastery (AdS/CFT) to build it confidently.

Before mentioning how I met my collaborators and describing the work we did together, let me first describe the worlds that we bridged together.

3D Gravity and Universality

As the media has sensationalized to death, one of the most outstanding questions in modern physics is to discover and then understand a theory of quantum gravity.  As a quick aside, quantum gravity is just a placeholder name for such a theory. I stress “discover and then understand” because physicists have already discovered candidate theories, such as string theory and loop quantum gravity (I’m not trying to get into politics, just trying to demonstrate that there are multiple candidate theories). But understanding these theories — carrying out all of the relevant computations to confirm that they are consistent with Nature and then doing experiments to verify their novel predictions — is still beyond our ability. Surprisingly, without knowing the specific theory of quantum gravity that guides Nature’s hand, we’re still able to say a number of universal things that must be true for any theory of quantum gravity. The most prominent example is the holographic principle, which comes from the fact that the entropy of a black hole is proportional to the surface area enclosed by its horizon (a naive guess would say the entropy should be proportional to the black hole’s volume, as it is for a glass of water). Universal statements such as this serve as guideposts and consistency checks as we try to understand quantum gravity.

It’s exceedingly rare to find universal statements that are true in physically realistic models of quantum gravity. The holographic principle is one such example, but it pretty much stands alone in its power and applicability. By physically realistic I mean 3+1-dimensional, with the curvature of the universe either flat or very mildly positive.  However, we can make additional simplifying assumptions under which it’s easier to find universal properties. For example, we can reduce the number of spatial dimensions so that we’re considering 2+1-dimensional quantum gravity (3D gravity). Or we can investigate spacetimes that are negatively curved (anti-de Sitter space), as in the AdS/CFT correspondence. Or we can do BOTH! As in the paper that we just posted. The hope is that what’s learned in these limited situations will back-propagate insights towards reality.

The motivation for going to 2+1 dimensions is that gravity (general relativity) is much simpler there. This is explained eloquently in section II of Steve Carlip’s notes here. In 2+1 dimensions there are no local (propagating) degrees of freedom. This makes thinking about quantum aspects of these spacetimes much simpler.

The standard motivation for considering negatively curved spacetimes is that it puts us in the domain of AdS/CFT, which is the best understood model of quantum gravity. However, it’s worth pointing out that our results don’t rely on AdS/CFT. We consider negatively curved spacetimes (negatively curved Lorentzian manifolds) because they’re related to what mathematicians call hyperbolic manifolds (negatively curved Euclidean manifolds), and mathematicians know a great deal about these objects. It’s just a helpful coincidence that because we’re working with negatively curved manifolds we then get to unpack our statements in AdS/CFT.

Multiboundary wormholes

Finding solutions to Einstein’s equations of general relativity is a notoriously hard problem. Some of the more famous examples include: Minkowski space, de Sitter space, anti-de Sitter space and Schwarzschild’s solution (which describes perfectly symmetrical and static black holes.) However, there’s a trick! Einstein’s equations only depend on the local curvature of spacetime while being insensitive to global topology (the number of boundaries and holes and such.) If M is a solution of Einstein’s equations and \Gamma is a discrete subgroup of the isometry group of M, then the quotient space M/\Gamma will also be a spacetime that solves Einstein’s equations! Here’s an example for intuition. Start with 2+1-dimensional Minkowski space, which is just a stack of flat planes indexed by time. One example of a “discrete subgroup of the isometry group” is the cyclic group generated by a single translation, say the translation along the x-axis by ten meters. Minkowski space quotiented by this group is also a solution of Einstein’s equations, given as a stack of cylinders of ten-meter circumference, indexed by time.


Start with 2+1-dimensional Minkowski space, which is just a stack of flat planes indexed by time. Think of the planes on the left hand side as being infinite. To “quotient” by a translation means to glue the green lines together, which leaves a cylinder for every time slice. The figure on the right shows this cylinder universe, which is also a solution to Einstein’s equations.

(d+1)-dimensional anti-de Sitter space (AdS_{d+1}) is the maximally symmetric (d+1)-dimensional Lorentzian manifold with constant negative curvature. Our paper is about 3D gravity in negatively curved spacetimes, so our starting point is AdS_3, which can be thought of as a stack of Poincare disks (or hyperbolic sheets) with the time dimension telling you which disk (sheet) you’re on. The isometry group of AdS_3 is SO(2,2), which is isomorphic (up to a discrete quotient) to SL(2, R) \times SL(2, R). The product SL(2,R) \times SL(2,R) isn’t a very common group, but a single copy of SL(2,R) is very well studied. Discrete subgroups of it are called Fuchsian groups. Every element of SL(2,R) should be thought of as a 2×2 matrix, which acts as a Mobius transformation on the upper half-plane (a copy of the hyperbolic plane). The quotients that we obtain from these Fuchsian groups, or from discrete subgroups of the larger isometry group, yield a rich infinite family of new spacetimes, which are called multiboundary wormholes. Multiboundary wormholes have risen in importance over the last few years as powerful toy models for understanding how entanglement is dispersed near black holes (the Ryu-Takayanagi conjecture) and how the holographic dictionary maps operators in the boundary CFT to fields in the bulk (entanglement wedge reconstruction.)


Three dimensional AdS can be thought of as a stack of hyperboloids indexed by time. It’s convenient to use the Poincare disk model for the hyperboloids so that the entire spacetime can be pictured in a compact way. Despite how things appear, all of the triangles have the same “area”.

I now want to work through a few examples.

BTZ black hole: this is the simplest possible example. It’s obtained by quotienting AdS_3 by a cyclic group \langle A \rangle, generated by a single matrix A \in SL(2,R), which for example could take the form A = \begin{pmatrix} e^{\lambda} & 0 \\ 0 & e^{-\lambda} \end{pmatrix}. The matrix A acts by fractional linear transformation on the complex plane, so in this case the point z \in \mathbb{C} gets mapped to (e^{\lambda}z + 0)/(0 \cdot z + e^{-\lambda}) = e^{2\lambda} z.


Start with AdS_3 as a stack of hyperbolic half planes indexed by time. A quotient by A means that each hyperbolic half plane gets quotiented. Quotienting a constant time slice by the map z \mapsto e^{2\lambda}z gives a surface that’s topologically a cylinder. Using the picture above, this means you glue together the solid black curves. The green and red segments become two boundary regions. We call it the BTZ black hole because when you add “time” it becomes impossible to send a signal from the green boundary to the red boundary, or vice versa. The dotted line acts as an event horizon.
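For the concretely minded, here is a small numerical sketch (my own illustration, not code from the paper). It checks that the matrix A above acts as z \mapsto e^{2\lambda} z and that the translation length of A along its axis, read off from its trace, is 2\lambda, the quantity usually identified with the BTZ horizon length.

```python
# Illustrative check of the BTZ generator A = diag(e^lam, e^-lam); lam is a placeholder value.
import numpy as np

lam = 0.7
A = np.array([[np.exp(lam), 0.0], [0.0, np.exp(-lam)]])

def mobius(M, z):
    """Fractional linear action of a 2x2 matrix M on a point z."""
    a, b = M[0]
    c, d = M[1]
    return (a * z + b) / (c * z + d)

z = 0.3 + 1.2j                                  # any point in the upper half-plane
print(mobius(A, z), np.exp(2 * lam) * z)        # the two agree: A sends z to e^{2 lam} z

# Translation length of a hyperbolic element of SL(2, R), recovered from its trace;
# for A this is 2*lam, the length usually identified with the horizon.
print(2 * np.arccosh(np.trace(A) / 2), 2 * lam)
```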

Three boundary wormhole: 

There are many parameterizations that we can choose to obtain the three boundary wormhole. I’ll only show schematically how the gluings go. A nice reference with the details is this paper by Henry Maxfield.


This is a picture of a constant time slice of AdS_3 quotiented by two hyperbolic elements A and B (one convenient parameterization is spelled out in Maxfield’s paper). Each time slice is a copy of the hyperbolic half plane with the black arcs glued together and the green arcs glued together (by the maps A and B). These gluings yield a pair of pants surface. Each of the boundary regions is causally disconnected from the others. The dotted lines are black hole horizons that illustrate where the causal disconnection happens.

Torus wormhole: 

It’s simpler to write down generators for the torus wormhole, but following along with the gluings is more complicated. To obtain the torus wormhole we quotient AdS_3 by the free group \langle A, B \rangle where A = \begin{pmatrix} e^{\lambda} & 0 \\ 0 & e^{-\lambda} \end{pmatrix} and B = \begin{pmatrix} \cosh \lambda & \sinh \lambda \\ \sinh \lambda & \cosh \lambda \end{pmatrix}. (Note that this is only one choice of generators, and a highly symmetrical one at that.)


This is a picture of a constant time slice of AdS_3 quotiented by the A and B above. Each time slice is a copy of the hyperbolic half plane with the black arcs glued together and the green arcs glued together (by the maps A and B). These gluings yield what’s called the “torus wormhole”. Topologically it’s just a donut with a hole cut out. When you add time to the mix there’s a causal structure in which the dotted lines act as a black hole horizon, so that a message sent from behind the horizon will never reach the boundary.
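As a quick sanity check (again my own illustration, with a placeholder value of \lambda), the code below verifies that A fixes 0 and \infty on the boundary while B fixes \pm 1, so their axes cross; crossing axes is what makes the quotient a one-holed torus rather than a pair of pants. It also estimates the length of the geodesic associated with the commutator ABA^{-1}B^{-1}, which is the element usually associated with the horizon of the torus wormhole.

```python
# Illustrative check on the torus-wormhole generators above. lam = 1.5 is a placeholder,
# chosen large enough that the commutator is hyperbolic (|trace| > 2), which, as far as I
# understand, it needs to be for the quotient to be a smooth one-holed torus.
import numpy as np

lam = 1.5
A = np.array([[np.exp(lam), 0.0], [0.0, np.exp(-lam)]])
B = np.array([[np.cosh(lam), np.sinh(lam)], [np.sinh(lam), np.cosh(lam)]])

def fixed_points(M):
    """Boundary fixed points of z -> (az + b)/(cz + d): roots of c z^2 + (d - a) z - b = 0."""
    (a, b), (c, d) = M
    if np.isclose(c, 0.0):
        return (b / (d - a), np.inf)
    return tuple(np.roots([c, d - a, -b]))

print(fixed_points(A))   # (0, inf): the axis of A is the imaginary axis
print(fixed_points(B))   # +1 and -1: the axis of B is the unit semicircle, crossing A's axis

def geodesic_length(M):
    """Length of the closed geodesic associated with a hyperbolic element, from its trace."""
    return 2 * np.arccosh(abs(np.trace(M)) / 2)

K = A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)   # commutator [A, B]
print(geodesic_length(K))                          # horizon length for this value of lam
```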

Lorentzian to Euclidean spacetimes

So far we have been talking about negatively curved Lorentzian manifolds. These are manifolds that have a notion of both “time” and “space.” The technical definition involves differential geometry and it is related to the signature of the metric. On the other hand, mathematicians know a great deal about negatively curved Euclidean manifolds. Euclidean manifolds only have a notion of “space” (so no time-like directions.) Given a multiboundary wormhole, which by definition is a quotient AdS_3/\Gamma where \Gamma is a discrete subgroup of Isom(AdS_3), there’s a procedure to analytically continue this to a Euclidean hyperbolic manifold of the form H^3/ \Gamma_E where H^3 is three dimensional hyperbolic space and \Gamma_E is a discrete subgroup of the isometry group of H^3, which is PSL(2, \mathbb{C}). This analytic continuation procedure is well understood for time-symmetric spacetimes but it’s subtle for spacetimes that don’t have time-reversal symmetry. A discussion of this subtlety will be the topic of my next paper. To keep this blog post at a reasonable level of technical detail I’m going to need you to take it on faith that to every Lorentzian 3-manifold multiboundary wormhole there’s an associated Euclidean hyperbolic 3-manifold. Basically you need to believe that given a discrete subgroup \Gamma of SL(2, R) \times SL(2, R) there’s a procedure to obtain a discrete subgroup \Gamma_E of PSL(2, \mathbb{C}). Discrete subgroups of PSL(2, \mathbb{C}) are called Kleinian groups and quotients of H^3 by groups of this form yield hyperbolic 3-manifolds. These Euclidean manifolds obtained by analytic continuation arise when studying the thermodynamics of these spacetimes and also when studying correlation functions; so there’s a sense in which they’re physical.

TLDR: you start with a 2+1-d Lorentzian 3-manifold obtained as a quotient AdS_3/\Gamma and analytic continuation gives a Euclidean 3-manifold obtained as a quotient H^3/\Gamma_E where H^3 is 3-dimensional hyperbolic space and \Gamma_E is a discrete subgroup of PSL(2,\mathbb{C}) (Kleinian group.) 

Limit sets: 

Every Kleinian group \Gamma_E = \langle A_1, \dots, A_g \rangle \subset PSL(2, \mathbb{C}) has a fractal that’s naturally associated with it. The fractal, called the limit set, is obtained by finding the fixed points of every possible combination of generators and their inverses (more precisely, it is the closure of that set of fixed points). Moreover, there’s a beautiful theorem of Patterson, Sullivan, Bishop and Jones that says the smallest eigenvalue \lambda_0 of the Laplacian on the quotient manifold H^3 / \Gamma_E is related to the Hausdorff dimension of this fractal (call it D) by the formula \lambda_0 = D(2-D). This smallest eigenvalue controls a number of the quantities of interest for this spacetime, but calculating it directly is usually intractable. However, McMullen proposed an algorithm to calculate the Hausdorff dimension of the relevant fractals, so we can get at the spectrum efficiently, albeit indirectly.


This is a screen grab of Figure 2 from our paper. These are two examples of fractals that emerge when studying these spacetimes. Both of these particular fractals have a 3-fold symmetry. They have this symmetry because these particular spacetimes came from looking at something called “n=3 Renyi entropies”. The number q indexes a one complex dimensional family of spacetimes that have this 3-fold symmetry. These Kleinian groups each have two generators that are described in section 2.3 of our paper.
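To make the limit-set construction concrete, here is a toy sketch in the spirit of Indra’s Pearls. The two generators below are placeholders chosen for illustration, not the generators from section 2.3 of the paper: the script enumerates reduced words in the generators, collects their attracting fixed points, and plots the resulting approximation to the limit set. From such a point cloud one could crudely estimate the Hausdorff dimension by box counting; McMullen’s eigenvalue algorithm is the serious tool.

```python
# Toy limit-set plot for a two-generator Kleinian group (illustrative generators only).
import itertools
import numpy as np
import matplotlib.pyplot as plt

def loxodromic(p, q, k):
    """Mobius map with attracting fixed point p, repelling fixed point q, multiplier k (|k| < 1)."""
    C = np.array([[1, -p], [1, -q]], dtype=complex)   # sends p -> 0 and q -> infinity
    C /= np.sqrt(np.linalg.det(C))                    # normalize to determinant 1
    D = np.diag([np.sqrt(k), 1 / np.sqrt(k)])
    return np.linalg.inv(C) @ D @ C

def attracting_fixed_point(M):
    """The fixed point belonging to the eigenvector with the larger |eigenvalue|."""
    vals, vecs = np.linalg.eig(M)
    v = vecs[:, np.argmax(np.abs(vals))]
    return v[0] / v[1]

A = loxodromic(1.0, -1.0, 0.12)     # placeholder generators, with axes ending on +-1 and +-i
B = loxodromic(1.0j, -1.0j, 0.12)
gens = {0: A, 1: np.linalg.inv(A), 2: B, 3: np.linalg.inv(B)}
inverse_of = {0: 1, 1: 0, 2: 3, 3: 2}

points = []
for length in range(1, 7):
    for word in itertools.product(range(4), repeat=length):
        # keep only reduced words (no letter immediately followed by its inverse)
        if any(word[i + 1] == inverse_of[word[i]] for i in range(len(word) - 1)):
            continue
        M = np.eye(2, dtype=complex)
        for letter in word:
            M = M @ gens[letter]
        points.append(attracting_fixed_point(M))

pts = np.array(points)
plt.scatter(pts.real, pts.imag, s=0.5)
plt.gca().set_aspect("equal")
plt.title("Fixed points of group elements: an approximation to the limit set")
plt.show()
```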

What we did

Our primary result is a generalization of the Hawking-Page phase transition for multiboundary wormholes. To understand the thermodynamics (from a 3d quantum gravity perspective) one starts with a fixed boundary Riemann surface and then looks at the contributions to the partition function from each of the ways to fill in the boundary (each of which is a hyperbolic 3-manifold). We showed that the expected dominant contributions, which are given by handlebodies, are unstable when the scalar kinetic operator -\nabla^2 + m^2 develops a negative mode, which happens whenever the Hausdorff dimension of the limit set of \Gamma_E is greater than the conformal dimension of the lightest scalar field living in the bulk. One has to go pretty far down the quantum gravity rabbit hole (black hole) to understand why this is an interesting research direction to pursue, but at least anyone can appreciate the pretty pictures!
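For readers who want to see where that criterion comes from, here is my rough unpacking (a sketch, with the AdS radius set to one and both D and the scalar’s conformal dimension \Delta taken to be at least 1). The theorem quoted above gives \lambda_0 = D(2-D) for the lowest eigenvalue of the Laplacian on H^3/\Gamma_E, while a bulk scalar in AdS_3 satisfies m^2 = \Delta(\Delta - 2). The operator -\nabla^2 + m^2 therefore has a negative mode exactly when \lambda_0 + m^2 = (\Delta - 1)^2 - (D - 1)^2 < 0, which is the statement that \Delta < D.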

Majorana update

If you are, by any chance, following progress in the field of Majorana bound states, then you are for sure super excited about ample Majorana results arriving this Fall. On the other hand, if you just heard about these elusive states recently, it is time for an update. For physicists working in the field, this Fall was perhaps the most exciting time since the first experimental reports from 2012. In the last few weeks there were not just one, but at least three interesting manuscripts reporting new insightful data which may finally provide a definitive experimental verification of the existence of these states in condensed matter systems.

But before I dive into these new results, let me give a brief history of Majorana states and their experimental observation. The story starts with the talented young physicist Ettore Majorana, who hypothesized back in 1937 the existence of fermionic particles that are their own antiparticles. These hypothetical particles, now called Majorana fermions, were proposed in the context of elementary particle physics, but never observed. Some 60 years later, in the early 2000s, theoretical work emerged showing that Majorana fermionic states can exist as the quasiparticle excitations in certain low-dimensional superconducting systems (not real particles as originally proposed, but otherwise having the exact same properties). Since then theorists have proposed half a dozen possible ways to realize Majorana modes using readily available materials such as superconductors, semiconductors, magnets, and topological insulators (for curious readers, I recommend manuscripts [1, 2, 3] for an overview of the different proposed methods to realize Majorana states in the lab).

The most fascinating thing about Majorana states is that they belong to the class of anyons, which means that they behave neither as bosons nor as fermions upon exchange. For example, if you have two identical fermionic (or bosonic) states and you exchange their positions, the quantum-mechanical wave function describing the two states will acquire a phase factor of -1 (or +1). Anyons, on the other hand, can acquire an arbitrary phase factor e^{i\varphi} upon exchange. For this reason, they are considered to be a starting point for topological quantum computation. If you want to learn more about anyons, check out the video below featuring IQIM’s Gil Refael and Jason Alicea.

 

Back in 2012, a group in Delft (led by Prof. Leo Kouwenhoven) announced the observation of zero-energy states in a nanoscale device consisting of a semiconductor nanowire coupled to a superconductor. These states behaved very similarly to the Majoranas that were previously predicted to occur in this system. The key word here is ‘similarly’, since the behavior of these modes was not fully consistent with the theoretical predictions. Namely, the electrical conductance carried through the observed zero-energy states was only about 5% of the expected perfect transmission value for Majoranas. This part of the data was very puzzling, and it immediately cast some doubt throughout the community. Physicists were quickly divided into what I will call enthusiasts (believers that these initial results indeed originated from Majorana states) and skeptics (who pointed out that effects other than Majoranas can result in similar-looking zero-energy peaks). And thus a great debate started.

In the coming years, experimentalists tried to observe zero energy features in improved devices, track how these features evolve with external parameters, such as gate voltages, length of the wires, etc., or focus on completely different platforms for hosting Majorana states, such as magnetic flux vortices in topological superconductors and magnetic atomic chains placed on a superconducting surface.  However, these results were not enough to convince skeptics that the observed states indeed originated from the Majoranas and not some other yet-to-be-discovered phenomenon. And so, the debate continued. With each generation of the experiments some of the alternative proposed scenarios were ruled out, but the final verification was still missing.

Fast forward to the events of this Fall and the exciting recent results. The manuscript I would like to invite you to read was just posted on the arXiv a couple of weeks ago. The main result is the observation of a perfectly quantized 2e^2/h conductance at zero energy, the long-sought signature of Majorana states. This quantization implies that in this latest generation of semiconducting-superconducting devices the zero-energy states exhibit perfect electron-hole symmetry and thus allow for perfect Andreev reflection. These remarkable results may finally end the debate and convince most of the skeptics out there.


Figure 1. (a,b) Comparison between devices and measurements from 2012 and 2017. (a) In 2012 a device made by combining a superconductor (niobium titanium nitride alloy) and an indium antimonide nanowire resulted in the first signature of zero-energy states, but the conductance peak was only about 0.1 e^2/h. Adapted from Mourik et al., Science 2012. (b) A similar device from 2017, made by carefully depositing superconducting aluminum on indium arsenide. The fully developed 2e^2/h conductance peak was observed. Adapted from Zhang et al., arXiv 2017. (c) Schematic of Andreev reflection through the normal (N)/superconductor (S) interface. (d,e) Alternative view of the Andreev reflection process as tunneling through a double barrier, without and with Majorana modes (shown in yellow).

To fully appreciate these results, it is useful to quickly review the physics of Andreev reflection (Fig. 1c-e) that occurs at the interface between a normal region and a superconductor [4]. As the electron (blue) in the normal region enters the superconductor and pulls an additional electron with it to form a Cooper pair, an extra hole (red) is left behind (Fig. 1(c)). You can also think about this process as transmission through two leads, one connecting the superconductor to the electrons and the other to the holes (Fig. 1d). This allows us to view the problem as transmission through a double barrier, which is generally low. In the presence of a Majorana state, however, there is a resonant level at zero energy which is coupled with the same amplitude to both electrons and holes. This in turn results in resonant Andreev reflection with a perfectly quantized conductance of 2e^2/h (Fig. 1e). Note that, even in the configuration without Majorana modes, perfect quantization is possible but highly unlikely, as it requires very careful tuning of the barrier potential (the authors did show that their quantization is robust against tuning the voltages on the gates, ruling out this possibility).
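To see why equal couplings force perfect transmission, here is a toy Breit-Wigner sketch of the double-barrier picture (my own illustration, not taken from the manuscript): a resonant level at zero energy coupled to an electron lead and a hole lead with rates \Gamma_e and \Gamma_h.

```python
# Breit-Wigner transmission through a single resonant level; transmission reaches exactly 1
# on resonance only when the two couplings are equal, which is the Majorana situation
# (the conductance is then 2e^2/h).
import numpy as np

def transmission(E, gamma_e, gamma_h, E0=0.0):
    """Breit-Wigner transmission for a level at E0 coupled with rates gamma_e and gamma_h."""
    return gamma_e * gamma_h / ((E - E0) ** 2 + 0.25 * (gamma_e + gamma_h) ** 2)

print(transmission(0.0, 1.0, 1.0))   # 1.0: symmetric couplings give perfect Andreev reflection
print(transmission(0.0, 1.0, 0.2))   # < 1: asymmetric couplings require fine-tuning to reach 1
```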

Going back to the experiments, you may wonder what made this breakthrough possible? It seems to be the combination of various factors, including using epitaxially grown  superconductors and more sophisticated fabrication methods. As often happens in experimental physics, this milestone did not come from one ingenious idea, but rather from numerous technical improvements obtained by several generations of hard-working grad students and postdocs.

If you are up for more Majorana reading, you can find two more recent eye-catching manuscripts here and here. Note that the list of interesting recent Majorana papers is a mere selection by the author and not complete by any means. A few months ago, my IQIM colleagues wrote a nice blog entry about topological qubits arriving in 2018. Although this may sound overly optimistic, the recent results suggest that the field is definitely taking off. While there are certainly many challenges to be solved, we may see the next generation of experiments designed to probe control over the Majorana states quite soon. Stay tuned for more!!!!!!

A Few Words With Caltech Research Scientist, David Boyd

Twenty years ago, David Boyd began his career at Caltech as a Postdoctoral Scholar with Dave Goodwin, and since 2012 he has held the position of Research Scientist in the Division of Physics, Mathematics and Astronomy.  A 20-year career at Caltech is in itself a significant achievement considering Caltech’s flair for amassing the very best scientists from around the world.  Throughout his career Boyd has secured 7 patents, and he most recently discovered a revolutionary single-step method for growing graphene.  The method allows for unprecedented continuity in graphene growth, essential to significantly scaling up production capacity.  Boyd worked with a number of great scientists at the outset of his career.  Notably, he gained a passion for science from Professor Thomas Wdowiak (Mars’ Wdowiak Ridge is named in his honor) at the University of Alabama at Birmingham as an undergraduate, and worked as David Goodwin’s (best known for developing methods for growing thin-film high-purity diamonds) postdoc at Caltech.  Currently, Boyd is formulating a way to apply Goodwin’s reaction modeling code to graphene.  Considering Boyd’s accomplishments and extensive scientific knowledge, I feel fortunate to have been afforded the opportunity to work in his lab the past six summers. I have learned much from Boyd, but I still have more questions (not all scientific), so I requested an interview and he graciously accepted.

On the day of the interview, I meet Boyd at his office on campus at Caltech.  We walk a ways down a sunlit hallway and out to a balcony through two glass doors.  There’s a slight breeze in the air, a smell of nearby roses, and the temperature is perfect.  It’s a picturesque day in Pasadena.  We sit at a table and I ask my first question.

How many patents do you own?

I have seven patents.  The graphene patent was really hard to get, but we got it.  We just got it executed in China, so they are allowed to use it.  This is particularly exciting because of all the manufacturing in China.  The patent system has changed a bit, so it’s getting harder and harder.  You can come up with the idea, but if disparate components have already been patented, then you can’t get the patent for combining them in a unique way.  The invention has to provide a result that is unexpected or not obvious, and the patent for growing graphene with a one step process was just that.  The one step process refers to cleaning the copper substrate and growing graphene under the same chemistry in a continuous manner.  What used to be a two step process can be done in one.

You don’t have to anneal the substrate to 1000 degrees before growing.

Exactly.  Annealing the copper first and then growing doesn’t allow for a nice continuous process.  Removing the annealing step means the graphene is growing in an environment with significantly lower temperatures, which is important for CMOS or computer chip manufacturing.

Which patents do you hold most dear?

Usually in the research areas that are really cutting edge.  I have three patents in plasmonics, and that was a fun area 10 years ago.  It was a new area and we were doing something really exciting.  When you patent something, an application may never be realized, sometimes they get used and sometimes they don’t.  The graphene patent has already been licensed, so we’ve received quite a bit of traction.  As far as commercial success, the graphene has been much more successful than the other ones, but plasmonics were a lot of fun.  Water desalinization may be one application, and now there is a whole field of plasmonic chemistry.  A company has not yet licensed it, so it may have been too far ahead of its time for application anytime soon.

When did you realize you wanted to be a scientist?

I liked Physics in high school, and then I had a great mentor in college, Thomas Wdowiak.  Wdowiak showed me how to work in the lab.  Science is one of those things where an initial spark of interest drives you into action.  I became hooked, because of my love for science, the challenge it offers, and the simple fact I have fun with it.  I feel it’s very important to get into the lab and start learning science as early as possible in your education.

Were you identified as a gifted student?

I don’t think that’s a good marker.  I went to a private school early on, but no, I don’t think I was good at what they were looking for, no I wasn’t.  It comes down to what you want to do.  If you want to do something and you’re motivated to do it, you’ll find ways to make it happen.  If you want to code, you start coding, and that’s how you get good at it.  If you want to play music and have a passion for it, at first it may be your parents saying you have to go practice, but in the end it’s the passion that drives everything else.

Did you like high school?

I went to high school in Alabama and I had a good Physics teacher.  It was not the most academic of places, and if you were into academics the big thing there was to go to medical school.  I just hated memorizing things so I didn’t go that route.

Were AP classes offered at your high school, and if so, were you an AP student?

Yeah, I did take AP classes.  My high school only had AP English and AP Math, but it was just coming onboard at that time.  I took AP English because I liked the challenge and I love reading.

Were you involved in any extracurricular activities in school?

I earned the rank of Eagle Scout in the Boy Scouts.  I also raced bicycles in high school, and I was a several-time state champion.  I finished high school (in America) and wanted to be a professional cyclist.  So, I got involved in the American Field Service (AFS), and did an extra year of high school in Italy as an exchange student where I ended up racing with some of the best cyclists in the world all through Italy.  It was a fantastic experience.

Did you have a college in mind for your undergraduate studies?  

No, I didn’t have a school in mind.  I had thought about the medical school path, so I considered taking pre-med courses at the local college, University of Alabama at Birmingham (UAB), because they have a good medical school.  Then UAB called me and said I had earned an academic scholarship.  My father advised me that it would be a good idea to go there since it’s paid for.  I could take pre-med courses and then go to medical school afterwards if I wanted.  Well, I was in an honors program at the university and met an astronomer by the name of Thomas Wdowiak.  I definitely learned from him how to be a scientist.  He also gave me a passion for being a scientist.  So, after working with Wdowiak for a while, I decided I didn’t want to go to medical school, I wanted to study Physics.  They just named a ridge on Mars after him, Wdowiak Ridge.  He was a very smart guy, and a great experimentalist who really grew my interest in science… he was great.

Did you do research while earning your undergraduate degree?  

Yes, Wdowiak had me in the lab working all the time.  We were doing real stuff in the lab.  I did a lot of undergraduate research in Astronomy, and the whole point was to get in the lab and work on science.  Because I worked with Wdowiak I had one or two papers published by the time I graduated.  Wdowiak taught me how to do science.   And that’s the thing, you have to want to do science, have a lab or a place to practice, and then start working.  

So, he was professor and experimentalist.

He was a very hands-on lab guy.  I was in the lab breaking things and fixing things. Astronomers are fun to work with.  He was an experimental astronomer who taught me, among other things, spectroscopy, vacuum technology, and much about the history of science.  In fact, it was Professor Wdowiak who told me about Millikan’s famous “Machine Shop in a Vacuum” experiment that inspired the graphene discovery… it all comes back to Caltech!

Name another scientist, other than Wdowiak, who has influenced you.

Richard Feynman also had a big influence on me.  I did not know him, but I love his books.

Were you focused solely on academics in college, or did you have a social life as well?

I was part of a concert committee that brought bands to the college.  We had some great bands like R.E.M. and the Red Hot Chili Peppers play, and I would work as a stagehand and a roadie for the shows.

So, you weren’t doing keg stands at fraternity parties?

No, it wasn’t like that.  I liked to go out and socialize, but no keg stands.  Though, I have had friends that were very successful that did do keg stands.

What’s your least favorite part of your job?

You’re always having to raise funds for salaries, equipment, and supplies.  It can be difficult, but once you get the funding it is a relief for the moment.  As a scientist, your focus isn’t always on just the science.

What are your responsibilities related to generating revenue for the university?

I raise funds for my projects via grants.  Part of the money goes to Caltech as overhead to pay for the facilities, lab space, and to keep the lights on.

What do you wish you could do more of in your job?

Less raising money.  I like working in the lab, which is fun.  Now that I have worked out the technique to grow graphene, I’m looking for applications.  I’m searching for the next impactful thing, and then I’ll figure out the necessary steps that need to be taken to get there.

Is there an aspect of your job that you believe would surprise people?

You have to be entrepreneurial, you have to sell your ideas to raise money for these projects.  You have to go with what’s hot in research.  There are certain things that get funded and things that don’t.

There may be some things you’re interested in, but other people aren’t, so there’s no funding.

Yeah, there may not be a need, and therefore no funding.  Right now, graphene is a big thing, because there are many applications and problems to be solved.  For example, diamonds were huge back in the ‘80s.  But once they solved all the problems, research cooled off and industrial application took over.

Is there something else you’d really rather be researching, or are the trending ideas right now in line with your interests?

There is nothing else I’d rather be researching.  I’m in a good place right now.  We’re trying to commercialize the graphene research.  You try to do research projects that are complementary to one another.  For example, there’s a project underway, where graphene is being used for hydrogen storage in cars, that really interests me.  I do like the graphene work, it’s exciting, we’ll see where that goes.

What are the two most important personality traits essential to being a good scientist?

Creativity.  You have to think outside the box.  Perseverance.  I’m always reading and trying to understand something better.  Curiosity is, of course, a huge part of it as well. You gotta be obsessive too, I guess.  That’s more than two, sorry.

What does it take for someone to become a scientist?

You must have the desire to be a scientist, otherwise you’ll go be a stockbroker or something else.  It’s more of a passion thing, your personality.  You do have to have an aptitude for it though.  If you’re getting D’s in math, physics is probably not the place for you.  There’s an old joke, the medical student in physics class asks the professor, “Why do we have to take physics?  We’ll never use it.”  The Physics professor answers, “Physics saves lives, because it keeps idiots out of medical school.”  If you like science, but you’re not so good at math, then look at less quantitative areas of science where math is not as essential.  Computational physics and experimental physics will require you to be very good at math.  It takes a different temperament, a different set of skills.  Same curiosity, same drive and intelligence, but different temperament.

Do you ever doubt your own abilities?  Do you have insecurities about not being smart enough?

Sure, but there’s always going to be someone out there smarter.  Although, you really don’t want to ask yourself these types of questions.  If you do, you’re looking down the wrong end of the telescope.  Everyone has their doubts, but you need to listen to the feedback from the universe.  If you’re doing something for a long time and not getting results, then that’s telling you something.  Like I said, you must have a passion for what you’re doing.  If people are in doubt they should read biographies of scientists and explore their mindset to discover if science seems to be a good fit for them.  For a lot of people, it’s not the most fun job, it’s not the most social job, and certainly not the most glamorous type of job.  Some people need more social interaction, researchers are usually a little more introverted.  Again, it really depends on the person’s temperament. There are some very brilliant people in business, and it’s definitely not the case that only the brilliant people in a society go into science.  It doesn’t mean you can’t be doing amazing things just because you’re not in a scientific field.  If you like science and building things, then follow that path.  It’s also important not to force yourself to study something you don’t enjoy.

Scientists are often thought to work with giant math problems that are far above the intellectual capabilities of mere mortals.  Have you ever been in a particular situation where the lack of a solution to a math problem was impeding progress in the lab?  If so, what was the problem and did you discover the solution?

I’m attempting to model the process of graphene growth, so I’m facing this situation right now.  That’s why I have this book here.  I’m trying to adapt Professor Dave Goodwin’s Cantera reactor modeling code to model the reaction kinetics in graphene (Goodwin originally developed and wrote the modeling software called Cantera).  Dave was a big pioneer in diamond and he died almost 5 years ago here in Pasadena.  He developed a reaction modeling code for diamond, and I’m trying to apply that to graphene.  So, yeah, it’s a big math problem that I’ve been spending weeks on trying to figure out.  It’s not that I’m worried about the algebra or the coding, it’s trying to figure things out conceptually.

Do you love your job?

I do, I’ve done it for a while, it’s fun, and I really enjoy it.  When it works, it’s great. Discovering stuff is fun and brings a great sense of satisfaction.  But it’s not always that way, it can be very frustrating.  Like any good love affair, it has its peaks and valleys.  Sometimes you hate it, but that’s part of the relationship, it’s like… aaarrgghh!!

 

Teacher Research at Caltech

The Yeh Lab group’s research activities at Caltech have been instrumental in studying semiconductors and making two-dimensional materials such as graphene, as highlighted on a BBC Horizons show.  

An emerging sub-field of semiconductor and two-dimensional materials research is that of transition metal dichalcogenide (TMDC) monolayers. In particular, a monolayer of tungsten disulfide, a TMDC, is believed to exhibit interesting semiconductor properties when exposed to circularly polarized light. My role in the Yeh Lab, as a visiting high school physics teacher intern for the summer of 2017, has been to help research and set up a vacuum chamber to study tungsten disulfide samples under circularly polarized light.

What makes semiconductors unique is that their conductivity can be controlled by doping or by changes in temperature. Higher temperatures or doping can bridge the energy gap between the valence and conduction bands; in other words, electrons can be promoted across the gap and begin to conduct. Like graphene, tungsten disulfide has a hexagonal, symmetric crystal structure. Monolayers of transition metal dichalcogenides with such a honeycomb structure have two energy valleys, which can couple to one another. Circularly polarized light can be used to populate one valley rather than the other, giving a degree of optical control over where the electron population sits.

The Yeh Lab Group prides itself on making in-house the materials and devices needed for research. For example, in order to study high temperature superconductors, the Yeh Group designed and built their own scanning tunneling microscope. When they began researching graphene, instead of buying vast quantities of graphene, they pioneered new ways of fabricating it. This research topic has been no different: Wei-hsiang Lin, a Caltech graduate student, has been busy fabricating Tungsten disulfide samples via chemical vapor deposition (CVD) using Tungsten oxide and sulfur powder.  


Wei-hsiang Lin’s area for using PLD to form the TMDC samples

The first portion of my assignment was spent learning more about vacuum chambers and researching what to order to confine our sample in the chamber. One must determine how the electronic feeds should be attached, how many are necessary, which vacuum pump will be used, and how many flanges and gaskets of each size must be purchased in order to prepare the vacuum chamber.

There were also a number of flanges and parts already in the lab that needed to be examined for possible use. After triple-checking the details, the order was placed with Kurt J. Lesker. With a sufficient amount of anti-seize lubricant and numerous nuts, washers, and bolts, we assembled the vacuum chamber that will hold the TMDC sample.


The original vacuum chamber



Fun in the lab



The prepped vacuum chamber


The second part of my assignment was spent researching how to set up the optics for our experiment and ordering the necessary equipment. Once the experiment is up and running we will be using a milliwatt broad-spectrum light source that is directed into a monochromator to narrow the light down to specific wavelengths for testing. Ultimately we will be evaluating a broad wavelength range, from 300 nm through 1800 nm. Following the monochromator, the light will be refocused by a plano-convex lens. Next, the light will pass through a linear polarizer and then a circular polarizer (quarter-wave plate). Lastly, the light will be refocused by a biconvex lens into the vacuum chamber and onto a 1 mm by 1 mm area of the sample.

Soon, we are excited to verify how tungsten disulfide responds to circularly polarized light.  Does our sample resonate at the exact same wavelengths as the first labs found? Why or why not?  What other unique properties are observed?  How can they be explained?  How is the Hall Effect observed?  What does this mean for the possible applications of semiconductors? How can the transfer of information from one valley to another be used in advanced electronics for communication?  Then, similar exciting experimentation will take place with graphene under circularly polarized light.

I love the sharp contrast of the high-energy, adolescent classroom with the quiet calm of the lab.  I am grateful for getting to learn a different and new-to-me area of Physics during the summer.  Yes, I remember studying polarization and semiconductors in high school and as an undergraduate.  But it is completely different to set up an experiment from scratch, to be a part of groundbreaking research in these areas.  And it is just fun to get to work with your hands and build research equipment at a world-leading research university.  Sometimes Science teachers can get bogged down with all the paperwork and meetings.  I am grateful to have had this fabulous opportunity during the summer to work on applied Science and to be re-energized in my love for Physics.  I look forward to meeting my new batch of students in a few short weeks to share with them my curiosity and joy for learning how the world works.

Two Views of the Eclipse

I am sure many of us are thinking about the eclipse.

It all starts with how far we are going to drive in order to see totality. My family and I are currently in Colorado, so we are relatively close to the path of darkness in Wyoming. I thought about trying to book a hotel room. But if you’d like to see the dusk in Lusk, here is what you get:

Let us just say that I became quite acquainted with small-town WY and any-ville NE before giving up. Driving for 10 hours in a single day with my two children, ages 4 and 5, was not an option. So I will have to be content with 90% coverage.

90% coverage sounds like it is good enough… But when you think about the sun and its output, you realize that it won’t actually be very dark. The sun delivers about 1 kW of light and heat per square meter at the Earth’s surface. Blocking 90% of that still leaves us with 100 W per square meter. Imagine a room lit by a square array of 100 W incandescent bulbs spaced one meter apart from each other. Not so dark. Luckily, we have really dark eclipse glasses.

All things considered, it is a huge coincidence that the moon is just about the right size and distance from the earth to block the sun exactly, \frac{\mbox{sun radius}}{\mbox{sun-Earth distance}}=\frac{0.7\cdot 10^6 km}{150\cdot 10^6 km}\approx \frac{\mbox{luna radius}}{\mbox{luna-Earth distance}}=\frac{1.7\cdot 10^3 km}{385\cdot 10^3 km}.
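Plugging in the numbers (the same rough values as above):

```python
# Checking the angular-size coincidence with the rough numbers quoted above.
sun_radius, sun_earth_distance = 0.7e6, 150e6      # km
moon_radius, moon_earth_distance = 1.7e3, 385e3    # km
print(sun_radius / sun_earth_distance)    # ~0.0047
print(moon_radius / moon_earth_distance)  # ~0.0044
```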

On a more personal note, another coincidence of a lesser cosmic meaning is that my wife, Jocelyn Holland, a professor of comparative literature at UCSB and Caltech, has also done research on eclipses. She has recently published an essay that shows how, for nineteenth-century observers, and astronomers in particular, the unique darkness associated with the eclipse during totality shook their subjective experience of time. Readers might want to share their own personal experiences at the end of this blog so that we can see how a twenty-first century perspective compares.

As for Jocelyn’s paper, here is a redacted ‘poetry for scientists’ excerpt from it.

Eclipses are well-known objects of scientific study but it is just as true that, throughout history, they have been perceived as the most supernatural of events, permitting superstition and fear to intrude. As a result, eclipses have frequently been used across cultures, in particular by the community of scientists and scholars, as an index of “enlightenment.” Astronomers in the nineteenth century – an epoch that witnessed several mathematical advances in the calculation of solar and lunar eclipses, as exemplified in the work of Friedrich Bessel – looked back at prior centuries with scorn, mocking the irrational fears of times past. The German astronomer August Ludwig Busch, in a text published shortly before the total eclipse of 1851, points out with some smugness that scarcely 200 years before then, in Germany, “the majority of the population threw itself upon its knees in desperation during a total eclipse,” and that the composure with which the next eclipse will be greeted is “the most certain proof how only science is able to conquer prejudices and superstition which prior centuries have gone through.”

Two solar eclipses were witnessed by Europeans in the mid-nineteenth century, on July 8th, 1842 and July 28th, 1851, when the first photographic image of an eclipse was made by Julius Berkowski (see below).

What Berkowski’s daguerreotype cannot convey, however, is a particular perception shared by both professional astronomers and amateur observers of these eclipses: that the darkness of the eclipse’s totality is unlike any darkness they had experienced before. As it turns out, this perception posed a challenge to their self-proclaimed enlightenment.

There was already a historical record in place describing the strange darkness of a total eclipse. As another nineteenth-century astronomer, Jacob Lehmann, phrased it, “How is it now to be explained, namely what several observers report during the eclipse of 1706, that the darkness at the time of the total occultation of the sun compares neither to night nor to dusk, but rather is of a particular kind. What is this particular kind?” The strange darkness of the eclipse presents a problem that one can state quite simply in temporal terms: it corresponds to no prior experience of natural light or time of day.

It might strike us as odd that August Ludwig Busch, the same astronomer who derided the superstition of prior generations, writes the following with reference to eclipses past, and in anticipation of the eclipse of 1851:

You will all remember the inexplicable melancholic frame of mind which one already experiences during large if not even total eclipses, when all objects appear in a dull, unusual light, there lies namely in the sight of great plains and far-spread drifts, upon which trees and rocks, although still illuminated by sunlight, still seem to cast no shadow, such a thing which causes mourning, that one is involuntarily overcome by horror. This feeling should occur more intensely in people when, during the total eclipse, a very peculiar darkness arrives which can be named neither night nor dusk.

August Ludwig Busch.

One can say that the perceived relationship between the quality of light and time of day is based on expectations that are so innate as to be taken as infallible until experience teaches otherwise. It is natural for us to use the available light in the sky as the basis for a measure of time when no time-keeping piece is on hand. The cyclical predictability of a steady increase and decrease in available light during the course of the day, however, in addition to all the nuances of how the midday light differs from dawn and twilight, is less than helpful in the rare event of an eclipse. The quality of light does not correspond to any experience of lived time. As a consequence, not only August Ludwig Busch, but also numerous other observers, attributed it to death, as if for lack of an alternative.

For all their claims of rationality, nineteenth-century observers were troubled by this darkness that conformed to no experienced time of day. It signaled to them, among other things, that time and light are out of joint. In short, as natural as it may be, a full solar eclipse has, historically, posed a real challenge: not to the predictability of mechanical time-keeping, but rather to a very human experience of time.

Taming wave functions with neural networks

Note from Nicole Yunger Halpern: One sunny Saturday this spring, I heard Sam Greydanus present about his undergraduate thesis. Sam was about to graduate from Dartmouth with a major in physics. He had worked with quantum-computation theorist Professor James Whitfield. The presentation — about applying neural networks to quantum computation — so intrigued me that I asked him to share his research on Quantum Frontiers. Sam generously agreed; this is his story.

Wave functions in the wild


The wave function, \psi , is a mixed blessing. At first, it causes unsuspecting undergrads (me) some angst via the Schrodinger’s cat paradox. This angst morphs into full-fledged panic when they encounter concepts such as nonlocality and Bell’s theorem (which, by the way, is surprisingly hard to verify experimentally). The real trouble with \psi , though, is that it grows exponentially with the number of entangled particles in a system. We couldn’t even hope to write the wavefunction of 100 entangled particles, much less perform computations on it…but there’s a lot to gain from doing just that.

The thing is, we (a couple of luckless physicists) love \psi . Manipulating wave functions can give us ultra-precise timekeeping, secure encryption, and polynomial-time factoring of integers (read: break RSA). Harnessing quantum effects can also produce better machine learning, better physics simulations, and even quantum teleportation.

Taming the beast

Though \psi grows exponentially with the number of particles in a system, most physical wave functions can be described with a lot less information. Two algorithms for doing this are the Density Matrix Renormalization Group (DMRG) and Quantum Monte Carlo (QMC).


Density Matrix Renormalization Group (DMRG). Imagine we want to learn about trees, but studying a full-grown, 50-foot tall tree in the lab is too unwieldy. One idea is to keep the tree small, like a bonsai tree. DMRG is an algorithm which, like a bonsai gardener, prunes the wave function while preserving its most important components. It produces a compressed version of the wave function called a Matrix Product State (MPS). One issue with DMRG is that it doesn’t extend particularly well to 2D and 3D systems.
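To make the MPS idea tangible, here is a toy example of my own (not code from the thesis): the 4-site GHZ state, which reappears below, written as a bond-dimension-2 Matrix Product State and then contracted back into the full 16-component wave function.

```python
# The 4-site GHZ state as a Matrix Product State with bond dimension 2.
import itertools
import numpy as np

# One pair of 2x2 matrices per site: A[s] propagates the "all spins equal to s" path.
A = {0: np.diag([1.0, 0.0]), 1: np.diag([0.0, 1.0])}
left, right = np.ones(2), np.ones(2)      # boundary vectors

amps = np.array([left @ A[s1] @ A[s2] @ A[s3] @ A[s4] @ right
                 for s1, s2, s3, s4 in itertools.product((0, 1), repeat=4)])
amps /= np.linalg.norm(amps)
print(amps)   # amplitude 1/sqrt(2) on |0000> and |1111>, zero everywhere else
```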


Quantum Monte Carlo (QMC). Another way to study the concept of “tree” in a lab (bear with me on this metaphor) would be to study a bunch of leaf, seed, and bark samples. Quantum Monte Carlo algorithms do this with wave functions, taking “samples” of a wave function (pure states) and using the properties and frequencies of these samples to build a picture of the wave function as a whole. The difficulty with QMC is that it treats the wave function as a black box. We might ask, “how does flipping the spin of the third electron affect the total energy?” and QMC wouldn’t have much of a physical answer.

Brains \gg Brawn

Neural Quantum States (NQS). Some state spaces are far too large for even Monte Carlo to sample adequately. Suppose now we’re studying a forest full of different species of trees. If one type of tree vastly outnumbers the others, choosing samples from random trees isn’t an efficient way to map biodiversity. Somehow, we need to make the sampling process “smarter”. Last year, Google DeepMind used a technique called deep reinforcement learning to do just that – and achieved fame for defeating the world champion human Go player. A recent Science paper by Carleo and Troyer (2017) used the same technique to make QMC “smarter” and effectively compress wave functions with neural networks. This approach, called “Neural Quantum States (NQS)”, produced several state-of-the-art results.
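For reference, the ansatz behind NQS is essentially a restricted Boltzmann machine. Here is a minimal sketch of that amplitude function, with real, randomly initialized parameters for simplicity (Carleo and Troyer use complex, trained ones):

```python
# Restricted-Boltzmann-machine form of a Neural Quantum State (untrained placeholder parameters).
import numpy as np

rng = np.random.default_rng(0)
n_spins, n_hidden = 10, 20
a = 0.01 * rng.standard_normal(n_spins)               # visible biases
b = 0.01 * rng.standard_normal(n_hidden)              # hidden biases
W = 0.01 * rng.standard_normal((n_hidden, n_spins))   # couplings

def psi(s):
    """Unnormalized amplitude of a spin configuration s (entries +1/-1)."""
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

s = rng.choice([-1, 1], size=n_spins)
print(psi(s))
# In variational Monte Carlo one samples configurations with probability proportional to
# |psi(s)|^2 (e.g. via Metropolis moves) and tunes (a, b, W) to minimize the energy.
```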


The general idea of my thesis.

My thesis. My undergraduate thesis centered upon much the same idea. In fact, I had to abandon some of my initial work after reading the NQS paper. I then focused on using machine learning techniques to obtain MPS coefficients. Like Carleo and Troyer, I used neural networks to approximate  \psi . Unlike Carleo and Troyer, I trained my model to output a set of Matrix Product State coefficients which have physical meaning (MPS coefficients always correspond to a certain state and site, e.g. “spin up, electron number 3”).

Cool – but does it work?

Yes – for small systems. In my thesis, I considered a toy system of 4 spin-\frac{1}{2} particles interacting via the Heisenberg Hamiltonian. Solving this system is not difficult so I was able to focus on fitting the two disparate parts – machine learning and Matrix Product States – together.
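Since 4 spins are small enough to handle by brute force, a check like the following (my sketch, assuming a nearest-neighbour open chain with unit coupling) gives the exact ground truth that any ansatz should reproduce:

```python
# Exact diagonalization of a 4-site spin-1/2 Heisenberg chain (open boundaries, J = 1 assumed).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

def op_on_site(op, site, n):
    """Embed a single-site operator at position `site` in an n-site chain."""
    mats = [op if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 4
H = sum(op_on_site(s, i, n) @ op_on_site(s, i + 1, n)
        for i in range(n - 1) for s in (sx, sy, sz))

energies, states = np.linalg.eigh(H)
print(energies[0], states[:, 0])   # ground-state energy and wave function
```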

Success! My model solved for ground states with arbitrary precision. Even more interestingly, I used it to automatically obtain MPS coefficients. Shown below, for example, is a visualization of my model’s coefficients for the GHZ state, compared with coefficients taken from the literature.


A visual comparison of a 4-site Matrix Product State for the GHZ state a) listed in the literature b) obtained from my neural network model. Colored squares correspond to real-valued elements of 2×2 matrices.

Limitations. The careful reader might point out that, according to the schema of my model (above), I still have to write out the full wave function. To scale my model up, I instead trained it variationally over a subspace of the Hamiltonian (just as the authors of the NQS paper did). Results are decent for larger (10-20 particle) systems, but the training itself is still unstable. I’ll finish ironing out the details soon, so keep an eye on arXiv* :).

Outside the ivory tower


A quantum computer developed by Joint Quantum Institute, U. Maryland.

Quantum computing is a field that’s poised to take on commercial relevance. Taming the wave function is one of the big hurdles we need to clear before this happens. Hopefully my findings will have a small role to play in making this happen.

On a more personal note, thank you for reading about my work. As a recent undergrad, I’m still new to research and I’d love to hear constructive comments or criticisms. If you found this post interesting, check out my research blog.

*arXiv is an online library for electronic preprints of scientific papers

Entropy Avengers

As you already know if you read my rare (but highly refined!) blog samples, I have spent a big chunk of my professorial career teaching statistical mechanics. And if you teach statistical mechanics, there is pretty much one thing you obsess about: entropy.

So you can imagine my joy at finally seeing a fully anti-entropic superhero appear on my Facebook account (physics enthusiasts out there – the project is seeking support on Kickstarter):

Apart from the plug for Assa Auerbach’s project (which, for full disclosure, I have just supported), I would like to use this as an excuse to share my lessons about entropy. With the same level of seriousness. Here they are, in order of increasing entropy.

1. Cost of entropy. Entropy is always marketed as a very palpable thing. Disorder. In class, however, it is calculated via an enumeration of the ‘microscopic states of the system’. For an atomic gas I know how to calculate the entropy (throw me at the blackboard in the middle of the night, no problem. Bosons or Fermions – anytime!) But how can the concept be applied to our practical existence? I have a proposal:

Quantify entropy by the cost (in $’s) of cleaning up the mess!

Examples can be found at all scales. For anything household-related, we should use the H_k constant. H_k=$25/hour for my housekeeper. You break a glass – it takes about 10 minutes to clean. That puts the entropy of the wreckage at $4.17. Having a birthday party takes about 2 hours to clean up: $50 entropy.

Another insight which my combined experience as professor and parent has produced:

2. Conjecture: Babies are maximally efficient topological entropy machines. If you’ve raised a 1-year-old you know exactly what I mean. You can at least guess why maximum efficiency. But why topological? A baby sauntering through the house leaves a string of destruction behind itself. The baby is a mess-creation string-operator! If you start lagging behind, doom will emerge – hence the maximum efficiency. By the way, the only viable strategy is to undo the damage as it happens. But this blog post is about entropy, not about parenting.

In fact, this allows us to establish a conversion between entropy measured in k_B units and its, clearly more natural, measure in dollar units. A baby eats about 1000kCal/day=4200kJ/day. To fully deal with the consequences, we need a housekeeper to visit about once a week. 4200kJ/day times 7 days=29400 kJoules. These are consumed at T=300K. So an entropy of S=Q/T~100 kJ/K, which is also S/k_B = Q/(k_B T) ~ 7 \times 10^{27} in dimensionless units, converts to S~$120, which is the cost of our weekly housekeeper visit. This gives a value of about $ 2\times 10^{-26} per entropy of a two-level system. Quite a reasonable bang for the buck, don’t you think?
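Redoing that back-of-envelope conversion in code (all inputs are the rough estimates above, so the output is order-of-magnitude only):

```python
# Converting baby-generated entropy into dollars, using the rough numbers from the text.
k_B = 1.38e-23                  # J/K
Q_per_week = 4200e3 * 7         # J: about 1000 kCal/day of baby metabolism for a week
T = 300.0                       # K
S = Q_per_week / T              # ~1e5 J/K
S_in_kB_units = S / k_B         # ~7e27 "two-level systems" worth of entropy
housekeeper_visit = 120.0       # $ per weekly visit
print(S, S_in_kB_units, housekeeper_visit / S_in_kB_units)   # ~$2e-26 per two-level system
```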

3. My conjecture (2) fails. The second law of thermodynamics is an inequality. Entropy \geq Q/T. Why does the conjecture fail? Babies are not ‘maximal’. Consider presidents. Consider the mess that the government can make. It is at the scale of trillions per year. $ 10^{12}. Using the rigorous conversion rule established above, this corresponds to about 5 \times 10^{37} two-level systems, comfortably more than the combined number of electrons present in the human bodies of all our military personnel. The mess, however, is created by very few individuals.

Given the large amounts of taxpayer money we dish out to deal with entropy in the world, Auerbach’s book is bound to make a big impact. In fact, maybe Max the demon would one day be nominated for the national medal of freedom, or at least be inducted into the National Academy of Sciences.