About gilrefael

Condensed matter theorist. At Caltech since 2005.

Two Views of the Eclipse

I am sure many of us are thinking about the eclipse.

It all starts with deciding how far we are going to drive in order to see totality. My family and I are currently in Colorado, so we are relatively close to the path of darkness in Wyoming. I thought about trying to book a hotel room, but if you’d like to see the dusk in Lusk, here is what you get:

Let us just say that I became quite acquainted with small-town WY and any-ville NE before giving up. Driving 10 hours in a single day with my two children, ages 4 and 5, was not an option. So I will have to be content with 90% coverage.

90% coverage sounds like it is good enough… But when you think about the sun and its output, you realize that it won’t actually be very dark. The sun delivers about 1 kW of light and heat per square meter. Blocking 90% of that still leaves us with 100 W per square meter. Imagine a room lit by a square array of 100 W incandescent bulbs spaced one meter apart. Not so dark. Luckily, we have really dark eclipse glasses.

All things considered, it is a huge coincidence that the moon is just about the right size and distance from the earth to block the sun exactly: \frac{\mbox{sun radius}}{\mbox{sun-Earth distance}}=\frac{0.7\times 10^6\,\mathrm{km}}{150\times 10^6\,\mathrm{km}}\approx \frac{\mbox{moon radius}}{\mbox{moon-Earth distance}}=\frac{1.7\times 10^3\,\mathrm{km}}{385\times 10^3\,\mathrm{km}}.
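For the numerically inclined, the coincidence can be checked in a couple of lines. A quick sketch using the round numbers from the text; the two angular radii agree to within about 5%:

```python
# Comparing the angular sizes of the sun and the moon, using the round
# numbers quoted in the text (radii and distances in km).

sun_radius, sun_distance = 0.7e6, 150e6
moon_radius, moon_distance = 1.7e3, 385e3

sun_angle = sun_radius / sun_distance     # angular radius in radians
moon_angle = moon_radius / moon_distance

print(sun_angle, moon_angle)  # both ~4.5e-3 rad: the eclipse coincidence
```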

On a more personal note, another coincidence of a lesser cosmic meaning is that my wife, Jocelyn Holland, a professor of comparative literature at UCSB and Caltech, has also done research on eclipses. She has recently published an essay that shows how, for nineteenth-century observers, and astronomers in particular, the unique darkness associated with the eclipse during totality shook their subjective experience of time. Readers might want to share their own personal experiences at the end of this blog so that we can see how a twenty-first century perspective compares.

As for Jocelyn’s paper, here is a redacted ‘poetry for scientists’ excerpt from it.

Eclipses are well-known objects of scientific study, but it is just as true that, throughout history, they have been perceived as the most supernatural of events, permitting superstition and fear to intrude. As a result, eclipses have frequently been used across cultures, in particular by the community of scientists and scholars, as an index of “enlightenment.” Astronomers in the nineteenth century – an epoch that witnessed several mathematical advances in the calculation of solar and lunar eclipses, as exemplified in the work of Friedrich Bessel – looked back at prior centuries with scorn, mocking the irrational fears of times past. The German astronomer August Ludwig Busch, in a text published shortly before a total eclipse in 1851, points out with some smugness that scarcely 200 years before then, in Germany, “the majority of the population threw itself upon its knees in desperation during a total eclipse,” and that the composure with which the next eclipse will be greeted is “the most certain proof how only science is able to conquer prejudices and superstition which prior centuries have gone through.”

Two solar eclipses were witnessed by Europeans in the mid-nineteenth century, on July 8th, 1842 and July 28th, 1851, when the first photographic image of an eclipse was made by Julius Berkowski (see below).

What Berkowski’s daguerreotype cannot convey, however, is a particular perception shared by both professional astronomers and amateur observers of these eclipses: that the darkness of the eclipse’s totality is unlike any darkness they had experienced before. As it turns out, this perception posed a challenge to their self-proclaimed enlightenment.

There was already a historical record in place describing the strange darkness of a total eclipse. As another nineteenth-century astronomer, Jacob Lehmann, phrased it, “How is it now to be explained, namely what several observers report during the eclipse of 1706, that the darkness at the time of the total occultation of the sun compares neither to night nor to dusk, but rather is of a particular kind. What is this particular kind?” The strange darkness of the eclipse presents a problem that one can state quite simply in temporal terms: it corresponds to no prior experience of natural light or time of day.

It might strike us as odd that August Ludwig Busch, the same astronomer who derided the superstition of prior generations, writes the following with reference to eclipses past, and in anticipation of the eclipse of 1851:

You will all remember the inexplicable melancholic frame of mind which one already experiences during large if not even total eclipses, when all objects appear in a dull, unusual light, there lies namely in the sight of great plains and far-spread drifts, upon which trees and rocks, although still illuminated by sunlight, still seem to cast no shadow, such a thing which causes mourning, that one is involuntarily overcome by horror. This feeling should occur more intensely in people when, during the total eclipse, a very peculiar darkness arrives which can be named neither night nor dusk.

August Ludwig Busch.

One can say that the perceived relationship between the quality of light and time of day is based on expectations that are so innate as to be taken as infallible until experience teaches otherwise. It is natural for us to use the available light in the sky as the basis for a measure of time when no time-keeping piece is on hand. The cyclical predictability of a steady increase and decrease in available light during the course of the day, however, in addition to all the nuances of how the midday light differs from dawn and twilight, is less than helpful in the rare event of an eclipse. The quality of light does not correspond to any experience of lived time. As a consequence, not only August Ludwig Busch, but also numerous other observers, attributed it to death, as if for lack of an alternative.

For all their claims of rationality, nineteenth-century observers were troubled by this darkness that conformed to no experienced time of day. It signaled to them, among other things, that time and light are out of joint. In short, as natural as it may be, a full solar eclipse has, historically, posed a real challenge: not to the predictability of mechanical time-keeping, but rather to a very human experience of time.

Entropy Avengers

As you already know if you read my rare (but highly refined!) blog samples, I have spent a big chunk of my professorial career teaching statistical mechanics. And if you teach statistical mechanics, there is pretty much one thing you obsess about: entropy.

So you can imagine my joy at finally seeing a fully anti-entropic superhero appear in my Facebook feed (physics enthusiasts out there – the project is seeking support on Kickstarter):

Apart from the plug for Assa Auerbach’s project (which, for full disclosure, I have just supported), I would like to use this as an excuse to share my lessons about entropy. With the same level of seriousness. Here they are, in order of increasing entropy.

1. Cost of entropy. Entropy is always marketed as a very palpable thing. Disorder. In class, however, it is calculated via an enumeration of the ‘microscopic states of the system’. For an atomic gas I know how to calculate the entropy (throw me at the blackboard in the middle of the night, no problem. Bosons or Fermions – anytime!) But how can the concept be applied to our practical existence? I have a proposal:

Quantify entropy by the cost (in $’s) of cleaning up the mess!

Examples can be found at all scales. For anything household-related, we should use the H_k constant: H_k = $25/hour for my housekeeper. You break a glass – it takes about 10 minutes to clean, which puts the entropy of the wreckage at $4.17. A birthday party takes about 2 hours to clean up: $50 of entropy.
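The proposed unit lends itself to a one-line calculator. A playful sketch, with the housekeeper rate H_k from the text as its only input:

```python
# The proposed household-entropy unit: dollars of cleanup labor.
# H_K is the housekeeper's hourly rate quoted in the text.

H_K = 25.0  # $/hour

def entropy_dollars(cleanup_minutes):
    """Entropy of a mess, measured by what it costs to undo it."""
    return H_K * cleanup_minutes / 60.0

print(entropy_dollars(10))    # broken glass: ~$4.17
print(entropy_dollars(120))   # birthday party: $50.00
```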

Another insight which my combined experience as professor and parent has produced:

2. Conjecture: Babies are maximally efficient topological entropy machines. If you have raised a 1-year-old, you know exactly what I mean. You can at least guess why maximum efficiency. But why topological? A baby sauntering through the house leaves a string of destruction behind it. The baby is a mess-creation string-operator! If you start lagging behind, doom will emerge – hence the maximum efficiency. By the way, the only viable strategy is to undo the damage as it happens. But this blog post is about entropy, not about parenting.

In fact, this allows us to establish a conversion from entropy measured in k_B units to its, clearly more natural, measure in dollar units. A baby eats about 1000 kCal/day = 4200 kJ/day. To fully deal with the consequences, we need a housekeeper to visit about once a week. 4200 kJ/day times 7 days = 29,400 kJ, consumed at T=300K. That is an entropy of S=Q/T \sim 10^5 J/K, or S/k_B \sim 7\times 10^{27} in dimensionless units, and it converts to S \sim $120, which is the cost of our weekly housekeeper visit. This gives a value of roughly $10^{-26} per entropy of a two-level system. Quite a reasonable bang for the buck, don’t you think?
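The dimensional bookkeeping is easy to check by machine. A minimal sketch, assuming (as in the text) that the weekly 29,400 kJ is dissipated at room temperature and that the $120 housekeeper visit pays for exactly that much entropy:

```python
# Back-of-the-envelope check of the k_B-to-dollars conversion.
# Assumptions (from the text, not rigorous thermodynamics): the baby's
# weekly food energy Q is dissipated at T = 300 K, and the $120 weekly
# housekeeper visit buys back exactly that entropy.

k_B = 1.38e-23          # Boltzmann constant, J/K

Q = 4200e3 * 7          # 4200 kJ/day for 7 days, in joules
T = 300.0               # room temperature, K

S = Q / T               # entropy in J/K
S_kB = S / k_B          # the same entropy in units of k_B

dollars_per_kB = 120.0 / S_kB   # price of one k_B worth of entropy

print(S, S_kB, dollars_per_kB)  # ~1e5 J/K, ~7e27 k_B, ~$2e-26 per k_B
```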

3. My conjecture (2) fails. The second law of thermodynamics is an inequality: S \geq Q/T. Why does the conjecture fail? Babies are not ‘maximal’. Consider presidents. Consider the mess that a government can make. It is at the scale of trillions per year: $10^{12}. Using the rigorous conversion rule established above, this corresponds to some 10^{38} two-level systems – comfortably more than the combined number of electrons in the bodies of all our military personnel. And yet the mess is created by very few individuals.

Given the large amounts of taxpayer money we dish out to deal with entropy in the world, Auerbach’s book is bound to make a big impact. In fact, maybe Max the demon will one day be nominated for the Presidential Medal of Freedom, or at least be inducted into the National Academy of Sciences.

Modern Physics Education?

Being the physics department executive officer (on top of being a quantum physicist) makes me think a lot about our undergraduate physics program. It is exciting. We start with mechanics, then go to electromagnetism (E&M) and relativity, then to quantum and statistical mechanics, and then to advanced mathematical methods, analytical mechanics, and more E&M. The dessert is usually field theory, astrophysics, and advanced lab. You can take some advanced courses introducing condensed matter, quantum computation, particle theory, AMO, general relativity, nuclear physics, etc. By the time we are done with college, we definitely feel like we know a lot.

But at the end of all that, what do we know about modern physics? Certainly we all took a class called ‘modern physics’. Or should I say ‘“modern” physics’? Because, I’m guessing, the modern physics class heavily featured the Stern-Gerlach experiment (1922) and frequent mentions of de Broglie, Bohr, and Dirac. Don’t get me wrong: great physics, and essential. But modern?

So what would be modern physics? What should we teach that does not predate 1960? By far the biggest development in my neck of the woods is easy access to computing power. Even I can run simulations of a Schroedinger equation (SE) with hundreds of sites, even constantly driven ones. Even I can diagonalize the gigantic matrix that corresponds to a Mott-Hubbard model of 15 or maybe even 20 particles. What’s more, new approximate algorithms capture the many-body quantum dynamics and ground states of chains with hundreds of sites. These are DMRG (density matrix renormalization group) and MPS (matrix product states) (see https://arxiv.org/abs/cond-mat/0409292 for a review of DMRG, and https://arxiv.org/pdf/1008.3477.pdf for a review of MPS, both by the inspiring Uli Schollwoeck).
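To make this concrete, here is what “even I can do” looks like in practice – a sketch (my example, not a prescribed curriculum): write the single-particle SE on an open 100-site tight-binding chain as a matrix, diagonalize it, and check against the known closed-form spectrum of the open chain.

```python
# Single-particle Schroedinger equation on a lattice: an open tight-binding
# chain of N sites with hopping t, H = -t * sum_i (|i><i+1| + h.c.),
# diagonalized numerically and compared to the exact open-chain spectrum.

import numpy as np

N, t = 100, 1.0

H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = -t   # nearest-neighbor hopping

energies = np.linalg.eigvalsh(H)     # all N eigenvalues, sorted ascending

# Closed form for the open chain: E_n = -2t cos(n*pi/(N+1)), n = 1..N
exact = np.sort(-2 * t * np.cos(np.arange(1, N + 1) * np.pi / (N + 1)))

print(energies[0])   # ground state, approaching -2t for large N
```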

Should we teach that? Isn’t it complicated? Yes and no, respectively – not simultaneously. We should absolutely teach it. And no – it is really not complicated. That’s the point – it is simpler than Schroedinger’s equation! How do we teach it? I am not sure yet, but certainly there is a junior-level time slot for computational quantum mechanics somewhere.

What else? Once we think about it, the flood gates open. Condensed matter just gave us a whole new paradigm for semiconductors: topological insulators. Definitely need to teach that – and it is pure 21st century! Tough? Not at all – just solving the SE on a lattice. Not tough? Well, maybe not trivial, but is it any tougher than finding the orbitals of hydrogen? (At the risk of giving you nightmares, remember Laguerre polynomials? Oh – right – you won’t get any nightmares, because, most likely, you don’t remember!)
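As a taste of how “not tough” this is, here is one possible classroom example (my choice of model, not something from the post): the Su-Schrieffer-Heeger (SSH) chain, arguably the simplest topological band model. It really is just the SE on a lattice, yet with weak-strong alternating hoppings the open chain hosts near-zero-energy edge states that the trivial ordering lacks:

```python
# The Su-Schrieffer-Heeger (SSH) chain: alternating hoppings t1 (intra-cell)
# and t2 (inter-cell) on an open chain. With t1 < t2 the chain is in its
# topological phase and hosts two near-zero-energy edge states.

import numpy as np

def ssh_spectrum(n_cells, t1, t2):
    N = 2 * n_cells
    H = np.zeros((N, N))
    for i in range(N - 1):
        t = t1 if i % 2 == 0 else t2   # alternate the bond strengths
        H[i, i + 1] = H[i + 1, i] = -t
    return np.linalg.eigvalsh(H)

E_topo = ssh_spectrum(50, t1=0.5, t2=1.0)   # topological ordering
E_triv = ssh_spectrum(50, t1=1.0, t2=0.5)   # trivial ordering

# Smallest |E|: essentially zero in the topological phase, gapped otherwise.
print(np.sort(np.abs(E_topo))[:2], np.sort(np.abs(E_triv))[:2])
```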

With that, let me take a shot at the standard way quantum mechanics is taught. Roughly, a quantum class goes like this: wave-matter duality; SE; free particle; particle in a box; harmonic oscillator; spin; angular momentum; hydrogen atom. This is a good program for atomic physics, and possibly field theory. But by and large, this is the quantum mechanics of vacuum. What about the quantum mechanics of matter? Is the Feynman path integral really more important than electron waves in solids? All physics is beautiful. But can’t Feynman wait while we teach tight-binding models?

And I’ll stop here, before I get started on hands-on labs, as well as the fragmented nature of our programs.

Question to you all out there: Suppose we go and modernize (no quotes) our physics program. What should we add? What should we take away? And we all agree – all physics is Beautiful! I’m sure I have my blind spots, so please comment!

The physics of Trump?? Election renormalization.


Two things were on my mind this last quarter: my course on advanced statistical mechanics and phase transitions, and the bizarre election campaigns that raged all around. It is no wonder, then, that I would start to conflate the Ising model, Landau mean field theory, and the renormalization group with the election process, and just think of each and every one of us as a tiny magnet that needs to point up or down – Trump or Cruz, Clinton or Sanders (a more appetizing choice, somehow), and… you get the drift.

Elections and magnetic phase transitions are very much alike. The latter, I will argue, teaches us something very important about the former.

The physics of magnetic phase transitions is amazing. If I didn’t think so, I wouldn’t be a condensed matter physicist. Models of magnets consider a bunch of spins – each one a small magnet – that talk only to their nearest neighbors, as happens in typical magnets. At the onset of magnetic order (the Curie temperature), when the symmetry of the spins gets broken, it turns out that the spin correlation length diverges. Even though the interaction length is a single lattice constant, we get an infinite correlation length.

To understand how ridiculous this is, you should understand what a correlation length is. It tells you a simple thing. If you are a spin, trying to make it out in life and figure out where to point, your pals around you are certainly going to influence you. Their pals will influence them, and therefore you. The correlation length tells you how distant a spin can be and still manage to nudge you to point up or down. (In physics-speak, it is the decay length of the reduced correlation function.) It makes sense that somebody in your neighborhood, or your office, or even your town, will do something that affects you – after all, you interact with people that distant all the time. But the analogy for the spins is that there is some circumstance in which a random person in Incheon, South Korea, could influence your vote. A diverging correlation length is the butterfly effect for real.

And yet, spins do this. At the critical temperature, just as the spins decide whether they want to point along the north pole or towards Venus, any nonsense of a fluctuation that one of them makes leagues away may galvanize things one way or another – without ever even remotely talking to even their father’s brother’s nephew’s cousin’s former roommate! Every fluctuation, no matter where, factors into the symmetry-breaking process.

A bit of physics, before I’m blamed for being crude in my interpretation. The correlation length at the Curie point – and at almost all symmetry-breaking continuous transitions – diverges as some inverse power of the distance to the critical temperature: \xi \sim \frac{1}{|T-T_c|^{\nu}}. The faster it diverges (the higher the power \nu), the more feeble the symmetry breaking actually is. Why is that, after I argued that this is an amazing phenomenon? Well, if 10^2 voices can shift you one way or another, each voice is worth something. If it takes 10^{20} voices to push you around, I’m not buying influence over you by bribing ten of them. Each voice is worth less. Why? The correlation length is also a measure of the uncertainty before the moment of truth – when the battle starts and we don’t know who wins. Big correlation length – any little element of the battlefield can change something, and many souls are involved and active. Small correlation length – the battle was already decided, since one of the sides has a single bomb that will evaporate the world. Who knew that Dr. Strangelove could be a condensed matter physicist?

This lore of correlations led to one of the most breathtaking developments of 20th-century physics. I’m a condensed matter guy, so it is natural that Ken Wilson, as well as Ben Widom, Michael Fisher, and Leo Kadanoff, are my superheroes. They came up with an idea as simple as it is profound – scaling. If you have a system (say, of spins) that you can’t figure out – maybe because it is fluctuating, and because it is interacting – all you need to do is move away from it. Let averaging (aka the central limit theorem) do the job and suppress fluctuations. Let us just zoom out. If we change the scale by a factor of 2, so that all spins look more crowded, then the correlation length also looks half as big. The system looks less critical. It is as if we managed to move away from the critical temperature – either cooling towards T=0, or heating up towards T=\infty. Both limits are easy to solve. How do we make this into a framework? If the pre-zoom-out volume had 8 spins, we can average them into a single representative spin. This way you end up with a system that looks pretty much like the one you had before – same spin density, same interactions, same physics – but at a different temperature, further from the phase transition. It turns out you can do this, and you can figure out how much changed in the process. Together, this tells you how the correlation length depends on T-T_c. This is the renormalization group, aka RG.
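There is one case where this zoom-out is a genuine one-liner: the 1d Ising chain, where tracing out every other spin maps the dimensionless coupling K = J/k_B T to K' = (1/2) ln cosh(2K). A sketch of that flow (it always runs to K = 0, i.e., infinite temperature – one way to see that there is no finite-temperature magnet in 1d):

```python
# Real-space decimation for the 1d Ising chain: summing over every other
# spin renormalizes the coupling K = J / (k_B T) to K' = 0.5 * ln(cosh(2K)).
# A sketch of the resulting RG flow.

import math

def decimate(K):
    """One zoom-out (decimation) step for the 1d Ising chain."""
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 2.0                    # start strongly coupled (low temperature)
flow = [K]
for _ in range(8):
    K = decimate(K)
    flow.append(K)

print(flow)  # monotonically shrinking: the chain looks hotter at every scale
```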

Interestingly, this RG procedure informs us that criticality and symmetry breaking are more feeble the lower the dimension. There are no 1d permanent magnets, and magnetism in 2d is very frail. Why? Well, the more dimensions there are, the more nearest neighbors each spin has, and the more neighbors your neighbors have. Think about the six-degrees-of-separation game. 3d is okay for magnets, as we know. It turns out, however, that in physical systems above 4 dimensions, critical phenomena are the same as those of a fully connected (infinite-dimensional) network. The uncertainty stage is very short, and the correlation length diverges slowly. Even at distance 1 there are enough people or spins to bend your will one way or another. Magnetization is just a question of the time elapsed from the beginning of the experiment.

Spins, votes, what’s the difference? You won’t be surprised to find that the term renormalization has permeated every aspect of economics and social science as well. What is voting Republican vs Democrat if not a symmetry breaking? Well, it is not that bad yet – the parties are different. No real symmetry there, you would think. Unless you ask the ‘undecided voter’.

And if elections are affected by such correlated dynamics, what about revolutions? Here the analogy with phase transitions is even more prevalent in our language – resistance to a regime solidifies, crystallizes, and aligns – just like solids and magnets. When people are fed up with a regime, the crucial question is: if I go to the streets, will I be joined by enough people to effect a change?

Revolutions, therefore, seem to rise out of strong fluctuations in the populace. If you wish, think of revolutions as domains where frustration runs so high that they give a political movement the inertia it needs.

Domains: that’s exactly what the correlation length is about. The correlation length is the size of correlated magnetic domains, i.e., groups of spins that point in the same direction. And now we remember that close to a phase transition, the correlation length diverges as some power of the distance to the transition: \xi \sim \frac{1}{|T-T_c|^{\nu}}. Take a magnet just above its Curie temperature. The closer we are to the phase transition, the larger the correlation length, and the bigger the fluctuating magnetized domains. The parameter \nu is the correlation-length critical exponent and something of a holy grail for practitioners of statistical mechanics. Everyone wants to calculate it for various phase transitions. It is not that easy. That’s partially why I have a job.

The correlation length aside, how many spins are involved in a domain? \xi^d = \left[\frac{1}{|T-T_c|^{\nu}}\right]^{d}. Actually, we know roughly what \nu is. For systems with dimension d>4, it is 1/2. For systems with lower dimensionality it is roughly 2/d. (Comment for the experts: I’m really not kidding – this fits the Ising model in 2 and 3 dimensions, and it fits the xy model in 3d.)

So the number of spins in a domain in systems below 4d is 1/|T-T_c|^2, independent of dimension. In four dimensions and up, on the other hand, it is 1/|T-T_c|^{d/2}, which increases rapidly with dimension when we are close to the critical point.

Back to voters. In a climate of undecided elections, analogous to a magnet near its Curie point, the spins are the voters, and the domains are the crowds supporting this candidate or that policy; domains are what become large demonstrations in the Washington Mall. And you would think that the world we live in is clearly 2d – the surface of a 3d sphere (and yes – that includes Manhattan!). So a political domain size diverges only as a moderate 1/|T-T_c|^2 during times of contested elections.

Something happened, however, in the past two decades: the internet. The connectivity of the world has changed dramatically.

No more 2d. Now, our effective dimension is determined by our web-based social network. Facebook, perhaps? Roughly speaking, the dimensionality of the Facebook network is the number of friends we have divided by the number of mutual friends. I venture to say this averages at about 10 – say, 150 friends in tow, of which 15 are mutual. So our world, for election purposes, is 10-dimensional!

Let’s estimate what this means for our political system. Any event – a terrorist attack, a recession, etc. – will cause a fluctuation that involves a large group of people: a domain. Take a time when T-T_c is a healthy 0.1, for instance. In the good old 2d world this would involve 100 friends times 1/0.1^2 \sim 10{,}000 people. Now it is more like 100\cdot 1/0.1^{10/2} \sim 10 million. So any small perturbation of conditions can make entire states turn one way or another.
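The estimate fits in a few lines. A sketch of the post's own back-of-the-envelope model (the exponent rule \nu \approx 2/d below four dimensions and \nu = 1/2 above, with 100 friends as the prefactor – a toy model, not sociology):

```python
# Back-of-the-envelope domain sizes: number of people in a correlated
# "opinion domain" ~ neighbors * (1/|T - Tc|)^(nu * d), with nu ~ 2/d
# for d < 4 and nu = 1/2 for d >= 4, as quoted in the text.

def domain_size(t_minus_tc, dim, neighbors=100):
    nu = 2.0 / dim if dim < 4 else 0.5
    return neighbors * (1.0 / abs(t_minus_tc)) ** (nu * dim)

print(domain_size(0.1, dim=2))    # the old 2d world: ~10 thousand people
print(domain_size(0.1, dim=10))   # the 10d Facebook world: ~10 million
```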

When the response to slight shifts in prevailing conditions encompasses entire states, rather than entire neighborhoods, polarization follows. Overall, a state where each neighborhood has a slightly different opinion will be rather moderate – extreme opinions will only resonate locally, and single voices can only sway so many people. But nowadays – well, we’ve all seen Trump and the like on the march. Millions. It’s not even their fault – it’s physics!

Can we do anything about it? It’s up for debate. Maybe abolish the electoral college, to make the selecting unit larger than the typical size of a fluctuating domain. Maybe carry out a time-averaged election: make an election year where each month there is a contest for the grand prize. Or maybe just move to Canada.

Quantum mechanics – it’s all in our mind!

Last week was the final week of classes, and I brought my Ph12b class, aka baby-quantum, to a conclusion. Just like the last time I taught the class, I concluded with what should make the students honor the quantum gods – the EPR paradox and Bell’s inequality. Even before these common conundrums of quantum mechanics came up, the students had already picked up on the trouble with measurement theory and had started hammering me with questions about the “many-worlds interpretation”. The many-worlds interpretation, pioneered by Everett, stipulates that whenever a quantum measurement is made of a state in a quantum superposition, the universe splits into several copies, with each possible result realized in one of the copies. All results come to pass, but if we are cats, in some universes we won’t survive to meow about it.

Questions on the many-worlds interpretation always make me think back to my early student days, when I also obsessed over these issues. In fact, I got so frustrated with the question that I started having heretical thoughts: what if it is all in our minds? What if the quantum superposition is always there, but evolution made consciousness zoom in on one possible outcome? Maybe hunting a duck is just easier if the duck is not in a superposition of flying south and swimming in a pond. Of course, this requires that at least you and the duck, and probably other bystanders, all agree on which quantum reality you are operating in. No problem – maybe evolution equipped all of our consciousnesses with the ability to zoom in on a common reality where we all agree on the results of experiments. But there are other possibilities for this reality, which still live side by side with ‘our’ reality, since – hey – it’s all in our minds!

The Eiger et al.

When I was a graduate student, in my second year I was put in an office shared with two postdocs – Arne Brataas and Stefan Kehrein. It made me really feel like I was being initiated into the community of theoretical physicists – something I had dreamed of since I was a teenager. The most conspicuous thing in the office (Harvard’s Lyman 332, if I recall correctly) was a big three- or four-panel poster of an astounding mountain range: craggy peaks, glaciers, steep drops. There was a small note in the corner: “The Eiger et al. – the amazing history of this poster is recounted in the book ‘Who Got Polchinski’s Office’ ”*