Making your way to the cutting edge of any field is a daunting challenge, especially when the edge of the field is expanding, and harder still when the rate of expansion is accelerating. John recently helped *Physics World* create a special 25th anniversary issue in which they identified the five biggest breakthroughs in physics over the past 25 years, along with the five biggest open questions. In pure John fashion, at his group meeting on Wednesday night, he made us work before revealing the answers. The photo below shows our guesses, where the asterisks denote *Physics World*'s selections. This is the blog post I wish I had when I was a fifteen-year-old aspiring physicist: an attempt to survey and provide a tiny toehold on the edge (from my biased, incredibly naive, and still-developing perspective).

**The biggest breakthroughs of the past 25 years:**

***Neutrino Mass:** *surprisingly, neutrinos have a nonzero mass, which provides a window into particle physics beyond the Standard Model.* The Standard Model has been getting a lot of attention recently. This is well deserved in my opinion, considering that the vast majority of its predictions have come true, most of which were made by the end of the 1960s. Last year's discovery of the Higgs boson is the feather in its cap. However, it's boring when things work too perfectly, because then we don't know what path to continue on. That's where the neutrino mass comes in. First, what are neutrinos? Neutrinos are fundamental particles with the special property that they barely interact with other particles. There are four fundamental forces in nature: electromagnetism, gravity, the strong force (which holds quarks together to create neutrons and protons), and the weak force (responsible for radioactivity and nuclear fusion). We can design experiments that allow us to observe neutrinos. We have learned that they are electrically neutral, so they aren't affected by electromagnetism. They are barely affected by the strong force, if at all. They have an extremely small mass, so gravity acts on them only subtly. The main way in which they interact with their environment is through the weak force. Here's the amazing thing: only really clunky versions of the Standard Model can allow for a nonzero neutrino mass! Hence, when a small but nonzero mass was experimentally established in 1998, we gained one of our first toeholds into particle physics beyond the Standard Model. This is particularly important today because, to the best of my knowledge, the LHC hasn't yet discovered any other new physics beyond the Standard Model. The mechanism behind the neutrino mass is not yet understood. Moreover, neutrinos have a bunch of other bizarre properties which we understand empirically, but whose theoretical origins we do not. The strangest of these goes by the name of neutrino oscillations.
In one sentence: there are three different kinds of neutrinos, and they can spontaneously transmute themselves from one type to another. This happens because physics is formulated in the language of mathematics, and the math says that the eigenstates corresponding to ‘flavors’ are not the same as the eigenstates corresponding to ‘mass.’ Words, words, words. Maybe the Caltech particle theory people should have a blog?
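To make "flavor eigenstates are not mass eigenstates" slightly more concrete, here is the standard two-flavor toy calculation in Python. The formula is the textbook two-flavor approximation; the mixing angle and mass splitting below are round, illustrative numbers (roughly atmospheric-scale), not a fit to data:

```python
import math

def appearance_probability(theta, dm2_ev2, L_km, E_gev):
    """Two-flavor oscillation probability P(nu_a -> nu_b).

    theta:   mixing angle between flavor and mass eigenstates (radians)
    dm2_ev2: mass-squared splitting of the two mass eigenstates (eV^2)
    L_km:    distance traveled (km); E_gev: neutrino energy (GeV)
    """
    # The 1.27 collects hbar, c and the unit conversions.
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

# Illustrative: maximal mixing, dm^2 ~ 2.4e-3 eV^2, 1 GeV neutrinos.
for L in (0, 250, 500, 1000):
    print(L, round(appearance_probability(math.pi / 4, 2.4e-3, L, 1.0), 3))
```

Note that with mixing angle zero the two sets of eigenstates coincide and the transmutation probability vanishes identically, which is exactly why a nonzero oscillation signal implies mixing (and nonzero mass splitting).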

**Shor's Algorithm:** *a quantum computer can factor N=1433301577 into 37811 × 37907 exponentially faster than a classical computer.* This result from Peter Shor in 1994 is near and dear to our quantum hearts. It opened the floodgates by showing that there are tasks a quantum computer could perform exponentially faster than a classical computer, and therefore that we should get BIG$$$ from the world over in order to advance our field!! The task here is factoring large numbers into their prime factors, the difficulty of which has been the basis for many cryptographic protocols. In one sentence, Shor's algorithm achieves this exponential speed-up because there is a step in the factoring algorithm (period finding) which can be performed in parallel via the quantum Fourier transform.
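For the curious, here is the classical skeleton of Shor's algorithm as a Python sketch. Everything below is cheap on a classical computer *except* `find_period`, written here as a brute-force loop; that one step is what the quantum Fourier transform speeds up exponentially. (N=15 is used for the demo, since brute-force period finding would choke on numbers like 1433301577.)

```python
import math

def find_period(a, N):
    # Smallest r > 0 with a^r = 1 (mod N); assumes gcd(a, N) == 1.
    # This brute-force loop is the step a quantum computer replaces
    # with the quantum Fourier transform.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_period(a, N):
    # Classical post-processing: an even period r with a^(r/2) != -1 (mod N)
    # yields nontrivial factors gcd(a^(r/2) +/- 1, N).
    r = find_period(a, N)
    if r % 2 == 1:
        return None                  # odd period: try another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                  # trivial square root: try another a
    return math.gcd(y - 1, N), math.gcd(y + 1, N)

print(factor_via_period(7, 15))      # small demo: factors 15
```

The quantum circuit's only job is to hand back the period r; the number theory before and after it is entirely classical.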

***Accelerating Universe:** *the universe is expanding, and the rate of this expansion is increasing.* This result has been the source of an incredible number of misconceptions. First, how do we know this is happening? In the 1920s astronomers discovered that some of the really faint 'stars' we see in the night sky are actually distant galaxies. Shortly thereafter, it was discovered that these galaxies are moving away from us, and away from each other. Astronomers had discovered that the universe is expanding. Now, remember that photons travel at a fixed speed, the speed of light. So the photons that hit our telescopes coming from a galaxy 1 billion light years away have been traveling for approximately 1 billion years (you have to correct for the expanding universe, so yes, we make assumptions and build a model). Also, type Ia supernovae have the special property that different instantiations produce a consistent peak luminosity, so we can use this fact to accurately determine their distance. In 1998, by studying type Ia supernovae, astronomers were able to compare the rate of expansion of the universe between galaxies of very different ages. From this, they learned that galaxies are moving away from each other much faster now than billions of years ago, and therefore the rate of expansion is accelerating.
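The "standard candle" logic is just the inverse-square law: if every type Ia supernova reaches (after calibration) the same peak luminosity L, then the flux F we actually measure pins down the distance. A minimal sketch; the peak luminosity below is a round illustrative number, not a calibrated value:

```python
import math

def candle_distance_m(peak_luminosity_w, measured_flux_w_m2):
    # Inverse-square law: F = L / (4 pi d^2), so d = sqrt(L / (4 pi F)).
    return math.sqrt(peak_luminosity_w / (4 * math.pi * measured_flux_w_m2))

L_PEAK = 1e36                                 # illustrative peak luminosity, watts
d1 = candle_distance_m(L_PEAK, 1e-15)
d2 = candle_distance_m(L_PEAK, 1e-15 / 4)     # 4x fainter => 2x farther
print(d1, d2 / d1)
```

A supernova four times fainter is twice as far away; that clean scaling, plus the measured redshifts, is what let astronomers compare expansion rates across cosmic history.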

The question becomes, how does this happen? The short answer is that we don't know, but we think the answer should be called "dark energy", and we'll talk about this below. Another thing to point out is that Einstein's equations of general relativity naturally included a term (the cosmological constant, Λ) which, in retrospect, appropriately describes the physics of an accelerating universe. Einstein and other physicists were famously unsettled by this term at the time. Another point is that there's a whole web of interrelated ideas, which have tricky names, and this interplay oftentimes trips people up. Some of these ideas include: inflation, accelerating universe, cosmological constant, big bang, vacuum energy, dark energy, etc. Here is my CliffsNotes-style overview:

* *The Big Bang.* The universe started somehow. It was probably some type of quantum fluctuation.
* *Inflation.* During the first few moments after the big bang, the universe, which is synonymous with a 'spacetime manifold', expanded INCREDIBLY RAPIDLY! This was a short-lived phenomenon, and it is different from the 'accelerating universe.' One of the major surprises about this period of inflation is that spacetime CAN expand FASTER THAN THE SPEED OF LIGHT! It's like having a box, where anything that lives within the box has an upper bound on how fast it can travel, but the box itself can change sizes, and in particular, it can expand faster than the upper bound on local travel speeds for the particles within. I had nightmares as a child trying to reconcile these ideas. You can ask my mom, she'll verify! These ideas start to make sense after you're able to play with the FLRW metric. This has all kinds of crazy consequences for our observable universe and the cosmological horizon. Inflation is still speculative, but it has made many experimentally verified predictions, most notably about the structure of the cosmic microwave background (CMB), and it explains why the universe is so homogeneous, isotropic and flat. Another interesting question is: what is the global geometry of our universe?
* *Accelerating universe.* After the initial inflationary period, the universe continued to expand, and this rate of expansion has been increasing. This is very well tested experimentally. The mechanism for this has many different names: dark energy, the cosmological constant, vacuum energy, etc.
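To see where "accelerating" comes from mathematically, the second Friedmann equation for a flat universe containing matter plus a cosmological constant gives the sign of the scale factor's acceleration. A toy sketch, in units where H0 = 1 and with round present-day-ish density parameters:

```python
def acceleration(a, omega_m=0.3, omega_lambda=0.7):
    # Second Friedmann equation (flat universe, matter + cosmological constant),
    # in units where H0 = 1:  a_ddot / a = -(omega_m / (2 a^3) - omega_lambda).
    # Matter dilutes as a^-3 while the cosmological constant term doesn't,
    # so the latter eventually wins and the expansion starts accelerating.
    return -(omega_m / (2 * a ** 3) - omega_lambda)

for scale in (0.3, 0.6, 1.0):
    print(scale, "accelerating" if acceleration(scale) > 0 else "decelerating")
```

With these numbers the crossover from deceleration to acceleration happens at a scale factor a ≈ 0.6, i.e. billions of years ago, which is exactly the transition the 1998 supernova surveys detected.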

**Extrasolar Planets:** *over the past two decades, we have detected ~1000 planets outside of our own solar system.* As a prerequisite for finding extrasolar life (unless they find us first), we need to discover candidate homes. We've only had the capability to find planets outside of our solar system for the past two decades or so. The pace of these discoveries quickened as space-based observatories like Kepler and CoRoT came online. The methods we use to detect exoplanets are extremely cool. Caltech is heavily involved in these efforts. Unfortunately, as of this past August, two of Kepler's four reaction wheels are no longer operational. Here's an idea for an ambitious Intel Science and Engineering Fair (ISEF) project: propose a new mission plan for Kepler. Bonus points if it involves finding exoplanets. Double word score if the mission gets flown. Triple word score if you find an exoplanet. You are my hero if there is life on said exoplanet.

***Higgs Boson:** *quantum field theory is hard to explain in one sentence. The Higgs "field" permeates all of space; excitations in this field are interpreted as particles (Higgs bosons); these particles give other particles mass.* What can I say that hasn't been said infinitely more eloquently over the past month?

**Quantum Error Correction (QEC):** *we want to protect quantum information from noise.* We also face this challenge with classical computers. However, things are more difficult with quantum computers/information for at least two reasons. The no-cloning theorem tells us that we cannot copy unknown quantum information, which is a necessary step in many classical error correction protocols, such as encoding the bitstring 01011 into 000111000111111. We can't do that with quantum information. It also turns out that our enemy is formidable: we are battling decoherence. One way to think about decoherence is that every quantum system interacts with its environment, creating entanglement between the two; since we can't control the environment (it is both large and unknown), we lose control of our quantum system. It's hard to imagine a better place to learn about QEC than from John's notes. These ideas are also starting to infiltrate other areas of physics, such as the way that people think about black holes.
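The classical encoding mentioned above (01011 → 000111000111111) is a 3x repetition code: copy each bit three times, then decode by majority vote, so any single flipped bit per block gets corrected. A quick sketch — and note that the copying step is exactly what no-cloning forbids for unknown quantum states:

```python
def encode(bits):
    # Repetition code: each bit becomes three copies of itself.
    return "".join(b * 3 for b in bits)

def decode(codeword):
    # Majority vote within each block of three corrects any single flip.
    return "".join(
        "1" if codeword[i:i + 3].count("1") >= 2 else "0"
        for i in range(0, len(codeword), 3)
    )

encoded = encode("01011")
corrupted = "1" + encoded[1:]     # flip the first bit in transit
print(encoded, decode(corrupted))
```

Quantum codes get around no-cloning by spreading logical information across entangled blocks of qubits rather than copying it, which is the central trick in John's notes.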

**Topological Insulators (TIs):** *we've known for a long time that solids, liquids, gases and plasmas aren't the only phases of matter; but only recently have we unexpectedly discovered a huge new class of phases.* How can you distinguish different phases of matter, such as liquids from solids, ferromagnets from antiferromagnets, or nematic liquid crystals from smectic liquid crystals? This is a tricky problem, and you should think about it the next time you fly cross-country on an airplane! However, for the phases mentioned above, there is a nice classification based simply upon local observations. Imagine putting each of these materials under a microscope. You can differentiate the phases via the local observations you make through the eyepiece. For example, in its liquid phase, H2O molecules are densely packed, but in a random fashion. In its solid phase, the H2O molecules are densely packed, but arranged in a regular crystalline pattern. Before topological phases, we classified phases based upon their local symmetries. In the early 1980s, experimentalists discovered quantum Hall systems, which were the first materials whose ground states couldn't be differentiated using only a local description. Entanglement allows different phases to look the same locally, even though they are different globally. It's kind of like an ant trying to tell you whether it lives on a Möbius strip or an ordinary ring. The 'phases of matter' can had a few dents in it, but the lid was blown sky-high when topological insulators were discovered in 2006. These materials have bizarre properties; they provide the foundation for a multitude of cousin systems; they are shedding light on questions from fundamental physics; and they will probably be widely utilized in the electronics of the 21st century. Topological insulators can be constructed from certain homogeneous slabs of material, such as bismuth telluride. Imagine a cube of this material.
Despite having a homogeneous composition, the cube would be electrically insulating in the interior, but conductive on the surface. Moreover, the mechanism by which electric charge is carried on the surface is different from that in conventional insulators. Namely, it is symmetry-protected, which makes it so that charge-carrying particles don't want to scatter off of impurities. This is one of dozens of properties that make these materials interesting.

**AdS/CFT:** *there is a mapping between gravitational problems in n+1 dimensions and critical quantum systems in n dimensions; maybe the universe is holographic?* This is one of my personal favorites, and something that I've devoted the past few months to understanding. These ideas are incredibly deep and mathematical, but I'd like to argue through analogy. Imagine you are sitting on a balcony in Malibu, CA, overlooking the vast Pacific Ocean (I was asked about this at a 4th of July BBQ, and that's how I came up with this analogy). By only looking at the surface, how much do you think you could predict about the complicated fluid dynamics below? Obviously, you wouldn't have enough information to predict the behavior of the biology, and we know that jellyfish, for example, move tons of water. However, I'd argue that with complete enough information about the surface waves and a powerful enough computer, you'd be able to solve the corresponding inverse problem and learn quite a bit. AdS/CFT, which sometimes gets called the holographic principle, is basically a mathematical toolkit which says that in certain situations there is an exact correspondence between gravity problems in n+1 dimensions and strongly correlated electron systems in n dimensions. Sometimes it's easier to think about black holes in two spatial dimensions plus time, or to think about electrons living in a two-dimensional system undergoing a phase transition (such as the fractional quantum Hall system or superfluids). AdS/CFT provides a mapping which allows you to convert problems back and forth between these settings. String theorists and the black hole information community originally came up with these ideas, but they are also connected to many other areas of physics.

Here is an example of how holography has been useful. The concept of entropy has been remarkably successful and has permeated many areas of science, from the second law of thermodynamics to designing the codes used in computers. However, entropy is usually defined in terms of closed systems, and when dealing with quantum mechanics, entanglement implies that systems are very rarely closed. This has made generalizing the idea of entropy to quantum systems very difficult. We call this generalization 'entanglement entropy.' It's easy to write down the definition, but exact calculations have been carried out in only a handful of settings, mainly when the tools of conformal field theory can be brought to bear. However, Ryu and Takayanagi conjectured that there is a geometric interpretation, called holographic entanglement entropy, which is oftentimes extremely easy to compute. Faulkner, Lewkowycz and Maldacena recently showed that Ryu and Takayanagi's conjecture isn't exactly right in all settings, but these holographic ideas are still very powerful. Entanglement entropy is also related to other deep ideas from the past 25 years or so, such as the c- and a-theorems.
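As a baby example of what "entanglement entropy" actually means, here is one of the handful of cases where the exact calculation is trivial: a two-qubit Bell state. Trace out one qubit, then take the von Neumann entropy of the 2x2 reduced density matrix (pure Python, with the eigenvalues obtained from the trace and determinant):

```python
import math

def reduced_density_matrix(psi):
    # psi: 4 amplitudes of a two-qubit state, basis index = 2*a + b.
    # Trace out qubit b: rho[a][a'] = sum_b psi[2a+b] * conj(psi[2a'+b]).
    return [[sum(psi[2 * a + b] * psi[2 * ap + b].conjugate() for b in range(2))
             for ap in range(2)] for a in range(2)]

def entanglement_entropy(psi):
    rho = reduced_density_matrix(psi)
    t = (rho[0][0] + rho[1][1]).real                        # trace
    d = (rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]).real  # determinant
    disc = max(t * t - 4 * d, 0.0)
    eigs = [(t + math.sqrt(disc)) / 2, (t - math.sqrt(disc)) / 2]
    # Von Neumann entropy: -sum of lambda * ln(lambda) over eigenvalues.
    return -sum(l * math.log(l) for l in eigs if l > 1e-12)

bell = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]
print(entanglement_entropy(bell))  # ln 2 ~ 0.693: maximal for one qubit
```

A product state gives entropy zero, a Bell pair gives ln 2; the Ryu-Takayanagi proposal computes the analogous quantity for field theories by measuring areas of surfaces in a dual gravitational geometry.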

***Bose-Einstein Condensates (BECs):** *quantum experiments that we can see with our eyeballs (more precisely, by looking at images from microscopes).* People all over the place are constantly talking about how weird quantum mechanics is. Hence, building systems where we can see quantum effects with our eyeballs is incredibly helpful. How do BECs work? Here are a few stepping stones. First, I mentioned above that symmetry is incredibly important in physics. For example, point particles (perfect spheres) have rotational symmetry. We then plug this fact into the mathematical machinery involved in quantizing a classical theory, and this rotational symmetry gets turned into a 'representation theoretic' fact that particles have 'spin.' Every particle has a spin number associated to it. These numbers are integers or half-integers. We call particles with integer spin bosons, and those with half-integer spin fermions. The spin-statistics theorem then tells us that bosonic particles can stack on top of each other, as many as we'd like. On the other hand, no two identical fermions can occupy the same quantum state. BECs are a laboratory demonstration of our predictions about bosons. One of the original BEC experiments involved cooling thousands of rubidium atoms to extremely low temperatures (a few nanokelvin above absolute zero), at which point their behavior is described by quantum mechanics. The rubidium atoms behave as predicted: the thousands of atoms coalesce into a very small region. Moreover, the Heisenberg uncertainty principle tells us that we cannot know BOTH the atoms' exact locations and momenta, so the atoms are localized, but also moving at a slow speed. This interplay is right up against the limits set by the Heisenberg uncertainty principle.
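For a rough sense of why nanokelvin temperatures are the magic regime: condensation kicks in when each atom's thermal de Broglie wavelength grows comparable to the spacing between atoms, so the bosonic wavepackets overlap. The wavelength formula is standard; the 100 nK temperature below is just an illustrative cold-atom value, not the parameters of any particular experiment:

```python
import math

H_PLANCK = 6.62607015e-34            # Planck constant, J s
K_BOLTZ = 1.380649e-23               # Boltzmann constant, J / K
M_RB87 = 87 * 1.66053906660e-27      # mass of a rubidium-87 atom, kg

def thermal_de_broglie_m(mass_kg, temp_k):
    # lambda = h / sqrt(2 pi m k T): the quantum "size" of a thermal atom.
    return H_PLANCK / math.sqrt(2 * math.pi * mass_kg * K_BOLTZ * temp_k)

print(thermal_de_broglie_m(M_RB87, 300.0))   # room temperature: far sub-atomic
print(thermal_de_broglie_m(M_RB87, 100e-9))  # 100 nK: sub-micron, wavepackets overlap
```

At room temperature the wavelength is thousands of times smaller than an atom, so atoms behave like classical billiard balls; at 100 nK it balloons by a factor of sqrt(3e9), which is why the cloud starts acting like a single quantum object.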

***Quantum Teleportation:** *using off-the-shelf quantum and classical resources, and strategically positioning them in advance of a task, we can then teleport custom quantum states anywhere in the universe, governed only by our ability to send pulses of laser light to the same location.* I've already written a series of blog posts about this! Why is this amazing? Well, teleportation would certainly be amazing, but that's a bit of a misnomer, and a point I tried to clarify in my posts. Quantum teleportation IS NOT an all-purpose teleportation protocol. But it is incredibly awesome, and will undoubtedly have major technological significance someday. Basically, it's easy to send photons all over the universe (we are very good at building and operating lasers), but it's very hard to send more exotic forms of matter, especially when the matter is supposed to stay in a specific quantum state. Quantum teleportation allows us to first spread entangled matter throughout space. Then, at a later time, we can exploit this resource to move delicate quantum states to the location of our entangled matter. Better yet, we don't need to use quantum information in order to do this; we can simply send a classical signal describing the outcome of a measurement at the base station (00, 01, 10 or 11). Another reason quantum teleportation is so important is that it's one of those rare revolutionary ideas from which thousands of other amazing ideas have been built. It's counterintuitive, but it works, and we do experiments verifying this on a regular basis.
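For readers who want to see those two classical bits do their job, here is a small NumPy simulation of the standard teleportation circuit. It enumerates all four of Alice's Bell-measurement outcomes and applies Bob's corresponding Pauli correction; in every branch, Bob ends up holding the original state:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def teleport_outcomes(alpha, beta):
    """Bob's corrected qubit for each of Alice's four measurement outcomes."""
    psi = np.array([alpha, beta], dtype=complex)        # state to teleport
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(psi, bell)                          # basis index = 4*q0 + 2*q1 + q2

    # Alice's Bell measurement as a circuit: CNOT(0 -> 1), then H on qubit 0.
    cnot01 = np.eye(8)[[0, 1, 2, 3, 6, 7, 4, 5]]        # permutes |1 q1 q2> states
    h0 = np.kron(np.array([[1, 1], [1, -1]]) / np.sqrt(2), np.eye(4))
    state = h0 @ (cnot01 @ state)

    results = {}
    for m0 in (0, 1):
        for m1 in (0, 1):
            idx = 4 * m0 + 2 * m1
            collapsed = state[[idx, idx + 1]]           # qubit 2 after measuring (m0, m1)
            collapsed = collapsed / np.linalg.norm(collapsed)
            # Bob's fix-up, chosen by the two classical bits: Z^m0 X^m1.
            correction = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
            results[(m0, m1)] = correction @ collapsed
    return results

for bits, qubit in teleport_outcomes(0.6, 0.8).items():
    print(bits, np.round(qubit.real, 3))
```

Every branch reproduces the input amplitudes (0.6, 0.8) exactly, which is the whole point: only two classical bits need to travel, yet an arbitrary qubit arrives intact.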

If you are interested in learning from an expert, instead of a noob like myself, then you might want to read Steven Weinberg’s recent popular-level article in the New York Review of Books, where he talks about ~5 of these results in a historical comparison of the standard models of cosmology and particle physics. He also has an interesting popular-level article from 2011 about the role of symmetry in physics.

In **PART II **of this post, I’ll take a crack at our list of **the biggest open problems:**

***Dark Matter/Dark Energy:**

***Quantum Gravity:**

**Room Temperature Superconductors:**

***Harnessing quantum weirdness:**

***Extrasolar life:**

**Spacetime=Entanglement?**

**Is the vacuum topologically ordered?**

**Black hole information/firewalls:**

***What is time?:**

“The Higgs “field” permeates all of space; excitations in this field are interpreted as particles (Higgs bosons); these particles give other particles mass.”

Hmm…

“[...] Notice, by the way, that the Higgs particle has nothing to do with this. [...]”

http://profmattstrassler.com/articles-and-posts/particle-physics-basics/how-the-higgs-field-works-with-math/1-the-basic-idea/

Some more pedantry ;-) “One of the major surprises about this period of inflation is that, spacetime CAN expand FASTER THAN THE SPEED OF LIGHT!” is also misleading. Inflation (which I’d say occurred /before/ the BB) is irrelevant:

“[...] any expansion described by Hubble’s law has superluminal expansion for sufficiently distant objects. Even during inflation, objects within the Hubble sphere (D < c/H) recede faster than the speed of light. This is identical to the situation during non-inflationary expansion, except the Hubble constant during inflation was much larger than subsequent values.”

http://arxiv.org/abs/astro-ph/0310808

Fixed quote:

“[...] any expansion described by Hubble’s law has superluminal expansion for sufficiently distant objects. Even during inflation, objects within the Hubble sphere (D less than c/H) recede at less than the speed of light, while objects beyond the Hubble sphere (D greater than c/H) recede faster than the speed of light. This is identical to the situation during non-inflationary expansion, except the Hubble constant during inflation was much larger than subsequent values.”

Paul, maybe I’m misunderstanding your concerns? Inflation is proposed to have occurred after the Big Bang. One of its consequences is that matter that was once causally connected to Earth could have left Earth’s causal cone, only to become causally connected again at a much later time.

This has many weird/cool ramifications. For example, if there was a mechanism for matter to coalesce into a black hole in the very early universe, then maybe this black hole left the Earth’s observable universe, then reentered at a later date. During the intermediate phase, the black hole would have radiated away an unknown amount of its mass, and we therefore wouldn’t be able to predict the (albeit minuscule) gravitational effects it would have when it reenters our observable universe.

Thanks for replying. My main concern was the (wrong) association of super-luminal expansion with inflation. As for why I would say inflation occurred before the Big Bang…

“1. The theory of reheating of the Universe after inflation is the most important application of the quantum theory of particle creation, since almost all matter constituting the Universe at the subsequent radiation-dominated stage was created during this process [1]. At the stage of inflation all energy was concentrated in a classical slowly moving inflaton field φ. Soon after the end of inflation this field began to oscillate near the minimum of its effective potential. Gradually it produced many elementary particles, they interacted with each other and came to a state of thermal equilibrium with some temperature Tr, which was called the reheating temperature.” http://arxiv.org/abs/hep-th/9405187

(see also the description of inflation in chapter 4 of Weinberg’s Cosmology).

I don’t get it. I see nothing in that quote that has anything to do with “before the Big Bang”.

Hi vmarko. Well it’s there but I suppose it’s my fault that you didn’t recognise it. That “theory of reheating [...]” /is/ the Big Bang! The term reheating “is a historical relic of early theories of inflation in which it was assumed that the zero-temperature slow roll of the inflaton field folllowed an earlier period of high temperature” [Weinberg].

Ethan Siegel:

“The Big Bang is the first moment in the history of the Universe where we can describe it as a hot, dense, expanding state, full of matter, antimatter and radiation. It has a temperature of at least a quadrillion Kelvin (but no more than 10^29 Kelvin), and it coincides with the time where inflation ends and the Universe’s expansion rate is dominated by the matter and radiation density.

If you go back to this picture, the GUT era may or may not be something that happened in our Universe since the Big Bang; the Planck era certainly is not. The Big Bang does not include inflation nor anything else that happened before it; it also did not occur at one particular place (although it did occur at one particular time, everywhere in space). It occurred roughly 13.7 billion years ago, and it occurred in a Universe that had the same temperature everywhere (to just a few parts in 10^5) and was (and still is) spatially flat. In the context of inflationary cosmology, the Big Bang coincides with the end of inflation and the cosmic reheating of the Universe.”

http://scienceblogs.com/startswithabang/2011/06/06/defining-the-big-bang/

As far as I am aware, the most common definition of the Big Bang is the moment of initial singularity of Einstein equations. Everything else — GUT phase, inflation, baryogenesis, nucleosynthesis, etc. — came after that point. See for example

http://en.wikipedia.org/wiki/Graphical_timeline_of_the_Big_Bang

for a nice overview.

Since I first heard about the Big Bang, I have never ever ever seen it being defined in any other way than this. Ethan Siegel’s definition is different, and today is the first time that I have seen Big Bang being defined as he describes in his blog. What he is talking about is the post-inflation reheating phase of the early Universe, but in my opinion it is an abuse of terminology to call that phase “the Big Bang”. For the vast majority of cosmologists, “the Big Bang” means the point of initial singularity, not the reheating phase.

HTH, :-)

Marko

Paul, thanks for reading the post so carefully! I can’t say that I deeply understand the Higgs mechanism, but I was intentional with my language. I was hinting at two things:

1. Before discovering the Higgs boson, there were two mechanisms, relevant to the Standard Model, which could have endowed other particles with mass. There was a mechanism proposed by Nambu in 1960, which showed that a condensate of fields could give mass to an otherwise massless pion. The other mechanism, which the Nobel committee called the BEH mechanism, required a new type of particle, the Higgs boson. Detecting the Higgs boson showed that the Standard Model indeed uses the latter mechanism. This is outlined in the recent Nobel citation that I linked to: http://www.nobelprize.org/nobel_prizes/physics/laureates/2013/advanced-physicsprize2013.pdf

2. In Englert and Brout’s 1964 paper, which was one of the major steps in understanding how the BEH (Higgs) mechanism fits into the Standard Model, they wrote down an interaction Hamiltonian for a complex scalar field. They showed that giving the field a vacuum expectation value (VEV) breaks a symmetry, and that in terms of this VEV, the interaction Hamiltonian has a new term proportional to VEV^2·A^2, where A is a vector field. The Feynman diagram corresponding to this term shows that you can interpret interactions between Higgs bosons and the scalar field as endowing the scalar field with mass. This is outlined on page 8 of the Nobel scientific background materials linked to above.

Regarding your link, I didn’t go through it carefully, but Matt Strassler obviously understands these things way better than myself, so… maybe there are different interpretations or maybe I’m totally wrong?

“The Feynman diagram corresponding to this term shows that you can interpret interactions between Higgs bosons and the scalar field as endowing the scalar field with mass.”

Yes, I believe that’s the important term but that it’s a term which represents the (non-zero expectation of the) scalar field endowing the vector field with mass!

“The first term is a coupling where a vector field goes over to the Nambu-Goldstone field and vice versa, while the second term is a pure mass term for the vector field. This is the key observation.”

The masses of particles come from the vacuum value of the Higgs field. The Higgs particle is an excitation above this vacuum, and itself it does not give masses to other particles, only the VEV does.

It is a fact that we cannot be certain that the Higgs field really exists until we see it oscillating, i.e. until we detect the Higgs particle. This has now been achieved at the LHC. But this particle has nothing to do with masses of other particles — the VEV does that part of the job.

So in your sentence — “The Higgs “field” permeates all of space; excitations in this field are interpreted as particles (Higgs bosons); these particles give other particles mass.” — the last statement is not quite correct. I guess that is what Paul was hinting at.

HTH, :-)

Marko

Nice to see that my adviser’s most recent paper is on this list :)

Great job Shaun! I really enjoyed this post.

Kudos for writing such an ambitious post, Shaun. :)

The strong force holds quarks together to form protons and neutrons. The electromagnetic force holds protons and electrons together in an atom. Is it right to understand that maybe neutrino oscillation is the form the weak force takes to hold neutrons together with protons and electrons in an atom, and that because the weak force is small, it takes this form?

Pingback: Assorted links

Reblogged this on Pink Iguana.

A quibbles. We do NOT know if Shor’s algorithm factors numbers “exponentially faster than a classical computer.” The issue is that we do not know what the best classical algorithm is for factoring. An accurate statement would read “Shor’s algorithm factors numbers exponentially faster than known algorithms for classical computers.”

*quibble

Austin, great point! Thanks for catching this. To your statement, I’d also add a caveat saying:

“However, very clever people have been heavily incentivized ($$$) to find faster factoring algorithms. No better algorithms have yet been found (or at least revealed to the public), so many people suspect that Shor’s algorithm indeed factors exponentially faster than the best possible classical algorithm. Another challenging extra credit problem is to prove this.”

Thanks again for pointing this out!

At least two of these are not breakthroughs. It is typical of the attitude of people in current physics media that when we find something we don’t understand, they call it a breakthrough. Or when we find a speculative idea or possibility, they call it a breakthrough. The bias is always towards projecting a picture in which we know more than we actually do.

The first thing that is not a breakthrough is finding out that if we are to keep the idea that the universe is expanding, we have to invoke an unknown force speeding the expansion up. I’m not saying the universe isn’t expanding, but finding out that we don’t understand how it’s expanding, and can’t explain it easily at all, is hardly a breakthrough!

The other thing that isn’t a breakthrough is the holographic principle. Finding connections between bits of mathematics, however interesting they are, is not a breakthrough when we don’t yet know if that has any physical significance in the real universe. The self-satisfaction of this article is considerable, as physics flounders around in the present crisis, looking for a real direction, looking for real testable science, and with many writers pretending it knows more than it does.