I have a complicated existence. I'm a partner at GV (formerly Google Ventures) but also finishing my PhD at Caltech. It's astonishing that they gave the keys to this blog to hooligans like myself.

# The math of multiboundary wormholes

Xi Dong, Alex Maloney, Henry Maxfield and I recently posted a paper to the arXiv with the title: Phase Transitions in 3D Gravity and Fractal Dimension. In other words, we’ll get about ten readers per year for the next few decades. Despite the heady title, there’s deep geometrical beauty underlying this work. In this post I want to outline the origin story and motivation behind this paper.

There are two different branches to the origin story. The first was my personal motivation; the second is how I came into contact with my collaborators, who began working on the same project but with a different motivation, namely to explain a phase transition described in this paper by Belin, Keller and Zadeh.

During the first year of my PhD at Caltech I was working in the mathematics department and I had a few brief but highly influential interactions with Nikolai Makarov while I was trying to find a PhD advisor. His previous student, Stanislav Smirnov, had recently won a Fields Medal for his work studying Schramm-Loewner evolution (SLE) and I was captivated by the beauty of these objects.

SLE example from Scott Sheffield’s webpage. SLEs are the fractal curves that form at the interface of many models undergoing phase transitions in 2D, such as the boundary between up and down spins in a 2D magnet (Ising model.)

One afternoon, I went to Professor Makarov’s office for a meeting and while he took a brief phone call I noticed a book on his shelf called Indra’s Pearls, which had a mesmerizing image on its cover. I asked Professor Makarov about it and he spent 30 minutes explaining some of the key results (which I didn’t understand at the time.) When we finished that part of our conversation Professor Makarov described this area of math as “math for the future, ahead of the tools we have right now” and he offered to lend me his copy. With a description like that I was hooked. I spent the next six months devouring this book, which provided a small toehold as I tried to grok the relevant mathematics literature. This year or so of being obsessed with Kleinian groups (the underlying objects in Indra’s Pearls) comes back into the story soon. I also want to mention that during that meeting with Professor Makarov I was exposed to two other ideas that have driven my research as I moved from mathematics to physics: quasiconformal mappings and the simultaneous uniformization theorem, both of which will play central roles in the next paper I release. In other words, it was a pretty important 90 minutes of my life.

Google image search for “Indra’s Pearls”. The math underlying Indra’s Pearls sits at the intersection of hyperbolic geometry, complex analysis and dynamical systems. Mathematicians oftentimes call this field the study of “Kleinian groups”. Most of these figures were obtained by starting with a small number of Mobius transformations (usually two or three) and then finding the fixed points for all possible combinations of the initial transformations and their inverses. Indra’s Pearls was written by David Mumford, Caroline Series and David Wright. I couldn’t recommend it more highly.

My life path then hit a discontinuity when I was recruited to work on a DARPA project, which led to taking an 18 month leave of absence from Caltech. It’s an understatement to say that being deployed in Afghanistan led to extreme introspection. While “down range” I had moments of clarity where I knew life was too short to work on anything other than one’s deepest passions. Before math, the thing that got me into science was a childhood obsession with space and black holes. I knew that when I returned to Caltech I wanted to work on quantum gravity with John Preskill. I sent him an e-mail from Afghanistan and luckily he was willing to take me on as a student. But as a student in the mathematics department, I knew it would be tricky to find a project that involved all of: black holes (my interest), quantum information (John’s primary interest at the time) and mathematics (so I could get the degree.)

I returned to Caltech in May of 2012, which was only two months before the Firewall Paradox was introduced by Almheiri, Marolf, Polchinski and Sully. It was obvious that this was where most of the action would be for the next few years so I spent a great deal of time (years) trying to get sharp enough in the underlying concepts to be able to make comments of my own on the matter. Black holes are probably the closest things we have in Nature to the proverbial bottomless pit, which is an apt metaphor for thinking about the Firewall Paradox. After two years I was stuck. I still wasn’t close to confident enough with AdS/CFT to understand a majority of the promising developments. And then at exactly the right moment, in the summer of 2014, Preskill tipped me off to a paper titled Multiboundary Wormholes and Holographic Entanglement by Balasubramanian, Hayden, Maloney, Marolf and Ross. It was immediately obvious to me that the tools of Indra’s Pearls (Kleinian groups) provided exactly the right language to study these “multiboundary wormholes.” But despite knowing a bridge could be built between these fields, I still didn’t have the requisite physics mastery (AdS/CFT) to build it confidently.

Before mentioning how I met my collaborators and describing the work we did together, let me first describe the worlds that we bridged.

## 3D Gravity and Universality

As the media has sensationalized to death, one of the most outstanding questions in modern physics is to discover and then understand a theory of quantum gravity. As a quick aside, *quantum gravity* is just a placeholder name for such a theory. I used italics because physicists have already discovered candidate theories, such as string theory and loop quantum gravity (I’m not trying to get into politics, just trying to demonstrate that there are multiple candidate theories). But understanding these theories — carrying out all of the relevant computations to confirm that they are consistent with Nature and then doing experiments to verify their novel predictions — is still beyond our ability. Surprisingly, without knowing the specific theory of quantum gravity that guides Nature’s hand, we can still say a number of universal things that must be true for any theory of quantum gravity. The most prominent example is the holographic principle, which comes from the fact that the entropy of a black hole is proportional to the surface area enclosed by the black hole’s horizon (a naive guess would say the entropy should be proportional to the volume of the black hole, as it is for a glass of water.) Universal statements such as this serve as guideposts and consistency checks as we try to understand quantum gravity.

It’s exceedingly rare to find universal statements that are true in physically realistic models of quantum gravity. The holographic principle is one such example but it pretty much stands alone in its power and applicability. By physically realistic I mean: 3+1-dimensional and with the curvature of the universe being either flat or very mildly positively curved.  However, we can make additional simplifying assumptions where it’s easier to find universal properties. For example, we can reduce the number of spatial dimensions so that we’re considering 2+1-dimensional quantum gravity (3D gravity). Or we can investigate spacetimes that are negatively curved (anti-de Sitter space) as in the AdS/CFT correspondence. Or we can do BOTH! As in the paper that we just posted. The hope is that what’s learned in these limited situations will back-propagate insights towards reality.

The motivation for going to 2+1-dimensions is that gravity (general relativity) is much simpler here. This is explained eloquently in section II of Steve Carlip’s notes here. In 2+1-dimensions, there are no “local”/”gauge” degrees of freedom. This makes thinking about quantum aspects of these spacetimes much simpler.

The standard motivation for considering negatively curved spacetimes is that it puts us in the domain of AdS/CFT, which is the best understood model of quantum gravity. However, it’s worth pointing out that our results don’t rely on AdS/CFT. We consider negatively curved spacetimes (negatively curved Lorentzian manifolds) because they’re related to what mathematicians call hyperbolic manifolds (negatively curved Euclidean manifolds), and mathematicians know a great deal about these objects. It’s just a helpful coincidence that because we’re working with negatively curved manifolds we then get to unpack our statements in AdS/CFT.

## Multiboundary wormholes

Finding solutions to Einstein’s equations of general relativity is a notoriously hard problem. Some of the more famous examples include: Minkowski space, de Sitter space, anti-de Sitter space and Schwarzschild’s solution (which describes perfectly spherical, static black holes.) However, there’s a trick! Einstein’s equations only depend on the local curvature of spacetime while being insensitive to global topology (the number of boundaries and holes and such.) If $M$ is a solution of Einstein’s equations and $\Gamma$ is a discrete subgroup of the isometry group of $M$, then the quotient space $M/\Gamma$ will also be a spacetime that solves Einstein’s equations! Here’s an example for intuition. Start with 2+1-dimensional Minkowski space, which is just a stack of flat planes indexed by time. One example of a “discrete subgroup of the isometry group” is the cyclic group generated by a single translation, say the translation along the x-axis by ten meters. Minkowski space quotiented by this group will also be a solution of Einstein’s equations, given as a stack of cylinders of circumference 10 m, indexed by time.

Start with 2+1-dimensional Minkowski space, which is just a stack of flat planes indexed by time. Think of the planes on the left hand side as being infinite. To “quotient” by a translation means to glue the green lines together, which leaves a cylinder for every time slice. The figure on the right shows this cylinder universe, which is also a solution to Einstein’s equations.
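Here’s a toy sketch of what “quotienting” means computationally (my own illustration, not from the paper; the 10 m translation is the example from the text):

```python
# Toy illustration of a quotient spacetime: 2+1-d Minkowski space
# modded out by the translation x -> x + L.  Points differing by a
# multiple of L are identified, so each time slice becomes a cylinder
# of circumference L.
L = 10.0  # translation length in meters

def quotient_point(t, x, y, L=L):
    """Map a Minkowski point (t, x, y) to its canonical representative
    on the cylinder universe: x is reduced modulo the translation length."""
    return (t, x % L, y)

# Two points related by the translation are the same point downstairs:
p = quotient_point(0.0, 3.0, 1.0)
q = quotient_point(0.0, 3.0 + 7 * L, 1.0)  # translated seven times
assert p == q
```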

(d+1)-dimensional anti-de Sitter space ($AdS_{d+1}$) is the maximally symmetric (d+1)-dimensional Lorentzian manifold with negative curvature. Our paper is about 3D gravity in negatively curved spacetimes, so our starting point is $AdS_3$, which can be thought of as a stack of Poincare disks (or hyperbolic sheets) with the time dimension telling you which disk (sheet) you’re on. The isometry group of $AdS_3$ is a group called $SO(2,2)$, which in turn is isomorphic to $SL(2, R) \times SL(2, R)$. The product $SL(2,R) \times SL(2,R)$ isn’t a commonly studied group, but a single copy of $SL(2,R)$ is very well studied; its discrete subgroups are called Fuchsian groups. Every element of the group should be thought of as a 2×2 matrix, which corresponds to a Mobius transformation of the complex plane. The quotients that we obtain from these Fuchsian groups, or from discrete subgroups of the larger isometry group, yield a rich infinite family of new spacetimes, which are called multiboundary wormholes. Multiboundary wormholes have risen in importance over the last few years as powerful toy models for understanding how entanglement is dispersed near black holes (the Ryu-Takayanagi conjecture) and how the holographic dictionary maps operators in the boundary CFT to fields in the bulk (entanglement wedge reconstruction.)

Three dimensional AdS can be thought of as a stack of hyperboloids indexed by time. It’s convenient to use the Poincare disk model for the hyperboloids so that the entire spacetime can be pictured in a compact way. Despite how things appear, all of the triangles have the same “area”.

I now want to work through a few examples.

### BTZ black hole

This is the simplest possible example. It’s obtained by quotienting $AdS_3$ by a cyclic group $\langle A \rangle$, generated by a single matrix $A \in SL(2,R)$, which could for example take the form $A = \begin{pmatrix} e^{\lambda} & 0 \\ 0 & e^{-\lambda} \end{pmatrix}$. The matrix $A$ acts on the complex plane by fractional linear transformation, so the point $z \in \mathbb{C}$ gets mapped to $(e^{\lambda}z + 0)/(0 \cdot z + e^{-\lambda}) = e^{2\lambda} z$.

Start with $AdS_3$ as a stack of hyperbolic half planes indexed by time. A quotient by A means that each hyperbolic half plane gets quotiented. Quotienting a constant time slice by the map $z \mapsto e^{2\lambda}z$ gives a surface that’s topologically a cylinder. Using the picture above this means you glue together the solid black curves. The green and red segments become two boundary regions. We call it the BTZ black hole because when you add “time” it becomes impossible to send a signal from the green boundary to the red boundary, or vice versa. The dotted line acts as an event horizon.
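The action of $A$ can be checked in a few lines. A minimal sketch (the matrix and the value of $\lambda$ are just the example from the text; `mobius` is my own helper):

```python
import math

def mobius(m, z):
    """Apply the Mobius transformation of the 2x2 matrix m = ((a,b),(c,d))
    to the complex number z:  z -> (a*z + b) / (c*z + d)."""
    (a, b), (c, d) = m
    return (a * z + b) / (c * z + d)

lam = 0.5  # the parameter lambda from the text (any value works)
A = ((math.exp(lam), 0.0), (0.0, math.exp(-lam)))

# For this diagonal A the action is a pure dilation z -> e^{2 lambda} z:
z = 1.0 + 2.0j
assert abs(mobius(A, z) - math.exp(2 * lam) * z) < 1e-12
```

Iterating this dilation sweeps out the orbit of $z$; the quotient identifies all points on one orbit, which is the gluing in the figure above.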

### Three-boundary wormhole

There are many parameterizations that we can choose to obtain the three boundary wormhole. I’ll only show schematically how the gluings go. A nice reference with the details is this paper by Henry Maxfield.

This is a picture of a constant time slice of $AdS_3$ quotiented by two generators A and B (see the Maxfield paper above for explicit matrices). Each time slice is given as a copy of the hyperbolic half plane with the black arcs and green arcs glued together (by the maps A and B). These gluings yield a pair of pants surface. Each of the boundary regions is causally disconnected from the others. The dotted lines are black hole horizons that illustrate where the causal disconnection happens.

### Torus wormhole

It’s simpler to write down generators for the torus wormhole, but following along with the gluings is more complicated. To obtain the torus wormhole we quotient $AdS_3$ by the free group $\langle A, B \rangle$ where $A = \begin{pmatrix} e^{\lambda} & 0 \\ 0 & e^{-\lambda} \end{pmatrix}$ and $B = \begin{pmatrix} \cosh \lambda & \sinh \lambda \\ \sinh \lambda & \cosh \lambda \end{pmatrix}$. (Note that this is only one choice of generators, and a highly symmetrical one at that.)

This is a picture of a constant time slice of $AdS_3$ quotiented by the A and B above. Each time slice is given as a copy of the hyperbolic half plane with the black arcs and green arcs glued together (by the maps A and B). These gluings yield what’s called the “torus wormhole”. Topologically it’s just a donut with a hole cut out. However, there’s a causal structure when you add time to the mix where the dotted lines act as a black hole horizon, so that a message sent from behind the horizon will never reach the boundary.
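For readers who want to play with these groups numerically, here’s a small sketch (my own illustrative code, not from the paper). The fixed points of a Mobius map $z \mapsto (az+b)/(cz+d)$ solve $cz^2 + (d-a)z - b = 0$, and collecting fixed points of longer and longer words in the generators and their inverses is how limit-set pictures like those in Indra’s Pearls are drawn:

```python
import cmath, math

def fixed_points(m):
    """Fixed points of the Mobius map of m = ((a,b),(c,d)): roots of
    c z^2 + (d - a) z - b = 0 (infinity is a fixed point when c = 0)."""
    (a, b), (c, d) = m
    if abs(c) < 1e-15:
        return [b / (d - a)] if abs(d - a) > 1e-15 else []
    disc = cmath.sqrt((d - a) ** 2 + 4 * b * c)
    return [((a - d) + disc) / (2 * c), ((a - d) - disc) / (2 * c)]

def matmul(m, n):
    """Multiply two 2x2 matrices, i.e. compose the group elements."""
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

lam = 0.5
A = ((math.exp(lam), 0.0), (0.0, math.exp(-lam)))
B = ((math.cosh(lam), math.sinh(lam)), (math.sinh(lam), math.cosh(lam)))

# A fixes 0 and infinity; B fixes z = +1 and z = -1 on the real axis:
assert sorted(round(z.real, 6) for z in fixed_points(B)) == [-1.0, 1.0]
# Longer words like AB contribute their own pair of fixed points:
assert len(fixed_points(matmul(A, B))) == 2
```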

## Lorentzian to Euclidean spacetimes

So far we have been talking about negatively curved Lorentzian manifolds. These are manifolds that have a notion of both “time” and “space”; the technical definition involves differential geometry and the signature of the metric. On the other hand, mathematicians know a great deal about negatively curved Euclidean manifolds, which only have a notion of “space” (no time-like directions.) Given a multiboundary wormhole, which by definition is a quotient $AdS_3/\Gamma$ where $\Gamma$ is a discrete subgroup of Isom($AdS_3$), there’s a procedure to analytically continue it to a Euclidean hyperbolic manifold of the form $H^3/ \Gamma_E$, where $H^3$ is three-dimensional hyperbolic space and $\Gamma_E$ is a discrete subgroup of the isometry group of $H^3$, which is $PSL(2, \mathbb{C})$. This analytic continuation procedure is well understood for time-symmetric spacetimes but subtle for spacetimes without time-reversal symmetry; a discussion of this subtlety will be the topic of my next paper. To keep this blog post at a reasonable level of technical detail I’m going to ask you to take it on faith that to every Lorentzian multiboundary wormhole there’s an associated Euclidean hyperbolic 3-manifold. Basically you need to believe that given a discrete subgroup $\Gamma$ of $SL(2, R) \times SL(2, R)$ there’s a procedure to obtain a discrete subgroup $\Gamma_E$ of $PSL(2, \mathbb{C})$. Discrete subgroups of $PSL(2, \mathbb{C})$ are called Kleinian groups, and quotients of $H^3$ by such groups yield hyperbolic 3-manifolds. These Euclidean manifolds obtained by analytic continuation arise when studying the thermodynamics of these spacetimes and their correlation functions; there’s a sense in which they’re physical.

TLDR: you start with a 2+1-d Lorentzian 3-manifold obtained as a quotient $AdS_3/\Gamma$ and analytic continuation gives a Euclidean 3-manifold obtained as a quotient $H^3/\Gamma_E$ where $H^3$ is 3-dimensional hyperbolic space and $\Gamma_E$ is a discrete subgroup of $PSL(2,\mathbb{C})$ (Kleinian group.)

## Limit sets

Every Kleinian group $\Gamma_E = \langle A_1, \dots, A_g \rangle \subset PSL(2, \mathbb{C})$ has a fractal that’s naturally associated with it, obtained by finding the fixed points of every possible combination of the generators and their inverses. Moreover, there’s a beautiful theorem of Patterson, Sullivan, Bishop and Jones that says the smallest eigenvalue $\lambda_0$ of the Laplacian on the quotient manifold $H^3 / \Gamma_E$ is related to the Hausdorff dimension of this fractal (call it $D$) by the formula $\lambda_0 = D(2-D)$. This smallest eigenvalue controls a number of the quantities of interest for this spacetime, but calculating it directly is usually intractable. However, McMullen proposed an algorithm to calculate the Hausdorff dimension of the relevant fractals, so we can get at the spectrum efficiently, albeit indirectly.
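The eigenvalue-dimension relation is simple enough to encode directly. A trivial sketch (the formula is the one quoted above; it applies when $D \geq 1$):

```python
def lambda_0(D):
    """Lowest Laplacian eigenvalue on H^3 / Gamma_E in terms of the
    Hausdorff dimension D of the limit set: lambda_0 = D(2 - D),
    valid in the regime D >= 1."""
    return D * (2.0 - D)

# A round circle (the Fuchsian case) has dimension exactly 1, giving
# lambda_0 = 1, and lambda_0 shrinks toward 0 as the limit set becomes
# space-filling (D -> 2):
assert lambda_0(1.0) == 1.0
assert abs(lambda_0(1.9) - 0.19) < 1e-12
```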

This is a screen grab of Figure 2 from our paper. These are two examples of fractals that emerge when studying these spacetimes. Both of these particular fractals have a 3-fold symmetry. They have this symmetry because these particular spacetimes came from looking at something called “n=3 Renyi entropies”. The number q indexes a one complex dimensional family of spacetimes that have this 3-fold symmetry. These Kleinian groups each have two generators that are described in section 2.3 of our paper.

## What we did

Our primary result is a generalization of the Hawking-Page phase transition for multiboundary wormholes. To understand the thermodynamics (from a 3D quantum gravity perspective) one starts with a fixed boundary Riemann surface and then looks at the contributions to the partition function from each of the ways to fill in the boundary (each of which is a hyperbolic 3-manifold). We showed that the expected dominant contributions, which are given by handlebodies, are unstable when the kinetic operator $(\nabla^2 - m^2)$ has a negative mode, which happens whenever the Hausdorff dimension of the limit set of $\Gamma_E$ is greater than the conformal dimension of the lightest scalar field living in the bulk. One has to go pretty far down the quantum gravity rabbit hole (black hole) to understand why this is an interesting research direction to pursue, but at least anyone can appreciate the pretty pictures!

# BTZ black holes for #BlackHoleFriday

Yesterday was a special day. And no I’m not referring to #BlackFriday — but rather to #BlackHoleFriday. I just learned that NASA spawned this social media campaign three years ago. The timing of this year’s Black Hole Friday is particularly special because we are exactly 100 years + 2 days after Einstein published his field equations of general relativity (GR). When Einstein introduced his equations he only had an exact solution describing “flat space.” These equations are notoriously difficult to solve so their introduction sent out a call-to-arms to mathematically-minded-physicists and physically-minded-mathematicians who scrambled to find new solutions.

If I had to guess, Karl Schwarzschild probably wasn’t sleeping much exactly a century ago. Not only was he deployed to the Russian Front as a soldier in the German Army, but a little more than one month after Einstein introduced his equations, Schwarzschild was the first to find another solution. His solution describes the curvature of spacetime outside of a spherically symmetric mass. It has the incredible property that if the spherical mass is compact enough then spacetime will be so strongly curved that nothing will be able to escape (at least from the perspective of GR; we believe that there are corrections to this when you add quantum mechanics to the mix.) Schwarzschild’s solution took black holes from the realm of clever thought experiments to the status of being a testable prediction about how Nature behaves.

It’s worth mentioning that between 1916 and 1918 Reissner and Nordström generalized Schwarzschild’s solution to one which also has electric charge. Kerr found a solution in 1963 describing a spinning black hole, and this was generalized by Newman et al in 1965 to a solution which includes both spin (angular momentum) and electric charge. These solutions are symmetric about their spin axis. We can also write sensible equations which describe small perturbations around these solutions.

And that’s pretty much all that we’ve got in terms of exact solutions which are physically relevant to the 3+1 dimensional spacetime that we live in (it takes three spatial coordinates to specify a meeting location and another +1 to specify the time.) This is the setting that’s closest to our everyday experiences and these solutions are the jumping off points for trying to understand the role that black holes play in astrophysics. As I already mentioned, studying GR using pen and paper is quite challenging. But one exciting direction in the study of astrophysical black holes comes from recent progress in the field of numerical relativity, which discretizes the calculations and then uses supercomputers to study approximate time dynamics.

Artist’s rendition of dust+gas in an “accretion disk” orbiting a spinning black hole. Friction in the accretion disk generates temperatures oftentimes exceeding 10M degrees C (2000 times the temperature of the Sun.) This high temperature region emits x-rays and other detectable EM radiation. The image also shows a jet of plasma. The mechanism for this plasma jet is not yet well understood. Studying processes like this requires all of the tools that we have available to us: from numerical relativity, to cutting edge space observatories like NuSTAR, to LIGO in the immediate future *hopefully.* Image credit: NASA/Caltech-JPL

I don’t expect many of you to be experts in the history outlined above. And I expect even fewer of you to know that Einstein’s equations still make sense in any number of dimensions. In this context, I want to briefly introduce a 2+1 dimensional solution called the BTZ black hole and outline why it has been astonishingly important since it was introduced 23 years ago by Bañados, Teitelboim and Zanelli (their paper has been cited over 2k times, which is a tremendous number for theoretical physics.)

There are many different viewpoints which yield the BTZ black hole, and this is one of them. This is a time=0 slice of the BTZ black hole obtained by gluing together special curves (geodesics) related to each other by a translation symmetry. The BTZ black hole is a solution of Einstein’s equations in 2+1d with two asymptotic regions that are causally separated from each other by an event horizon. The arrows leading to “quantum states” come into play when you use the BTZ black hole as a toy model for thinking about quantum gravity.

One of the most striking implications of Einstein’s theory of general relativity is that our universe is described by a curved geometry which we call spacetime. Einstein’s equations describe the dynamical interplay between the curvature of spacetime and the distribution of energy+matter. This may be counterintuitive, but there are many solutions even when there is no matter or energy in the spacetime. We call these vacuum solutions. Vacuum solutions can have positive, negative or zero curvature.

As 2d surfaces: the sphere is positively curved; a saddle has negative curvature; and a plane has zero curvature.

It came as a great surprise when BTZ showed in 1992 that there is a vacuum solution in 2+1d which has many of the same properties as the more physical 3+1d black holes mentioned above. But most exciting — and something that I can’t imagine BTZ could have anticipated — is that their solution has become the toy model of all toy models for trying to understand “quantum gravity.”

GR in 2+1d has many convenient properties. Two beautiful things that happen in 2+1d are that:

• There are no gravitational waves. Technically, this is because the Riemann tensor is fully determined by the Ricci tensor — the number of degrees of freedom in this system is exactly equal to the number of constraints given by Einstein’s equations. This makes GR in 2+1d something called a “topological field theory” which is much easier to quantize than its full blown gauge theory cousin in 3+1d.
• The maximally symmetric vacuum solution with negative curvature, which we call anti-de Sitter space, has a beautiful symmetry: this manifold is exactly the “group manifold” of SL(2,R). This enables us to translate many challenging analytical questions into simple algebraic computations. In particular, it enables us to find a huge category of solutions which we call multiboundary wormholes, with BTZ being the most famous example.
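The counting behind the first bullet can be checked with the standard component-count formulas (my own quick sketch, not from the post):

```python
# In n spacetime dimensions the Riemann tensor has n^2 (n^2 - 1) / 12
# algebraically independent components, while the symmetric Ricci
# tensor has n (n + 1) / 2.
def riemann_components(n):
    return n * n * (n * n - 1) // 12

def ricci_components(n):
    return n * (n + 1) // 2

# In 2+1d the counts match (6 = 6), so Ricci fully determines Riemann
# and there are no propagating gravitational waves:
assert riemann_components(3) == ricci_components(3) == 6
# In 3+1d Riemann has 20 components but Ricci only 10; the extra
# (Weyl) components are what allow gravitational waves:
assert (riemann_components(4), ricci_components(4)) == (20, 10)
```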

Some “multiboundary wormhole” pictures that I made. The left shows the constant time=0 slice for a few different solutions and what you are left with after gluing according to the equations on the right. These are solutions to GR in 2+1d.

These properties make 2+1d GR particularly useful as a sandbox for making progress towards a theory of quantum gravity. As examples of what this might entail:

• Classically, a particle is in one definite location. In quantum mechanics, a particle can be in a superposition of places. In quantum gravity, can spacetime be in a superposition of geometries? How does this work?
• When you go from classical physics to quantum physics, tunneling becomes a thing. Can the same thing happen with quantum gravity? Where we tunnel from one spacetime geometry to another? What controls the transition amplitudes?
• The holographic principle is an incredibly important idea in modern theoretical physics. It stems from the fact that the entropy of a black hole is proportional to the area of its event horizon — whereas the entropy of a glass of water is proportional to the volume of water inside the glass. We believe that this reduction in dimensionality is wildly significant.

A few years after the holographic principle was introduced in the early 1990s by Gerard ‘t Hooft and Lenny Susskind, Juan Maldacena came up with a concrete manifestation, now called the AdS/CFT correspondence. Maldacena’s paper has been cited over 14k times, making it one of the most cited theoretical physics papers of all time. However, despite having a “correspondence,” it’s still very hard in practice to translate questions back and forth between the gravity and quantum sides. The BTZ black hole is the gravity solution where this correspondence is best understood. Its quantum dual is a state called the thermofield double, which is given by: $|\Psi_{CFT}\rangle = \frac{1}{\sqrt{Z}} \sum_{n=1}^{\infty} e^{-\beta E_n/2} |n\rangle_1 \otimes |n \rangle_2$. This describes a quantum state which lives on two circles (see my BTZ picture above), with entanglement between the two circles. If an experimentalist only had access to one of the circles and were asked to figure out what state they have, their best guess would be a “thermal state”: a state that has been exposed to a heat bath for too long and has lost all of its initial quantum coherence.
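The claim that one circle of the thermofield double looks thermal can be verified in a few lines for a toy spectrum (illustrative code with made-up energy levels, not tied to any particular CFT):

```python
import math

beta = 1.0
energies = [0.0, 1.0, 2.0]                      # toy energy levels E_n
Z = sum(math.exp(-beta * E) for E in energies)  # partition function

# TFD amplitudes c_n: the state is sum_n c_n |n>_1 |n>_2
c = [math.exp(-beta * E / 2) / math.sqrt(Z) for E in energies]
assert abs(sum(x * x for x in c) - 1.0) < 1e-12   # state is normalized

# Tracing out circle 2 leaves a diagonal density matrix p_n = c_n^2 ...
p = [x * x for x in c]
# ... which is exactly the Gibbs (thermal) distribution e^{-beta E_n} / Z:
gibbs = [math.exp(-beta * E) / Z for E in energies]
assert all(abs(a - b) < 1e-12 for a, b in zip(p, gibbs))
```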

It is in this sense that the BTZ black hole has been hugely important. It’s also evidence of how mysterious Einstein’s equations still remain, even to this day. We still don’t have exact solutions for many settings of interest, like for two black holes merging in 3+1d. It was only in 1992 that BTZ came up with their solution, 77 years after Einstein formulated his theory! Judging by historical precedent, exactly solvable toy models are profoundly useful, and BTZ has already proven to be an important signpost as we continue on our quest to understand quantum gravity. There’s already broad awareness that astrophysical black holes are fascinating objects. In this post I hope I conveyed a bit of the excitement surrounding how black holes are useful in a different setting — in aiding our understanding of quantum gravity. And all of this is in the spirit of #BlackHoleFriday, of course.

# How to get more girls into STEM

Hey all, I’m back! I’ve been stuck in a black hole for the past couple years. Nobody ever said that doing a PhD in quantum gravity would be easy. Actually, my advisor John Preskill explicitly warned me that it would be exceptionally difficult (but in an encouraging manner; he was managing my expectations.) I wish I could say that I’ve returned w/ emergent spacetime figured out, but alas, I was simply inspired to write about a heady topic that is quite personal to me: how to increase gender diversity in STEM. (Maybe the key to understanding quantum gravity is to have more women thinking about these questions?)

I’ve been thinking about this topic for well over a decade but my interest bubbled over last week and I decided to write this post. Some entrepreneur friends were on a panel at Caltech (John Hering, Diego Berdakin and Joe Lonsdale) and during a wonderful sub-convo about increasing gender diversity in STEM a male undergrad asked: “as someone who’s only a student, what can I do to help with this issue?” The panel pretty much nailed it with their responses but this is an incredibly important issue and I want to capture some of their comments in writing, to frame this with broader context and to add some personal anecdotes.

Before providing a few recommendations here are some bullets which I think are important in terms of framing this issue.

1. Full stack problem: this isn’t an issue that can be tackled by targeting any specific age range. It especially can’t be tackled by only focusing on recruitment for colleges or STEM jobs. Our current lack of diversity literally starts the day children are born. We have a broad culture of pushing kids away from STEM but these pressures disproportionately target girls.

2. Implicit biases: one of the most damaging and least discussed mechanisms through which this happens is implicit bias. Very few people understand the depth of this issue and, by extension, how guilty WE ALL ARE. Implicit biases are pervasive and they are pushing girls out of STEM. Here are examples from my own childhood which highlight how subtle the issue is.

I have a younger sister who has basically the same brain as me (truly, we can read each other’s minds.) I became a theoretical physicist and entrepreneur and she’s a lawyer. This is obviously a worthy profession, but how did we choose these paths? For years I’ve been looking back and trying to answer this question. Upon reflection, I was astonished by the strength of my implicit biases.

a. An uncle helped me build a computer when I was seven. No one did the same for my sister. I spent most of the ages of 7-16 hacking around on computers, which provided the foundation for many of the things that I’ve done in my adult life. This gesture by my uncle was easily one of the most impactful things that anyone has ever done for me.

b. When my sister had computer problems I would treat her as if she were stupid and simply fix the problem for her (these words are overly dramatic but I’m trying to make a point.) Whereas when my male cousin had issues I would sit next to him, patiently explain the underlying issue and teach him how to fix it himself. That teaching-a-man-to-fish metaphor is a thing.

c. When people gave us presents they would give me Legos and my sister art supplies or clothes. Gifts didn’t always fall into these categories (obviously) but they almost always had a similar gender-specific split.

d. When I was the first to finish my multiplication tables in 3rd grade, my teacher encouraged me to read science books. When my sister finished she was encouraged to draw. This teacher was female.

e. These are only a few examples of implicit biases. I wasn’t aware of the potential cause-and-effect of my actions while taking them. Only after years of reflection, and after seeing how amplified the problem becomes by the tip of the funnel, was I able to connect these personal dots. These biases are so deeply ingrained that addressing them requires societal-scale reprogramming, but it starts with enhanced self-awareness. I obviously feel some level of guilt for being oblivious to these actions as a kid. And I’d be delusional to think I’m beyond having similar biases today.

3. Explicit/systematic biases: there’s much broader awareness of this category of biases, so I’m mainly going to explain by linking to some recent headlines. The short of it is that on their path to STEM, women have to clear many more hurdles than men, from hiring biases to sexual harassment. Here’s a tiny sample of some of the most glaring recent headlines:

a. “Geoff Marcy was a serial harasser for at least twenty years” — Gizmodo.

b. “Why women are poor at science, by Harvard president (Larry Summers)” — Guardian headline. Granted, his comments were more nuanced than the media portrayed. But in any case, extremely damaging and evidence of an outmoded way of thinking.

c. “Could it be that researchers find a hiring bias that favors women?” — NPR. I wanted to include this example to highlight that sometimes systematic biases (this isn’t exactly an explicit bias) go the other direction. But of course if we search hard enough we will be able to find specific instances in the stack where the bias favors women. My personal interpretation of this headline is: “the fearless women that have braved decades of doubt may have a minuscule advantage when competing for STEM jobs, but only after they have been disproportionately filtered out of the applicant pool on a massive scale.” Here are some statistics which show why this headline is only scratching the surface: NGCP and Techbridge.

If we acknowledge that this is a problem that literally starts the day children are born, then what can we, as individuals, do about it?

1. Constantly run a mental loop to check your implicit biases. I’m hoping we can compile a list of examples in the comments that can serve as a checklist of things NOT TO DO! E.g. when you ask “what do you want to be when you grow up?”, don’t answer for the kids with something like “a princess?” or “a baseball player?” before they can respond. Those kids might want to be mathematicians, like Maryam Mirzakhani or Terry Tao!

2. Provide encouragement to young girls without being over the top or condescending. Here’s a simple example from the past week. A.K. is ~8 years old and she visited Caltech recently (yes, I got permission from her mother to use this example.) This girl is a rockstar.

The tragic reality is that A.K. is going to spend her next decade being pushed away from STEM. Don’t get me wrong, she’s lucky to have encouraging parents who are preempting this push, but they will be competing with the sway of the media and her peers.

Small gestures, such as @Caltechedu reposting the above photo on Instagram, provide a powerful dose of motivation. The way I think about it is this: kids, but especially girls, are going to face a persistent push away from STEM. They are going to get teased for being “too smart” + “not girly enough” + “weird” + “nerdy” + etc. Small votes of confidence from people who have made it through, and can therefore speak with authority, are like little bits of body armor. Comments sting a little bit less when the freedom+success of the other side is visible and you’re being told that you can make it too. Don’t underestimate the power of small gestures. One comment can literally make a world of difference. Do this. But it absolutely must be genuine.

3. Make a conscious effort to share your passion + enthusiasm for STEM. Our culture does an abysmal job of motivating and promoting the beauty + wonder of science. This advice applies to both girls and boys and it’s incredibly important. One of my favorite essays is “A Mathematician’s Lament” by Paul Lockhart. In it he contrasts the way that we teach mathematics with how we teach painting and music. Imagine if, before letting kids see a finished masterwork or pick up a brush and play around, we forced them to learn color theory, the history of art, how to hold a brush, etc.! If you’re at Caltech, then invite kids to the SURF seminar day or to interesting public lectures. Go give a talk at a local school and explain via examples that science is a work in progress: there’s an infinite amount that we still don’t know! For example, a brilliant non-physicist hacker friend asked me yesterday whether the Casimir effect is temperature dependent. The answer is yes, but this is still barely understood theoretically. At what temperature do a gecko’s feet stop sticking? Questions like this are engaging. It will only take a few hours of your time to show dozens of kids how exciting science is. Outreach is usually asymmetric.

As an aside, writing this reminded me of an outreach story from 2010. Somehow I finagled travel funds to attend the International Congress of Mathematicians (ICM) in Hyderabad, India. During our day off (one day during a two week conference), I set out early to do some sightseeing and a dude pulled up next to me on a scooter. He asked if I was there for the congress. It’s kind of a long story but after chatting for a bit I agreed to spend the day riding around on his scooter while spreading my passion for mathematics at a variety of schools in the Hyderabad area. I lectured to hundreds of kids that day. I wrote a blog post that ended up getting picked up by a few national newspapers and even made the official ICM newsletter (page six of this; FYI they condensed my post and muddled some facts.) I’m sure that I ended up benefitting wayyyyyy more from my outreach than any of the students I spoke to. The crazy reality is that outreach is oftentimes like this.

4. There is literally nothing more rewarding than mentoring hyper-talented kids and then watching them succeed. This is also incredibly asymmetric: two hours of your time can provide direction and motivation for months. Do not discount the power of giving kids confidence and a small amount of direction.

In this post I ignored some very important parts of the problem, and also opportunities for addressing it, in an attempt to focus on aspects that I think are underappreciated: specifically, how pervasive implicit biases are and how asymmetric outreach is. Increasing diversity in STEM is a societal-scale problem that isn’t going to be fixed overnight. However, I believe it’s possible to make huge progress over the next two decades. We’re in the process of taking our first step, which is global awareness of the problem. And now we need to take the next step, which is broad self-awareness about the impacts of our individual actions and implicit biases. It seems to me that vastly increasing our talent pool is a useful endeavor. In the spirit of this blog, unlocking this hidden potential might even be the key to making progress with quantum gravity! And it would definitely help with progress on innumerable other science and engineering goals.

And, hey S, sorry for not teaching you more about computers 😦

********************************************************

Now some shameless on-topic plugs to promote my friends:

One of my roommates, Jason Porath, makes Rejected Princesses. This is a great site that all young girls should be aware of. Think badass women meet Disney glorification from a feminist perspective.

Try Goldie Blox to augment your kids’ Lego collection or as an alternative. If nothing else, watch their video featuring a Rube Goldberg inspired “Princess Machine!”

IQIM is heavily involved w/ Project Scientist which is a great program for young girls with an aptitude and interest in STEM.

# The Science that made Stephen Hawking famous

In anticipation of The Theory of Everything which comes out today, and in the spirit of continuing with Quantum Frontiers’ current movie theme, I wanted to provide an overview of Stephen Hawking’s pathbreaking research. Or at least to the best of my ability—not every blogger on this site has won bets against Hawking! In particular, I want to describe Hawking’s work during the late ‘60s and through the ’70s. His work during the ’60s is the backdrop for this movie and his work during the ’70s revolutionized our understanding of black holes.

(Portrait of Stephen Hawking outside the Department of Applied Mathematics and Theoretical Physics, Cambridge. Credit: Jason Bye)

As additional context, this movie is coming out at a fascinating time, a time when Hawking’s contributions appear more prescient and important than ever before. I’m alluding to the firewall paradox, which is the modern reincarnation of the information paradox (discussed below), and which this blog has covered multiple times. Progress through paradox is an important motto in physics, and Hawking has been at the center of arguably the most challenging paradox of the past half century. I should also mention that, despite irresponsible journalism in response to Hawking’s “there are no black holes” comment back in January, there is extremely solid evidence that black holes do in fact exist. Hawking was referring to a technical distinction concerning the horizon/boundary of black holes.

Now let’s jump back and imagine that we are all young graduate students at Cambridge in the early ‘60s. Our protagonist, a young Hawking, had recently been diagnosed with ALS, had recently met Jane Wilde, and was looking for a thesis topic. This was an exciting time for Einstein’s theory of general relativity (GR). The gravitational redshift had recently been confirmed by Pound and Rebka at Harvard, which put the theory on extremely solid footing. This was the third of the three “classical tests of GR.” Now that everyone was truly convinced that GR is correct, it became important to get serious about investigating its most bizarre predictions, and Hawking and Penrose picked up on this theme most notably. The mathematics of GR allows for singularities, which lead to things like the big bang and black holes. This mathematical possibility had been known since the works of Friedmann, Lemaitre and Oppenheimer+Snyder starting all the way back in the 1920s, but those calculations involved unphysical assumptions, usually unrealistic symmetries. Hawking and Penrose each asked (and answered) the questions: how robust and generic are these mathematical singularities? Will they persist even if we get rid of assumptions like perfect spherical symmetry of matter? What is their physical interpretation?

I know that I have now used the word “singularity” multiple times without defining it. However, this is for good reason—it’s very hard to assign a precise definition to the term! Some examples of singularities include regions of “infinite curvature” or with “conical deficits.”

Singularity theorems applied to cosmology: Hawking’s first major result, starting with his thesis in 1965, was proving that singularities on the cosmological scale—such as the big bang—were indeed generic phenomena and not just mathematical artifacts. This work was published immediately after, and built upon, a seminal paper by Penrose. It’s outside the scope of this post to say much more about the big bang, but as a rough heuristic, imagine that if you run time backwards then you obtain regions of infinite density. Hawking and Penrose spent the next five or so years stripping away as many assumptions as they could until they were left with rather general singularity theorems. Essentially, they used MATH to say something exceptionally profound about THE BEGINNING OF THE UNIVERSE! Namely: if you start with any solution to Einstein’s equations which is consistent with our observed universe and run the solution backwards, then you will obtain singularities (in this case, regions of infinite density at the big bang)! However, despite being a revolutionary leap in our understanding of cosmology, this isn’t the end of the story; Hawking also pioneered an attempt to understand what happens when you add quantum effects to the mix. This is still a very active area of research.

Singularity theorems applied to black holes: the first convincing evidence for the existence of astrophysical black holes didn’t come until 1972 with the discovery of Cygnus X-1, and even this discovery was fraught with controversy. So imagine yourself as Hawking back in the late ’60s. He and Penrose had this powerful machinery which they had successfully applied to better understand THE BEGINNING OF THE UNIVERSE, but there was still a question about whether or not black holes actually existed in nature (not just in mathematical fantasy land.) In the very late ‘60s and early ’70s, Hawking, Penrose, Carter and others convincingly argued that black holes should exist. Again, they used math to say something about how the most bizarre corners of the universe should behave–and then black holes were discovered observationally a few years later. Math for the win!

No hair theorem: after convincing himself that black holes exist, Hawking continued his theoretical studies of their strange properties. In the early ’70s, Hawking, Carter, Israel and Robinson proved a very deep and surprising conjecture of John Wheeler’s: that black holes have no hair! This name isn’t the most descriptive but it’s certainly provocative. More specifically, they showed that only a short time after forming, a black hole is completely described by only a few pieces of data: its position, mass, charge, angular momentum and linear momentum (X, M, Q, J and L). It takes only about a dozen numbers to describe an exceptionally complicated object. Contrast this with, for example, 1000 dust particles, where you would need many thousands of data points (the position and momentum of each particle, their charges, their masses, etc.) This is crazy: the number of apparent degrees of freedom seems to decrease as objects collapse into black holes!

Black hole thermodynamics: around the same time, Carter, Hawking and Bardeen proved a result similar to the second law of thermodynamics (it’s debatable how realistic their assumptions are.) Recall that this is the law where “the entropy in a closed system only increases.” Hawking showed that, if only GR is taken into account, the area of a black hole’s horizon only increases. In particular, if two black holes with areas $A_1$ and $A_2$ merge, then the area $A_3$ of the resulting black hole will be bigger than the sum of the original areas: $A_3 > A_1 + A_2$.
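For a non-spinning black hole the horizon area scales as the square of the mass, so the merger inequality can be checked numerically in a few lines. Here is a minimal Python sketch (my illustration, assuming Schwarzschild black holes and, unrealistically, that no energy is radiated during the merger):

```python
import math

# Physical constants in SI units
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def horizon_area(mass):
    """Horizon area of a Schwarzschild black hole, in m^2.

    The Schwarzschild radius is r_s = 2GM/c^2, so the area
    A = 4 pi r_s^2 = 16 pi (GM/c^2)^2 grows like M^2.
    """
    r_s = 2 * G * mass / c**2
    return 4 * math.pi * r_s**2

# Two 30-solar-mass black holes merging into one 60-solar-mass hole:
m1 = m2 = 30 * M_SUN
merged = horizon_area(m1 + m2)
separate = horizon_area(m1) + horizon_area(m2)

# Since A ~ M^2 and (M1 + M2)^2 > M1^2 + M2^2, the merged horizon
# area always exceeds the sum of the original areas.
assert merged > separate
print(f"area ratio after merger: {merged / separate:.2f}")  # 2.00 for equal masses
```

In a real merger some energy is radiated away as gravitational waves, so the final mass is less than $M_1 + M_2$; the area theorem then acts as an upper bound on how much energy a merger can radiate.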

Combining this with the no hair theorem led to a fascinating exploration of a connection between thermodynamics and black holes. Recall that thermodynamics was mainly worked out in the 1800s and it is very much a “classical theory”–one that didn’t involve either quantum mechanics or general relativity. The study of thermodynamics resulted in the thrilling realization that it could be summarized by four laws. Hawking and friends took the black hole connection seriously and conjectured that there would also be four laws of black hole mechanics.

In my opinion, the most interesting results came from trying to understand the entropy of a black hole. The entropy is usually the logarithm of the number of possible microstates consistent with the observed ‘large scale quantities’. Take the ocean, for example: its entropy is humongous. There is an unbelievable number of small changes that could be made (imagine the number of ways of swapping the location of a water molecule and a grain of sand) which would be consistent with its large scale properties, like its temperature. However, because of the no hair theorem, it appears that the entropy of a black hole is very small? What happens when some matter with a large amount of entropy falls into a black hole? Does this lead to a violation of the second law of thermodynamics? No! It leads to a generalization! Bekenstein, Hawking and others showed that there are two contributions to the entropy in the universe: the standard 1800s version of entropy associated with matter configurations, but also contributions proportional to the area of black hole horizons. When you add all of these up, a new “generalized second law of thermodynamics” emerges. Continuing to take this thermodynamic argument seriously ($dE = T dS$ specifically), it appeared that black holes have a temperature!
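To get a feeling for the numbers, the Bekenstein-Hawking entropy $S = k_B A c^3 / (4 G \hbar)$, i.e. a quarter of the horizon area in Planck units, can be evaluated directly. A rough numerical sketch (my own illustration, using SI values of the constants):

```python
import math

# Physical constants in SI units
G = 6.674e-11      # gravitational constant
c = 2.998e8        # speed of light
HBAR = 1.055e-34   # reduced Planck constant
M_SUN = 1.989e30   # solar mass, kg

def bh_entropy_over_kB(mass):
    """Bekenstein-Hawking entropy S / k_B = A / (4 l_p^2):
    one quarter of the horizon area, measured in Planck units."""
    r_s = 2 * G * mass / c**2        # Schwarzschild radius
    area = 4 * math.pi * r_s**2      # horizon area
    l_p_squared = G * HBAR / c**3    # Planck length squared
    return area / (4 * l_p_squared)

s = bh_entropy_over_kB(M_SUN)
print(f"S/k_B for a solar-mass black hole: {s:.1e}")  # ~1e77
```

For comparison, the ordinary thermal entropy of the Sun is around $10^{58} k_B$, so collapsing the Sun into a black hole would raise its entropy by roughly nineteen orders of magnitude; this is part of why the "very small entropy" suggested by the no hair theorem was so puzzling.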

As a quick aside, a deep and interesting question is: what degrees of freedom contribute to this black hole entropy? In the late ’90s, Strominger and Vafa made exceptional progress towards answering this question when they showed that, in certain settings, the number of microstates coming from string theory exactly reproduces the correct black hole entropy.

Black holes evaporate (Hawking radiation): again, continuing to take this thermodynamic connection seriously, if black holes have a temperature then they should radiate away energy. But what is the mechanism behind this? This is when Hawking fearlessly embarked on one of the most heroic calculations of the 20th century, slogging through extremely technical work involving “quantum mechanics in a curved space” and showing that, after superimposing quantum effects on top of general relativity, there is a mechanism for particles to escape from a black hole.

This is obviously a hard thing to describe, but for a hack-job analogy, imagine you have a hot plate in a cool room. Somehow the plate “radiates” away its energy until it has the same temperature as the room. How does it do this? A plate is hot because its molecules are jiggling around rapidly. At the boundary of the plate, a slow-moving air molecule (lower temperature) sometimes gets whacked by a molecule in the plate and leaves with a higher momentum than it started with, and in return the corresponding molecule in the plate loses energy. After this happens an enormous number of times, the temperatures equilibrate. In the context of black holes, these boundary interactions would never happen without quantum mechanics. General relativity predicts that anything inside the event horizon is causally disconnected from anything on the outside, and that’s that. However, if you take quantum effects into account, then for some very technical reasons, energy can be exchanged at the horizon (the interface between the “inside” and “outside” of the black hole.)
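Taking the thermodynamic analogy all the way, the temperature that falls out of Hawking's calculation for a Schwarzschild black hole is $T = \hbar c^3 / (8 \pi G M k_B)$. A quick numerical sketch (my illustration) shows why this radiation has never been observed from an astrophysical black hole:

```python
import math

# Physical constants in SI units
G = 6.674e-11      # gravitational constant
c = 2.998e8        # speed of light
HBAR = 1.055e-34   # reduced Planck constant
K_B = 1.381e-23    # Boltzmann constant
M_SUN = 1.989e30   # solar mass, kg

def hawking_temperature(mass):
    """Hawking temperature of a Schwarzschild black hole, in kelvin.

    T = hbar c^3 / (8 pi G M k_B). Note the inverse dependence on
    mass: bigger black holes are *colder*.
    """
    return HBAR * c**3 / (8 * math.pi * G * mass * K_B)

t_sun = hawking_temperature(M_SUN)
print(f"Hawking temperature of a solar-mass black hole: {t_sun:.1e} K")  # ~6e-8 K
```

At roughly 60 nanokelvin, a solar-mass black hole is far colder than the 2.7 K cosmic microwave background, so today it absorbs more radiation than it emits; net evaporation only wins for much lighter black holes or in a much colder future universe.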

Black hole information paradox: but wait, there’s more! These calculations weren’t done using a completely accurate theory of nature (we use the phrase “quantum gravity” as a placeholder for whatever this theory will one day be.) They were done using some nightmarish amalgamation of GR and quantum mechanics. Seminal thought experiments by Hawking led to different predictions depending upon which theory one trusted more: GR or quantum mechanics. Most famously, the information paradox considered what would happen if an “encyclopedia” were thrown into a black hole. GR predicts that after the black hole has fully evaporated, so that only empty space is left behind, the “information” contained within the encyclopedia would be destroyed. (To readers who know quantum mechanics, replace “encyclopedia” with “pure state”.) This prediction unacceptably violates the assumptions of quantum mechanics, which predict that the information contained within the encyclopedia will never be destroyed. (Imagine you enclosed the black hole with perfect sensing technology and measured every photon that came out. In principle, according to quantum mechanics, you should be able to reconstruct what was initially thrown in.)

Making all of this more rigorous: Hawking spent most of the rest of the ’70s making all of this more rigorous and stripping away assumptions. One particularly otherworldly and powerful tool involved redoing many of these black hole calculations using the Euclidean path integral formalism.

I’m certain that I missed some key contributions and collaborators in this short history, and I sincerely apologize for that. However, I hope that after reading this you have a deepened appreciation for how productive Hawking was during this period. He was one of humanity’s earliest pioneers into the uncharted territory that we call quantum gravity. And he has inspired at least a few generations worth of theoretical physicists, obviously, including myself.

In addition to reading many of Hawking’s original papers, an extremely fun source for this post is a book which was published after his 60th birthday conference.

# Science at Burning Man: Say What?

Burning Man… what a controversial topic these days. The annual festival received quite a bit of media attention this year, with a particular emphasis on how the ‘tech elite’ do Burning Man. Now that we are no longer in the early-September Black Rock City news deluge, I wanted to forever out myself as a raging hippie and describe why I keep going back to the festival: for the science, of course!

This is a view of my camp, the Phage, as viewed from the main street in Black Rock City. I have no idea why the CH-47 is doing a flyover… everything else is completely standard for Burning Man. Notice the 3 million Volt Tesla coil which my roommates built.

I suspect that at this point, this motivation may seem counter-intuitive or even implausible, but let me elaborate. First, we should start with a question: what is Burning Man? Answer: this question is impossible to answer. The difficulty of answering this question is why I’m writing this post. Most people oversimplify and describe the event as a ‘bunch of hippies doing drugs in the desert’ or as ‘a music festival with a dash of art’ or as ‘my favorite time of the year’ and on and on. There are nuggets of truth in all of these answers but none of them convey the diversity of the event. With upwards of 65,000 people gathered for a week, my friends and I like to describe it as a “choose your own adventure” sort of experience. I choose science.

My goal for this post is to give you a sense of the sciency activities which take place in my camp. Coupling this with the fact that science is a tiny subset of the Burning Man ethos, you should come away convinced that there’s much more to the festival than just ‘a bunch of hippies doing drugs in the desert and listening to music.’

I camp with The Phage, as in bacteriophage, the incredibly abundant virus which afflicts bacteria. Our community numbers about 200 people, most of whom are scientists, with a median age over 30, though only about 100 camp with the Phage in any given year. The camp also houses some hackers, entrepreneurs and artists, but scientific passion is unequivocally our unifying trait. Some of the things we assembled this year include:

Dr. F and Dr. B’s 3 million Volt musical Tesla coil. Humans were inserted for scale.

Musical Tesla coil: two of my roommates built a 3 million Volt musical Tesla coil. Think about this… it’s insane. The project started while they were writing their Caltech PhD theses (EE and Applied Physics) and in my opinion, the Tesla coil’s scale is a testament to the power of procrastination! Thankfully, they both finished their PhDs. After doing so, they spent the months between their defenses and Burning Man building the coil in earnest. Not only was the coil massive–with the entire structure standing well over 20 feet tall–but it was connected through MIDI to a keyboard. Sound is just pressure waves moving through air, and lightning moves lots of air, so this was one of the loudest platforms on the playa. I manned the coil one evening and one professional musician told me it was “by far the coolest instrument he has ever played.” Take a brief break from reading this and watch this video!

Dr. Brainlove getting ready for a midnight stroll and then getting a brainlift.

Dr. Brainlove: we built a colossal climbable “art car” in the shape of a brain, covered in LEDs and controlled from a wireless EEG device. Our previous art car (Dr. Strangelove) died at the 2013 festival, so last winter our community rallied and ‘brainstormed’ the theme for this vehicle. After settling on a neuroscience theme, one of my campmates in Berkeley scanned her brain and sent a CAD file to Arcology Now in Austin, TX, who created an anatomically correct steel frame. We procured a yellow school bus which had been converted to biodiesel. We raised over $30k (there were donations beyond indiegogo.) About 20 of my campmates volunteered their weekends to work at the Nimby in Oakland: hacking apart the bus, building additional structures, covering the bus with LEDs, installing a sound system, etc. As one of the finishing touches, a campmate who is a neurosurgeon at UCSD procured some wireless EEG devices, and then he and some friends wrote software to control Dr. Brainlove’s LEDs, displaying someone’s live brain activity on a 30′ long by 20′ tall climbable musical art car for the entire playa to see! We already have plans to increase the LED density and put on an even more impressive interactive neural light show next year.

Sugarcubes: in 2013, some campmates built an epic LED sculpture dubbed “the sugarcubes”. Just watch this video and you’ll be blown away. The cubes weren’t close to operational when they arrived, so there were 48 hours of hacking madness by Dan Kaminsky, Alexander Green and many brilliant others before our “Tuesday night” party. The ethos is similar to the Caltech undergrads’ party culture: the fun is in the building. Don’t tell my friends, but I slept through the actual party.

(Ask a scientist on the left; I’m in there somewhere, and so is one of my current roommates, another Caltech PhD ’13. Science class on the right. Science everywhere!)

Ask a scientist: there’s no question that this is my favorite on-playa activity. This photo doesn’t do the act justice. Imagine a rotating cast of 7-8 phagelings braving dust storms and donning lab coats, all FOR SCIENCE! The diversity of questions is incredible and I always learn a tremendous amount (evidenced by losing my voice three years running.) For example, this year a senior executive at Autodesk approached and asked me a trick question related to the Sun’s magnetic field. Fear not, I was prepared! This has happened before… and he was wearing a “space” t-shirt, so my guard was up. A nuclear physicist from UCLA asked me to explain Bell test experiments (and he didn’t even know my background.) Someone asked how swamp coolers work. To be honest, I didn’t have a clear answer off the top of my head, so I called over one of my friends (one of the earliest pioneers of optogenetics) and he nailed it immediately. Not having a clear answer to this question was particularly embarrassing because I’ve spent most of the past year thinking about something akin to quantum thermodynamics… if you can call black hole physics and holographic entanglement that.

Make/hack sessions: I didn’t participate in any of these this year, but some of my campmates teach soldering/microscopy/LED-programming/etc. classes straight out of our camp. See the photo above (EEG and LED hacking.)

Science talks: we had 4-5 science talks in a carpeted 40ft geodesic dome every evening. This is pretty self-explanatory, and by this point in my post the Phage may have enough credibility that you’d believe the caliber is exceptional.

Impromptu conversations: this is another indescribable aspect. I’ll risk undermining the beauty of these conversations by using a cheap word: the ‘networking’ at Burning Man is unrivaled. I don’t mean in the for-dollar-profit sense, I mean in the intellectual and social sense. For example, the brother of one of my campmates is a string theory postdoc at Stanford.
He came by our camp one evening, we were introduced, and then we met up again in the default world when I visited Stanford the following week. Burning Man is the type of place where you’ll start talking about MPEG/EFF/optogenetics/companyX/etc and then someone will say: “you know that the inventor/spokesperson/pioneer/founder/etc is at the next table over right?” Yup, Burning Man is just a bunch of hippies doing drugs in the desert. You shouldn’t come. You definitely wouldn’t enjoy it. No fun is had and no ideas are shared. Or in other words, Burning Man: where exceptionally capable people prepare themselves for the zombie apocalypse. Check out my friend Peretz Partensky’s Flickr feed if you want to see more photos (and credit goes to him for the photos in this post.) # The singularity is not near: the human brain as a Boson sampler? Ever since the movie Transcendence came out, it seems like the idea of the ‘technological singularity‘ has been in the air. Maybe it’s because I run in an unorthodox circle of deep thinkers, but over the past couple months, I’ve been roped into three conversations related to this topic. The conversations usually end with some version of “ah shucks, machine learning is developing at a fast rate, so we are all doomed. And have you seen those deep learning videos? Computers are learning to play 35 year old video games?! Put this on an exponential trend and we are D00M3d!” Computers are now learning the rules of this game, from visual input only, and then playing it optimally. Are we all doomed? So what is the technological singularity? My personal translation is: are we on the verge of narcissistic flesh-eating robots stealing our lunch money while we commute to the ‘special school for slow sapiens’? 
This is an especially hyperbolic view, and I want to be clear to distinguish ‘machine learning‘ from ‘artificial consciousness.’ The former seems poised for explosive growth but the latter seems to require breakthroughs in our understanding of the fundamental science. The two concepts are often equated when defining the singularity, or even artificial intelligence, but I think it’s important to distinguish these two concepts. Without distinguishing them, people sometimes make the faulty association: machine_learning_progress=>AI_progress=>artificial_consciousness_progress. I’m generally an optimistic person, but on this topic, I’m especially optimistic about humanity’s status as machine overlords for at least the next ~100 years. Why am I so optimistic? Quantum information (QI) theory has a secret weapon. And that secret weapon is obviously Scott Aaronson (and his brilliant friends+colleagues+sidekicks; especially Alex Arkhipov in this case.) Over the past few years they have done absolutely stunning work related to understanding the computational complexity of linear optics. They colloquially call this work Boson sampling. What I’m about to say is probably extremely obvious to most people in the QI community, but I’ve had conversations with exquisitely well educated people–including a Nobel Laureate–and very few people outside of QI seem to be aware of Aaronson and Arkhipov’s (AA’s) results. Here’s a thought experiment: does a computer have all the hardware required to simulate the human brain? For a long time, many people thought yes, and they even created a more general hypothesis called the “extended Church-Turring hypothesis.” An interdisciplinary group of scientists has long speculated that quantum mechanics may stand as an obstruction towards this hypothesis. In particular, it’s believed that quantum computers would be able to efficiently solve some problems that are hard for a classical computer. 
These results led people, possibly Roger Penrose most notably, to speculate that consciousness may leverage these quantum effects. However, for many years, there was a huge gap between quantum experiments and the biology of the human brain. If I ever broached this topic at a dinner party, my biologist friends would retort: “but the brain is warm and wet, good luck managing decoherence.” And this seems to be a valid argument against the brain as a universal quantum computer. However, one of AA’s many breakthroughs is that they paved the way towards showing that a rather elementary physical system can gain speed-ups over classical computers on certain classes of problems. Maybe the human brain has a Boson sampling module? More specifically, AA’s physical setup involves being able to: generate identical photons; send them through a network of beamsplitters, phase shifters and mirrors; and then count the number of photons in each mode through ‘nonadaptive’ measurements. The output statistics of this setup are governed by permanents of matrices, and computing the permanent is known to be a hard problem classically. AA showed that if there exists a polynomial-time classical algorithm which samples from the same probability distribution, then the polynomial hierarchy would collapse to the third level (this last statement would be very bad for theoretical computer science and therefore for humans; ergo probably not true.) I should also mention that when I learned the details of these results, during Scott’s lectures this past January at the Israel Institute for Advanced Studies’ Winter School in Theoretical Physics, there was one step in the proof which was not rigorous. Namely, they rely on a conjecture in random matrix theory–but at least they have simulations indicating the conjecture should be true. Nitty gritty details aside, I find the possibility that this simple system gains a speed-up over classical computers compelling in the conversation about consciousness.
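To make the hardness concrete, here is a small sketch of my own (not anything from AA’s paper): the best-known exact algorithm for the permanent, Ryser’s inclusion-exclusion formula, still takes exponential time.

```python
from itertools import combinations

def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula, O(2^n * n^2).

    The formula looks superficially like a determinant expansion, but no
    polynomial-time algorithm is known: computing the permanent exactly
    is #P-hard (Valiant), even for 0-1 matrices.
    """
    n = len(A)
    total = 0
    # Inclusion-exclusion over all non-empty subsets S of the columns.
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in S)
            total += (-1) ** k * prod
    return (-1) ** n * total
```

Even a 30-photon experiment already corresponds to summing over roughly $2^{30}$ subsets; that exponential wall is what makes boson sampling a credible challenge for classical simulation.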
Especially considering that computing permanents is actually useful for some combinatorics problems. When you combine this with Nature’s mischievous manner of finding ways to use the tools available to it, it seems plausible to me that the brain is using something like Boson sampling for at least one non-trivial task towards consciousness. If not Boson sampling, then maybe ‘Fermion smashing’ or ‘minimal surface finding’ or some other crackpottery words I’m coming up with on the fly. The point is, this result opens a can of worms. AA’s results have breathed new life into my optimism towards humanity’s ability to rule the lands and interwebs for at least the next few decades. Or until some brilliant computer scientist proves that human consciousness is in P. If nothing else, it’s a fun topic for wild dinner party speculation.

# Ten reasons why black holes exist

I spent the past two weeks profoundly confused. I’ve been trying to get up to speed on this firewall business and I wanted to understand the picture below. Much confuse. Such lost. [Is doge out of fashion now? I wouldn’t know because I’ve been trapped in a black hole!] [Technical paragraph that you can skip.] I’ve been trying to understand why the picture on the left is correct, even though my intuition said the middle picture should be (intuition should never be trusted when thinking about quantum gravity.) The details of these pictures are technical and tangential to this post, but the brief explanation is that these pictures are called Penrose diagrams and they provide an intuitive way to think about the time dynamics of black holes. The two diagrams on the left represent the same physics as the schematic diagram on the right. I wanted to understand why, during Hawking radiation, the radial momentum for partner modes is in the same direction.
John Preskill gave me the initial reasoning, that “radial momentum is not an isometry of Schwarzschild or Rindler geometries,” then I struggled for a few weeks to unpack this, and then Dan Harlow rescued me with some beautiful derivations that make it crystal clear that the picture on the left is indeed correct. I wanted to understand this because if the central picture were correct, then it would be hard to catch up to an infalling Hawking mode and therefore to verify firewall weirdness. The images above are simple enough, but maybe the image below will give you a sense for how much of an uphill battle this was! This pretty much sums up my last two weeks (with the caveat that each of these scratch sheets is double sided!) Or in case you wanted to know what a theoretical physicist does all day. After four or five hours of maxing out my brain, it starts to throb. For the past couple of weeks, after breaking my brain with firewalls each day, I’ve been switching gears and reading about black hole astronomy (real-life honest-to-goodness science with data!) Beyond wanting to know the experimental state-of-the-art related to the fancy math I’ve been thinking about, I also had the selfish motivation that I wanted to do some PR maintenance after Nature’s headline: “Stephen Hawking: ‘There are no black holes’.” I found this headline infuriating when Nature posted it back in January. When taken out of context, this quote makes it seem like Stephen Hawking was saying “hey guys, my bad, we’ve been completely wrong all this time.
Turn off the telescopes.” When in reality what he was saying was more like: “hey guys, I think this really hard modern firewall paradox is telling us that we’ve misunderstood an extremely subtle detail and we need to make corrections on the order of a few Planck lengths, but it matters!” When you combine this sensationalism with Nature’s lofty credibility, the result is that even a few of my intelligent scientist peers have interpreted this as the non-existence of astrophysical black holes. Not to mention that it opens a crack for the news media to say things like: ‘if even Stephen Hawking has been wrong all this time, then how can we possibly trust the rest of this scientist lot, especially related to climate change?’ So brain throbbing + sensationalism => learning black hole astronomy + PR maintenance. Before presenting the evidence, I should wave my hands about what we’re looking for. You have all heard about black holes. They are objects where so much mass gets concentrated in such a small volume that Einstein’s general theory of relativity predicts that once an object passes beyond a certain distance (called the event horizon), then said object will never be able to escape, and must proceed to the center of the black hole. Even photons cannot escape once they pass beyond the event horizon (except when you start taking quantum mechanics into account, but this is a small correction which we won’t focus on here.) All of our current telescopes collect photons, and as I just mentioned, when photons get close to a black hole, they fall in, so this means a detection with current technology will only be indirect. What are these indirect detections we have made? Well, general relativity makes numerous predictions about black holes. After we confirm enough of these predictions to a high enough precision, and without a viable alternative theory, we can safely conclude that we have detected black holes. 
This is similar to how many other areas of science work; particle physics, for example, finds new particles by detecting their decay products. Without further ado, I hope the following experimental evidence will convince you that black holes permeate our universe (and if not black holes, then something even weirder and more interesting!) 1. Sgr A*: There is overwhelming evidence that there is a supermassive black hole at the center of our galaxy, the Milky Way. As a quick note, most of the black holes we have detected fall into two categories: stellar mass, where they are only a few times more massive than our sun (5-30 solar masses), or supermassive, where the mass is about $10^5-10^{10}$ solar masses. Some of the most convincing evidence comes from the picture below. Andrea Ghez and others tracked the orbits of several stars around the center of the Milky Way for over twenty years. We have learned that these stars orbit around a point-like object with a mass on the order of $4\times 10^6$ solar masses. Measurements in the radio spectrum show that there is a radio source located in the same location which we call Sagittarius A* (Sgr A*). Sgr A* is moving at less than $1 km/s$ and has a mass of at least $10^5$ solar masses. These bounds make it pretty clear that Sgr A* is the same object as what is at the focus of these orbits. A radio source is exactly what you would expect for this system, because as dust particles get pulled towards the black hole, they collide and friction causes them to heat up, and hot objects radiate photons. These arguments together make it pretty clear that Sgr A* is a supermassive black hole at the center of the Milky Way! What are you looking at? This plot shows the orbits of a few stars around the center of our galaxy, tracked over 17 years! 2.
Orbit of S2: During a recent talk that Andrea Ghez gave at Caltech, she said that S2 is “her favorite star.” S2 is a 15 solar mass star located near the black hole at the center of our galaxy. S2’s distance from this black hole is only about four times the distance from Neptune to the Sun (at the closest point in its orbit), and its orbital period is only 15 years. The Keck telescopes on Mauna Kea have followed almost two complete orbits of S2. This piece of evidence is redundant compared to point 1, but it’s such an amazing technological feat that I couldn’t resist including it. We’ve followed S2’s complete orbit. Is it orbiting around nothing? Something exotic that we have no idea about? Or, much more likely, around a black hole. 3. Numerical studies: astrophysicists have done numerous numerical simulations which provide a different flavor of test. Christian Ott at Caltech is pretty famous for these types of studies. Image from a numerical simulation that Christian Ott and his student Evan O’Connor performed. 4. Cyg A: Cygnus A is a galaxy located in the Cygnus constellation. It is an exceptionally bright radio source. As I mentioned in point 1, as dust falls towards a black hole, friction causes it to heat up, and then hot objects radiate away photons. The image below demonstrates this. We are able to use the Eddington limit to convert luminosity measurements into estimates of the mass of Cyg A. Not necessarily in the case of Cyg A, but in the case of its cousins, Active Galactic Nuclei (AGNs) and Quasars, we are also able to put bounds on their sizes. These two things together show that there is a huge amount of mass trapped in a small volume, which is therefore probably a black hole (alternative models can usually be ruled out.) There is a supermassive black hole at the center of this image which powers the rest of this action! The black hole is spinning and it emits relativistic jets along its axis of rotation.
The blobs come from the jets colliding with the intergalactic medium. 5. AGNs and Quasars: these are bright sources which are powered by supermassive black holes. Arguments similar to those used for Cyg A make us confident that they really are powered by black holes and not some alternative. 6. X-ray binaries: astronomers have detected ~20 stellar-mass black holes by finding pairs consisting of a star and a black hole, where the star is close enough that the black hole is sucking in its mass. This leads to accretion, which leads to the emission of X-rays, which we detect on Earth. Cygnus X-1 is a famous example of this. 7. Water masers: Messier 106 is the quintessential example. 8. Gamma ray bursts: most gamma ray bursts occur when a rapidly spinning high-mass star goes supernova (or hypernova) and leaves a neutron star or black hole in its wake. However, it is believed that some of the “long” duration gamma ray bursts are powered by accretion around rapidly spinning black holes. That’s only eight reasons, but I hope you’re convinced that black holes really exist! To round out this list to ten things, here are two interesting open questions related to black holes: 1. Firewalls: I mentioned this paradox at the beginning of this post. This is the cutting edge of quantum gravity which is causing hundreds of physicists to pull their hair out! 2. Feedback: there is an extremely strong correlation between the size of a galaxy’s supermassive black hole and many of the other properties of the galaxy. This connection was only realized about a decade ago, and trying to understand how the black hole (which has a mass much smaller than the total mass of the galaxy) affects galaxy formation is an active area of research in astrophysics. In addition to everything mentioned above, I want to emphasize that most of these results are only from the past decade.
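Incidentally, the mass quoted in points 1 and 2 is essentially a one-line Kepler’s-third-law estimate. In units where the Sun-Earth system gives 1 (distance in AU, period in years, mass in solar masses), the law reads $M = a^3/T^2$. Plugging in rough, order-of-magnitude values for S2’s orbit (my own ballpark numbers, roughly a ~ 1000 AU and the ~15-year period mentioned above):

```python
def enclosed_mass_solar(a_au, period_years):
    """Kepler's third law, M = a^3 / T^2, in units where a is in AU,
    T in years, and M comes out in solar masses (Sun-Earth gives 1)."""
    return a_au ** 3 / period_years ** 2

# Rough values for the star S2 orbiting Sgr A* (order of magnitude only):
m = enclosed_mass_solar(1000, 15)  # roughly 4 x 10^6 solar masses
```

The point is that the $4\times 10^6$ solar-mass figure follows directly from the observed orbit, with no exotic modeling required.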
Not to mention that we seem to be close to the dawn of gravitational wave astronomy, which will allow us to probe black holes more directly. There are also exciting instruments that have recently come online, such as NuSTAR. In other words, this is an extremely exciting time to be thinking about black holes–both from observational and theoretical perspectives–we have data and a paradox! In conclusion, black holes exist. They really do. And let’s all make a pact to read critically in the 21st century! Cool resource from Sky and Telescope. [* I want to thank my buddy Kaes Van’t Hof for letting me crash on his couch in NYC last week, which is where I did most of this work. ** I also want to thank Dan Harlow for saving me months of confusion by sharing a draft of his notes from his course on firewalls at the Israel Institute for Advanced Studies’ winter school in theoretical physics.]

# Hacking nature: loopholes in the laws of physics

I spent my childhood hacking computers. When I was seven, my cousin showed up for Thanksgiving with a box filled with computer parts and we built my first computer. I got into competitive computer gaming around age eleven, and hacking was a natural extension of these activities. Then when I was sixteen, after doing poorly at a Counterstrike tournament, I decided that I should probably apply myself to other things. Needless to say, my parents were thrilled. So that’s when I bought my first computer (instead of building my own), which for deliberate but now antediluvian reasons was a Mac. A few years later, when I was taking CS 106 at Stanford, I was the first student in the course’s history whose reason for buying a Mac was “so that I couldn’t play computer games!” And now you know the story of my childhood. The hacker mentality is quite different from the norm, and my childhood trained me to look at absolutist laws as opportunities to find loopholes (of course only when legal and socially responsible!)
I’ve applied this same mentality as I’ve been doing physics and I’d like to share with you some of the loopholes that I’ve gathered. Scharnhorst effect enables light to travel faster than in vacuum (c=299,792,458 m/s): this is about the granddaddy of all laws, that nothing can travel faster than light in a vacuum! This effect is the most controversial on my list, because it hasn’t yet been experimentally verified, but it seems obvious with the right picture in mind. Most people’s mental model for light traveling in a vacuum is of little particles/waves called photons traveling through empty space. However, the vacuum is not empty! It is filled with pairs of virtual particles which momentarily flit into existence. Interactions with these virtual particles create a small amount of ‘resistance’ as photons zoom through the vacuum (photons get absorbed into virtual electron-positron pairs and then spit back out as photons ad infinitum.) Thus, if we could somehow reduce the rate at which virtual particles are created, photons would interact less strongly with the vacuum, and would be able to travel marginally faster than c. But this is exactly what a Casimir cavity achieves: it is an experimentally verified fact that if you take two mirrors and put them ~10 nanometers apart, then they will attract each other, because there are more virtual particles created outside the cavity than inside [low-momentum virtual modes are inaccessible because the uncertainty principle requires $\Delta x \cdot \Delta p = 10nm\cdot\Delta p \geq \hbar/2$.] This effect is extremely small, only predicting that light would travel one part in $10^{36}$ faster than c. However, it should remind us all to deeply question assumptions. This first loophole used quantum effects to beat a relativistic bound, but the next few loopholes are purely quantum, and are mainly related to that most quantum of all limits, the Heisenberg uncertainty principle.
Smashing the standard quantum limit (SQL) with squeezed measurements: the Heisenberg uncertainty principle tells us that there is a fundamental tradeoff in nature: the more precise your information about an object’s position, the less precise your knowledge about its momentum. Or vice versa, or replace x and p with E and t, or any other pair of conjugate variables. This uncertainty principle is oftentimes written as $\Delta x\cdot \Delta p \geq \hbar/2$. For a variety of reasons, in the early days of quantum mechanics, it was hard enough to imagine creating a state with $\Delta x \cdot \Delta p = \hbar/2$, but there was some hope because this is obtained in the ground state of a quantum harmonic oscillator. In this case, we have $\Delta x = \Delta p = \sqrt{\hbar/2}$. However, it was harder still to imagine creating states with $\Delta x < \sqrt{\hbar/2}$; such states would be said to ‘go beyond the standard quantum limit’ (SQL). Over the intervening years, not only have we figured out how to go beyond the SQL using squeezed coherent states, but this is actually essential in some of our most exciting current experiments, like LIGO. LIGO is an incredibly ambitious experiment which has been talked about multiple times on this blog. It is trying to usher in a new era of astronomy–moving beyond detecting photons–to detecting gravitational waves, ripples in spacetime which are generated as exceptionally massive objects merge, such as when two black holes collide. The effects of these waves on our local spacetime as they travel past Earth are minuscule, on the scale of $10^{-18}m$, which is about one thousand times shorter than the ‘diameter’ of a proton, and is the same order of magnitude as $\sqrt{\hbar/2}$ (in SI units). Remarkably, LIGO has exploited squeezed light to demonstrate sensitivities beyond the SQL. LIGO expects to start detecting gravitational waves on a frequent basis as its upgrades, deemed ‘advanced LIGO’, are completed over the next few years.
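For concreteness, here is the textbook squeezed-vacuum bookkeeping, in the same convention as above where the oscillator ground state has $\Delta x = \Delta p = \sqrt{\hbar/2}$ (this is the generic formula, not LIGO-specific; LIGO actually squeezes the phase/amplitude quadratures of its light rather than a mirror’s literal x and p):

```latex
% Squeezed vacuum with squeezing parameter r:
\Delta x = e^{-r}\sqrt{\hbar/2}, \qquad
\Delta p = e^{+r}\sqrt{\hbar/2}, \qquad
\Delta x \,\Delta p = \hbar/2 .
```

For $r > 0$ the position uncertainty dips below the SQL while the product still saturates the Heisenberg bound; the excess uncertainty is simply shuffled into the conjugate variable you chose not to measure.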
Compressed sensing beats Nyquist-Shannon: let’s play a game. Imagine I’m sending you a radio signal. How often do you need to measure the signal in order to be able to reconstruct it perfectly? The Nyquist-Shannon sampling theorem is a path-breaking result which Claude Shannon proved in 1949. If your signal contains no frequencies above some bound, and you measure at least twice as often as that highest frequency, then you are guaranteed perfect recovery of the signal. This incredibly profound result laid the foundation for modern communications. Also, it is important to realize that your signal can be much more general than radio waves, such as a sequence of images. This theorem gives a sufficient condition for reconstruction, but is it necessary? Not even close. And it took us over 50 years to understand this in generality. Compressed sensing was proposed between 2004 and 2006 by Emmanuel Candes, David Donoho and Terry Tao, with important early contributions by Justin Romberg. I should note that Candes and Romberg were at Caltech during this period. The Nyquist-Shannon theorem told us that with a small amount of knowledge (a bound on the highest frequency) we could reconstruct a signal perfectly by measuring at a rate only twice the highest frequency–instead of needing to measure continuously. Compressed sensing says that with one extra assumption–that only sparsely few of your frequencies are being used (say 10 out of 1000)–you can recover your signal with high accuracy using dramatically fewer measurements. And it turns out that this assumption is valid for a huge range of applications: enabling real-time MRIs using conventional technology or, more relevant to this blog, increasing our ability to distinguish quantum states via tomography. Unlike the other topics in this blog post, I have never worked with compressed sensing, but my intuition goes like this: instead of measuring in the basis in which you are sparse (frequency, for example), measure in a different basis.
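As a toy illustration of that intuition (my own sketch, not the real machinery: with a 1-sparse signal we can afford a brute-force L0 search instead of the L1 linear program used in practice), four incoherent $\pm 1$ measurements suffice to recover an 8-dimensional signal with a single nonzero entry:

```python
def recover_1sparse(y, Phi):
    """Brute-force L0 recovery of a 1-sparse x from y = Phi x.

    Toy stand-in for the L1 linear program of real compressed sensing:
    with a single nonzero entry we can test every candidate support and
    keep the one that best explains the measurements.
    """
    m, n = len(Phi), len(Phi[0])
    best = None
    for j in range(n):
        col = [Phi[i][j] for i in range(m)]
        a = sum(y[i] * col[i] for i in range(m)) / sum(c * c for c in col)
        resid = sum((y[i] - a * col[i]) ** 2 for i in range(m))
        if best is None or resid < best[0]:
            best = (resid, j, a)
    _, j, a = best
    x = [0.0] * n
    x[j] = a
    return x

# 4 "incoherent" +/-1 measurements of an 8-dimensional signal: the 8
# columns are pairwise non-parallel sign patterns, so a 1-sparse signal
# is pinned down uniquely despite having only half as many measurements
# as dimensions.
Phi = [[1] * 8,
       [1 if j & 1 else -1 for j in range(8)],
       [1 if j & 2 else -1 for j in range(8)],
       [1 if j & 4 else -1 for j in range(8)]]
x_true = [0.0] * 8
x_true[5] = 3.0
y = [sum(Phi[i][j] * x_true[j] for j in range(8)) for i in range(4)]
x_hat = recover_1sparse(y, Phi)  # recovers x_true exactly
```

The real theory replaces this brute force with L1 minimization, and replaces “1-sparse” with “sparse enough relative to the number of measurements.”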
With high probability each of these measurements will pick up a little piece from each of the occupied modes. Then, to reconstruct your signal, you want to use the L0-“norm” to interpolate in such a way that you use the fewest frequency components possible. Computing the L0-“norm” is not efficient, so one of the major breakthroughs of compressed sensing was showing that, with high probability, minimizing the L1-norm approximates the L0 solution, and all of this can be done using a highly efficient linear program. However, I really shouldn’t be speculating because I’ve never invested much time into mastering this new tool, and I’m friends with a couple of the quantum state tomography authors, so maybe they’ll chime in? Brahms is a cool dude. Brahms as a height map where cliffs=Gibbs phenomenon=oh no! First three levels of Brahms as a Haar wavelet. Wavelets as the mother of all bases: I previously wrote a post about the importance of choosing a convenient basis. Imagine you have an image which has a bunch of sharp contrasts, such as the outline of a person, or a horizon, or a table, basically anything. How do you store it efficiently? Due to the Gibbs phenomenon, the Fourier basis is pretty awful for these applications. Here’s another motivating problem: imagine someone plays one note on an instrument. The sound is localized in both time and frequency. The Fourier basis is also pretty awful at storing/detecting this. Wavelets to the rescue! The theory of wavelets uses some beautiful math to solve the longstanding problem of finding a basis which is localized in both position and momentum space (or very close to it.)
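A minimal sketch of the simplest wavelet, the Haar wavelet, shows this localization in action: a sharp edge touches only a handful of coefficients, instead of ringing across every Fourier mode.

```python
def haar_transform(signal):
    """Full Haar wavelet decomposition of a length-2^k signal.

    Each pass splits the signal into pairwise averages (coarse shape)
    and pairwise differences (local detail).  A sharp edge shows up in
    only a few detail coefficients, with no Gibbs-style ringing.
    """
    coeffs = []
    s = list(signal)
    while len(s) > 1:
        averages = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]
        details = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s), 2)]
        coeffs.append(details)  # detail coefficients at this scale
        s = averages
    coeffs.append(s)            # overall average
    return coeffs

# A step edge: constant, then a jump.  Every detail coefficient is zero
# except the single one that straddles the edge.
step = [1, 1, 1, 1, 5, 5, 5, 5]
```

Running `haar_transform(step)` leaves one nonzero detail coefficient plus the overall average: the edge is stored in two numbers instead of eight Fourier modes.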
Wavelets have profound applications, some of my favorites include: modern image compression (JPEG 2000 onwards) is based on wavelets; Ingrid Daubechies and her colleagues used wavelets to detect forged paintings; recovering previously unrecoverable recordings of Brahms at the piano (I heard about this from Barry Simon, of Reed-Simon fame, who is currently teaching his last class ever); and even the FBI uses wavelets to compress images of fingerprints, obtaining a compression ratio of 20:1. Postselection enables quantum cloning: the no-cloning theorem is well known in the field of quantum information. It says that you cannot find a machine (unitary operation U) which takes an arbitrary input state $|\psi\rangle$, and a known state $|0\rangle$, such that the machine maps $|\psi\rangle \otimes |0\rangle$ to $|\psi\rangle \otimes |\psi\rangle$, thereby cloning $|\psi \rangle$. This is very easy to prove using the linearity of quantum mechanics. However, there are loopholes. One of the most trivial loopholes is realizing that one can take the state $|\psi\rangle$ and perform something called unambiguous state discrimination, which either spits out exactly which state $|\psi \rangle$ is with some probability, or otherwise spits out “I don’t know which state.” You can postselect on the unambiguous state discrimination having succeeded and prepare a unitary which clones the relevant states. Peter Shor has a comment on Physics Stack Exchange describing this. Seth Lloyd and John Preskill outlined a less trivial version of this in their recent paper which tries to circumvent firewalls by using postselected quantum teleportation. In this blog post, I’ve only described a tiny fraction of the quantum loopholes that have been discovered. If I had more space/time, I would next describe how quantum entanglement beats classical correlations, enabling victory in CHSH games. I would also describe weak measurements and some of the peculiar things they lead to.
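As an aside, the linearity proof of no-cloning mentioned above fits in a couple of lines. Suppose a single unitary U cloned both basis states, $U(|0\rangle|0\rangle) = |0\rangle|0\rangle$ and $U(|1\rangle|0\rangle) = |1\rangle|1\rangle$. Then for $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, linearity forces

```latex
U\big(|\psi\rangle \otimes |0\rangle\big)
  = \alpha\,|0\rangle|0\rangle + \beta\,|1\rangle|1\rangle ,
\qquad\text{whereas cloning would require}\qquad
|\psi\rangle \otimes |\psi\rangle
  = \alpha^2|0\rangle|0\rangle + \alpha\beta\,|0\rangle|1\rangle
  + \alpha\beta\,|1\rangle|0\rangle + \beta^2|1\rangle|1\rangle .
```

These agree only when $\alpha\beta = 0$, i.e. only for the basis states themselves, so no single unitary clones an arbitrary $|\psi\rangle$; the postselection loophole works precisely because it gives up on determinism.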
Beyond that, I would probably refer you to Yakir Aharonov’s amazingly fun book about quantum paradoxes. After reading this, I hope that the next time you encounter an inviolable law of nature, you’ll apply the hacker mentality and attempt to strip it down to its essence, isolate assumptions, and potentially find a loophole. But while you’re doing this, remember that you should never argue with your mother, or with mathematics!

# Defending against high-frequency attacks

It was the summer of 2008. I was 22 years old, and it was my second week working in the crude oil and natural gas options pit at the New York Mercantile Exchange (NYMEX.) My head was throbbing after two consecutive weeks of disorientation. It was like being born into a new world, but without the neuroplasticity of a young human. And then the crowd erupted. “Yeeeehawwww. YeEEEeeHaaaWWWWW. Go get ’em cowboy.” It seemed that everyone on the sprawling trading floor had started playing Wild Wild West and I had no idea why. After at least thirty seconds, the hollers started to move across the trading floor. They moved away 100 meters or so and then doubled back towards me. After a few meters, he finally got it, and I’m sure he learned a life lesson. Don’t be the biggest jerk in a room filled with traders, and especially, never wear triple-popped pastel-colored Lacoste shirts. This young aspiring trader had been “spurred.” In other words, someone had made paper spurs out of trading receipts and taped them to his shoes. Go get ’em cowboy. I was one academic quarter away from finishing a master’s degree in statistics at Stanford University and I had accepted a full-time job working in the algorithmic trading group at DRW Trading. I was doing a summer internship before finishing my degree, and after three months of working in the algorithmic trading group in Chicago, I had volunteered to work at the NYMEX.
Most ‘algo’ traders didn’t want this job, because it was far removed from our mental mathematical monasteries, but I knew I would learn a tremendous amount, so I jumped at the opportunity. And by learn, I mean, get ripped calves and triceps, because my job was to stand in place for seven straight hours updating our mathematical models on a bulky tablet PC as trades occurred. I have no vested interests in the world of high-frequency trading (HFT). I’m currently a PhD student in the quantum information group at Caltech and I have no intentions of returning to finance. I found the work enjoyable, but not as thrilling as thinking about the beginning of the universe (what else is?) However, I do feel like the current discussion about HFT is lopsided and I’m hoping that I can broaden the perspective by telling a few short stories. What are the main attacks against HFT? Three of them include the evilness of: front-running markets, making money out of nothing, and instability. It’s easy to point to extreme examples of algorithmic traders abusing markets, and they regularly do, but my argument is that HFT has simply computerized age-old tactics. In this process, these tactics have become more benign and markets more stable. Front-running markets: large oil-producing nations, such as Mexico, often want to hedge their exposure to changing market prices. They do this by purchasing options. This allows them to lock in a minimum sale price, for a fee of a few dollars per barrel. During my time at the NYMEX, I distinctly remember a broker shouting into the pit: “what’s the price on DEC9 puts?” A trader doesn’t want to give away whether they want to buy or sell, because if the other traders know, then they can artificially move the price. In this particular case, this broker was known to sometimes implement parts of Mexico’s oil hedge.
The other traders in the pit suspected this was a trade for Mexico because of his anxious tone, some recent geopolitical news, and the expiration date of these options. Some confident traders took a risk and faded the market. They ended up making between \$1 million and \$2 million from these trades, relative to what the fair price was at that moment. I mention relative to the fair price, because Mexico ultimately received the better end of this trade. The price of oil dropped in 2009, and Mexico exercised its options, enabling it to sell its oil at a higher-than-market price. Mexico spent \$1.5 billion to hedge its oil exposure in 2009.

This was an example of humans anticipating the direction of a trade and capturing millions of dollars in profit as a result. It really is profit as long as the traders can redistribute their exposure at the ‘fair’ market price before markets move too far. The analogous strategy in HFT is called “front-running the market,” which was highlighted in the New York Times’ recent article “The Wolf Hunters of Wall Street.” The HFT version involves analyzing the prices on dozens of exchanges simultaneously, and once an order is published in the order book of one exchange, using this demand to adjust orders on the other exchanges. This needs to be done within a few microseconds in order to be successful. This is the computerized version of anticipating demand and fading prices accordingly. These tactics as I described them sit in a grey area, but variants of them quickly cross into illegality.

Making money from nothing: arbitrage opportunities have existed for as long as humans have been trading. I’m sure an ancient trader received quite the rush when he realized for the first time that he could buy gold in one marketplace and then sell it in another, for a profit. This is only worth the trader’s efforts if he makes a profit after all expenses have been taken into consideration. One of the simplest examples in modern terms is called triangle arbitrage, and it usually involves three pairs of currencies. Currency pairs are ratios, such as USD/AUD, which tells you how many Australian dollars you receive for one US dollar. Imagine that there is a moment in time when the product of ratios $\frac{USD}{AUD}\frac{AUD}{CAD}\frac{CAD}{USD}$ is 1.01. Then, a trader can take her USD, buy AUD, then use her AUD to buy CAD, and then use her CAD to buy USD. As long as the underlying prices didn’t change while she carried out these three trades, she would capture one cent of profit for every dollar cycled through the loop.
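The three-legged trade above is mechanical enough to write down directly (a sketch with made-up rates, not real quotes):

```python
def triangle_arbitrage_profit(usd_aud, aud_cad, cad_usd, usd_start=1.0):
    """Profit (in USD) from one USD -> AUD -> CAD -> USD cycle.

    Rates are quoted as in the text, e.g. usd_aud = AUD received per USD.
    In an efficient market the product of the three rates is 1 and the
    profit is zero; any deviation is free money until prices adjust.
    """
    aud = usd_start * usd_aud
    cad = aud * aud_cad
    usd_end = cad * cad_usd
    return usd_end - usd_start

# Hypothetical rates whose product is 1.01: one cent of profit per dollar.
profit = triangle_arbitrage_profit(2.0, 0.5, 1.01)
```

The race is not in the arithmetic, which is trivial, but in spotting the deviation and executing all three legs before the rates move.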

After a few trades like this, the prices will equilibrate and the product of ratios will be restored to one. This is an example of “making money out of nothing.” Clever people have been trading on arbitrage since ancient times and it is a fundamental source of liquidity. It guarantees that the price you pay in Sydney is the same as the price you pay in New York. It also means that if you’re willing to overpay by a penny per share, then you’re guaranteed a computer will find this opportunity and your order will be filled immediately. The main difference now is that once a computer has been programmed to look for a certain type of arbitrage, the human mind can no longer compete. This is one of the original arenas where the term “high-frequency” was used. Whoever has the fastest machines is the one who will capture the profit.

Instability: I believe that the arguments against HFT of this type have the most credibility. The concern here is that exceptional leverage creates opportunity for catastrophe. Imaginations ran wild after the Flash Crash of 2010, and even if imaginations outstripped reality, we learned much about the potential instabilities of HFT. A few questions were posed, and we are still debating the answers. What happens if market makers all stop trading at once? What happens if a programming error leads to billions of dollars in mistaken trades? Do feedback loops between algo strategies lead to artificial prices? These are reasonable questions, which are grounded in examples, and future regulation coupled with monitoring should add stability where it’s feasible.

The culture in wealth-driven industries today is appalling. However, it’s no worse in HFT than in finance more broadly, or in many other industries. It’s important that we dissociate our disgust at a broad culture of greed from debates about the merits of HFT. Black boxes are easy targets for blame because they don’t defend themselves, but that doesn’t mean they aren’t useful when implemented properly.

Are we better off with HFT? I’d argue a resounding yes. The primary function of markets is to allocate capital efficiently. Three of the strongest measures of the efficacy of markets are “bid-ask” spreads, volume and volatility. If spreads are low and volume is high, then participants are essentially guaranteed access to capital at as close to the “fair price” as possible. There is a huge academic literature on how HFT has impacted spreads and volume, and the majority of it indicates that spreads have narrowed and volume has increased. However, as alluded to above, all of these points are subtle; in my opinion, though, it’s clear that HFT has increased the efficiency of markets (it turns out that computers can sometimes be helpful). Estimates of HFT’s impact on volatility haven’t been nearly as favorable, but I’d argue those studies are more debatable. Basically, correlation is not causation, and it just so happens that our rapidly developing world is probably more volatile than the pre-HFT world of past millennia.
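To make the “bid-ask spread” measure concrete, here is a toy calculation with made-up quotes. The quoted spread is usually reported in basis points relative to the mid-price:

```python
# Illustrative quotes (not real data): best bid and best ask for a stock.
bid, ask = 99.98, 100.02

# Mid-price and quoted spread in basis points (1 bp = 0.01%).
mid = (bid + ask) / 2
spread_bps = (ask - bid) / mid * 1e4
print(f"mid = {mid:.2f}, quoted spread = {spread_bps:.1f} bps")
```

A four-cent spread on a $100 stock is 4 basis points; the narrower this number, the cheaper it is to trade immediately, which is one sense in which tighter spreads mean more efficient markets.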

We could regulate away HFT, but we wouldn’t be able to get rid of the underlying problems people point to unless we got rid of markets altogether. As with any new industry, there are aspects of HFT that should be better monitored and regulated, but we should keep level heads and gather diverse data points as we continue this discussion. As with most important problems, I believe the ultimate solution lies in educating the public. In other words, this is my plug for Python classes for all children!

I promise that I’ll repent by writing something that involves actual quantum things within the next two weeks!

# Reporting from the ‘Frontiers of Quantum Information Science’

What am I referring to with this title? It is similar to the name of this blog, but that’s not where this particular title comes from, although there is a common denominator. Frontiers of Quantum Information Science was the theme for the 31st Jerusalem winter school in theoretical physics, which takes place annually at the Israel Institute for Advanced Studies on the Givat Ram campus of the Hebrew University of Jerusalem. The school ran from December 30, 2013 through January 9, 2014, but some of the attendees are still trickling back to their home institutions. The common denominator is that our very own John Preskill was the director of this school, co-directed by Michael Ben-Or and Patrick Hayden. John mentioned during a previous post, and reiterated during his opening remarks, that this is the first time the IIAS has chosen quantum information as the topic for its prestigious advanced school, another sign of quantum information’s emergence as an important sub-field of physics. In this blog post, I’m going to do my best to recount these festivities while John protects his home from forest fires, prepares a talk for the Simons Institute’s workshop on Hamiltonian complexity, teaches his quantum information course and celebrates birthday number 60+1.

The school was mainly targeted at physicists, but it was diversely represented. Proof of the value of this diversity came in an interaction between a computer scientist and a physicist, which led to one of the school’s most memorable moments. Both of my most memorable moments started with the talent show (I was surprised that so much talent was on display at a physics conference…) Anyway, towards the end of the show, Mateus Araújo Santos, a PhD student in Vienna, took the stage and claimed that he could channel “the ghost of Feynman” to serve as an oracle for NP-complete decision problems. After he made this claim, people naturally turned to Scott Aaronson, hoping that he’d be able to break the oracle. However, for that to happen, we had to wait until Scott’s third lecture, about linear optics and boson sampling, the next day. You can watch Scott bombard the oracle with decision problems from 1:00-2:15 in the video of his third lecture.

Scott Aaronson grilling the oracle with a string of NP-complete decision problems! From 1:00-2:15 during this video.

The other most memorable moment was when John briefly danced Gangnam style during Soonwon Choi’s talent show performance. Unfortunately, I thought I had this on video, but the video didn’t record. If anyone has video evidence of this, then please share!