About shaunmaguire

I'm a PhD student working in quantum information at Caltech. It's astonishing that they gave the keys to this blog to hooligans like myself.

The Science that made Stephen Hawking famous

In anticipation of The Theory of Everything which comes out today, and in the spirit of continuing with Quantum Frontiers’ current movie theme, I wanted to provide an overview of Stephen Hawking’s pathbreaking research. Or at least to the best of my ability—not every blogger on this site has won bets against Hawking! In particular, I want to describe Hawking’s work during the late ‘60s and through the ’70s. His work during the ’60s is the backdrop for this movie and his work during the ’70s revolutionized our understanding of black holes.


(Portrait of Stephen Hawking outside the Department of Applied Mathematics and Theoretical Physics, Cambridge. Credit: Jason Bye)

As additional context, this movie is coming out at a fascinating time, a time when Hawking’s contributions appear more prescient and important than ever before. I’m alluding to the firewall paradox, which is the modern reincarnation of the information paradox (discussed below), and which this blog has discussed multiple times. Progress through paradox is an important motto in physics, and Hawking has been at the center of arguably the most challenging paradox of the past half century. I should also mention that despite irresponsible journalism in response to Hawking’s “there are no black holes” comment back in January, there is extremely solid evidence that black holes do in fact exist. Hawking was referring to a technical distinction concerning the horizon/boundary of black holes.

Now let’s jump back and imagine that we are all young graduate students at Cambridge in the early ‘60s. Our protagonist, a young Hawking, had recently been diagnosed with ALS, had recently met Jane Wilde, and was looking for a thesis topic. This was an exciting time for Einstein’s Theory of General Relativity (GR). The gravitational redshift had recently been confirmed by Pound and Rebka at Harvard, which put the theory on extremely solid footing. This was the third of the three “classical tests of GR.” So now that everyone was truly convinced that GR is correct, it became important to get serious about investigating its most bizarre predictions. Hawking and Penrose picked up on this theme most notably. The mathematics of GR allows for singularities, which lead to things like the big bang and black holes. This mathematical possibility had been known since the works of Friedmann, Lemaitre and Oppenheimer+Snyder starting all the way back in the 1920s, but those calculations involved unphysical assumptions, usually unrealistic symmetries. Hawking and Penrose each asked (and answered) the questions: how robust and generic are these mathematical singularities? Will they persist even if we get rid of assumptions like perfect spherical symmetry of matter? What is their physical interpretation?

I know that I have now used the word “singularity” multiple times without defining it. However, this is for good reason—it’s very hard to assign a precise definition to the term! Some examples of singularities include regions of “infinite curvature” or with “conical deficits.”

Singularity theorems applied to cosmology: Hawking’s first major result, starting with his thesis in 1965, was proving that singularities on the cosmological scale—such as the big bang—were indeed generic phenomena and not just mathematical artifacts. This work was published immediately after, and built upon, a seminal paper by Penrose. I apologize for copping out again, but it’s outside the scope of this post to say much more about the big bang; as a rough heuristic, imagine that if you run time backwards then you obtain regions of infinite density. Hawking and Penrose spent the next five or so years stripping away as many assumptions as they could until they were left with rather general singularity theorems. Essentially, they used MATH to say something exceptionally profound about THE BEGINNING OF THE UNIVERSE! Namely, if you start with any solution to Einstein’s equations which is consistent with our observed universe and run the solution backwards, then you will obtain singularities (in this case, regions of infinite density at the big bang)! However, I should mention that despite being a revolutionary leap in our understanding of cosmology, this isn’t the end of the story; Hawking also pioneered an attempt to understand what happens when you add quantum effects to the mix. This is still a very active area of research.

Singularity theorems applied to black holes: the first convincing evidence for the existence of astrophysical black holes didn’t come until 1972 with the discovery of Cygnus X-1, and even this discovery was fraught with controversy. So imagine yourself as Hawking back in the late ’60s. He and Penrose had this powerful machinery which they had successfully applied to better understand THE BEGINNING OF THE UNIVERSE, but there was still a question about whether or not black holes actually existed in nature (not just in mathematical fantasy land.) In the very late ‘60s and early ’70s, Hawking, Penrose, Carter and others convincingly argued that black holes should exist. Again, they used math to say something about how the most bizarre corners of the universe should behave–and then black holes were discovered observationally a few years later. Math for the win!

No hair theorem: after convincing himself that black holes exist, Hawking continued his theoretical studies of their strange properties. In the early ’70s, Hawking, Carter, Israel and Robinson proved a very deep and surprising conjecture of John Wheeler–that black holes have no hair! This name isn’t the most descriptive but it’s certainly provocative. More specifically, they showed that only a short time after forming, a black hole is completely described by a few pieces of data: its position, mass, charge, angular momentum and linear momentum (X, M, Q, J and L). It takes only about a dozen numbers to describe an exceptionally complicated object. Contrast this with, for example, 1000 dust particles, where you would need many thousands of data points (the position and momentum of each particle, their charge, their mass, etc.) This is crazy: the number of degrees of freedom seems to decrease as objects collapse into black holes?

Black hole thermodynamics: around the same time, Carter, Hawking and Bardeen proved a result similar to the second law of thermodynamics (it’s debatable how realistic their assumptions are.) Recall that this is the law where “the entropy in a closed system only increases.” Hawking showed that, if only GR is taken into account, then the area of a black hole’s horizon only increases. This implies that if two black holes with horizon areas A_1 and A_2 merge, then the new area A_* will be bigger than the sum of the original areas A_1+A_2. A worked example for the simplest case follows below.
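To make the area statement concrete, here is a quick sanity check for the simplest (Schwarzschild) case, using only the standard horizon formula. A non-rotating, uncharged black hole of mass M has horizon radius r_s = 2GM/c^2, so its horizon area is

A = 4\pi r_s^2 = \frac{16\pi G^2 M^2}{c^4},

i.e., area grows like the square of the mass. If two such black holes merged with no mass lost, the final area would obey

A_* \propto (M_1+M_2)^2 = M_1^2 + 2M_1M_2 + M_2^2 > M_1^2 + M_2^2 \propto A_1 + A_2.

In a real merger some mass is radiated away as gravitational waves, but Hawking’s area theorem guarantees that the total horizon area still never decreases.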

Combining this with the no hair theorem led to a fascinating exploration of a connection between thermodynamics and black holes. Recall that thermodynamics was mainly worked out in the 1800s and it is very much a “classical theory”–one that didn’t involve either quantum mechanics or general relativity. The study of thermodynamics resulted in the thrilling realization that it could be summarized by four laws. Hawking and friends took the black hole connection seriously and conjectured that there would also be four laws of black hole mechanics.

In my opinion, the most interesting results came from trying to understand the entropy of a black hole. The entropy is usually the logarithm of the number of possible microstates consistent with the observed ‘large scale quantities’. Take the ocean, for example: its entropy is humongous. There are an unbelievable number of small changes that could be made (imagine the number of ways of swapping the location of a water molecule and a grain of sand) which would be consistent with its large scale properties, like its temperature. However, because of the no hair theorem, it appears that the entropy of a black hole is very small? What happens when some matter with a large amount of entropy falls into a black hole? Does this lead to a violation of the second law of thermodynamics? No! It leads to a generalization! Bekenstein, Hawking and others showed that there are two contributions to the entropy in the universe: the standard 1800s version of entropy associated with matter configurations, but also contributions proportional to the area of black hole horizons. When you add all of these up, a new “generalized second law of thermodynamics” emerges. Continuing to take this thermodynamic argument seriously (dE=TdS specifically), it appeared that black holes have a temperature!
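For the record, the horizon contribution turned out to be the famous Bekenstein-Hawking entropy,

S_{BH} = \frac{k_B c^3 A}{4 G \hbar},

one quarter of the horizon area measured in Planck units. The generalized second law then states that S_{matter} + S_{BH} never decreases.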

As a quick aside, a deep and interesting question is: what degrees of freedom contribute to this black hole entropy? In the late ’90s, Strominger and Vafa made exceptional progress towards answering this question when they showed that, in certain settings, the number of microstates coming from string theory exactly reproduces the correct black hole entropy.

Black holes evaporate (Hawking Radiation): again, continuing to take this thermodynamic connection seriously, if black holes have a temperature then they should radiate away energy. But what is the mechanism behind this? This is when Hawking fearlessly embarked on one of the most heroic calculations of the 20th century in which he slogged through extremely technical calculations involving “quantum mechanics in a curved space” and showed that after superimposing quantum effects on top of general relativity, there is a mechanism for particles to escape from a black hole.

This is obviously a hard thing to describe, but for a hack-job analogy, imagine you have a hot plate in a cool room. Somehow the plate “radiates” away its energy until it has the same temperature as the room. How does it do this? By definition, the reason a plate is hot is that its molecules are jiggling around rapidly. At the boundary of the plate, sometimes a slow moving air molecule (lower temperature) gets whacked by a molecule in the plate and leaves with a higher momentum than it started with, while the corresponding molecule in the plate loses energy. After this happens an enormous number of times, the temperatures equilibrate. In the context of black holes, these boundary interactions would never happen without quantum mechanics. General relativity predicts that anything inside the event horizon is causally disconnected from anything on the outside, and that’s that. However, if you take quantum effects into account, then for some very technical reasons, energy can be exchanged at the horizon (the interface between the “inside” and “outside” of the black hole.)
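Since we now have both an energy (E=Mc^2) and an entropy (S_{BH} \propto A \propto M^2), the identity dE=TdS fixes the temperature, and Hawking’s calculation confirmed it. For the record (a standard formula, not derived in this post):

T_H = \frac{\hbar c^3}{8\pi G M k_B},

which is minuscule for astrophysical black holes (about 6\times 10^{-8} K for a solar mass) and grows as the black hole loses mass, so evaporation accelerates as it proceeds.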

Black hole information paradox: but wait, there’s more! These calculations weren’t done using a completely accurate theory of nature (we use the phrase “quantum gravity” as a placeholder for whatever this theory will one day be.) They were done using some nightmarish amalgamation of GR and quantum mechanics. Seminal thought experiments by Hawking led to different predictions depending upon which theory one trusted more: GR or quantum mechanics. Most famously, the information paradox considered what would happen if an “encyclopedia” were thrown into a black hole. GR predicts that after the black hole has fully evaporated, so that only empty space is left behind, the “information” contained within the encyclopedia would be destroyed. (To readers who know quantum mechanics, replace “encyclopedia” with “pure state”.) This prediction unacceptably violates the assumptions of quantum mechanics, which predict that the information contained within the encyclopedia can never be destroyed. (Imagine you enclosed the black hole with perfect sensing technology and measured every photon that came out of the black hole. In principle, according to quantum mechanics, you should be able to reconstruct what was initially thrown in.)

Making all of this more rigorous: Hawking spent most of the rest of the ’70s making all of this more rigorous and stripping away assumptions. One particularly otherworldly and powerful tool involved redoing many of these black hole calculations using the Euclidean path integral formalism.

I’m certain that I missed some key contributions and collaborators in this short history, and I sincerely apologize for that. However, I hope that after reading this you have a deepened appreciation for how productive Hawking was during this period. He was one of humanity’s earliest pioneers into the uncharted territory that we call quantum gravity. And he has inspired at least a few generations’ worth of theoretical physicists, myself obviously included.

In addition to reading many of Hawking’s original papers, an extremely fun source for this post is a book which was published after his 60th birthday conference.

Science at Burning Man: Say What?

Burning Man… what a controversial topic these days. The annual festival received quite a bit of media attention this year, with a particular emphasis on how the ‘tech elite’ do Burning Man. Now that we are no longer in the early September Black Rock City news deluge, I wanted to forever out myself as a raging hippie and describe why I keep going back to the festival: for the science, of course!


This is a view of my camp, the Phage, as viewed from the main street in Black Rock City. I have no idea why the CH-47 is doing a flyover… everything else is completely standard for Burning Man. Notice the 3 million Volt Tesla coil which my roommates built.

I suspect that at this point, this motivation may seem counter-intuitive or even implausible, but let me elaborate. First, we should start with a question: what is Burning Man? Answer: this question is impossible to answer. The difficulty of answering this question is why I’m writing this post. Most people oversimplify and describe the event as a ‘bunch of hippies doing drugs in the desert’ or as ‘a music festival with a dash of art’ or as ‘my favorite time of the year’ and on and on. There are nuggets of truth in all of these answers but none of them convey the diversity of the event. With upwards of 65,000 people gathered for a week, my friends and I like to describe it as a “choose your own adventure” sort of experience. I choose science.

My goal for this post is to give you a sense of the sciency activities which take place in my camp. Coupling this with the fact that science is a tiny subset of the Burning Man ethos, you should come away convinced that there’s much more to the festival than just ‘a bunch of hippies doing drugs in the desert and listening to music.’

I camp with The Phage, as in bacteriophage, the incredibly abundant virus which afflicts bacteria. Our community numbers about 200 people, most of them scientists, with a median age over 30, though only about 100 camp with the Phage in any given year. The camp also houses some hackers, entrepreneurs and artists, but scientific passion is unequivocally our unifying trait. Some of the things we assembled this year include:


Dr. F and Dr. B’s 3 million Volt musical Tesla coil. Humans were inserted for scale.

Musical Tesla coil: two of my roommates built a 3 million Volt musical Tesla coil. Think about this… it’s insane. The project started while they were writing their Caltech PhD theses (EE and Applied Physics) and in my opinion, the Tesla coil’s scale is a testament to the power of procrastination! Thankfully, they both finished their PhDs. After doing so, they spent the months between their defenses and Burning Man building the coil in earnest. Not only was the coil massive–with the entire structure standing well over 20 feet tall–but it was connected through MIDI to a keyboard. Sound is just pressure waves moving through air, and lightning moves lots of air, so this was one of the loudest platforms on the playa. I manned the coil one evening, and one professional musician told me it was “by far the coolest instrument” he had ever played. Take a brief break from reading this and watch this video!


Dr. Brainlove getting ready for a midnight stroll and then getting a brainlift.

Dr. Brainlove: we built a colossal climbable “art car” in the shape of a brain, covered in LEDs and controlled from a wireless EEG device. Our previous art car (Dr. Strangelove) died at the 2013 festival, so last winter our community rallied and ‘brainstormed’ the theme for this vehicle. After settling on a neuroscience theme, one of my campmates in Berkeley scanned her brain and sent a CAD file to Arcology Now in Austin, TX, who created an anatomically correct steel frame. We procured a yellow school bus which had been converted to biodiesel. We raised over $30k (there were donations beyond Indiegogo.) About 20 of my campmates volunteered their weekends to work at the Nimby in Oakland: hacking apart the bus, building additional structures, covering the bus with LEDs, installing a sound system, etc. One of the finishing touches was that one of my campmates, a neurosurgeon at UCSD, procured some wireless EEG devices, and then he and some friends wrote software to control Dr. Brainlove’s LEDs–thus displaying someone’s live brain activity on a 30′ long by 20′ tall climbable musical art car for the entire playa to see! We already have plans to increase the LED density and put on an even more impressive interactive neural light show next year.

Sugarcubes: in 2013, some campmates built an epic LED sculpture dubbed “the sugarcubes”. Just watch this video and you’ll be blown away. The cubes weren’t close to operational when they arrived, so there were 48 hours of hacking madness by Dan Kaminsky, Alexander Green and many brilliant others before our “Tuesday night” party. The ethos is similar to Caltech undergrads’ party culture–the fun is in the building–so don’t tell my friends, but I slept through the actual party.


Ask a scientist on the left (I’m in there somewhere and so is one of my current roommates– another Caltech PhD ’13.) Science class on the right. Science everywhere!

Ask a scientist: there’s no question that this is my favorite on-playa activity. This photo doesn’t do the act justice. Imagine a rotating cast of 7-8 phagelings braving dust storms and donning lab coats, all FOR SCIENCE! The diversity of questions is incredible and I always learn a tremendous amount (evidenced by losing my voice three years running.) For example, this year, a senior executive at Autodesk approached and asked me a trick question related to the Sun’s magnetic field. Fear not–I was prepared! This has happened before… and he was wearing a “space” t-shirt, so my guard was up. A nuclear physicist from UCLA asked me to explain Bell test experiments (and he didn’t even know my background.) Someone asked how swamp coolers work. To be honest, I didn’t have a clear answer off the top of my head, so I called over one of my friends (one of the earliest pioneers of optogenetics) and he nailed it immediately. Not having a clear answer to this question was particularly embarrassing because I’ve spent most of the past year thinking about something akin to quantum thermodynamics… if you can call black hole physics and holographic entanglement that.

Make/hack sessions: I didn’t participate in any of these this year but some of my campmates teach soldering/microscopy/LED programming/etc classes straight out of our camp. See photo above.

EEG and LED hacking.

Science talks: we had 4-5 science talks in a carpeted 40ft geodesic dome every evening. This is pretty self-explanatory, and by this point in the post, the Phage may have enough credibility that you’ll believe the caliber was exceptional.

Impromptu conversations: this is another indescribable aspect. I’ll risk undermining the beauty of these conversations by using a cheap word: the ‘networking’ at Burning Man is unrivaled. I don’t mean in the for-dollar-profit sense; I mean in the intellectual and social sense. For example, the brother of one of my campmates is a string theory postdoc at Stanford. He came by our camp one evening, we were introduced, and then we met up again in the default world when I visited Stanford the following week. Burning Man is the type of place where you’ll start talking about MPEG/EFF/optogenetics/companyX/etc and then someone will say: “you know that the inventor/spokesperson/pioneer/founder/etc is at the next table over, right?”

Yup, Burning Man is just a bunch of hippies doing drugs in the desert. You shouldn’t come. You definitely wouldn’t enjoy it. No fun is had and no ideas are shared. Or in other words, Burning Man: where exceptionally capable people prepare themselves for the zombie apocalypse.

Check out my friend Peretz Partensky’s Flickr feed if you want to see more photos (and credit goes to him for the photos in this post.)

The singularity is not near: the human brain as a Boson sampler?

Ever since the movie Transcendence came out, it seems like the idea of the ‘technological singularity‘ has been in the air. Maybe it’s because I run in an unorthodox circle of deep thinkers, but over the past couple of months, I’ve been roped into three conversations related to this topic. The conversations usually end with some version of: “ah shucks, machine learning is developing at a fast rate, so we are all doomed. And have you seen those deep learning videos? Computers are learning to play 35-year-old video games?! Put this on an exponential trend and we are D00M3d!”


Computers are now learning the rules of this game, from visual input only, and then playing it optimally. Are we all doomed?

So what is the technological singularity? My personal translation is: are we on the verge of narcissistic flesh-eating robots stealing our lunch money while we commute to the ‘special school for slow sapiens’?

This is an especially hyperbolic view, and I want to be clear to distinguish ‘machine learning‘ from ‘artificial consciousness.’ The former seems poised for explosive growth, but the latter seems to require breakthroughs in our understanding of fundamental science. The two are often equated when defining the singularity, or even artificial intelligence, but without distinguishing them, people sometimes make the faulty association: machine_learning_progress => AI_progress => artificial_consciousness_progress.

I’m generally an optimistic person, but on this topic, I’m especially optimistic about humanity’s status as machine overlords for at least the next ~100 years. Why am I so optimistic? Quantum information (QI) theory has a secret weapon. And that secret weapon is obviously Scott Aaronson (and his brilliant friends+colleagues+sidekicks; especially Alex Arkhipov in this case.) Over the past few years they have done absolutely stunning work related to understanding the computational complexity of linear optics. They colloquially call this work Boson sampling.

What I’m about to say is probably extremely obvious to most people in the QI community, but I’ve had conversations with exquisitely well educated people–including a Nobel Laureate–and very few people outside of QI seem to be aware of Aaronson and Arkhipov’s (AA’s) results. Here’s a thought experiment: does a computer have all the hardware required to simulate the human brain? For a long time, many people thought yes, and they even created a more general hypothesis called the “extended Church-Turing thesis,” which asserts, roughly, that anything efficiently computable in the physical world is efficiently computable by a classical computer.

An interdisciplinary group of scientists has long speculated that quantum mechanics may stand as an obstruction towards this hypothesis. In particular, it’s believed that quantum computers would be able to efficiently solve some problems that are hard for a classical computer. These results led people, possibly Roger Penrose most notably, to speculate that consciousness may leverage these quantum effects. However, for many years, there was a huge gap between quantum experiments and the biology of the human brain. If I ever broached this topic at a dinner party, my biologist friends would retort: “but the brain is warm and wet, good luck managing decoherence.” And this seems to be a valid argument against the brain as a universal quantum computer. However, one of AA’s many breakthroughs is that they paved the way towards showing that a rather elementary physical system can gain speed-ups on certain classes of problems over classical computers. Maybe the human brain has a Boson sampling module?

More specifically, AA’s physical setup involves being able to: generate identical photons; send them through a network of beamsplitters, phase shifters and mirrors; and then count the number of photons in each mode through ‘nonadaptive’ measurements. The output amplitudes of this setup are given by permanents of matrices, and computing the permanent is known to be a hard problem classically (#P-hard, in fact). AA showed that if there exists a polynomial-time classical algorithm which samples from the same probability distribution, then the polynomial hierarchy would collapse to the third level (this last statement would be very bad for theoretical computer science and therefore for humans; ergo it’s probably not true.) I should also mention that when I learned the details of these results, during Scott’s lectures this past January at the Israeli Institute of Advanced Studies’ Winter School in Theoretical Physics, there was one step in the proof which was not rigorous. Namely, they rely on a conjecture in random matrix theory–but at least they have simulations indicating the conjecture should be true.
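To give a feel for why permanents are scary, here’s a minimal sketch (toy code of mine, not from AA’s paper) of the brute-force computation. The sum runs over all n! permutations, and unlike the determinant there is no sign structure to exploit:

```python
import itertools
import numpy as np

def permanent(A):
    """Brute-force permanent: a sum over all n! permutations.

    Unlike the determinant, no polynomial-time algorithm is known;
    exactly computing the permanent is #P-hard.
    """
    n = A.shape[0]
    return sum(
        np.prod([A[i, sigma[i]] for i in range(n)])
        for sigma in itertools.permutations(range(n))
    )

# In boson sampling, output amplitudes are proportional to permanents of
# submatrices of the interferometer's unitary, so even modest photon
# numbers quickly become classically expensive.
U = np.random.randn(6, 6)
print(permanent(U))
```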

Nitty gritty details aside, I find the possibility that this simple system gains a speed-up over classical computers compelling in the conversation about consciousness. Especially considering that finding permanents is actually useful for some combinatorics problems. When you combine this with Nature’s mischievous manner of finding ways to use the tools available to it, it seems plausible to me that the brain is using something like Boson sampling for at least one non-trivial task towards consciousness. If not Boson sampling, then maybe ‘Fermion smashing’ or ‘minimal surface finding’ or some other crackpottery words I’m coming up with on the fly. The point is, this result opens a can of worms.

AA’s results have breathed new life into my optimism about humanity’s ability to rule the lands and interwebs for at least the next few decades. Or until some brilliant computer scientist proves that human consciousness is in P. If nothing else, it’s a fun topic for wild dinner party speculation.

Ten reasons why black holes exist

I spent the past two weeks profoundly confused. I’ve been trying to get up to speed on this firewall business and I wanted to understand the picture below.

Much confuse. Such lost. [Is doge out of fashion now? I wouldn’t know because I’ve been trapped in a black hole!]

[Technical paragraph that you can skip.] I’ve been trying to understand why the picture on the left is correct, even though my intuition said the middle picture should be (intuition should never be trusted when thinking about quantum gravity.) The details of these pictures are technical and tangential to this post, but the brief explanation is that these pictures are called Penrose diagrams and they provide an intuitive way to think about the time dynamics of black holes. The two diagrams on the left represent the same physics as the schematic diagram on the right. I wanted to understand why, during Hawking radiation, the radial momenta of partner modes point in the same direction. John Preskill gave me the initial reasoning, that “radial momentum is not an isometry of Schwarzschild or Rindler geometries,” then I struggled for a few weeks to unpack this, and then Dan Harlow rescued me with some beautiful derivations that make it crystal clear that the picture on the left is indeed correct. I wanted to understand this because if the central picture were correct, then it would be hard to catch up to an infalling Hawking mode and therefore to verify firewall weirdness. The images above are simple enough, but maybe the image below will give you a sense for how much of an uphill battle this was!


This pretty much sums up my last two weeks (with the caveat that each of these scratch sheets is double sided!) Or in case you wanted to know what a theoretical physicist does all day.

After four or five hours of maxing out my brain, it starts to throb. For the past couple of weeks, after breaking my brain with firewalls each day, I’ve been switching gears and reading about black hole astronomy (real-life honest-to-goodness science with data!) Beyond wanting to know the experimental state-of-the-art related to the fancy math I’ve been thinking about, I also had the selfish motivation that I wanted to do some PR maintenance after Nature’s headline: “Stephen Hawking: ‘There are no black holes’.” I found this headline infuriating when Nature posted it back in January. When taken out of context, this quote makes it seem like Stephen Hawking was saying “hey guys, my bad, we’ve been completely wrong all this time. Turn off the telescopes.” When in reality what he was saying was more like: “hey guys, I think this really hard modern firewall paradox is telling us that we’ve misunderstood an extremely subtle detail and we need to make corrections on the order of a few Planck lengths, but it matters!” When you combine this sensationalism with Nature’s lofty credibility, the result is that even a few of my intelligent scientist peers have interpreted this as the non-existence of astrophysical black holes. Not to mention that it opens a crack for the news media to say things like: ‘if even Stephen Hawking has been wrong all this time, then how can we possibly trust the rest of this scientist lot, especially related to climate change?’ So brain throbbing + sensationalism => learning black hole astronomy + PR maintenance.

Before presenting the evidence, I should wave my hands about what we’re looking for. You have all heard about black holes. They are objects where so much mass gets concentrated in such a small volume that Einstein’s general theory of relativity predicts that once an object passes beyond a certain distance (called the event horizon), it will never be able to escape, and must proceed to the center of the black hole. Even photons cannot escape once they pass beyond the event horizon (except when you start taking quantum mechanics into account, but this is a small correction which we won’t focus on here.) All of our current telescopes collect photons, and as I just mentioned, when photons get too close to a black hole they fall in, so a detection with current technology can only be indirect. What are these indirect detections we have made? Well, general relativity makes numerous predictions about black holes. After we confirm enough of these predictions to a high enough precision, and without a viable alternative theory, we can safely conclude that we have detected black holes. This is similar to how many other areas of science work, like particle physics finding new particles by detecting their decay products.

Without further ado, I hope the following experimental evidence will convince you that black holes permeate our universe (and if not black holes, then something even weirder and more interesting!)

1. Sgr A*: There is overwhelming evidence that there is a supermassive black hole at the center of our galaxy, the Milky Way. As a quick note, most of the black holes we have detected fall into two categories: stellar mass, where they are only a few times more massive than our sun (5-30 solar masses), or supermassive, where the mass is about 10^5-10^{10} solar masses. Some of the most convincing evidence comes from the picture below. Andrea Ghez and others tracked the orbits of several stars around the center of the Milky Way for over twenty years. We have learned that these stars orbit around a point-like object with a mass on the order of 4\times 10^6 solar masses. Measurements in the radio spectrum show that there is a radio source located at the same location, which we call Sagittarius A* (Sgr A*). Sgr A* is moving at less than 1 km/s and has a mass of at least 10^5 solar masses. These bounds make it pretty clear that Sgr A* is the same object as what is at the focus of these orbits. A radio source is exactly what you would expect for this system, because as dust particles get pulled towards the black hole, they collide, friction causes them to heat up, and hot objects radiate photons. These arguments together make it pretty clear that Sgr A* is a supermassive black hole at the center of the Milky Way!


What are you looking at? This plot shows the orbits of a few stars around the center of our galaxy, tracked over 17 years!

2. Orbit of S2: During a recent talk that Andrea Ghez gave at Caltech, she said that S2 is “her favorite star.” S2 is a 15 solar mass star located near the black hole at the center of our galaxy. S2’s distance from this black hole is only about four times the distance from Neptune to the Sun (at the closest point in its orbit), and its orbital period is only about 15 years. The Keck telescopes on Mauna Kea have followed almost two complete orbits of S2. This piece of evidence is redundant given point 1, but it’s such an amazing technological feat that I couldn’t resist including it (see the back-of-the-envelope mass estimate after the image below.)


We’ve followed S2’s complete orbit. Is it orbiting around nothing? Something exotic that we have no idea about? Or much more likely around a black hole.
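As promised in point 2, here’s the back-of-the-envelope mass estimate, a sketch in Python using rough published values for S2’s orbit (my approximations, not numbers from this post):

```python
# Rough Keplerian mass estimate for Sgr A* from S2's orbit.
# Approximate published values (assumptions for illustration):
a_au = 1000.0  # S2's semi-major axis in astronomical units
T_yr = 16.0    # S2's orbital period in years

# Kepler's third law in solar-system units: M [solar masses] = a^3 / T^2
mass_solar = a_au**3 / T_yr**2
print(f"enclosed mass ~ {mass_solar:.1e} solar masses")  # ~3.9e6
```

Reassuringly, this lands right on the ~4\times 10^6 solar masses quoted in point 1.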

3. Numerical studies: astrophysicists have done numerous numerical simulations which provide a different flavor of test. Christian Ott at Caltech is pretty famous for these types of studies.


Image from a numerical simulation that Christian Ott and his student Evan O’Connor performed.

4. Cyg A: Cygnus A is a galaxy located in the Cygnus constellation. It is an exceptionally bright radio source. As I mentioned in point 1, as dust falls towards a black hole, friction causes it to heat up, and hot objects radiate photons. The image below demonstrates this. We are able to use the Eddington limit to convert luminosity measurements into estimates of the mass of Cyg A (see the formula after the image below.) Not necessarily in the case of Cyg A, but in the case of its cousins, Active Galactic Nuclei (AGNs) and Quasars, we are also able to put bounds on their sizes. These two things together show that there is a huge amount of mass trapped in a small volume, which is therefore probably a black hole (alternative models can usually be ruled out.)


There is a supermassive black hole at the center of this image which powers the rest of this action! The black hole is spinning and it emits relativistic jets along its axis of rotation. The blobs come from the jets colliding with the intergalactic medium.
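For the curious, the Eddington argument from point 4 is one line (a standard formula, quoted for completeness): for gas to keep falling in, gravity must beat radiation pressure, which caps the luminosity at

L_{Edd} = \frac{4\pi G M m_p c}{\sigma_T} \approx 1.3\times 10^{38} \left(\frac{M}{M_\odot}\right) \mathrm{erg/s},

so a measured luminosity immediately translates into a lower bound on the mass M.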

5. AGNs and Quasars: these are bright sources which are powered by supermassive black holes. Arguments similar to those used for Cyg A make us confident that they really are powered by black holes and not some alternative.

6. X-ray binaries: astronomers have detected ~20 stellar mass black holes by finding pairs consisting of a star and a black hole, where the star is close enough that the black hole is sucking in its mass. This leads to accretion, which leads to the emission of X-rays, which we detect on Earth. Cygnus X-1 is a famous example of this.

7. Water masers: Messier 106 is the quintessential example. Radio observations of water masers orbiting its nucleus trace out a thin, nearly Keplerian disk, pinning roughly 4\times 10^7 solar masses inside a region so small that a black hole is by far the most plausible candidate.

8. Gamma ray bursts: most gamma ray bursts occur when a rapidly spinning high mass star goes supernova (or hypernova) and leaves a neutron star or black hole in its wake. However, it is believed that some of the “long” duration gamma ray bursts are powered by accretion around rapidly spinning black holes.

That’s only eight reasons but I hope you’re convinced that black holes really exist! To round out this list to include ten things, here are two interesting open questions related to black holes:

1. Firewalls: I mentioned this paradox at the beginning of this post. This is the cutting edge of quantum gravity which is causing hundreds of physicists to pull their hair out!

2. Feedback: there is an extremely strong correlation between the size of a galaxy’s supermassive black hole and many of the other properties in the galaxy. This connection was only realized about a decade ago and trying to understand how the black hole (which has a mass much smaller than the total mass of the galaxy) affects galaxy formation is an active area of research in astrophysics.

In addition to everything mentioned above, I want to emphasize that most of these results are only from the past decade. Not to mention that we seem to be close to the dawn of gravitational wave astronomy which will allow us to probe black holes more directly. There are also exciting instruments that have recently come online, such as NuSTAR. In other words, this is an extremely exciting time to be thinking about black holes–both from observational and theoretical perspectives–we have data and a paradox! In conclusion, black holes exist. They really do. And let’s all make a pact to read critically in the 21st century!

Cool resource from Sky and Telescope.

[* I want to thank my buddy Kaes Van’t Hof for letting me crash on his couch in NYC last week, which is where I did most of this work. ** I also want to thank Dan Harlow for saving me months of confusion by sharing a draft of his notes from his course on firewalls at the Israeli Institute for Advanced Studies’ winter school in theoretical physics.]

Hacking nature: loopholes in the laws of physics

I spent my childhood hacking computers. When I was seven, my cousin showed up for Thanksgiving with a box filled with computer parts and we built my first computer. I got into competitive computer gaming around age eleven, and hacking was a natural extension of these activities. Then when I was sixteen, after doing poorly at a Counterstrike tournament, I decided that I should probably apply myself to other things. Needless to say, my parents were thrilled. So that’s when I bought my first computer (instead of building my own), which for deliberate but now antediluvian reasons was a Mac. A few years later, when I was taking CS 106 at Stanford, I was the first student in the course’s history whose reason for buying a Mac was “so that I couldn’t play computer games!” And now you know the story of my childhood.

The hacker mentality is quite different from the norm, and my childhood trained me to look at absolutist laws as opportunities to find loopholes (of course only when legal and socially responsible!) I’ve applied this same mentality as I’ve been doing physics, and I’d like to share with you some of the loopholes that I’ve gathered.


Scharnhorst effect enables light to travel faster than in vacuum (c=299,792,458 m/s): this is about the granddaddy of all laws, that nothing can travel faster than light in a vacuum! This effect is the most controversial on my list, because it hasn’t yet been experimentally verified, but it seems obvious with the right picture in mind. Most people’s mental model for light traveling in a vacuum is of little particles/waves called photons traveling through empty space. However, the vacuum is not empty! It is filled with pairs of virtual particles which momentarily flit into existence. Interactions with these virtual particles create a small amount of ‘resistance’ as photons zoom through the vacuum (photons get absorbed into virtual electron-positron pairs and then spit back out as photons, ad infinitum.) Thus, if we could somehow reduce the rate at which virtual particles are created, photons would interact less strongly with the vacuum and would be able to travel marginally faster than c. But suppressing virtual modes is exactly what happens between the plates in the Casimir effect: the experimentally verified fact that if you take two mirrors and put them ~10 nanometers apart, then they will attract each other, because more virtual modes fit outside the cavity than inside [low momenta virtual modes are excluded inside because, with \Delta x fixed at ~10 nm, the uncertainty principle \Delta x \cdot \Delta p \geq \hbar/2 forces \Delta p \gtrsim \hbar/(2 \cdot 10\,\mathrm{nm}).] So between Casimir plates, light should travel faster than c. This effect is extremely small, only predicting that light would travel one part in 10^{36} faster than c. However, it should remind us all to deeply question assumptions.

This first loophole used quantum effects to beat a relativistic bound, but the next few loopholes are purely quantum, and are mainly related to that most quantum of all limits, the Heisenberg uncertainty principle.

Smashing the standard quantum limit (SQL) with squeezed measurements: the Heisenberg uncertainty principle tells us that there is a fundamental tradeoff in nature: the more precise your information about an object’s position, the less precise your knowledge about its momentum. Or vice versa, or replace x and p with E and t, or any other pair of conjugate variables. This uncertainty principle is oftentimes written as \Delta x\cdot \Delta p \geq \hbar/2. For a variety of reasons, in the early days of quantum mechanics, it was hard enough to imagine creating a state with \Delta x \cdot \Delta p = \hbar/2, but there was some hope, because this is obtained in the ground state of a quantum harmonic oscillator. In that case, we have \Delta x = \Delta p = \sqrt{\hbar/2}. However, it was harder still to imagine creating states with \Delta x < \sqrt{\hbar/2}; such states are said to ‘go beyond the standard quantum limit’ (SQL). Over the intervening years, not only have we figured out how to go beyond the SQL using squeezed coherent states, but doing so is actually essential in some of our most exciting current experiments, like LIGO.
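To see why this isn’t a contradiction with Heisenberg (standard squeezed-state formulas, in the same units as above): a state squeezed by a parameter r has

\Delta x = e^{-r}\sqrt{\hbar/2}, \qquad \Delta p = e^{r}\sqrt{\hbar/2},

so the product \Delta x \cdot \Delta p still saturates the bound at \hbar/2; the uncertainty has simply been shuffled out of the variable you care about and into the one you don’t.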

LIGO is an incredibly ambitious experiment which has been talked about multiple times on this blog. It is trying to usher in a new era of astronomy–moving beyond detecting photons–to detecting gravitational waves, ripples in spacetime which are generated as exceptionally massive objects merge, such as when two black holes collide. The effects of these waves on our local spacetime as they travel past Earth are minuscule, on the scale of 10^{-18} m, which is about one thousand times shorter than the ‘diameter’ of a proton (and, amusingly, the same order of magnitude as \sqrt{\hbar/2} when evaluated numerically in SI units.) Remarkably, LIGO has exploited squeezed light to demonstrate sensitivities beyond the SQL. LIGO expects to start detecting gravitational waves on a frequent basis as its upgrades, dubbed ‘Advanced LIGO’, are completed over the next few years.

Compressed sensing beats Nyquist-Shannon: let’s play a game. Imagine I’m sending you a radio signal. How often do you need to measure the signal in order to be able to reconstruct it perfectly? The Nyquist-Shannon sampling theorem is a path-breaking result which Claude Shannon proved in 1949. If you measure at least twice as often as the highest frequency, then you are guaranteed perfect recovery of the signal. This incredibly profound result laid the foundation for modern communications. Also, it is important to realize that your signal can be much more general than radio waves, such as a stream of images. This theorem gives a sufficient condition for reconstruction, but is it necessary? Not even close. And it took us over 50 years to understand this in generality.

Compressed sensing was proposed between 2004 and 2006 by Emmanuel Candes, David Donoho and Terry Tao, with important early contributions by Justin Romberg. I should note that Candes and Romberg were at Caltech during this period. The Nyquist-Shannon theorem told us that with a small amount of knowledge (a bound on the highest frequency) we could reconstruct a signal perfectly by measuring at a rate only twice the highest frequency–instead of needing to measure continuously. Compressed sensing says that with one extra assumption–namely that only a sparse few of your frequencies are being used (say 10 out of 1000)–you can recover your signal with high accuracy using dramatically fewer measurements. And it turns out that this assumption is valid for a huge range of applications: enabling real-time MRIs using conventional technology or, more relevant to this blog, increasing our ability to distinguish quantum states via tomography.

Unlike the other topics in this blog post, I have never worked with compressed sensing, but my intuition goes like this: instead of measuring in the basis in which you are sparse (frequency, for example), measure in a different basis. With high probability, each of these measurements will pick up a little piece from each of the occupied modes. Then, to reconstruct your signal, you want to use the L0-“norm” to interpolate in such a way that you use the fewest frequency components possible. Computing the L0-“norm” is not efficient, so one of the major breakthroughs of compressed sensing was showing that, with high probability, minimizing the L1-norm approximates the L0 solution, and all of this can be done using a highly efficient linear program (see the toy demonstration below.) However, I really shouldn’t be speculating, because I’ve never invested much time into mastering this new tool, and I’m friends with a couple of the quantum state tomography authors, so maybe they’ll chime in?
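Here’s a minimal sketch of that L1 story in code (a toy I wrote for this post, with made-up dimensions; real compressed sensing is far more careful about measurement design and noise). It recovers a 5-sparse signal of length 128 from only 40 random measurements by solving the basis pursuit linear program:

```python
# Toy basis pursuit: min ||x||_1 subject to Ax = b, solved as a linear
# program over the split x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 40, 5                 # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n))      # random (incoherent) measurement matrix
b = A @ x_true                       # only m << n linear measurements

c = np.ones(2 * n)                   # objective: sum(u) + sum(v) = ||x||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n))
x_rec = res.x[:n] - res.x[n:]
print("max reconstruction error:", np.abs(x_rec - x_true).max())
```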


Brahms is a cool dude. Brahms as a height map where cliffs=Gibbs phenomenon=oh no! First three levels of Brahms as a Haar wavelet.

Wavelets as the mother of all bases: I previously wrote a post about the importance of choosing a convenient basis. Imagine you have an image with a bunch of sharp contrasts, such as the outline of a person, or a horizon, or a table–basically anything. How do you store it efficiently? Due to the Gibbs phenomenon, the Fourier basis is pretty awful for these applications. Here’s another motivating problem: imagine someone plays one note on an instrument. The sound is localized in both time and frequency. The Fourier basis is also pretty awful at storing/detecting this. Wavelets to the rescue! The theory of wavelets uses some beautiful math to solve the longstanding problem of finding a basis which is localized in both position and momentum space (or very close to it.) Wavelets have profound applications; some of my favorites include: modern image compression (JPEG 2000 onwards) is based on wavelets; Ingrid Daubechies and her colleagues used wavelets to detect forged paintings; recovering previously unrecoverable recordings of Brahms at the piano (I heard about this from Barry Simon, of Reed-Simon fame, who is currently teaching his last class ever); and even the FBI uses wavelets to compress images of fingerprints, obtaining a compression ratio of 20:1.
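To see the localization in action, here’s a minimal sketch of one level of the Haar wavelet transform (the simplest wavelet; toy code of mine, not production JPEG 2000). A sharp edge produces detail coefficients that are zero everywhere except right at the edge, which is exactly why wavelets store contrasts so cheaply:

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar transform: pairwise averages and differences."""
    pairs = signal.reshape(-1, 2)
    coarse = pairs.sum(axis=1) / np.sqrt(2)            # low-pass averages
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # high-pass differences
    return coarse, detail

# A step edge: 7 zeros followed by 9 ones.
x = np.concatenate([np.zeros(7), np.ones(9)])
coarse, detail = haar_step(x)
print(detail)  # nonzero in exactly one slot: the pair straddling the edge
```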

Postselection enables quantum cloning: the no-cloning theorem is well known in the field of quantum information. It says that you cannot find a machine (unitary operation U) which takes an arbitrary input state |\psi\rangle, and a known state |0\rangle, such that the machine maps |\psi\rangle \otimes |0\rangle to |\psi\rangle \otimes |\psi\rangle, thereby cloning |\psi \rangle. This is very easy to prove using the linearity of quantum mechanics. However, there are loopholes. One of the most trivial loopholes is realizing that one can take the state |\psi\rangle and perform something called unambiguous state discrimination, which either spits out exactly which state |\psi \rangle is, with some probability, or otherwise spits out “I don’t know which state.” You can postselect on the unambiguous state discrimination succeeding and then freely prepare as many copies of the identified state as you like. Peter Shor has a comment on physics stackexchange describing this. Seth Lloyd and John Preskill outlined a less trivial version of this in their recent paper which tries to circumvent firewalls by using postselected quantum teleportation.
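Since the linearity proof really is two lines, here it is (the standard argument, included for completeness). Suppose a single unitary U cloned two different states:

U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle \quad \text{and} \quad U(|\phi\rangle \otimes |0\rangle) = |\phi\rangle \otimes |\phi\rangle.

Unitaries preserve inner products, so comparing the two equations gives \langle\psi|\phi\rangle = \langle\psi|\phi\rangle^2, forcing \langle\psi|\phi\rangle to be 0 or 1. One machine can therefore only clone states that are identical or orthogonal–never an arbitrary |\psi\rangle.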

In this blog post, I’ve only described a tiny fraction of the quantum loopholes that have been discovered. If I had more space/time, the next example I would describe is beating classical correlations with quantum entanglement in order to win at CHSH games. I would also describe weak measurements and some of the peculiar things they lead to. Beyond that, I would probably refer you to Yakir Aharonov’s amazingly fun book about quantum paradoxes.

After reading this, I hope that the next time you encounter an inviolable law of nature, you’ll apply the hacker mentality and attempt to strip it down to its essence, isolate assumptions, and potentially find a loophole. But while you’re doing this, remember that you should never argue with your mother, or with mathematics!

Defending against high-frequency attacks

It was the summer of 2008. I was 22 years old, and it was my second week working in the crude oil and natural gas options pit at the New York Mercantile Exchange (NYMEX.) My head was throbbing after two consecutive weeks of disorientation. It was like being born into a new world, but without the neuroplasticity of a young human. And then the crowd erupted. “Yeeeehawwww. YeEEEeeHaaaWWWWW. Go get ‘em cowboy.”

It seemed that everyone on the sprawling trading floor had started playing Wild Wild West, and I had no idea why. After at least thirty seconds, the hollers started to move across the trading floor. They moved away 100 meters or so and then doubled back towards me. Their target, a young aspiring trader, walked a few more meters before he finally got it, and I’m sure he learned a life lesson. Don’t be the biggest jerk in a room filled with traders, and especially, never wear triple-popped pastel-colored Lacoste shirts. He had been “spurred.”

In other words, someone had made paper spurs out of trading receipts and taped them to his shoes. Go get ‘em cowboy.

I was one academic quarter away from finishing a master’s degree in statistics at Stanford University and I had accepted a full time job working in the algorithmic trading group at DRW Trading. I was doing a summer internship before finishing my degree, and after three months of working in the algorithmic trading group in Chicago, I had volunteered to work at the NYMEX. Most ‘algo’ traders didn’t want this job, because it was far-removed from our mental mathematical monasteries, but I knew I would learn a tremendous amount, so I jumped at the opportunity. And by learn, I mean, get ripped calves and triceps, because my job was to stand in place for seven straight hours updating our mathematical models on a bulky tablet PC as trades occurred.

I have no vested interests in the world of high-frequency trading (HFT). I’m currently a PhD student in the quantum information group at Caltech and I have no intentions of returning to finance. I found the work enjoyable, but not as thrilling as thinking about the beginning of the universe (what else is?) However, I do feel like the current discussion about HFT is lop-sided and I’m hoping that I can broaden the perspective by telling a few short stories.

What are the main attacks against HFT? Three of them include the evilness of: front-running markets, making money out of nothing, and instability. It’s easy to point to extreme examples of algorithmic traders abusing markets, and they regularly do, but my argument is that HFT has simply computerized age-old tactics. In this process, these tactics have become more benign and markets more stable.

Front-running markets: large oil producing nations, such as Mexico, often want to hedge their exposure to changing market prices. They do this by purchasing options. This allows them to lock in a minimum sale price, for a fee of a few dollars per barrel. During my time at the NYMEX, I distinctly remember a broker shouting into the pit: “what’s the price on DEC9 puts.” A trader doesn’t want to give away whether they want to buy or sell, because if the other traders know, then they can artificially move the price. In this particular case, this broker was known to sometimes implement parts of Mexico’s oil hedge. The other traders in the pit suspected this was a trade for Mexico because of his anxious tone, some recent geopolitical news, and the expiration date of these options.

Some confident traders took a risk and faded the market. They ended up making between $1 million and $2 million from these trades, relative to what the fair price was at that moment. I say relative to the fair price because Mexico ultimately received the better end of this trade. The price of oil dropped in 2009, and Mexico exercised its options, enabling it to sell its oil at a higher than market price. Mexico spent $1.5 billion to hedge its oil exposure in 2009.

This was an example of humans anticipating the direction of a trade and capturing millions of dollars in profit as a result. It really is profit as long as the traders can redistribute their exposure at the ‘fair’ market price before markets move too far. The analogous strategy in HFT is called “front-running the market,” which was highlighted in the New York Times’ recent article “The wolf hunters of Wall Street.” The HFT version involves analyzing the prices on dozens of exchanges simultaneously, and once an order is published in the order book of one exchange, using this demand to adjust orders on the other exchanges. This needs to be done within a few microseconds in order to be successful. It is the computerized version of anticipating demand and fading prices accordingly. These tactics as I described them are in a grey area, but variants of them rapidly become illegal.

Making money from nothing: arbitrage opportunities have existed for as long as humans have been trading. I’m sure an ancient trader received quite the rush when he realized for the first time that he could buy gold in one marketplace and then sell it in another, for a profit. This is only worth the trader’s efforts if he makes a profit after all expenses have been taken into consideration. One of the simplest examples in modern terms is called triangle arbitrage, and it usually involves three pairs of currencies. Currency pairs are ratios; such as USD/AUD, which tells you how many Australian dollars you receive for one US dollar. Imagine that there is a moment in time when the product of ratios \frac{USD}{AUD}\frac{AUD}{CAD}\frac{CAD}{USD} is 1.01. Then a trader can take her USD, buy AUD, then use her AUD to buy CAD, and then use her CAD to buy USD. As long as the underlying prices didn’t change while she carried out these three trades, she would capture one cent of profit per dollar traded.
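In code, the whole detection step is nearly a one-liner (a sketch; all three rates below are hypothetical numbers I made up, and real systems also account for fees, order sizes and slippage):

```python
# Toy triangle-arbitrage check. All three rates are hypothetical.
usd_aud = 1.5200   # AUD received per USD
aud_cad = 0.9100   # CAD received per AUD
cad_usd = 0.7305   # USD received per CAD

cycle = usd_aud * aud_cad * cad_usd  # round-trip multiplier on each USD
if cycle > 1.0:
    # Routing USD -> AUD -> CAD -> USD returns `cycle` dollars per dollar,
    # so the profit is (cycle - 1) per dollar, before fees and slippage.
    print(f"arbitrage: {100 * (cycle - 1):.2f} cents per dollar traded")
else:
    print("no opportunity: the cycle multiplier is at or below 1")
```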

After a few trades like this, the prices will equilibrate and the ratio will be restored to one. This is an example of “making money out of nothing.” Clever people have been trading on arbitrage since ancient times and it is a fundamental source of liquidity. It guarantees that the price you pay in Sydney is the same as the price you pay in New York. It also means that if you’re willing to overpay by a penny per share, then you’re guaranteed a computer will find this opportunity and your order will be filled immediately. The main difference now is that once a computer has been programmed to look for a certain type of arbitrage, the human mind can no longer compete. This is one of the original arenas where the term “high-frequency” was used. Whoever has the fastest machines is the one who will capture the profit.

Instability: I believe that the arguments against HFT of this type have the most credibility. The concern here is that exceptional leverage creates opportunity for catastrophe. Imaginations ran wild after the Flash Crash of 2010, and even if imaginations outstripped reality, we learned much about the potential instabilities of HFT. A few questions were posed, and we are still debating the answers. What happens if market makers stop trading in unison? What happens if a programming error leads to billions of dollars in mistaken trades? Do feedback loops between algo strategies lead to artificial prices? These are reasonable questions, which are grounded in examples, and future regulation coupled with monitoring should add stability where it’s feasible.

The culture in wealth driven industries today is appalling. However, it’s no worse in HFT than in finance more broadly and many other industries. It’s important that we dissociate our disgust in a broad culture of greed from debates about the merit of HFT. Black boxes are easy targets for blame because they don’t defend themselves. But that doesn’t mean they aren’t useful when implemented properly.

Are we better off with HFT? I’d argue a resounding yes. The primary function of markets is to allocate capital efficiently. Three of the strongest measures of the efficacy of markets lie in “bid-ask” spreads, volume and volatility. If spreads are low and volume is high, then participants are essentially guaranteed access to capital at as close to the “fair price” as possible. There is a huge academic literature on how HFT has impacted spreads and volume, and the majority of it indicates that spreads have lowered and volume has increased. However, as alluded to above, all of these points are subtle–but in my opinion, it’s clear that HFT has increased the efficiency of markets (it turns out that computers can sometimes be helpful.) Estimates of HFT’s impact on volatility haven’t been nearly as favorable, but I’d also argue those studies are more debatable. Basically, correlation is not causation, and it just so happens that our rapidly developing world is probably more volatile than the pre-HFT world of past millennia.

We could regulate away HFT, but we wouldn’t be able to get rid of the underlying problems people point to unless we got rid of markets altogether. As with any new industry, there are aspects of HFT that should be better monitored and regulated, but we should have level-heads and diverse data points as we continue this discussion. As with most important problems, I believe the ultimate solution here lies in educating the public. Or in other words, this is my plug for Python classes for all children!!

I promise that I’ll repent by writing something that involves actual quantum things within the next two weeks!

Reporting from the ‘Frontiers of Quantum Information Science’

What am I referring to with this title? It is similar to the name of this blog–but that’s not where this particular title comes from–although there is a common denominator. Frontiers of Quantum Information Science was the theme for the 31st Jerusalem winter school in theoretical physics, which takes place annually at the Israeli Institute for Advanced Studies, located on the Givat Ram campus of the Hebrew University of Jerusalem. The school took place from December 30, 2013 through January 9, 2014, but some of the attendees are still trickling back to their home institutions. The common denominator is that our very own John Preskill was the director of this school, co-directed by Michael Ben-Or and Patrick Hayden. John mentioned during a previous post and reiterated during his opening remarks that this is the first time the IIAS has chosen quantum information to be the topic for its prestigious advanced school–another sign of quantum information’s emergence as an important sub-field of physics. In this blog post, I’m going to do my best to recount these festivities while John protects his home from forest fires, prepares a talk for the Simons Institute’s workshop on Hamiltonian complexity, teaches his quantum information course and celebrates birthday 60+1.

The school was mainly targeted at physicists, but it was diversely represented. Proof of the value of this diversity came in an interaction between a computer scientist and a physicist, which led to one of the school’s most memorable moments. Both of my most memorable moments started with the talent show (I was surprised that so many talents were on display at a physics conference…) Anyways, towards the end of the show, Mateus Araújo Santos, a PhD student in Vienna, entered the stage and mentioned that he could channel “the ghost of Feynman” to serve as an oracle for NP-complete decision problems. After making this claim, people obviously turned to Scott Aaronson, hoping that he’d be able to break the oracle. However, in order for this to happen, we had to wait until Scott’s third lecture about linear optics and boson sampling the next day. You can watch Scott bombard the oracle with decision problems from 1:00-2:15 during the video from his third lecture.


Scott Aaronson grilling the oracle with a string of NP-complete decision problems! From 1:00-2:15 during this video.

The other most memorable moment was when John briefly danced Gangnam style during Soonwon Choi‘s talent show performance. Unfortunately, I thought I had this on video, but the video didn’t record. If anyone has video evidence of this, then please share!