About shaunmaguire

I have a complicated existence. I'm a partner at GV (formerly Google Ventures) but also finishing my PhD at Caltech. It's astonishing that they gave the keys to this blog to hooligans like myself.

The 10 biggest breakthroughs in physics over the past 25 years, according to us.

Making your way to the cutting edge of any field is a daunting challenge, especially when the edge of the field is expanding, and harder still when the rate of expansion is accelerating. John recently helped Physics World create a special 25th anniversary issue in which they identified the five biggest breakthroughs in physics over the past 25 years, along with the five biggest open questions. In pure John fashion, at his group meeting on Wednesday night, he made us work before revealing the answers. The photo below shows our guesses, where the X’s denote Physics World’s selections. This is the blog post I wish I had when I was a fifteen-year-old aspiring physicist–an attempt to survey the field and provide a tiny toehold on its edge (from my biased, incredibly naive, and still developing perspective).


The IQI’s quantum information-biased guesses of Physics World’s 5 biggest breakthroughs over the past 25 years, and 5 biggest open problems. X’s denote Physics World’s selections. Somehow we ended up with 10 selections in each category…

The biggest breakthroughs of the past 25 years:

*Neutrino Mass: surprisingly, neutrinos have a nonzero mass, which provides a window into particle physics beyond the standard model. The standard model has been getting a lot of attention recently. This is well deserved in my opinion, considering that the vast majority of its predictions have come true, most of which were made by the end of the 1960s. Last year’s discovery of the Higgs boson is the feather in its cap. However, it’s boring when things work too perfectly, because then we don’t know what path to continue on. That’s where the neutrino mass comes in. First, what are neutrinos? Neutrinos are fundamental particles with the special property that they barely interact with other particles. There are four fundamental forces in nature: electromagnetism, gravity, strong (holds quarks together to create neutrons and protons), and weak (responsible for radioactivity and nuclear fusion). We can design experiments which allow us to observe neutrinos. We have learned that they are electrically neutral, so they aren’t affected by electromagnetism. They are barely affected by the strong force, if at all. They have an extremely small mass, so gravity acts on them only subtly. The main way in which they interact with their environment is through the weak force. Here’s the amazing thing: only really clunky versions of the standard model can allow for a nonzero neutrino mass! Hence, when a small but nonzero mass was experimentally established in 1998, we gained one of our first toeholds into particle physics beyond the standard model. This is particularly important today, because to the best of my knowledge, the LHC hasn’t yet discovered any other new physics beyond the standard model. The mechanism behind the neutrino mass is not yet understood. Moreover, neutrinos have a bunch of other bizarre properties which we understand empirically, but whose theoretical origins we do not. The strangest of these goes by the name of neutrino oscillations. In one sentence: there are three different kinds of neutrinos, and they can spontaneously transmute themselves from one type to another. This happens because physics is formulated in the language of mathematics, and the math says that the eigenstates corresponding to ‘flavors’ are not the same as the eigenstates corresponding to ‘mass.’ Words, words, words. Maybe the Caltech particle theory people should have a blog?
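To put one equation behind those words, here is the standard two-flavor simplification of neutrino oscillations (the real story has three flavors, but the mechanism is the same). A neutrino produced with a definite flavor is a superposition of two mass eigenstates,

|\nu_e\rangle = \cos\theta\,|\nu_1\rangle + \sin\theta\,|\nu_2\rangle, \qquad |\nu_\mu\rangle = -\sin\theta\,|\nu_1\rangle + \cos\theta\,|\nu_2\rangle,

and because the two mass eigenstates accumulate phase at different rates in flight, a muon neutrino of energy E has probability

P(\nu_\mu \to \nu_e) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2\, L}{4E}\right)

of being detected as an electron neutrino after traveling a distance L. If the masses were identical (in particular, if both were zero), then \Delta m^2 = 0 and this probability vanishes; observing oscillations is precisely what forces the neutrino mass to be nonzero.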

Shor’s Algorithm: a quantum computer can factor N=1433301577 into 37811*37907 exponentially faster than any known classical algorithm. This result from Peter Shor in 1994 is near and dear to our quantum hearts. It opened the floodgates, showing that there are tasks a quantum computer could perform exponentially faster than a classical computer, and therefore that we should get BIG$$$ from the world over in order to advance our field!! The task here is factoring large numbers into their prime factors, the difficulty of which has been the basis for many cryptographic protocols. In one sentence, Shor’s algorithm achieves this exponential speed-up because there is a step in the factoring algorithm (period finding) which a quantum computer can perform efficiently via the quantum Fourier transform.
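To make the role of period finding concrete, here is a minimal Python sketch of the classical skeleton of Shor’s reduction. Everything quantum is hidden inside find_period, which I’ve written as a slow brute-force loop; that loop is exactly the step the quantum Fourier transform replaces. (The function names and the toy N=21 are my own choices for illustration; try running it on 1433301577 only if you are very patient.)

```python
from math import gcd
from random import randrange

def find_period(a, N):
    """Brute-force the order r of a modulo N, i.e. the smallest r with a^r = 1 (mod N).
    This is the step a quantum computer performs efficiently via the quantum Fourier transform."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N):
    """Classical skeleton of Shor's algorithm: reduce factoring to period finding."""
    while True:
        a = randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g, N // g                     # lucky: a already shares a factor with N
        r = find_period(a, N)
        if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
            continue                             # unlucky choice of a; try another
        p = gcd(pow(a, r // 2, N) - 1, N)
        if 1 < p < N:
            return p, N // p

print(shor_factor(21))  # e.g. (3, 7); the same reduction applies to N = 1433301577
```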
Continue reading

On the importance of choosing a convenient basis

The benefits of Caltech’s proximity to Hollywood don’t usually trickle down to measly grad students like myself, except on the rare occasions when we befriend the industry’s technical contingent. One of my friends is a computer animator for Disney, which means that she designs algorithms enabling luxuriously flowing hair, trees with realistic lighting, or feathers with gorgeous texture, for movies like Wreck-it Ralph. Empowering computers to efficiently render scenes with these complicated details is trickier than you’d think, and it requires sophisticated new mathematics. Fascinating conversations are one of the perks of having friends like this. But so are free trips to Disneyland! A couple nights ago, while standing in line for The Tower of Terror, I asked her what she’s currently working on. She’s very smart, as evidenced by her BS/MS in Computer Science/Mathematics from MIT, but she asked me if I “know about spherical harmonics.” Asking this of an aspiring quantum mechanic is like asking an auto mechanic if they know how to use a monkey wrench. She didn’t know what she was getting herself into!


IQIM, LIGO, Disney

Along with this spherical harmonics conversation, I had a few other incidents last week that hammered home the importance of choosing a convenient basis when solving a scientific problem. First, my girlfriend works on LIGO and she’s currently writing her thesis. LIGO is a huge collaboration involving hundreds of scientists, and naturally, nobody there knows the detailed inner workings of every subsystem. However, when it comes to writing the overview section of one’s thesis, you need to at least make a good faith attempt to understand the whole behemoth. Anyways, my girlfriend recently asked me whether I know how the wavelet transform works. This is another example of a convenient basis, one that is particularly suited for analyzing abrupt changes, such as detecting the gravitational waves that would be emitted during the final few seconds of two black holes merging (ring-down). Finally, for the past couple weeks, I’ve been trying to understand entanglement entropy in quantum field theories. Most of the calculations that can be carried out explicitly are for the special subclass of quantum field theories called “conformal field theories,” which in two dimensions have a very convenient ‘basis’, the Virasoro algebra.
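As a toy illustration of why wavelets love abrupt changes, here is a minimal numpy sketch of one level of the Haar wavelet transform (my own simplified example, nothing resembling LIGO’s actual pipelines). The ‘detail’ coefficients are essentially local differences, so they are zero wherever the signal is flat and light up right where it jumps.

```python
import numpy as np

def haar_level(signal):
    """One level of the Haar wavelet transform: pairwise averages ('smooth')
    and pairwise differences ('detail')."""
    s = np.asarray(signal, dtype=float)
    smooth = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return smooth, detail

# A toy signal that is flat, then jumps abruptly partway through
signal = np.concatenate([np.zeros(7), np.ones(9)])
smooth, detail = haar_level(signal)
print(detail)  # only the coefficient straddling the jump is nonzero
```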

So why does a Disney animator care about spherical harmonics? It turns out that every frame that goes into one of Disney’s movies needs to be digitally rendered using a powerful computing cluster. The animated film industry has traded the painstaking process of hand-animators drawing every single frame for the almost equally time-consuming process of computer clusters generating every frame. It doesn’t look like strong AI will be available in our immediate future, and in the meantime, humans are still much better than computers at detecting patterns and making intuitive judgements about the ‘physical correctness’ of an image. One of the primary advantages of computer animation is that an animator shouldn’t need to shade in every pixel of every frame — some of this burden should fall on computers. Let’s imagine a thought experiment. An animator wants to get the lighting correct for a nighttime indoor shot. They should be able to simply place the moon somewhere out of the shot, so that its glow can penetrate through the windows. They should also be able to choose from a drop-down menu and tell the computer that a hand-drawn lightbulb is a ‘light source.’ The computer should then figure out how to make all of the shadows and brightness appear physically correct. Another example of a hard problem is that an animator should be able to draw a character, then tell the computer that the hair they drew is ‘hair’, so that as the character moves through scenes, the physics of the hair makes sense. Programming computers to do these things autonomously is harder than it sounds.

In the lighting example, imagine you want to get the lighting correct in a forest shot with complicated pine trees and leaf structures. The computer would need to do the ray-tracing for all of the photons emanating from the different light sources, and then the second-order effects as these photons reflect, and then third-order effects, etc. It’s a tall order to make the scene look accurate to the human eyeball/brain. Instead of doing all of this ray-tracing, it’s helpful to choose a convenient basis in order to dramatically speed up the processing. Instead of the complicated forest example, let’s imagine you are working with a tree from Super Mario Bros. Imagine drawing a sphere somewhere in the middle of this tree and then defining a ‘height function’, which outputs the ‘elevation’ of the tree foliage over each point on the sphere. I tried to use suggestive language, so that you’d draw an analogy to thinking of Earth’s ‘height function’ as the elevation of mountains and the depths of trenches over the sphere, with sea level as a baseline. An example of how you could digitize this problem for a tree or for the earth is by breaking up the sphere into a certain number of pixels, maybe one per square meter for the earth (5*10^14 square meters gives approximately 2^49 pixels), and then associating an integer height value between -2^15 and 2^15 meters to each pixel. This would effectively digitize the height map of the earth, in this case keeping track of the elevation to approximately the meter level. But this leaves us with a huge amount of information that we need to store, and then process. We’d have to keep track of a 16-bit height value for each pixel, giving us approximately 2^49*2^4=2^53 bits, or about a petabyte, that we’d have to keep track of. And this is for an easy static problem with only meter resolution! We can store this information much more efficiently using spherical harmonics.


There are many ways to think about spherical harmonics. Basically, they’re functions which map points on the sphere to real numbers Y_l^m: (\theta,\phi) \mapsto Y_l^m(\theta,\phi)\in\mathbb{R}, such that they satisfy a few special properties. They are orthogonal, meaning that if you multiply two different spherical harmonics together and then integrate over the sphere, you get zero; if instead you square one of the functions and integrate over the sphere, you get a finite, nonzero value, so each one can be normalized. They also span the space of all height functions that one could define over the sphere. This means that for a planet with an arbitrarily complicated topography, you would be able to find some weighted combination of different spherical harmonics which perfectly describes that planet’s topography. These are the key properties which make a set of functions a basis: they span and are orthogonal (this is only a heuristic). There is also a natural way to think about the light that hits the tree. We can use the same sphere and simply calculate the light rays as they would hit the ideal sphere. With these two different ‘height functions’, it’s easy to calculate the shadows and brightness inside the tree. You simply convolve the two functions, which in the spherical harmonic basis reduces to simple arithmetic on the coefficients and is therefore fast on a computer. It also means that if the breeze slightly changes the shape of the tree, or if the sun moves a little bit, then it’s very easy to update the shading. Implicit in what I just said is that spherical harmonics also allow us to store this height map efficiently. I haven’t calculated this on a computer, but it doesn’t seem totally crazy to think that we’d be able to store the topography of the earth to a reasonable accuracy with 100 nonzero coefficients of the spherical harmonics at 64 bits of precision: 2^7*2^6=2^13 bits << 2^53 bits. Where does this cost savings come from? It comes from the fact that the spherical harmonics are a convenient basis, which naturally encodes the types of correlations we see in Earth’s topography — if you’re standing at an elevation of 2000m, the area within ten meters is probably at a similar elevation. Cliffs are what break this basis — but they are exactly what the wavelet basis was designed to handle.
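Here’s a minimal numpy/scipy sketch of the compression idea (my own toy example, nothing to do with Disney’s actual renderer): sample a smooth ‘height function’ on a grid over the sphere, project it onto the low-degree spherical harmonics by numerical integration, and check that a few dozen coefficients reconstruct thousands of grid points.

```python
import numpy as np
from scipy.special import sph_harm  # sph_harm(m, l, theta, phi): theta = azimuth, phi = colatitude

# Midpoint grid over the sphere (coarse, purely for illustration)
n_phi, n_theta = 64, 128
phi = (np.arange(n_phi) + 0.5) * np.pi / n_phi          # colatitude in (0, pi)
theta = np.arange(n_theta) * 2 * np.pi / n_theta        # azimuth in [0, 2*pi)
PHI, THETA = np.meshgrid(phi, theta, indexing="ij")
dA = np.sin(PHI) * (np.pi / n_phi) * (2 * np.pi / n_theta)   # area element for the quadrature

# A smooth toy 'height function' (stand-in for foliage or topography)
height = np.cos(PHI) ** 2 + 0.3 * np.sin(PHI) * np.cos(THETA)

# Project onto spherical harmonics up to degree l_max: these coefficients ARE the compressed map
l_max = 8
coeffs = {(l, m): np.sum(height * np.conj(sph_harm(m, l, THETA, PHI)) * dA)
          for l in range(l_max + 1) for m in range(-l, l + 1)}

# Reconstruct the height map from the handful of stored coefficients
recon = sum(c * sph_harm(m, l, THETA, PHI) for (l, m), c in coeffs.items()).real
print("grid points:", height.size, "  coefficients kept:", len(coeffs))
print("max reconstruction error:", float(np.abs(recon - height).max()))
```

For a real topography you would keep more coefficients, but the punchline is the same: the smoother the function, the faster the coefficients decay with l, and the fewer of them you need to store.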

I’ve only described a couple of bases in this post, and I’ve neglected to mention some of the most famous examples! These include the Fourier basis, which is tailor-made for encoding periodic signals, such as music and radio waves. I also have not gone into any detail about the Virasoro algebra, which I mentioned at the beginning of this post and which I’ve been using heavily for the past few weeks. For the sake of diversity, I’ll spend a few sentences whetting your appetite. Complex analysis is primarily the study of analytic functions. In two dimensions, these analytic functions “preserve angles.” This means that if you have two curves which intersect at a point with angle \theta, then after using an analytic function to map these curves to their image, also in the complex plane, the angle between the curves will still be \theta. An especially convenient basis for the analytic functions in two dimensions (\{f: \mathbb{C} \to \mathbb{C}\}, where f(z) = \sum_{n=0}^{\infty} a_nz^n) is given by the set of functions \{l_n = -z^{n+1}\partial_z\}. As always, I’m not being exactly precise, but this is a ‘basis’ because we can encode all the information describing an infinitesimal two-dimensional angle-preserving map using these elements. It turns out to have incredibly special properties, including that its quantum cousin yields something called the “central charge,” which has deep ramifications in physics, such as being related to the c-theorem. Conformal field theories are fascinating because they describe the physics of phase transitions. Having a convenient basis in two dimensions is a large part of why we’ve been able to make progress in our understanding of two-dimensional phase transitions (more important is that the 2d conformal symmetry group is infinite-dimensional, but that’s outside the scope of this post). Convenient bases are also important for detecting gravitational waves, making incredible movies and striking up nerdy conversations in long lines at Disneyland!
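For the algebraically inclined, here is how the “quantum cousin” comment becomes concrete (a textbook statement, not anything original to this post). The generators l_n above close into the Witt algebra under commutation, and in the quantum theory the same relations acquire an extra term whose coefficient is the central charge c:

[l_m, l_n] = (m-n)\,l_{m+n}, \qquad [L_m, L_n] = (m-n)\,L_{m+n} + \frac{c}{12}\,m\,(m^2-1)\,\delta_{m+n,0}.

Roughly speaking, Zamolodchikov’s c-theorem says that this single number can only decrease as you flow from short-distance physics to long-distance physics, which is one reason the central charge has such deep ramifications.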

 

QIP 2013 from the perspective of a greenhorn (grad student)


Most of Caltech’s contingent during QIP’s banquet. Not pictured: sword dancers, jug balancers and Gorjan.

A couple of weeks ago, about half of IQI (now part of IQIM) flew from Pasadena to Beijing in order to attend QIP 2013, the 16th annual workshop on quantum information processing. I wish I could report that the quantum information community solved the world’s problems over the past year, or at least built a 2^10 qubit universal quantum computer, but unfortunately, we’re not quite there yet. As a substitute, I’ll mention a few of the talks that I particularly enjoyed and the really hard open problems that they left us with.

The talks mainly bifurcated between computer science and physics emphases. I was better prepared to understand the talks emphasizing the latter, so my comments will mainly describe those talks. Patrick Hayden’s talk, “summoning information in spacetime: or where and when can a qubit be?”, was one of my favorites. To the extent that I understood things, the goal of this work is to better understand how quantum information can propagate forward in time. If a qubit were created at spacetime location S and then forced to remain localized, the no-cloning theorem would put strict bounds on how it could move forward in time. The qubit would follow a worldline and that would be the end of things. However, qubits don’t need to remain localized, as teleportation experiments have pretty clearly demonstrated, and it therefore seems like qubits can propagate into the future in more subtle ways–ways that at face value appear to violate the no-cloning theorem. Patrick and Alex May, the undergraduate he worked with on this project, came up with a pictorial approach to better understand these subtleties. The really hard open problems that these ideas could potentially be applied to include firewalls, position-based quantum cryptography, and paradoxes concerning the apparent no-cloning violations near black hole event horizons.
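Since the no-cloning theorem does the heavy lifting in that argument, here is the textbook one-line version (not specific to Patrick’s talk). Suppose a single unitary U could copy arbitrary states, U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle. Writing the same equation for a second state |\phi\rangle and taking the inner product of the two, using the fact that unitaries preserve inner products, gives

\langle\psi|\phi\rangle = \langle\psi|\phi\rangle^2,

so \langle\psi|\phi\rangle must equal 0 or 1: no machine can clone two distinct, non-orthogonal states. This is the constraint that makes “where and when can a qubit be?” such a delicate question.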
Continue reading

Science Magazine’s Breakthrough of 2012

A few nights ago, I attended Dr. Harvey B. Newman’s public lecture at Caltech, titled “Physics at the Large Hadron Collider: A New Window on Matter, Spacetime and the Universe.” The weekly quantum information group meeting finished early so that we could attend the lecture (Dr. Preskill’s group meeting lasted slightly longer than two hours: record brevity during the seven months that I’ve been a member!) We weren’t alone in deciding to attend. The seating on the ground floor of Beckman Auditorium was filled, so there were at least 800 people in attendance. Judging by the age of the audience, and from a few comments that I overheard, I estimate that a majority of the audience was unaffiliated with Caltech. Anyways, Dr. Newman’s inspiring lecture reminded me how lucky I am to be a graduate student at Caltech, and it also cleared up misconceptions surrounding the Large Hadron Collider (LHC) and, in particular, the discovery of the Higgs.

Before mentioning some of the highlights of Dr. Newman’s lecture, I want to describe the atmosphere in the room leading up to the talk. A few minutes before the lecture began, I overheard a conversation between three women. It came up that one of the ladies is a Caltech physics graduate student. When I glanced over my shoulder, I recognized that the girl, Emily, is a friend of mine. She was talking to a mother and her high school-aged daughter who loves physics. It’s hard to describe the admiration that oozed from the mother’s face as she spoke with Emily–it was as if Emily provided a window into a future where her daughter’s dreams had come true. It brought back memories from when I was in the high schooler’s position. As a scientifically-minded child growing up in Southern California, I dreamed of studying at Caltech, but it seemed like an impossible goal. I empathized with the high schooler and also with her mother, who reminded me of my own mom. Moms have a hopeless job: they’re genetically programmed to want the best for their children, but they oftentimes don’t have the means to make these dreams a reality, especially when the child’s dream is to become a scientist. It’s a rare parent who understands the textbooks that an aspiring scientist immerses themselves in, and an even rarer parent who can give their child an advantage when they enter the crapshoot that is undergraduate admissions. The angst of the conversation reminded me that I’m one of the lucky few whose childhood dreams have come true–it’s an opportunity that I don’t want to squander.

The conversation between two elderly men sitting next to me also brought back uncomfortable memories. They were trying to prove their intelligence to each other through an endless procession of anecdotes and physics observations. I empathized with them as well. Being at a place like Caltech is intimidating. As an outsider, you don’t have explicit credentials signaling that you belong, so you walk on eggshells, trying to prove how smart you are. I’ve seen this countless times, such as when I give tours to high schoolers, but it’s especially pronounced amongst incoming graduate students. However, it quickly fades as they become comfortable with their position. But for outsiders, every time they re-enter a hallowed place, their insecurities flood back. I know this because I was guilty of it myself! I spoke with the gentlemen for a while and they were incredibly nice, but smart as they were, they were momentarily insecure. Putting on my ambassador hat for a moment, if there are any ‘outsiders’ reading this blog, I want to say that I, for one, am glad that you attend events like this.
Continue reading

It’s been a tough week for hidden variable theories

The RSS subscriptions which populate my Google Reader mainly fall into two categories: scientific and other. Sometimes patterns emerge when superimposing these disparate fields onto the same photo-detection plate (my brain). Today, it became abundantly clear that it’s been a tough week for hidden variable theories.

Let me explain. Hidden variable theories were proposed by physicists in an attempt to explain the ‘indeterminism’ which seems to arise in quantum mechanics, and especially in the double-slit experiment. This probably means nothing to many of you, so let me explain by analogy: the hidden variables in Tuesday’s election weren’t enough to trump Nate Silver’s incredibly accurate predictions based upon statistics and data (hidden variables in Tuesday’s election include “momentum,” “the opinions of undecided voters,” and “pundits’ hunches.”) This isn’t to say that there weren’t hidden variables at play — clearly the statistical models used weren’t fully complete and will someday be improved upon — but hidden variables alone weren’t the dominant influence. Indeed, Barack Obama was re-elected for a second term. However, happy as I was to see statistics trump hunches, the point of this post is not to wax political, but rather to describe the recent failure of hidden variable theories in an arena more appropriate for this blog: quantum experiments.

Continue reading

How to build a teleportation machine: Teleportation protocol

Damn, it sure takes a long time for light to travel to the Pegasus galaxy. If only quantum teleportation enabled FTL Stargates…

I was hoping to post this earlier, but a heavy dose of writer’s block set in (I met a girl, and no, this blog didn’t help — but she is a physicist!) I also got lost in the rabbit hole that is quantum teleportation. My initial intention with this series of posts was simply to clarify common misconceptions and to introduce basic concepts in quantum information. However, while doing so, I started a whirlwind tour of deep questions in physics which become unavoidable as you think harder and deeper about quantum teleportation. I’ve only just begun this journey, but using quantum teleportation as a springboard has already led me to contemplate crazy things such as time-travel via coupling postselection with quantum teleportation and the subtleties of entanglement. In other words, quantum teleportation may not be the instantaneous Stargate style teleportation you had in mind, but it’s incredibly powerful in its own right. Personally, I think we’ve barely begun to understand the full extent of its ramifications.

Continue reading

How to build a teleportation machine: Intro to entanglement

Oh my, I ate the whole thing again. Are physicists eligible for Ben and Jerry’s sponsorships?

I’m not sure what covers more ground when I go for a long run — my physical body or my metaphorical mind? Chew on that one, zen scholar! Anyways, I basically wrote the following post during my most recent run, and I also worked up an aggressive appetite for Ben and Jerry’s ice cream. I’m going to reward myself after writing this post by devouring a pint of “half-baked” brown-kie ice cream (you can’t find this stuff in your local store).

The goal of this series of blog posts is to explain quantum teleportation and how Caltech built a machine to do it. The tricky aspect is that two foundational elements of quantum information need to be explained first: qubits and entanglement. They’re both phenomenally interesting in their own right, but substantially subtler than a teleportation device. The aim is to explain them at a level which will allow you to appreciate what our teleportation machine does (and after explaining quantum teleportation, hopefully some of you will be motivated to dive deeper into the subtleties of quantum information). This post will explain entanglement.
Continue reading

How to build a teleportation machine: Intro to qubits

A match made in heaven.

If a tree falls in a forest, and nobody is there to hear it, does it make a sound? The answer was obvious to my 12-year-old self — of course it made a sound. More specifically, something ranging from a thud to a thump. There doesn’t need to be an animal present for the tree to jiggle air molecules. Classical physics for the win! Around the same time I was exposed to this thought experiment, I read Michael Crichton’s Timeline. The premise is simple, but not necessarily feasible: archeologists use ‘quantum technology’ (many-worlds interpretation and quantum teleportation) to travel to the Dordogne region of France in the mid 1300s. Blood, guts, action, drama, and plot twists ensue. I haven’t returned to this book since I was thirteen, so I’m guaranteed to have the plot wrong, but for better or worse, I credit this book with planting the seeds of a misconception about what ‘quantum teleportation’ actually entails. This is the first of a multi-part post which will introduce readers to the one and only kind of teleportation we actually know how to do.
Continue reading