About shaunmaguire

I'm a PhD student working in quantum information at Caltech. It's astonishing that they gave the keys to this blog to hooligans like myself.

Defending against high-frequency attacks


It was the summer of 2008. I was 22 years old, and it was my second week working in the crude oil and natural gas options pit at the New York Mercantile Exchange (NYMEX). My head was throbbing after two consecutive weeks of disorientation. It was like being born into a new world, but without the neuroplasticity of a young human. And then the crowd erupted. “Yeeeehawwww. YeEEEeeHaaaWWWWW. Go get ‘em cowboy.”

It seemed that everyone on the sprawling trading floor had started playing Wild Wild West and I had no idea why. After at least thirty seconds, the hollers started to move across the trading floor, trailing one young trader. They moved away 100 meters or so and then doubled back towards me. After a few meters, he finally got it, and I’m sure he learned a life lesson. Don’t be the biggest jerk in a room filled with traders, and especially, never wear triple-popped pastel-colored Lacoste shirts. This young aspiring trader had been “spurred.”

In other words, someone had made paper spurs out of trading receipts and taped them to his shoes. Go get ‘em cowboy.

I was one academic quarter away from finishing a master’s degree in statistics at Stanford University and I had accepted a full-time job working in the algorithmic trading group at DRW Trading. I was doing a summer internship before finishing my degree, and after three months of working in the algorithmic trading group in Chicago, I had volunteered to work at the NYMEX. Most ‘algo’ traders didn’t want this job, because it was far removed from our mental mathematical monasteries, but I knew I would learn a tremendous amount, so I jumped at the opportunity. And by learn, I mean, get ripped calves and triceps, because my job was to stand in place for seven straight hours updating our mathematical models on a bulky tablet PC as trades occurred.

I have no vested interests in the world of high-frequency trading (HFT). I’m currently a PhD student in the quantum information group at Caltech and I have no intentions of returning to finance. I found the work enjoyable, but not as thrilling as thinking about the beginning of the universe (what else is?) However, I do feel like the current discussion about HFT is lop-sided and I’m hoping that I can broaden the perspective by telling a few short stories.

What are the main attacks against HFT? Three of the most common are the alleged evils of front-running markets, making money out of nothing, and creating instability. It’s easy to point to extreme examples of algorithmic traders abusing markets, and they regularly do, but my argument is that HFT has simply computerized age-old tactics. In the process, these tactics have become more benign and markets more stable.

Front-running markets: large oil-producing nations, such as Mexico, often want to hedge their exposure to changing market prices. They do this by purchasing options. This allows them to lock in a minimum sale price, for a fee of a few dollars per barrel. During my time at the NYMEX, I distinctly remember a broker shouting into the pit: “What’s the price on DEC9 puts?” A trader doesn’t want to give away whether they want to buy or sell, because if the other traders know, then they can artificially move the price. In this particular case, this broker was known to sometimes implement parts of Mexico’s oil hedge. The other traders in the pit suspected this was a trade for Mexico because of his anxious tone, some recent geopolitical news, and the expiration date of these options.

Some confident traders took a risk and faded the market. They ended up making between $1 million and $2 million from these trades, relative to what the fair price was at that moment. I mention relative to the fair price, because Mexico ultimately received the better end of this trade. The price of oil dropped in 2009, and Mexico exercised its options, enabling it to sell its oil above the market price. Mexico spent $1.5 billion to hedge its oil exposure in 2009.
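To make the mechanics of that hedge concrete, here is a toy sketch of how a put option locks in a minimum sale price. The strike, premium, and market prices below are hypothetical numbers chosen for illustration, not the actual terms of Mexico’s hedge:

```python
# Toy illustration of hedging with a put option; all numbers are hypothetical.
def effective_sale_price(market_price, strike, premium):
    """Price per barrel realized by a producer who bought a put option."""
    # If the market falls below the strike, exercise the put and sell at the strike;
    # otherwise sell at the market price. The premium was paid up front either way.
    return max(market_price, strike) - premium

strike, premium = 70.0, 5.0                    # hypothetical: $70 strike, $5/barrel fee
for market in (40.0, 70.0, 100.0):
    print(market, effective_sale_price(market, strike, premium))
# The realized price never drops below strike - premium = $65/barrel,
# which is exactly the "minimum sale price" the hedge locks in.
```

When the market collapses, the producer is made whole at the strike; when the market rallies, they give up only the premium.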

This was an example of humans anticipating the direction of a trade and capturing millions of dollars in profit as a result. It really is profit as long as the traders can redistribute their exposure at the ‘fair’ market price before markets move too far. The analogous strategy in HFT is called “front-running the market,” which was highlighted in the New York Times’ recent article “The Wolf Hunters of Wall Street.” The HFT version involves analyzing the prices on dozens of exchanges simultaneously and, once an order is published in the order book of one exchange, using that demand signal to adjust orders on the other exchanges. This needs to be done within a few microseconds in order to be successful. It is the computerized version of anticipating demand and fading prices accordingly. These tactics, as I described them, sit in a grey area, but variants of them quickly cross into illegal territory.

Making money from nothing: arbitrage opportunities have existed for as long as humans have been trading. I’m sure an ancient trader received quite the rush when he realized for the first time that he could buy gold in one marketplace and then sell it in another, for a profit. This is only worth the trader’s efforts if he makes a profit after all expenses have been taken into consideration. One of the simplest examples in modern terms is called triangle arbitrage, and it usually involves three pairs of currencies. Currency pairs are ratios, such as USD/AUD, which tells you how many Australian dollars you receive for one US dollar. Imagine that there is a moment in time when the product of ratios \frac{USD}{AUD}\frac{AUD}{CAD}\frac{CAD}{USD} is 1.01. Then a trader can take her USD, buy AUD, then use her AUD to buy CAD, and then use her CAD to buy USD. As long as the underlying prices didn’t change while she carried out these three trades, she would capture one cent of profit for every dollar she pushed through the loop.
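Here is a minimal sketch of that check in code. The exchange rates are invented for illustration; a real system would use live order-book quotes and would have to account for fees, slippage, and prices moving mid-loop:

```python
# Toy triangle-arbitrage check; the exchange rates below are made up for illustration.
rates = {
    ("USD", "AUD"): 1.52,   # 1 USD buys 1.52 AUD
    ("AUD", "CAD"): 0.90,   # 1 AUD buys 0.90 CAD
    ("CAD", "USD"): 0.74,   # 1 CAD buys 0.74 USD
}

def loop_product(path):
    """Multiply the exchange rates around a closed loop of currencies."""
    product = 1.0
    for a, b in zip(path, path[1:]):
        product *= rates[(a, b)]
    return product

product = loop_product(["USD", "AUD", "CAD", "USD"])
print(f"round-trip product: {product:.4f}")
if product > 1.0:
    # Each dollar cycled through the loop returns (product - 1) dollars of profit,
    # assuming prices hold still and ignoring transaction costs.
    print(f"arbitrage: {product - 1:.4f} dollars of profit per dollar traded")
```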

After a few trades like this, the prices will equilibrate and the ratio will be restored to one. This is an example of “making money out of nothing.” Clever people have been trading on arbitrage since ancient times and it is a fundamental source of liquidity. It guarantees that the price you pay in Sydney is the same as the price you pay in New York. It also means that if you’re willing to overpay by a penny per share, then you’re guaranteed a computer will find this opportunity and your order will be filled immediately. The main difference now is that once a computer has been programmed to look for a certain type of arbitrage, then the human mind can no longer compete. This is one of the original arenas where the term “high-frequency” was used. Whoever has the fastest machines is the one who will capture the profit.

Instability: I believe that the arguments against HFT of this type have the most credibility. The concern here is that exceptional leverage creates opportunity for catastrophe. Imaginations ran wild after the Flash Crash of 2010, and even if imaginations outstripped reality, we learned much about the potential instabilities of HFT. A few questions were posed, and we are still debating the answers. What happens if market makers stop trading in unison? What happens if a programming error leads to billions of dollars in mistaken trades? Do feedback loops between algo strategies lead to artificial prices? These are reasonable questions, which are grounded in examples, and future regulation coupled with monitoring should add stability where it’s feasible.

The culture in wealth driven industries today is appalling. However, it’s no worse in HFT than in finance more broadly and many other industries. It’s important that we dissociate our disgust in a broad culture of greed from debates about the merit of HFT. Black boxes are easy targets for blame because they don’t defend themselves. But that doesn’t mean they aren’t useful when implemented properly.

Are we better off with HFT? I’d argue a resounding yes. The primary function of markets is to allocate capital efficiently. Three of the strongest measures of the efficacy of markets are “bid-ask” spreads, volume and volatility. If spreads are low and volume is high, then participants are essentially guaranteed access to capital at as close to the “fair price” as possible. There is a huge academic literature on how HFT has impacted spreads and volume, and the majority of it indicates that spreads have narrowed and volume has increased. However, as alluded to above, all of these points are subtle–but in my opinion, it’s clear that HFT has increased the efficiency of markets (it turns out that computers can sometimes be helpful.) Estimates of HFT’s impact on volatility haven’t been nearly as favorable, but I’d also argue these studies are more debatable. Basically, correlation is not causation, and it just so happens that our rapidly developing world is probably more volatile than the pre-HFT world of past millennia.

We could regulate away HFT, but we wouldn’t be able to get rid of the underlying problems people point to unless we got rid of markets altogether. As with any new industry, there are aspects of HFT that should be better monitored and regulated, but we should have level heads and diverse data points as we continue this discussion. As with most important problems, I believe the ultimate solution here lies in educating the public. Or in other words, this is my plug for Python classes for all children!!

I promise that I’ll repent by writing something that involves actual quantum things within the next two weeks!

Reporting from the ‘Frontiers of Quantum Information Science’


What am I referring to with this title? It is similar to the name of this blog–but that’s not where this particular title comes from–although there is a common denominator. Frontiers of Quantum Information Science was the theme for the 31st Jerusalem winter school in theoretical physics, which takes place annually at the Israel Institute for Advanced Studies, located on the Givat Ram campus of the Hebrew University of Jerusalem. The school took place from December 30, 2013 through January 9, 2014, but some of the attendees are still trickling back to their home institutions. The common denominator is that our very own John Preskill was the director of this school, co-directed by Michael Ben-Or and Patrick Hayden. John mentioned in a previous post, and reiterated during his opening remarks, that this is the first time the IIAS has chosen quantum information to be the topic for its prestigious advanced school–another sign of quantum information’s emergence as an important sub-field of physics. In this blog post, I’m going to do my best to recount these festivities while John protects his home from forest fires, prepares a talk for the Simons Institute’s workshop on Hamiltonian complexity, teaches his quantum information course and celebrates his birthday (60+1).

The school was mainly targeted at physicists, but it was diversely represented. Proof of the value of this diversity came in an interaction between a computer scientist and a physicist, which led to one of the school’s most memorable moments. Both of my most memorable moments started with the talent show (I was surprised that so many talents were on display at a physics conference…) Anyways, towards the end of the show, Mateus Araújo Santos, a PhD student in Vienna, took the stage and mentioned that he could channel “the ghost of Feynman” to serve as an oracle for NP-complete decision problems. Once he made this claim, people obviously turned to Scott Aaronson, hoping that he’d be able to break the oracle. However, in order for this to happen, we had to wait until Scott’s third lecture about linear optics and boson sampling the next day. You can watch Scott bombard the oracle with decision problems from 1:00-2:15 during the video from his third lecture.


Scott Aaronson grilling the oracle with a string of NP-complete decision problems! From 1:00-2:15 during this video.

The other most memorable moment was when John briefly danced Gangnam style during Soonwon Choi’s talent show performance. Unfortunately, I thought I had this on video, but the video didn’t record. If anyone has video evidence of this, then please share!
Continue reading

The 10 biggest breakthroughs in physics over the past 25 years, according to us.


Making your way to the cutting edge of any field is a daunting challenge. It’s especially daunting when the edge of the field is expanding, and harder still when the rate of expansion is accelerating. John recently helped Physics World create a special 25th anniversary issue where they identified the five biggest breakthroughs in physics over the past 25 years, and also the five biggest open questions. In pure John fashion, at his group meeting on Wednesday night, he made us work before revealing the answers. The photo below shows our guesses, where the asterisks denote Physics World’s selections. This is the blog post I wish I had when I was a fifteen-year-old aspiring physicist–this is an attempt to survey and provide a tiny toehold on the edge (from my biased, incredibly naive, and still developing perspective.)


The IQI’s quantum information-biased guesses of Physics World’s 5 biggest breakthroughs over the past 25 years, and 5 biggest open problems. X’s denote Physics World’s selections. Somehow we ended up with 10 selections in each category…

The biggest breakthroughs of the past 25 years:

*Neutrino Mass: surprisingly, neutrinos have a nonzero mass, which provides a window into particle physics beyond the standard model. The Standard Model has been getting a lot of attention recently. This is well deserved in my opinion, considering that the vast majority of its predictions have come true, most of which were made by the end of the 1960s. Last year’s discovery of the Higgs Boson is the feather in its cap. However, it’s boring when things work too perfectly, because then we don’t know what path to continue on. That’s where the neutrino mass comes in. First, what are neutrinos? Neutrinos are fundamental particles with the special property that they barely interact with other particles. There are four fundamental forces in nature: electromagnetism, gravity, strong (holds quarks together to create neutrons and protons), and weak (responsible for radioactivity and nuclear fusion.) We can design experiments which allow us to observe neutrinos. We have learned that they are electrically neutral, so they aren’t affected by electromagnetism. They are barely affected by the strong force, if at all. They have an extremely small mass, so gravity acts on them only subtly. The main way in which they interact with their environment is through the weak force. Here’s the amazing thing: only really clunky versions of the standard model can allow for a nonzero neutrino mass! Hence, when a small but nonzero mass was experimentally established in 1998, we gained one of our first toeholds into particle physics beyond the standard model. This is particularly important today, because to the best of my knowledge, the LHC hasn’t yet discovered any other new physics beyond the standard model. The mechanism behind the neutrino mass is not yet understood. Moreover, neutrinos have a bunch of other bizarre properties which we understand empirically, but not their theoretical origins. The strangest of these goes by the name of neutrino oscillations. In one sentence: there are three different kinds of neutrinos, and they can spontaneously transmute themselves from one type to another. This happens because physics is formulated in the language of mathematics, and the math says that the eigenstates corresponding to ‘flavors’ are not the same as the eigenstates corresponding to ‘mass.’ Words, words, words. Maybe the Caltech particle theory people should have a blog?
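If you want something more concrete than words, here is the standard two-flavor oscillation formula (a textbook simplification of the full three-flavor story, quoted without derivation). The probability that a neutrino produced as a muon neutrino is later detected as an electron neutrino, after traveling a distance L with energy E, is

P(\nu_\mu \to \nu_e) = \sin^2(2\theta)\, \sin^2\!\left(\frac{\Delta m^2 L}{4E}\right),

where \theta is the mixing angle relating the flavor eigenstates to the mass eigenstates and \Delta m^2 is the difference of the squared masses (in natural units with \hbar = c = 1). Notice that the oscillation disappears entirely if \Delta m^2 = 0, which is why observing oscillations is direct evidence that at least one neutrino has nonzero mass.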

Shor’s Algorithm: a quantum computer can factor N=1433301577 into 37811*37907 exponentially faster than a classical computer. This result from Peter Shor in 1994 is near and dear to our quantum hearts. It opened the floodgates showing that there are tasks a quantum computer could perform exponentially faster than a classical computer, and therefore that we should get BIG$$$ from the world over in order to advance our field!! The task here is factoring large numbers into their prime factors; the difficulty of this task has been the basis for many cryptographic protocols. In one sentence, Shor’s algorithm achieves this exponential speed-up because factoring can be reduced to a step called period finding, which a quantum computer can perform efficiently using the quantum Fourier transform.
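To see why period finding is the crux, here is a sketch of the classical reduction from factoring to period finding. The period is found by brute force below, which is exactly the part that becomes hopeless for large numbers and that a quantum computer speeds up; a small N is used because brute force would already crawl on the ten-digit example above:

```python
# Factoring via period finding: the classical reduction, with the period found
# by brute force (the step Shor's algorithm does efficiently with the QFT).
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r = 1 (mod N): the 'period' of a modulo N."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def split(N, a):
    """Try to find a nontrivial factor of N using the period of a; may fail."""
    g = gcd(a, N)
    if g != 1:
        return g                     # lucky guess: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 1:
        return None                  # odd period: try a different a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                  # trivial square root of 1: try a different a
    return gcd(y - 1, N)             # guaranteed nontrivial factor in this case

N = 21
factor = split(N, a=2)               # period of 2 mod 21 is 6, so gcd(2**3 - 1, 21) = 7
print(factor, N // factor)           # prints 7 3
```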
Continue reading

On the importance of choosing a convenient basis


The benefits of Caltech’s proximity to Hollywood don’t usually trickle down to measly grad students like myself, except on the rare occasions when we befriend the industry’s technical contingent. One of my friends is a computer animator for Disney, which means that she designs algorithms enabling luxuriously flowing hair or trees with realistic lighting or feathers that have gorgeous texture, for movies like Wreck-it Ralph. Empowering computers to efficiently render scenes with these complicated details is trickier than you’d think and it requires sophisticated new mathematics. Fascinating conversations are one of the perks of having friends like this. But so are free trips to Disneyland! A couple nights ago, while standing in line for The Tower of Terror, I asked her what she’s currently working on. She’s very smart, as evidenced by her BS/MS in Computer Science/Mathematics from MIT, but she asked me if I “know about spherical harmonics.” Asking this to an aspiring quantum mechanic is like asking an auto mechanic if they know how to use a monkey wrench. She didn’t know what she was getting herself into!


IQIM, LIGO, Disney

Along with this spherical harmonics conversation, I had a few other incidents last week that hammered home the importance of choosing a convenient basis when solving a scientific problem. First, my girlfriend works on LIGO and she’s currently writing her thesis. LIGO is a huge collaboration involving hundreds of scientists, and naturally, nobody there knows the detailed inner workings of every subsystem. However, when it comes to writing the overview section of one’s thesis, you need to at least make a good faith attempt to understand the whole behemoth. Anyways, my girlfriend recently asked if I know how the wavelet transform works. This is another example of a convenient basis, one that is particularly suited for analyzing abrupt changes, such as detecting the gravitational waves that would be emitted during the final few seconds of two black holes merging (ring-down). Finally, for the past couple weeks, I’ve been trying to understand entanglement entropy in quantum field theories. Most of the calculations that can be carried out explicitly are for the special subclass of quantum field theories called “conformal field theories,” which in two dimensions have a very convenient ‘basis’, the Virasoro algebra.
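As a toy illustration of why wavelets suit abrupt changes, here is a single-level Haar wavelet transform in numpy. This is just a sketch of the general idea, and bears no resemblance to LIGO’s actual analysis pipelines:

```python
# Single-level Haar wavelet transform of a signal containing one abrupt jump.
# The 'detail' coefficients vanish wherever the signal is locally constant and
# light up only at the jump, which is why wavelets represent transients so compactly.
import numpy as np

signal = np.concatenate([np.zeros(63), np.ones(65)])   # 128 samples, step at sample 63
pairs = signal.reshape(-1, 2)                          # group adjacent samples into pairs
approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)      # local averages (coarse view)
detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)      # local differences (fine view)

print("nonzero detail coefficients at pair index:", np.flatnonzero(detail))
# Only the single pair straddling the jump has a nonzero detail coefficient,
# so the abrupt change is pinpointed with a handful of numbers.
```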

So why does a Disney animator care about spherical harmonics? It turns out that every frame that goes into one of Disney’s movies needs to be digitally rendered using a powerful computing cluster. The animated film industry has traded the painstaking process of hand-animators drawing every single frame for the almost equally time-consuming process of computer clusters generating every frame. It doesn’t look like strong AI will be available in our immediate future, and in the meantime, humans are still much better than computers at detecting patterns and making intuitive judgements about the ‘physical correctness of an image.’ One of the primary advantages of computer animation is that an animator shouldn’t need to shade in every pixel of every frame — some of this burden should fall on computers. Let’s imagine a thought experiment. An animator wants to get the lighting correct for a nighttime indoor shot. They should be able to simply place the moon somewhere out of the shot, so that its glow can penetrate through the windows. They should also be able to choose from a drop-down menu and tell the computer that a hand-drawn lightbulb is a ‘light source.’ The computer should then figure out how to make all of the shadows and brightness appear physically correct. Another example of a hard problem is that an animator should be able to draw a character, then tell the computer that the hair they drew is ‘hair’, so that as the character moves through scenes, the physics of the hair makes sense. Programming computers to do these things autonomously is harder than it sounds.

In the lighting example, imagine you want to get the lighting correct in a forest shot with complicated pine trees and leaf structures. The computer would need to do the ray-tracing for all of the photons emanating from the different light sources, and then the second-order effects as these photons reflect, and then third-order effects, etc. It’s a tall order to make the scene look accurate to the human eyeball/brain. Instead of doing all of this ray-tracing, it’s helpful to choose a convenient basis in order to dramatically speed up the processing. Instead of the complicated forest example, let’s imagine you are working with a tree from Super Mario Bros. Imagine drawing a sphere somewhere in the middle of this tree and then defining a ‘height function’, which outputs the ‘elevation’ of the tree foliage over each point on the sphere. I tried to use suggestive language, so that you’d draw an analogy to thinking of Earth’s ‘height function’ as the elevation of mountains and the depths of trenches over the sphere, with sea-level as a baseline. An example of how you could digitize this problem for a tree or for the earth is by breaking up the sphere into a certain number of pixels, maybe one per square meter for the earth (5*10^14 square meters gives approximately 2^49 pixels), and then associating an integer height value between [-2^15,2^15] with each pixel. This would effectively digitize the height map of the earth, keeping track of the elevation to approximately the meter level. But this leaves us with a huge amount of information that we need to store, and then process. We’d have to keep track of a 16-bit height value for each pixel, giving us approximately 2^49*2^4=2^53 bits, or about a petabyte, that we’d have to keep track of. And this is for an easy static problem with only meter resolution! We can store this information much more efficiently using spherical harmonics.
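A quick back-of-the-envelope check of those numbers (the one-square-meter pixels and 16-bit elevations are just the rough choices from the paragraph above):

```python
# Rough storage cost of a brute-force height map of Earth at ~1 m resolution.
earth_surface_m2 = 5.1e14        # square meters, so roughly one pixel per square meter
bits_per_pixel = 16              # enough for integer elevations in [-2**15, 2**15)

total_bits = earth_surface_m2 * bits_per_pixel
total_bytes = total_bits / 8
print(f"{total_bytes:.2e} bytes, about {total_bytes / 2**50:.1f} pebibytes")
# ~1 petabyte for a single static height map: the motivation for a smarter basis.
```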


There are many ways to think about spherical harmonics. Basically, they’re functions which map points on the sphere to real numbers Y_l^m: (\theta,\phi) \mapsto Y_l^m(\theta,\phi)\in\mathbb{R}, such that they satisfy a few special properties. They are orthogonal, meaning that if you multiply two different spherical harmonics together and then integrate over the sphere, then you get zero. If you square one of the functions and then integrate over the sphere, you get a finite, nonzero value, so each of them can be normalized. They also span the space of all height functions that one could define over the sphere. This means that for a planet with an arbitrarily complicated topography, you would be able to find some weighted combination of different spherical harmonics which perfectly describes that planet’s topography. These are the key properties which make a set of functions a basis: they span and are orthogonal (this is only a heuristic). There is also a natural way to think about the light that hits the tree. We can use the same sphere and simply calculate the light rays as they would hit the ideal sphere. With these two different ‘height functions’, it’s easy to calculate the shadows and brightness inside the tree. You simply convolve the two functions, which is a fast operation on a computer. It also means that if the breeze slightly changes the shape of the tree, or if the sun moves a little bit, then it’s very easy to update the shading. Implicit in what I just said is that using spherical harmonics allows us to store this height map efficiently. I haven’t calculated this on a computer, but it doesn’t seem totally crazy to think that we’d be able to store the topography of the earth to a reasonable accuracy with 100 nonzero coefficients of the spherical harmonics at 64 bits of precision: 2^7*2^6 = 2^13 << 2^53. Where does this cost savings come from? It comes from the fact that the spherical harmonics are a convenient basis, which naturally encodes the types of correlations we see in Earth’s topography — if you’re standing at an elevation of 2000m, the area within ten meters is probably at a similar elevation. Cliffs are what break this basis — but they are what the wavelet basis was designed to handle.
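Here’s a small numerical sketch of that expand-and-reconstruct idea, using scipy’s built-in spherical harmonics and a made-up, smooth “foliage” height function (nothing here is a real rendering pipeline; the grid sizes and cutoff l_max are arbitrary choices):

```python
# Approximate a height function on the sphere by its low-order spherical-harmonic
# coefficients, then reconstruct it from just those stored numbers.
import numpy as np
from scipy.special import sph_harm

l_max = 8                                                 # keep only low-order harmonics
n_theta, n_phi = 64, 128                                  # integration grid (polar x azimuthal)
theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta      # polar angle (cell midpoints)
phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)    # azimuthal angle
PHI, THETA = np.meshgrid(phi, theta)

# Toy "foliage elevation" over the sphere: smooth bumps, invented for illustration.
height = 1.0 + 0.3 * np.cos(3 * PHI) * np.sin(THETA) ** 3 + 0.2 * np.cos(THETA)

# Project onto each Y_l^m by numerical integration over the sphere.
dOmega = np.sin(THETA) * (np.pi / n_theta) * (2 * np.pi / n_phi)
coeffs = {}
for l in range(l_max + 1):
    for m in range(-l, l + 1):
        Y = sph_harm(m, l, PHI, THETA)    # scipy convention: (order, degree, azimuth, polar)
        coeffs[(l, m)] = np.sum(height * np.conj(Y) * dOmega)

# Rebuild the height map from the handful of stored coefficients.
recon = sum(c * sph_harm(m, l, PHI, THETA) for (l, m), c in coeffs.items()).real
print("coefficients stored:", len(coeffs))
print("max reconstruction error:", np.abs(recon - height).max())
```

Fewer than a hundred stored numbers reproduce the whole map because the toy function is smooth; a map full of cliffs would need many more terms, which is exactly the cliff caveat above.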

I’ve only described a couple bases in this post and I’ve neglected to mention some of the most famous examples! This includes the Fourier basis, which was designed to encode periodic signals, such as music and radio waves. I also have not gone into any detail about the Virasoro algebra, which I mentioned at the beginning of this post and have been using heavily for the past few weeks. For the sake of diversity, I’ll spend a few sentences whetting your appetite. Complex analysis is primarily the study of analytic functions. In two dimensions, these analytic functions “preserve angles.” This means that if you have two curves which intersect at a point with angle \theta, then after an analytic function maps these curves to their images, also in the complex plane, the angle between the curves will still be \theta. An especially convenient basis for the analytic functions in two dimensions (\{f: \mathbb{C} \to \mathbb{C}\}, where f(z) = \sum_{n=0}^{\infty} a_nz^n) is given by the set of functions \{l_n = -z^{n+1}\partial_z\}. As always, I’m not being exactly precise, but this is a ‘basis’ because we can encode all the information describing an infinitesimal two-dimensional angle-preserving map using these elements. It turns out to have incredibly special properties, including that its quantum cousin yields something called the “central charge,” which has deep ramifications in physics, such as being related to the c-theorem. Conformal field theories are fascinating because they describe the physics of phase transitions. Having a convenient basis in two dimensions is a large part of why we’ve been able to make progress in our understanding of two-dimensional phase transitions (more important is that the 2d conformal symmetry group is infinite-dimensional, but that’s outside the scope of this post.) Convenient bases are also important for detecting gravitational waves, making incredible movies and striking up nerdy conversations in long lines at Disneyland!
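For the algebraically inclined, here are the commutation relations hiding behind those last few sentences (standard textbook formulas, quoted without derivation). The generators l_n close into the Witt algebra,

[l_m, l_n] = (m-n)\, l_{m+n},

and their ‘quantum cousins’ L_n form the Virasoro algebra, which picks up an extra term proportional to the central charge c:

[L_m, L_n] = (m-n)\, L_{m+n} + \frac{c}{12}\, m(m^2-1)\, \delta_{m+n,0}.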

 

QIP 2013 from the perspective of a greenhorn (grad student)



Most of Caltech’s contingent during QIP’s banquet. Not pictured: sword dancers, jug balancers and Gorjan.

A couple of weeks ago, about half of IQI (now part of IQIM) flew from Pasadena to Beijing in order to attend QIP 2013, the 16th annual workshop on quantum information processing. I wish I could report that the quantum information community solved the world’s problems over the past year, or at least built a 2^10 qubit universal quantum computer, but unfortunately, we’re not quite there yet. As a substitute, I’ll mention a few of the talks that I particularly enjoyed and the really hard open problems that they left us with.

The talks split roughly between computer-science and physics emphases. I was better prepared to understand the talks emphasizing the latter, so my comments will mainly describe those talks. Patrick Hayden’s talk, “Summoning information in spacetime: or where and when can a qubit be?”, was one of my favorites. To the extent that I understood things, the goal of this work is to better understand how quantum information can propagate forward in time. If a qubit were created at spacetime location S, and then if it were forced to remain localized, the no-cloning theorem would give strict bounds regarding how it could move forward in time. The qubit would follow a worldline and that would be the end of things. However, qubits don’t need to remain localized, as teleportation experiments have pretty clearly demonstrated, and it therefore seems like qubits can propagate into the future in more subtle ways–ways that at face value appear to violate the no-cloning theorem. Patrick and the undergraduate that he worked with on this project, Alex May, came up with a pictorial approach to better understand these subtleties. The really hard open problems that these ideas could potentially be applied to include: firewalls, position-dependent quantum cryptography and paradoxes concerning the apparent no-cloning violations near black hole event horizons.
Continue reading

Science Magazine’s Breakthrough of 2012


A few nights ago, I attended Dr. Harvey B. Newman’s public lecture at Caltech titled: “Physics at the Large Hadron Collider: A New Window on Matter, Spacetime and the Universe.” The weekly quantum information group meeting finished early so that we could attend the lecture (Dr. Preskill’s group meeting lasted slightly longer than two hours: record brevity during the seven months that I’ve been a member!) We weren’t alone in deciding to attend this lecture. The seating on the ground floor of Beckman Auditorium was full, so there were at least 800 people in attendance. Judging by the age of the audience, and from a few comments that I overheard, I estimate that a majority of the audience was unaffiliated with Caltech. Anyways, Dr. Newman’s inspiring lecture reminded me how lucky I am to be a graduate student at Caltech, and it also cleared up misconceptions surrounding the Large Hadron Collider (LHC) and, in particular, the discovery of the Higgs.

Before mentioning some of the highlights of Dr. Newman’s lecture, I want to describe the atmosphere in the room leading up to the talk. A few minutes before the lecture began, I overheard a conversation between three women. It came up that one of the ladies is a Caltech physics graduate student. When I glanced over my shoulder, I recognized that the girl, Emily, is a friend of mine. She was talking to a mother and her high school-aged daughter who loves physics. It’s hard to describe the admiration that oozed from the mother’s face as she spoke with Emily–it was as if Emily provided a window into a future where her daughter’s dreams had come true. It brought back memories from when I was in the high schooler’s position. As a scientifically-minded child growing up in Southern California, I dreamed of studying at Caltech, but it seemed like an impossible goal. I empathized with the high schooler and also with her mother, who reminded me of my own mom. Moms have a hopeless job: they’re genetically programmed to want the best for their children, but they oftentimes don’t have the means to make these dreams a reality. Especially when the child’s dream is to become a scientist. It’s a rare parent who understands the textbooks that an aspiring scientist immerses themselves in, and an even rarer parent who can give their child an advantage when they enter the crapshoot that is undergraduate admissions. The angst of the conversation reminded me that I’m one of the lucky few whose childhood dreams have come true–it’s an opportunity that I don’t want to squander.

The conversation between two elderly men sitting next to me also brought back uncomfortable memories. They were trying to prove their intelligence to each other through an endless procession of anecdotes and physics observations. I empathized with them as well. Being at a place like Caltech is intimidating. As an outsider, you don’t have explicit credentials signaling that you belong, so you walk on eggshells, trying to prove how smart you are. I’ve seen this countless times, such as when I give tours to high schoolers, but it’s especially pronounced amongst incoming graduate students. However, it quickly fades as they become comfortable with their position. But for outsiders, every time they re-enter a hallowed place, their insecurities flood back. I know this because I was guilty of it myself! I spoke with the gentlemen for a while and they were incredibly nice, but smart as they were, they were momentarily insecure. Putting on my ambassador hat for a moment, if there are any ‘outsiders’ reading this blog, I want to say that I, for one, am glad that you attend events like this.
Continue reading

It’s been a tough week for hidden variable theories


The RSS subscriptions which populate my Google Reader mainly fall into two categories: scientific and other. Sometimes patterns emerge when superimposing these disparate fields onto the same photo-detection plate (my brain.) Today, it became abundantly clear that it’s been a tough week for hidden variable theories.

Let me explain. Hidden variable theories were proposed by physicists in an attempt to explain the ‘indeterminism’ which seems to arise in quantum mechanics, and especially in the double-slit experiment. This probably means nothing to many of you, so let me explain further: the hidden variables in Tuesday’s election weren’t enough to trump Nate Silver’s incredibly accurate predictions based upon statistics and data (hidden variables in Tuesday’s election include: “momentum,” “the opinions of undecided voters,” and “pundits’ hunches.”) This isn’t to say that there weren’t hidden variables at play — clearly the statistical models used weren’t fully complete and will someday be improved upon — but hidden variables alone weren’t the dominant influence. Indeed, Barack Obama was re-elected for a second term. However, happy as I was to see statistics trump hunches, the point of this post is not to wax political, but rather to describe the recent failure of hidden variable theories in an arena more appropriate for this blog: quantum experiments.

Continue reading

How to build a teleportation machine: Teleportation protocol


Damn, it sure takes a long time for light to travel to the Pegasus galaxy. If only quantum teleportation enabled FTL Stargates…

I was hoping to post this earlier, but a heavy dose of writer’s block set in (I met a girl, and no, this blog didn’t help — but she is a physicist!) I also got lost in the rabbit hole that is quantum teleportation. My initial intention with this series of posts was simply to clarify common misconceptions and to introduce basic concepts in quantum information. However, while doing so, I started a whirlwind tour of deep questions in physics which become unavoidable as you think harder and deeper about quantum teleportation. I’ve only just begun this journey, but using quantum teleportation as a springboard has already led me to contemplate crazy things such as time-travel via coupling postselection with quantum teleportation and the subtleties of entanglement. In other words, quantum teleportation may not be the instantaneous Stargate style teleportation you had in mind, but it’s incredibly powerful in its own right. Personally, I think we’ve barely begun to understand the full extent of its ramifications.

Continue reading

How to build a teleportation machine: Intro to entanglement


Oh my, I ate the whole thing again. Are physicists eligible for Ben and Jerry’s sponsorships?

I’m not sure what covers more ground when I go for a long run — my physical body or my metaphorical mind? Chew on that one, zen scholar! Anyways, I basically wrote the following post during my most recent run, and I also worked up an aggressive appetite for Ben and Jerry’s ice cream. I’m going to reward myself after writing this post by devouring a pint of “half-baked” brown-kie ice cream (you can’t find this stuff in your local store.)

The goal of this series of blog posts is to explain quantum teleportation and how Caltech built a machine to do this. The tricky aspect is that there are two foundational elements of quantum information that need to be explained first: qubits and entanglement. They’re both phenomenally interesting in their own right, but substantially subtler than a teleportation device, so the goal with this series is to explain them at a level which will allow you to appreciate what our teleportation machine does (and after explaining quantum teleportation, hopefully some of you will be motivated to dive deeper into the subtleties of quantum information.) This post will explain entanglement.
Continue reading

How to build a teleportation machine: Intro to qubits


A match made in heaven.

If a tree falls in a forest, and nobody is there to hear it, does it make a sound? The answer was obvious to my 12-year-old self — of course it made a sound. More specifically, something ranging from a thud to a thump. There doesn’t need to be an animal present for the tree to jiggle air molecules. Classical physics for the win! Around the same time I was exposed to this thought experiment, I read Michael Crichton’s Timeline. The premise is simple, but not necessarily feasible: archeologists use ‘quantum technology’ (many-worlds interpretation and quantum teleportation) to travel to the Dordogne region of France in the mid 1300s. Blood, guts, action, drama, and plot twists ensue. I haven’t returned to this book since I was thirteen, so I’m guaranteed to have the plot wrong, but for better or worse, I credit this book with planting the seeds of a misconception about what ‘quantum teleportation’ actually entails. This is the first of a multi-part post which will introduce readers to the one and only way we know of to make teleportation work.
Continue reading