Hacking nature: loopholes in the laws of physics

I spent my childhood hacking computers. When I was seven, my cousin showed up for Thanksgiving with a box filled with computer parts and we built my first computer. I got into competitive computer gaming around age eleven, and hacking was a natural extension of these activities. Then when I was sixteen, after doing poorly at a Counterstrike tournament, I decided that I should probably apply myself to other things. Needless to say, my parents were thrilled. So that’s when I bought my first computer (instead of building my own), which for deliberate but now antediluvian reasons was a Mac. A few years later, when I was taking CS 106 at Stanford, I was the first student in the course’s history whose reason for buying a Mac was “so that I couldn’t play computer games!” And now you know the story of my childhood.

The hacker mentality is quite different from the norm, and my childhood trained me to look at absolutist laws as opportunities to find loopholes (only when legal and socially responsible, of course!). I’ve applied this same mentality while doing physics, and I’d like to share with you some of the loopholes that I’ve gathered.


The Scharnhorst effect enables light to travel faster than in vacuum (c = 299,792,458 m/s): this one takes on the granddaddy of all laws, that nothing can travel faster than light in a vacuum! This effect is the most controversial on my list, because it hasn’t yet been experimentally verified, but it seems obvious with the right picture in mind. Most people’s mental model for light traveling in a vacuum is of little particles/waves called photons traveling through empty space. However, the vacuum is not empty! It is filled with pairs of virtual particles which momentarily fleet into existence. Interactions with these virtual particles create a small amount of ‘resistance’ as photons zoom through the vacuum (photons get absorbed into virtual electron-positron pairs and then spit back out as photons ad infinitum.) Thus, if we could somehow reduce the rate at which virtual particles are created, photons would interact less strongly with the vacuum, and would be able to travel marginally faster than c. And this is exactly what happens in the Casimir effect: the experimentally verified fact that if you take two mirrors and put them ~10 nanometers apart, they will attract each other, because there are more virtual particles created outside the cavity than inside [low-momentum virtual modes are inaccessible inside the cavity because the uncertainty principle requires \Delta x \cdot \Delta p = 10\,\text{nm} \cdot \Delta p \geq \hbar/2.] The predicted effect is extremely small: light in such a cavity would travel only about one part in 10^{36} faster than c. However, it should remind us all to deeply question assumptions.
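To put rough numbers on the bracketed uncertainty-principle argument, here is a back-of-the-envelope sketch in Python. It is purely illustrative: the 10 nm separation is just the figure quoted above, and the snippet only computes the momentum cutoff implied by the plate spacing, not the Scharnhorst correction itself.

```python
# Back-of-the-envelope numbers for the Casimir-cavity picture described above.
# Illustrative assumption: plate separation of 10 nm, as quoted in the text.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299_792_458.0        # speed of light in vacuum, m/s

dx = 10e-9                    # plate separation, meters
dp_min = hbar / (2 * dx)      # smallest momentum spread compatible with dx * dp >= hbar/2
E_scale = dp_min * c          # rough energy scale of the excluded low-momentum modes

print(f"dp_min  ~ {dp_min:.2e} kg m/s")
print(f"E_scale ~ {E_scale / 1.602176634e-19:.1f} eV")
```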

This first loophole used quantum effects to beat a relativistic bound, but the next few loopholes are purely quantum, and are mainly related to that most quantum of all limits, the Heisenberg uncertainty principle.

Smashing the standard quantum limit (SQL) with squeezed measurements: the Heisenberg uncertainty principle tells us that there is a fundamental tradeoff in nature: the more precise your information about an object’s position, the less precise your knowledge about its momentum. Or vice versa, or replace x and p with E and t, or any other pair of conjugate variables. This uncertainty principle is oftentimes written as \Delta x \cdot \Delta p \geq \hbar/2. For a variety of reasons, in the early days of quantum mechanics it was hard enough to imagine creating a state with \Delta x \cdot \Delta p = \hbar/2, but there was some hope, because this is obtained in the ground state of a quantum harmonic oscillator. In that case (working in units where m = \omega = 1), we have \Delta x = \Delta p = \sqrt{\hbar/2}. It was harder still to imagine creating states with \Delta x < \sqrt{\hbar/2}; such states are said to ‘go beyond the standard quantum limit’ (SQL). Over the intervening years, not only have we figured out how to go beyond the SQL using squeezed coherent states, but doing so is actually essential in some of our most exciting current experiments, like LIGO.
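Here is a minimal numerical sketch of what squeezing does to these uncertainties, assuming units where \hbar = m = \omega = 1 and an ideal squeezed vacuum state; the squeezing parameters r are made up for illustration.

```python
import numpy as np

hbar = 1.0  # work in units where hbar = m = omega = 1, as in the text

def quadrature_uncertainties(r):
    """Position/momentum uncertainties of an ideal squeezed vacuum state with squeezing parameter r."""
    dx = np.exp(-r) * np.sqrt(hbar / 2)   # squeezed below the SQL when r > 0
    dp = np.exp(+r) * np.sqrt(hbar / 2)   # anti-squeezed, so the Heisenberg product is unchanged
    return dx, dp

sql = np.sqrt(hbar / 2)
for r in [0.0, 0.5, 1.0]:
    dx, dp = quadrature_uncertainties(r)
    print(f"r = {r:.1f}:  dx = {dx:.3f}  dp = {dp:.3f}  dx*dp = {dx * dp:.3f}  (SQL: {sql:.3f})")
```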

LIGO is an incredibly ambitious experiment which has been discussed multiple times on this blog. It is trying to usher in a new era of astronomy, moving beyond detecting photons to detecting gravitational waves: ripples in spacetime which are generated as exceptionally massive objects merge, such as when two black holes collide. The effects of these waves on our local spacetime as they travel past Earth are minuscule, on the scale of 10^{-18}\,\text{m}, which is about one thousand times shorter than the ‘diameter’ of a proton, and (in SI units) is the same order of magnitude as \sqrt{\hbar/2}. Remarkably, LIGO has already exploited squeezed light to demonstrate sensitivities beyond the SQL. LIGO expects to start detecting gravitational waves on a frequent basis as the upgrades dubbed ‘Advanced LIGO’ are completed over the next few years.

Compressed sensing beats Nyquist-Shannon: let’s play a game. Imagine I’m sending you a radio signal. How often do you need to measure the signal in order to be able to reconstruct it perfectly? The Nyquist-Shannon sampling theorem is a path-breaking result which Claude Shannon proved in 1949: if you sample at a rate at least twice the highest frequency present in the signal, then you are guaranteed perfect recovery. This incredibly profound result laid the foundation for modern communications. It is also important to realize that your signal can be much more general than radio waves; it could be a stream of images, for example. The theorem gives a sufficient condition for reconstruction, but is it necessary? Not even close. And it took us over 50 years to understand this in generality.
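As a concrete, entirely toy illustration of the theorem, here is a minimal Python sketch that samples a bandlimited signal at the Nyquist rate and reconstructs it with Whittaker-Shannon (sinc) interpolation; the tone frequencies and window size are assumptions chosen for the example.

```python
import numpy as np

f_max = 5.0          # assumed bound on the highest frequency in the signal, Hz
fs = 2 * f_max       # Nyquist rate: sample twice as often as the highest frequency
T = 1 / fs

def signal(t):
    # a toy bandlimited signal: two tones, both below f_max
    return np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.cos(2 * np.pi * 4.5 * t)

n = np.arange(-200, 201)        # a long (but necessarily finite) window of sample indices
samples = signal(n * T)

def reconstruct(t):
    # Whittaker-Shannon interpolation: sum_n x[n] * sinc((t - n*T) / T)
    return np.sum(samples * np.sinc((t - n * T) / T))

# Agreement is near-exact; the tiny residual comes from truncating the interpolation sum.
for t in np.linspace(0.0, 1.0, 5):
    print(f"t = {t:.2f}:  true = {signal(t):+.3f}   reconstructed = {reconstruct(t):+.3f}")
```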

Compressed sensing was proposed between 2004 and 2006 by Emmanuel Candès, David Donoho, and Terry Tao, with important early contributions by Justin Romberg. I should note that Candès and Romberg were at Caltech during this period. The Nyquist-Shannon theorem told us that with a small amount of knowledge (a bound on the highest frequency) we could reconstruct a signal perfectly by sampling at a rate only twice the highest frequency, instead of needing to measure continuously. Compressed sensing says that with one extra assumption, namely that only sparsely few of your frequencies are actually used (say 10 out of 1000), you can recover your signal with high accuracy using dramatically fewer measurements. And it turns out that this assumption is valid for a huge range of applications: enabling real-time MRIs using conventional technology, or, more relevant to this blog, increasing our ability to distinguish quantum states via tomography.

Unlike the other topics in this blog post, I have never worked with compressed sensing, but my intuition goes like this: instead of measuring in the basis in which you are sparse (frequency, for example), measure in a different basis. With high probability, each of these measurements will pick up a little piece from each of the occupied modes. Then, to reconstruct your signal, you want to use the L0-“norm” to interpolate in such a way that you use the fewest frequency components possible. Minimizing the L0-“norm” is computationally intractable, so one of the major breakthroughs of compressed sensing was showing that, with high probability, minimizing the L1-norm recovers the same solution as the L0 problem, and that this can be done with a highly efficient linear program. However, I really shouldn’t be speculating, because I’ve never invested much time into mastering this new tool, and I’m friends with a couple of the quantum state tomography authors, so maybe they’ll chime in?
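For the curious, here is a minimal numerical sketch of that L1 trick (often called basis pursuit). All of the sizes are made up for illustration, and the measurement matrix is simply random Gaussian, which is incoherent with the sparse basis with high probability.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy compressed-sensing demo: a length-1000 signal that is 10-sparse,
# recovered from only 150 random measurements via L1 minimization.
N, k, m = 1000, 10, 150

x_true = np.zeros(N)
x_true[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, N)) / np.sqrt(m)   # random (hence incoherent) measurement matrix
y = A @ x_true                             # the m measurements we actually take

# Basis pursuit: minimize ||x||_1 subject to A x = y, written as a linear program
# over variables [x, t] with -t <= x <= t and objective sum(t).
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[ np.eye(N), -np.eye(N)],
                 [-np.eye(N), -np.eye(N)]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([A, np.zeros((m, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N, method="highs")

x_hat = res.x[:N]
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```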

Brahms is a cool dude. Brahms as a height map, where cliffs = Gibbs phenomena = oh no! The first three levels of Brahms as a Haar wavelet.

Wavelets as the mother of all bases: I previously wrote a post about the importance of choosing a convenient basis. Imagine you have an image with a bunch of sharp contrasts, such as the outline of a person, or a horizon, or a table, basically anything. How do you store it efficiently? Because of the Gibbs phenomenon, the Fourier basis is pretty awful for these applications. Here’s another motivating problem: imagine someone plays one note on an instrument. The sound is localized in both time and frequency, and the Fourier basis is also pretty awful at storing/detecting this. Wavelets to the rescue! The theory of wavelets uses some beautiful math to solve the longstanding problem of finding a basis which is localized in both position and momentum space (or very close to it). Wavelets have profound applications; some of my favorites include: modern image compression (JPEG 2000 onwards) is based on wavelets; Ingrid Daubechies and her colleagues used wavelets to detect forged paintings; wavelets were used to recover previously unrecoverable recordings of Brahms at the piano (I heard about this from Barry Simon, of Reed-Simon fame, who is currently teaching his last class ever); and even the FBI uses wavelets to compress images of fingerprints, obtaining a compression ratio of 20:1.
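To make the point about sharp contrasts concrete, here is a minimal hand-rolled Haar wavelet transform in Python (a toy sketch, not how JPEG 2000 actually does it): a step edge, which would cause Gibbs ringing in a truncated Fourier series, produces only a handful of nonzero Haar coefficients.

```python
import numpy as np

def haar_transform(x):
    """Full orthonormal Haar decomposition of a length-2^k signal (minimal sketch)."""
    x = np.asarray(x, dtype=float)
    coeffs = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2)    # smooth part, carried to the next level
        diff = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail part, nonzero only near sharp features
        coeffs.append(diff)
        x = avg
    coeffs.append(x)  # final overall average
    return coeffs

# A step "edge" of length 1024: terrible for a truncated Fourier series,
# but only about log2(1024) Haar coefficients are nonzero.
signal = np.concatenate([np.zeros(300), np.ones(724)])
coeffs = haar_transform(signal)
nonzero = sum(int(np.count_nonzero(np.abs(c) > 1e-12)) for c in coeffs)
print(f"{nonzero} nonzero Haar coefficients out of {len(signal)}")
```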

Postselection enables quantum cloning: the no-cloning theorem is well known in the field of quantum information. It says that you cannot find a machine (a unitary operation U) which takes an arbitrary input state |\psi\rangle and a known state |0\rangle, and maps |\psi\rangle \otimes |0\rangle to |\psi\rangle \otimes |\psi\rangle, thereby cloning |\psi\rangle. This is very easy to prove using the linearity of quantum mechanics. However, there are loopholes. One of the most trivial loopholes is realizing that one can take the state |\psi\rangle and perform something called unambiguous state discrimination, which either spits out exactly which state |\psi\rangle is, with some probability, or otherwise spits out “I don’t know which state.” You can postselect on the unambiguous state discrimination having succeeded, and then prepare as many copies of the identified state as you like. Peter Shor has a comment on Physics Stack Exchange describing this. Seth Lloyd and John Preskill outlined a less trivial version of this in their recent paper, which tries to circumvent firewalls by using postselected quantum teleportation.
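Here is a minimal numerical sketch of the standard (Ivanovic-Dieks-Peres) unambiguous state discrimination measurement for two non-orthogonal qubit states; the particular states are assumptions chosen for illustration. Conditioned on avoiding the inconclusive outcome, you know the state exactly and can prepare as many copies as you like.

```python
import numpy as np

# Two non-orthogonal real qubit states (illustrative choice of angle).
theta = 0.4
psi0 = np.array([np.cos(theta),  np.sin(theta)])
psi1 = np.array([np.cos(theta), -np.sin(theta)])
s = abs(psi0 @ psi1)                      # overlap |<psi0|psi1>|

def perp(v):
    """State orthogonal to v (real 2D case)."""
    return np.array([-v[1], v[0]])

# Ivanovic-Dieks-Peres POVM: E0 flags "definitely psi0", E1 flags "definitely psi1",
# and Eq is the inconclusive outcome that gets postselected away.
E0 = np.outer(perp(psi1), perp(psi1)) / (1 + s)
E1 = np.outer(perp(psi0), perp(psi0)) / (1 + s)
Eq = np.eye(2) - E0 - E1
assert np.all(np.linalg.eigvalsh(Eq) > -1e-12)   # Eq is a valid (positive semidefinite) POVM element

for name, psi, E_right, E_wrong in [("psi0", psi0, E0, E1), ("psi1", psi1, E1, E0)]:
    print(f"{name}: correct = {psi @ E_right @ psi:.3f}, "
          f"wrong = {psi @ E_wrong @ psi:.1e}, inconclusive = {psi @ Eq @ psi:.3f}")

print(f"optimal success probability 1 - |<psi0|psi1>| = {1 - s:.3f}")
```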

In this blog post, I’ve only described a tiny fraction of the quantum loopholes that have been discovered. If I had more space/time, the next two examples I would describe are beating classical correlations with quantum entanglement in order to win CHSH games, and weak measurements along with some of the peculiar things they lead to. Beyond that, I would probably refer you to Yakir Aharonov’s amazingly fun book about quantum paradoxes.

After reading this, I hope that the next time you encounter an inviolable law of nature, you’ll apply the hacker mentality and attempt to strip it down to its essence, isolate assumptions, and potentially find a loophole. But while you’re doing this, remember that you should never argue with your mother, or with mathematics!

22 thoughts on “Hacking nature: loopholes in the laws of physics”

  1. Hi Shaun,

    I think your explanation of compressed sensing is essentially correct. I would only add that the basis you pick should in particular be “incoherent” with respect to the sparse basis. A random basis will have this property with high probability, but you can also use structured bases with less randomness (though it is generally harder to get good guarantees on the quality of the solution). The scaling that you get in the number of measurements is proportional to the sparsity times the *log* of the maximum frequency, so it’s a pretty dramatic improvement.

  2. Hi Shaun!

    Please let me state that I too enjoyed this essay. Yet perhaps your essay’s selection of five burning arrows (in Dick Lipton and Ken Regan’s phrase) is too strongly rooted in the 20th century?

    Year  Loop-hole
    1946: wavelet basis expansion (Gabor)
    1963: squeezed states (Glauber and Sudarshan)
    1970: compressed sampling (petroleum industry)
    1993: Scharnhorst effect (Barton and Scharnhorst)
    1998: postselected quantum cloning (Duan and Guo)

    Note that the median age of these five arrows is 43 years (an eternity in math and physics).

    It’s plain that these results are far too old for students to regard them as seriously transgressive.

    As a candidate for a modern-era burning-arrow essay, please allow me to commend Jaeyoon Cho’s preprint “Entanglement area law in thermodynamically gapped spin systems” (arXiv:1404.7616, this week). The language of Cho’s preprint is a bit dry, and yet for reasons that are summarized on Gil Kalai’s weblog, the preprint provides ample math-and-physics material for a 21st century burning-arrow essay upon the theme “Hilbert Space Considered Harmful: Let’s Ban Feynman”.

    This essay could draw upon two celebrated burning-arrow essays: (1) Edsger Dijkstra’s “Go To Statement Considered Harmful” (1968) and (2) Glenn Gould’s “Let’s Ban Applause” (1962) (note: a Google search readily finds the text of the former and an amusing video reading of the latter).

    In common with these two (much-cited) transgressive 20th century essays, a comparably transgressive 21st century essay upon the theme “Hilbert Space Considered Harmful: Let’s Ban Feynman” would have the merit of offending a considerable number of senior researchers, an attribute that in itself may commend this theme to the consideration of graduate students.

  3. A hermetically isolated hard vacuum envelope contains two closely spaced but not touching, in-register and parallel, electrically conductive plates having micro-spiked inner surfaces. They are connected with a wire, perhaps containing a dissipative load (small motor). One plate has a large vacuum work function material inner surface (e.g., osmium at 5.93 eV). The other plate has a small vacuum work function material inner surface (e.g., n-doped diamond “carbon nitride” at 0.1 eV). Above 0 kelvin, spontaneous cold cathode emission runs the closed isolated system. Emitted electrons continuously fall down the 5.8 volt potential gradient. Evaporation from carbon nitride cools that plate. Accelerated collision onto osmium warms that plate. Round and round. The plates never come into thermal equilibrium when electrically shorted. The motor runs forever.

    Find the loophole that is not there. That the Second Law is a suggestion not a rule is not the boojum.

  4. Very interesting article, Shaun. Indeed, people are more intimidated by the no-go theorems than they should be. Each theorem applies to a limited situation, but sometimes we tend to generalize them beyond this. I wrote about this in a recent blog post (which also contains a hack to the 7 bridges problem).

    You said that the “first loophole used quantum effects to beat a relativistic bound”.

    Think of an alternative history, in which people figured out how to measure the speed of light in air, but not yet in vacuum, and special relativity was nevertheless discovered. They would call the speed of light in air c. Suppose they later manage to measure it in vacuum. Would it be correct to say that they beat the relativistic bound because they found that the refractive index of vacuum is slightly smaller than that of air? Similarly, in our version of history, people measured c in the fluctuating vacuum and obtained the known value. If we could remove the quantum fluctuations from the vacuum, then we would find the real value of the constant c from special relativity, and not “beat it” 🙂

    Now, I take the opportunity to suggest a couple more “hacks”, which contradict accepted limits in physics (self-promotion alert!).

    1. Hacking singularities in General Relativity.

    2. Hacking determinism to make it compatible with free will, in Quantum Mechanics, and hacking the wavefunction collapse to make it unitary 1, 2.

  5. As long as we are suggesting hacks… how about the no-go on continuous-variable quantum computing hacked using non-Gaussian ancillas…
    Hacking might just be reduced to a cool word for scientific ingenuity 😛

    • I agree, and it oftentimes is! This is from the Wikipedia entry on “Hacker (hobbyist)”:

      “a hacker is a person who enjoys exploring the limits of what is possible, in a spirit of playful cleverness.”

  6. Dear Shaun,

    I am a layman so please excuse me if I go completely off track in the following.

    In re c:

    You wrote: “It is filled with pairs of virtual particles which momentarily fleet into existence. Interactions with these virtual particles create a small amount of ‘resistance’ as photons zoom through the vacuum (photons get absorbed into virtual electron-positron pairs and then spit back out as photons ad infinitum.)”

    Presumably these interactions are not without loss (neutrinos appear?). Since we are unable to measure c from a distant source, could this account for the Hubble redshift?

    Conjecture: A refractive index > 1 results in a cumulative loss of velocity. Experiment: Would one thousand km of fiber optic cable (index 1.52) in the path of a lunar ranging laser show a measurable difference?

    Regards,
    Roy

  7. Pingback: hacking nature - First Light Machine

  8. What determines the velocity of light, and what enables photon-spacetime interaction, are the lengthwise capacitance of space \epsilon (a.k.a. the electric permittivity of vacuum) and the lengthwise inductance of space \mu (a.k.a. the magnetic permeability of vacuum).
    \epsilon and \mu of space are the most fundamental properties of space.

    Einstein’s relativity and Heisenberg’s principles are completely obsolete and misleading.
    They have emerged as part of the general attempts and preparations of German elites to become no. 1 in everything. The new total superpower. With Planck’s discovery and Einstein’s relativity work, they saw the opportunity to rewrite physics from scratch. That means: neglect all important discoveries, get “under” them, derive them.
    The most fundamental and most important discoveries until then were the discovery of infinitesimal calculus (Newton), Newton’s principles, Newton’s law of gravitation, \epsilon and \mu (Coulomb and Ampère), and the velocity of EM waves (Maxwell).
    Einstein’s principle of the universal constancy of the speed of light is wrong. Heisenberg’s upgrade of the uncertainty principle is total nonsense.
    http://www.science20.com/forums/theories_everything
    Read “The gem” series of texts.

    • In a warped universe, velocity (d/t) cannot be computed because the distance between two points is variable.

  9. Pingback: Loophole in laws of physics enables light speed to increase? | Uncommon Descent

  10. By “hacking” nature, are you making a working free-energy device? And I do mean, over-unity (efficiency > 100%). Otherwise all these loopholes could serve no benefit at all.

    Faraday, and yes, Faraday, was… a… Genius!

  11. Pingback: Quora
