About spiros

Spyridon Michalakis does research on quantum many-body physics and topological quantum order at Caltech's Institute for Quantum Information and Matter, where he is also the manager for outreach activities.

The theory of everything: Help wanted

When Scientific American writes that physicists are working on a theory of everything, does it sound ambitious enough to you? Do you lie awake at night thinking that a theory of everything should be able to explain, well, everything? What if that theory is founded on quantum mechanics and finds a way to explain gravitation through the microscopic laws of the quantum realm? Would that be a grand unified theory of everything?

The answer is no, for two different, but equally important, reasons. First, there is the inherent assumption that quantum systems change in time according to Schrödinger’s evolution: i \hbar \partial_t \psi(t) = H \psi(t). Why? Where does that equation come from? Is it a fundamental law of nature, or is it an emergent relationship between different states of the universe? What if the parameter t, which we call time, and the linear, self-adjoint operator H, which we call the Hamiltonian, are both emergent from a more fundamental, and highly typical, phenomenon: the large amount of entanglement generically found when one decomposes the state space of a single, static quantum wavefunction into two subsystems of different size: a clock and a space of configurations (on which our degrees of freedom live)? So many questions, so few answers.

The static multiverse

The perceptive reader may have noticed that I italicized the word ‘static’ above, when referring to the quantum wavefunction of the multiverse. The emphasis on static is on purpose. I want to make clear from the beginning that a theory of everything can only be based on axioms that are truly fundamental, in the sense that they cannot be derived from more general principles as special cases. How would you know that your fundamental principles are irreducible? You start with set theory and go from there. If that assumes too much already, then you work on your set theory axioms. On the other hand, if you can exhibit a more general principle from which your original concept derives, then you are on the right path towards something more fundamental.

In that sense, time and space, as we understand them, are not fundamental concepts. We can imagine an object that can only be in one state, like a switch that is stuck at the OFF position, never changing or evolving in any way, and we can certainly consider a complete graph of interactions between subsystems (the equivalent of a black hole in what we think of as space) with no local geometry in our space of configurations. So what would be more fundamental than time and space? Let’s start with time: the notion of an unordered set of numbers, such as \{4,2,5,1,3,6,8,7,12,9,11,10\}, is a generalization of a clock, since we are keeping only the labels, but not their ordering. If we can show that a particular ordering emerges from a more fundamental assumption about the very existence of a theory of everything, then we have an understanding of time as a set of ordered labels, where each label corresponds to a particular configuration in the mathematical space containing our degrees of freedom. In that sense, the existence of the labels in the first place corresponds to a fundamental notion of potential for change, which is a prerequisite for the concept of time, which itself corresponds to constrained (ordered in some way) change from one label to the next. Our task is first to figure out where the labels of the clock come from, then where the illusion of evolution comes from in a static universe (Heisenberg evolution), and finally, where the arrow of time comes from in a macroscopic world (the illusion of irreversible evolution).

The axioms we ultimately choose must satisfy the following conditions simultaneously: 1. the implications stemming from these assumptions are not contradicted by observations, 2. replacing any one of these assumptions by its negation would lead to observable contradictions, and 3. the assumptions contain enough power to specify non-trivial structures in our theory. In short, as Immanuel Kant put it in his accessible bedtime story The Critique of Pure Reason, we are looking for synthetic a priori knowledge that can explain space and time, which ironically were Kant’s answer to that same question.

The fundamental ingredients of the ultimate theory

Before someone decides to delve into the math behind the emergence of unitarity (Heisenberg evolution) and the nature of time, there is another reason why the grand unified theory of everything has to do more than just give a complete theory of how the most elementary subsystems in our universe interact and evolve. What is missing is the fact that quantity has a quality all its own. In other words, patterns emerge from seemingly complex data when we zoom out enough. This “zooming out” procedure manifests itself in two ways in physics: as coarse-graining of the data and as truncation and renormalization. These simple ideas allow us to reduce the computational complexity of evaluating the next state of a complex system: if most of the complexity of the system is hidden at a level you cannot even observe (think pre-retina-display era), then all you have to keep track of is information at the macroscopic, coarse-grained level. On top of that, you can use truncation and renormalization to zero in on the most likely / highest-weight configurations your coarse-grained data can be in – you can safely throw away a billion configurations if their combined weight is less than 0.1% of the total, because your super-compressed data will still give you the right answer with a fidelity of 99.9%. This is how you get to reduce a 9 GB raw video file down to a 300 MB YouTube video that streams over your WiFi connection without losing too much of the video quality.
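
To see truncation in action, here is a minimal Python sketch; the exponentially decaying spectrum of weights is an assumption of the toy model (the kind of spectrum one often meets in practice), not something derived here:

import numpy as np

# Toy spectrum of configuration weights, already sorted in decreasing order.
N = 1_000_000
weights = np.exp(-np.arange(N) / 50.0)
probs = weights / weights.sum()

# Keep the smallest set of highest-weight configurations carrying >= 99.9% of the total.
cumulative = np.cumsum(probs)
cutoff = int(np.searchsorted(cumulative, 0.999)) + 1

print(f"kept {cutoff} of {N} configurations")            # about 346 of 1,000,000
print(f"retained weight: {cumulative[cutoff - 1]:.4f}")  # ~0.9990

A few hundred configurations out of a million carry 99.9% of the weight here – that is the whole compression game.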

I will not focus on the second requirement for the “theory of everything”, the dynamics of apparent complexity. I think that this fundamental task is the purview of other sciences, such as chemistry, biology, anthropology and sociology, which look at the “laws” of physics from higher and higher vantage points (increasingly coarse-graining the topology of the space of possible configurations). Here, I would like to argue that the foundation on which a theory of everything rests, at the basement level if such a thing exists, consists of four ingredients: Math, Hilbert spaces with tensor decompositions into subsystems, stability and compressibility. Now, you know about math (though maybe not Zermelo-Fraenkel set theory), you may have heard of Hilbert spaces if you majored in math and/or physics, but you may not know what stability or compressibility mean in this context. So let me motivate the last two with a question and then explain in more detail below: What are the most fundamental assumptions that we sweep under the rug whenever we set out to create a theory of anything that can fit in a book – or ten thousand books – and still have predictive power? Stability and compressibility.

Math and Hilbert spaces are fundamental in the following sense: a theory needs a language in order to encode the data one can extract from that theory through synthesis and analysis. The data will be statistical in the most general case (to every configuration/state we attach a probability/weight for that state, conditional on an ambient configuration space, which will often be a subset of the total configuration space), since any observer creating a theory of the universe around them only has access to a subset of the total degrees of freedom. The remaining degrees of freedom, which quantum physicists group together as the Environment, affect our own observations through entanglement with our own degrees of freedom. To capture this richness of correlations between seemingly uncorrelated degrees of freedom, the mathematical space encoding our data requires more than just a metric (i.e., an ability to measure distances between objects in that space) – it requires an inner product: a way to measure angles between different objects, or equivalently, the ability to measure the amount of overlap between an input configuration and an output configuration, thus quantifying the notion of incremental change. Such mathematical spaces are precisely the Hilbert spaces mentioned above; they contain states (with wavefunctions being a special case of such states) and operators acting on the states (with measurements, rotations and general observables being special cases of such operators). But let’s get back to stability and compressibility, since these two concepts are not standard in physics.
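
As a toy illustration of why an inner product buys you more than a metric, here is a short numpy sketch (the random states are mere stand-ins for an input and an output configuration):

import numpy as np

rng = np.random.default_rng(7)

def random_state(dim):
    # A random normalized state vector in a dim-dimensional Hilbert space.
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

psi_in, psi_out = random_state(16), random_state(16)

# A metric only tells you how far apart the two states are...
print(f"distance: {np.linalg.norm(psi_in - psi_out):.3f}")

# ...while the inner product <in|out> also measures their overlap,
# quantifying incremental change (|<in|out>|^2 = 1 means no change at all).
overlap = np.vdot(psi_in, psi_out)
print(f"overlap |<in|out>|^2: {abs(overlap) ** 2:.3f}")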

Stability

Stability is the quality that says that if the theory makes a prediction about something observable, then we can test our theory by making observations on the state of the world and, more importantly, new observations do not contradict our theory. How can instability make a theory fall apart? One simple way is to make predictions that are untestable, because they are metaphysical in nature (think of religious tenets). Another way is to make predictions that work for one level of coarse-grained observations and fail for a lower level of finer coarse-graining (think of Newtonian mechanics). A more extreme case involves quantum mechanics: even assumed to be the true underlying theory of physics, it could still fail to produce a stable theory of how the world works from our point of view. For example, say that your measurement apparatus here on Earth is strongly entangled with the current state of a star that happens to go supernova 100 light-years from Earth during the time of your experiment. If there is no bound on the propagation speed of the information between these two subsystems, then your apparatus is engulfed in flames for no apparent reason and you get random data where you expected to get the same “reproducible” statistics as last week. With no bound on the speed with which information can travel between subsystems of the universe, our ability to explain and/or predict certain observations goes out the window, since our data on these subsystems will look like white noise, an illusion of randomness stemming from the influence of inaccessible degrees of freedom acting on our measurement device. But stability has another dimension: that of continuity. We take for granted our ability to extrapolate the curve that fits 1000 data points on a plot. If we don’t assume continuity (and maybe even a certain level of smoothness) of the data, then all bets are off until we make more measurements and gather additional data points. But even then, we can never gather an infinite (let alone uncountable) number of data points – we must extrapolate from what we have and assume that the full distribution of the data is close in norm to our current dataset (a norm is a measure of distance between states in the Hilbert space).

The emergence of the speed of light

The assumption of stability may seem trivial, but it holds within it an anthropic-style explanation for the bound on the speed of light. If there is no finite speed of propagation for the information between subsystems that are “far apart”, from our point of view, then we will most likely see randomness where there is order. A theory needs order. So, what does it mean to be “far apart” if we have made no assumption for the existence of an underlying geometry, or spacetime for that matter? There is a very important concept in mathematical physics that generalizes the concept of the speed of light for non-relativistic quantum systems whose subsystems live on a graph (i.e., where there may be no spatial locality or apparent geometry): the Lieb-Robinson velocity. Those of us working at the intersection of mathematical physics and quantum many-body physics have seen first-hand the powerful results one can get from the existence of such an effective and emergent finite speed of propagation of information between quantum subsystems that, in principle, can signal to each other instantaneously through the action of a non-local unitary operator (a rotation of the full system under Heisenberg evolution). It turns out that under certain natural assumptions on the graph of interactions between the different subsystems of a many-body quantum system, such a finite speed of light emerges naturally. The main requirement on the graph comes from the following intuitive picture: if each node in your graph is connected to only a few other nodes and the number of paths between any two nodes is bounded above in some nice way (say, polynomially in the distance between the nodes), then communication between two distant nodes will take time proportional to the distance between the nodes (in graph distance units: the smallest number of edges among all paths connecting the two nodes). Why? Because at each time step you can only communicate with your neighbors and in the next time step they will communicate with theirs and so on, until one (and then another, and another) of these communication cascades reaches the other node. Since you have a bound on how many of these cascades will eventually reach the target node, the intensity of the communication wave is bounded by the effective action of a single messenger traveling along a typical path with a bounded speed towards the destination. There should be generalizations to weighted graphs, but this area of mathematical physics is still very active, and new results on bounds on the Lieb-Robinson velocity attract attention very quickly.
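
The cascade intuition is easy to simulate. A minimal sketch, using nothing but breadth-first search on a made-up chain graph:

from collections import deque

def arrival_times(adjacency, source):
    # Earliest time step at which each node can hear from `source`,
    # assuming that at each step a node can only talk to its neighbors.
    times = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in times:
                times[neighbor] = times[node] + 1
                queue.append(neighbor)
    return times

# A chain of 6 nodes: information crawls one edge per time step,
# an effective "speed of light" for this graph.
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(arrival_times(chain, 0))  # {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5}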

Escaping black holes

If this idea holds any water, then black holes are indeed nearly complete graphs, where the notion of space and time breaks down, since there is no effective bound on the speed with which information propagates from one node to another. The only way to escape is to find yourself at the boundary of the complete graph, where the nodes of the black hole’s apparent horizon are connected to low-degree nodes outside. Once you get to a low-degree node, you need to keep moving towards other low-degree nodes in order to escape the “gravitational pull” of the black hole’s super-connectivity. In other words, gravitation in this picture is an entropic force: we gravitate towards massive objects for the same reason that we “gravitate” towards the direction of the arrow of time – we tend towards higher-entropy configurations. The probability of reaching the neighborhood of a set of highly connected nodes is much, much higher than that of hanging out for long near a set of low-degree nodes in the same connected component of the graph. If a graph has disconnected components, then there is no way to communicate between the corresponding spacetimes – their states are in a tensor product with each other. One has to carefully define entanglement between components of a graph before giving a unified picture of how spatial geometry arises from entanglement. Somebody get to it.
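
To get a feel for that entropic pull, here is a toy sketch (the graph below is invented for illustration): an unbiased random walker on a graph visits each node in proportion to its degree, so a walker keeps falling back into the highly connected core:

import random

random.seed(0)

# A 5-node complete core (the "black hole", nodes 0-4) with a 3-node tail (nodes 5-7).
graph = {i: [j for j in range(5) if j != i] for i in range(5)}
graph[4].append(5)
graph.update({5: [4, 6], 6: [5, 7], 7: [6]})

visits = {node: 0 for node in graph}
node = 7  # start at the far end of the tail
for _ in range(100_000):
    node = random.choice(graph[node])  # hop to a uniformly random neighbor
    visits[node] += 1

print(visits)  # core nodes rack up far more visits than the low-degree tail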

Erik Verlinde has introduced the idea of gravity as an entropic force, and Fotini Markopoulou et al. have introduced the notion of quantum graphity (gravity emerging from graph models). I think these approaches must be taken seriously, if only because they work with more fundamental principles than the ones found in Quantum Field Theory and General Relativity. After all, this type of blue-sky thinking has led to other beautiful connections, such as ER=EPR (the idea that whenever two systems are entangled, they are connected by a wormhole). Even if we were to disagree with these ideas for some technical reason, we must admit that they are at least trying to figure out the fundamental principles that guide the things we take for granted. Of course, one may disagree with certain attempts at identifying unifying principles simply because the attempts lack the technical gravitas that allows for testing and calculations. Which is why a technical blog post on the emergence of time from entanglement is in the works.

Compressibility

So, what about that last assumption we seem to take for granted? How can you have a theory you can fit in a book about a sequence of events, or snapshots of the state of the observable universe, if these snapshots look like the static noise on a TV screen with no transmission signal? Well, you can’t! The fundamental concept here is Kolmogorov complexity and its connection to randomness/predictability. A sequence of data bits like:

10011010101101001110100001011010011101010111010100011010110111011110

has higher complexity (and hence looks more random/less predictable) than the sequence:

10101010101010101010101010101010101010101010101010101010101010101010

because there is a small computer program that can output each successive bit of the latter sequence (even if it had a million bits), but (most likely) not of the former. In particular, to get the second sequence with one million bits, one can write the following short program (here in Python):

s = "10"
for n in range(499_999):  # append "10" another 499,999 times
    s += "10"             # 500,000 copies of "10" in total: one million bits
print(s)

As the number of bits grows, one may wonder if the number of iterations (given above by 499,999) can be further compressed to make the program even smaller. The answer is yes: the number 499,999 written in binary requires \lceil \log_2 499,999 \rceil = 19 bits, but that binary number is itself a string of 0s and 1s, so it has its own Kolmogorov complexity, which may be smaller still. So, compressibility has a strong element of recursion, something that in physics we associate with scale invariance and fractals.

You may be wondering whether there are truly complex sequences of bits, or if one can always find a really clever computer program to compress any N-bit string down to, say, N/100 bits. The answer is interesting: there is no computer program that can compute the Kolmogorov complexity of an arbitrary string (the argument has roots in Berry’s Paradox), but there are strings of arbitrarily large Kolmogorov complexity (that is, no matter what program we use and what language we write it in, the smallest program (in bits) that outputs the N-bit string will be at least N bits long). In other words, there really are streams of data (in the form of bits) that are completely incompressible. In fact, a typical string of 0s and 1s is almost completely incompressible!
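
You can get a feel for this with an off-the-shelf compressor, which gives a crude upper bound on Kolmogorov complexity (zlib here is just a stand-in for “a small program that outputs the string”):

import random
import zlib

random.seed(1)
periodic = "10" * 500_000                                       # one million bits
noisy = "".join(random.choice("01") for _ in range(1_000_000))  # a typical string

for name, s in [("periodic", periodic), ("random", noisy)]:
    compressed = zlib.compress(s.encode(), level=9)
    print(f"{name}: {len(s):,} characters -> {len(compressed):,} bytes")

The periodic string shrinks to around a kilobyte; the random one stops at roughly 125 KB – about one bit per character, its actual information content – and no clever setting will push it much further.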

Stability, compressibility and the arrow of time

So, what does compressibility have to do with the theory of everything? It has everything to do with it. Because, if we ever succeed in writing down such a theory in a physics textbook, we will have effectively produced a computer program that, given enough time, should be able to compute the next bit in the string that represents the data encoding the coarse-grained information we hope to extract from the state of the universe. In other words, the only reason the universe makes sense to us is because the data we gather about its state is highly compressible. This seems to imply that this universe is really, really special and completely atypical. Or is it the other way around? What if the laws of physics were non-existent? Would there be any consistent gravitational pull between matter to form galaxies and stars and planets? Would there be any predictability in the motion of the planets around suns? Forget about life, let alone intelligent life and the anthropic principle. Would the Earth, or Jupiter even know where to go next if it had no sense that it was part of a non-random plot in the movie that is spacetime? Would there be any notion of spacetime to begin with? Or an arrow of time? When you are given one thousand frames from one thousand different movies, there is no way to make a single coherent plot. Even the frames of a single movie would make little sense upon reshuffling.

What if the arrow of time emerged from the notions of stability and compressibility, through coarse-graining that acts as a compression algorithm for data that is inherently highly complex and, hence, highly typical? If two strings of data look equally complex upon coarse-graining, but one of them has a billion more ways of appearing from the underlying raw data, then which one will be more likely to appear in the theory-of-everything book of our coarse-grained universe? Note that we need both high compressibility after coarse-graining, in order to write down the theory, and large entropy before coarse-graining (a large number of raw strings that all map to one string after coarse-graining), in order to have an arrow of time. It seems that we need highly typical, highly complex strings that become easy to write down once we coarse-grain the data in some clever way. Doesn’t that seem like a contradiction? How can a bunch of incompressible data become easily compressible upon coarse-graining? Here is one way: take an N-bit string and define its 1-bit coarse-graining as the boolean AND of its digits. All but one of the 2^N strings will map to 0; only the all-1s string maps to 1. The two outcomes are equally compressible, but the probability of seeing the 1 after coarse-graining is 2^{-N}. With only 300 bits, finding the coarse-grained 1 is harder than looking for a specific atom in the observable universe. In other words, if the coarse-graining rule at time t is the one given above, then you can be pretty sure you will be seeing a 0 come up next in your data. Notice that before coarse-graining, all 2^N strings are equally likely, so there is no arrow of time, since there is no preferred string from a probabilistic point of view.
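
A quick simulation of that coarse-graining rule (N = 20 here, so the expected count of 1s is visible within a million trials):

import random

random.seed(0)
N, trials = 20, 1_000_000

# Coarse-grain each random N-bit string down to one bit: the AND of its digits.
ones = sum(all(random.getrandbits(1) for _ in range(N)) for _ in range(trials))

print(f"coarse-grained 1 appeared {ones} times in {trials:,} trials")
print(f"expected about {trials * 2 ** -N:.2f}")  # 2^-20 per trial, ~0.95 in total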

Conclusion, for now

When we think about the world around us, we go to our intuitions first as a starting point for any theory describing the multitude of possible experiences (observable states of the world). If we are to really get to the bottom of this process, it seems fruitful to ask “why do I assume this?” and “is that truly fundamental, or can I derive it from something else that I already assumed was an independent axiom?” One of the postulates of quantum mechanics is the axiom corresponding to the evolution of states under Schrödinger’s equation. We will attempt to derive that equation from the other postulates in an upcoming post. Until then, your help is wanted with the march towards more fundamental principles that explain our seemingly self-evident truths. Question everything, especially when you think you have really figured things out. Start with this post. After all, a theory of everything should be able to explain itself.

UP NEXT: Entanglement, Schmidt decomposition, concentration of measure bounds and the emergence of discrete time and unitary evolution.

John Preskill and the dawn of the entanglement frontier

Editor’s Note: John Preskill’s recent election to the National Academy of Sciences generated a lot of enthusiasm among his colleagues and students. In an earlier post today, the famed Stanford theoretical physicist Leonard Susskind paid tribute to John’s early contributions to physics, ranging from magnetic monopoles to the quantum mechanics of black holes. In this post, Daniel Gottesman, a faculty member at the Perimeter Institute, takes us back to the formative years of the Institute for Quantum Information at Caltech, the precursor to IQIM and a world-renowned incubator for quantum information and quantum computation research. Though John shies away from the spotlight, we at IQIM believe that the integrity of his character and his role as a mentor and catalyst for science are worthy of attention and set a good example for current and future generations of theoretical physicists.

Preskill’s legacy may well be the incredible number of preeminent research scientists in quantum physics he has mentored throughout his extraordinary career.

When someone wins a big award, it has become traditional on this blog for John Preskill to write something about them. The system breaks down, though, when John is the one winning the award. Therefore I’ve been brought in as a pinch hitter (or should it be pinch lionizer?).

The award in this case is that John has been elected to the National Academy of Sciences, along with Charlie Kane and a number of other people who don’t work on quantum information. Lenny Susskind has already written about John’s work on other topics; I will focus on quantum information.

On the research side of quantum information, John is probably best known for his work on fault-tolerant quantum computation, particularly topological fault tolerance. John jumped into the field of quantum computation in 1994 in the wake of Shor’s algorithm, and brought me and some of his other grad students with him. It was obvious from the start that error correction was an important theoretical challenge (emphasized, for instance, by Unruh), so that was one of the things we looked at. We couldn’t figure out how to do it, but some other people did. John and I embarked on a long drawn-out project to get good bounds on the threshold error rate. If you can build a quantum computer with an error rate below the threshold value, you can do arbitrarily large quantum computations. If not, then errors will eventually overwhelm you. Early versions of my project with John suggested that the threshold should be about 10^{-4}, and the number began floating around (somewhat embarrassingly) as the definitive word on the threshold value. Our attempts to bound the higher-order terms in the computation became rather grotesque, and the project proceeded very slowly until a new approach and the recruitment of Panos Aliferis finally let us finish a paper with a rigorous proof of a slightly lower threshold value.

Meanwhile, John had also been working on topological quantum computation. John has already written about his excitement when Kitaev visited Caltech and talked about the toric code. The two of them, plus Eric Dennis and Andrew Landahl, studied the application of this code for fault tolerance. If you look at the citations of this paper over time, it looks rather … exponential. For a while, topological things were too exotic for most quantum computer people, but over time, the virtues of surface codes have become obvious (apparently high threshold, convenient for two-dimensional architectures). It’s become one of the hot topics in recent years and there are no signs of flagging interest in the community.

John has also made some important contributions to security proofs for quantum key distribution, known to the cognoscenti just by its initials. QKD allows two people (almost invariably named Alice and Bob) to establish a secret key by sending qubits over an insecure channel. If the eavesdropper Eve tries to live up to her name, her measurements of the qubits being transmitted will cause errors revealing her presence. If Alice and Bob don’t detect the presence of Eve, they conclude that she is not listening in (or at any rate hasn’t learned much about the secret key) and therefore they can be confident of security when they later use the secret key to encrypt a secret message. With Peter Shor, John gave a security proof of the best-known QKD protocol, known as the “Shor-Preskill” proof. Sometimes we scientists lack originality in naming. It was not the first proof of security, but earlier ones were rather complicated. The Shor-Preskill proof was conceptually much clearer and made a beautiful connection between the properties of quantum error-correcting codes and QKD. The techniques introduced in their paper got adopted into much later work on quantum cryptography.

Collaborating with John is always an interesting experience. Sometimes we’ll discuss some idea or some topic and it will be clear that John does not understand the idea clearly or knows little about the topic. Then, a few days later, we discuss the same subject again and John is an expert, or at least he knows a lot more than me. I guess this ability to master topics quickly is why he was always able to answer Steve Flammia’s random questions after lunch. And then when it comes time to write the paper … John will do it. It’s not just that he will volunteer to write the first draft — he keeps control of the whole paper and generally won’t let you edit the source, although of course he will incorporate your comments. I think this habit started because of incompatibilities between the TeX editor he was using and any other program, but he maintains it (I believe) to make sure that the paper meets his high standards of presentation quality.

This also explains why John has been so successful as an expositor. His lecture notes for the quantum computation class at Caltech are well-known. Despite being incomplete and not available on Amazon, they are probably almost as widely read as the standard textbook by Nielsen and Chuang.

Before IQIM, there was IQI, and before that was QUIC.

He apparently is also good at writing grants. Under his leadership and Jeff Kimble’s, Caltech has become one of the top places for quantum computation. In my last year of graduate school, John and Jeff, along with Steve Koonin, secured the QUIC grant, and all of a sudden Caltech had money for quantum computation. I got a research assistantship and could write my thesis without having to worry about TAing. Postdocs started to come — first Chris Fuchs, then a long stream of illustrious others. The QUIC grant grew into IQI, and that eventually sprouted an M and drew in even more people. When I was a student, John’s group was located in Lauritsen with the particle theory group. We had maybe three grad student offices (and not all the students were working on quantum information), plus John’s office. As the Caltech quantum effort grew, IQI acquired territory in another building, then another, and then moved into a good chunk of the new Annenberg building. Without John’s efforts, the quantum computing program at Caltech would certainly be much smaller and maybe completely lacking a theory side. It’s also unlikely this blog would exist.

The National Academy has now elected John a member, probably more for his research than his twitter account (@preskill), though I suppose you never know. Anyway, congratulations, John!

-D. Gottesman

Of magnetic monopoles and fast-scrambling black holes

Editor’s Note: On April 29th, 2014, the National Academy of Sciences announced the new electees to the prestigious organization. This was an especially happy occasion for everyone here at IQIM, since the new members included our very own John Preskill, Richard P. Feynman Professor of Theoretical Physics and regular blogger on this site. A request was sent to Leonard Susskind, a close friend and collaborator of John’s, to take a trip down memory lane and give the rest of us a glimpse of some of John’s early contributions to Physics. John, congratulations from all of us here at IQIM.

John Preskill was elected to the National Academy of Sciences, an event long overdue. Perhaps it took longer than it should have because there is no way to pigeon-hole him; he is a theoretical physicist, and that’s all there is to it.

John has long been one of my heroes in theoretical physics. There is something very special about his work. It has exceptional clarity, it has vision, it has integrity—you can count on it. And sometimes it has another property: it can surprise. The first time I heard his name come up, sometime around 1979, I was not only surprised; I was dismayed. A student whose name I had never heard of had uncovered a serious clash between two things, both of which I deeply wanted to believe in. One was the Big-Bang theory and the other was the discovery of grand unified particle theories. Unification led to the extraordinary prediction that Dirac’s magnetic monopoles must exist, at least in principle. The Big-Bang theory said they must exist in fact. The extreme conditions at the beginning of the universe were exactly what was needed to create loads of monopoles; so many that they would flood the universe with too much mass. John, the unknown graduate student, did a masterful analysis. It left no doubt that something had to give. Cosmology gave. About a year later, inflationary cosmology was discovered by Guth, who was in part motivated by Preskill’s monopole puzzle.

John’s subsequent career as a particle physicist was marked by a number of important insights which often had that surprising quality. The cosmology of the invisible axion was one. Others had to do with very subtle and counterintuitive features of quantum field theory, like the existence of “Alice strings”. In the very distant past, Roger Penrose and I had a peculiar conversation about possible generalizations of the Aharonov-Bohm effect. We speculated on all sorts of things that might happen when something is transported around a string. I think it was Roger who got excited about the possibilities that might result if a topological defect could change gender. Alice strings were not quite that exotic – only the electric charge flips – but it was nevertheless very surprising.

John, of course, had a long-standing interest in the quantum mechanics of black holes. I will quote a passage from a visionary 1992 review paper, “Do Black Holes Destroy Information?”:

“I conclude that the information loss paradox may well presage a revolution in fundamental physics.”

At that time no one knew the answer to the paradox, although a few of us, including John, thought the answer was that information could not be lost. But almost no one saw the future as clearly as John did. Our paths crossed in 1993 in a very exciting discussion about black holes and information. We were both thinking about the same thing, now called black hole complementarity. We were concerned about quantum cloning if information is carried by Hawking radiation. We thought we knew the answer: it takes too long to retrieve the information to then be able to jump into the black hole and discover the clone. This is probably true, but at that time we had no idea how close a call this might be.

It took until 2007 to properly formulate the problem. Patrick Hayden and John Preskill utterly surprised me, and probably everyone else who had been thinking about black holes, with their now-famous paper “Black Holes as Mirrors.” In a sense, this paper started a revolution in applying the powerful methods of quantum information theory to black holes.

We live in the age of entanglement. From quantum computing to condensed matter theory, to quantum gravity, entanglement is the new watchword. Preskill was in the vanguard of this revolution, but he was also the teacher who made the new concepts available to physicists like myself. We can now speak about entanglement, error correction, fault tolerance, tensor networks and more. The Preskill lectures were the indispensable source of knowledge and insight for us.

Congratulations John. And congratulations NAS.

-L. S.

Talking quantum mechanics with second graders

“What’s the hardest problem you’ve ever solved?”

Kids focus right in. Driven by a ruthless curiosity, they ask questions from which adults often shy away. Which is great, if you think you know the answer to everything a 7-year-old can possibly ask you…

Two Wednesdays ago, I was invited to participate in three Q&A sessions that quickly turned into Reddit-style AMA (ask-me-anything) sessions over Skype with four 5th grade classes and one 2nd grade class at Medina Elementary in Medina, Washington. When asked by the organizers what I would like the sessions to focus on, I initially thought of introducing students to the mod I helped design for Minecraft, called qCraft, which brings concepts like quantum entanglement and quantum superposition into the world of Minecraft. But then I changed my mind. I told the organizers that I would talk about anything the kids wanted to know more about. It dawned on me that maybe not all 5th graders are as excited about quantum physics as I am. Yet.

The students took the bait. They peppered me with questions for over two hours – everything from “What is a quantum physicist and how do you become one?” to “What is it like to work with a fashion designer (about my collaboration with Project Runway’s Alicia Hardesty on Project X Squared)?” and, of course, “Why did you steal the cannon?” (Learn more about the infamous Cannon Heist – yes kids, there is an ongoing war between the two schools and Caltech took the last (hot) shot just days ago.)

Caltech students visited MIT during pre-frosh weekend, bearing some clever gifts.

Then they dug a little deeper: “If we have a quantum computer that knows the answer to everything, why do we need to go to school?” This question was a little tricky, so I framed the answer like this: I compared the computer to a sidekick, and the kids – the future scientists, artists and engineers – to superheroes. Sidekicks always look up to the superheroes for guidance and leadership. And then I got this question from a young girl: “If we are superheroes, what should we do with all this power?” I thought about it for a second and, though my initial inclination was to go with “You should make Angry Birds 3D!”, I went with this instead: “People often say, ‘Study hard so that one day you can cure cancer, figure out the theory of everything and save the world!’ But I would rather see you all do things to understand the world. Sometimes you think you are saving the world when it does not need saving – it is just misunderstood. Find ways to understand one another and look for the value in others. Because there is always value in others, often hiding from us behind powerful emotions.” The kids listened in silence and, in that moment, I felt profoundly connected with them and their teachers.

I wasn’t expecting any more “deep” questions, until another young girl raised her hand and asked: “Can I be a quantum physicist, or is it only for the boys?” The ferocity of my answer caught me by surprise: “Of course you can! You can do anything you set your mind to, and anyone who tells you otherwise, be it your teachers, your friends or even your parents, is just wrong! In fact, you have the potential to leave all the boys in the class behind!” The applause and laughter from all the girls sounded even louder amid the thunderous silence from the boys. Which is when I realized my mistake and added: “You boys can be superheroes too! Just make sure not to underestimate the girls. For your own sake.”

Why did I feel so strongly about this issue of women in science? Caltech has a notoriously bad reputation when it comes to the representation of women among our faculty and postdocs (graduate students too?) in areas such as Physics and Mathematics. IQIM has over a dozen male faculty members in its roster and only one woman: Prof. Nai-Chang Yeh. Anyone who meets Prof. Yeh quickly realizes that she is an intellectual powerhouse with boundless energy split among her research, her many students and requests for talks, conference organization and mentoring. Which is why, invariably, every one of the faculty members at IQIM feels really strongly about finding a balance and creating a more inclusive environment for women in science. This is a complex issue that requires a lot of introspection and creative ideas from all sides over the long term, but in the meantime, I just really wanted to tell the girls that I was counting on them to help with understanding our world, as much as I was counting on the boys. Quantum mechanics? They got it. Abstract math? No problem.*

It was of course inevitable that they would want to know why we created the Minecraft mod, a collaborative work between Google, MinecraftEDU and IQIM – after all, when I asked them if they had played Minecraft before, all hands shot up. Both IQIM and Google think it is important to educate younger generations about quantum computers and the complex ideas behind quantum physics; and more importantly, to meet kids where they play, in this case, inside the Minecraft game. I explained to the kids that the game was a place where they could experiment with concepts from quantum mechanics and that we were developing other resources to make sure they had a place to go to if they wanted to know more (see our animations with Jorge Cham at http://phdcomics.com/quantum).

As for the hardest problem I have ever solved? I described it in my first blog post here, An Intellectual Tornado. The kids sat listening in some sort of trance as I described the nearly perilous journey through the lands of “agony” and “self-doubt” and into the valley of “grace”, the place one reaches when they learn to walk next to their worst fears, as understanding replaces fear and respect for a far superior opponent teaches true humility and instills in you a sense of adventure. By that time, I thought I was in the clear – as far as fielding difficult questions from 10-year-olds goes – but one little devil decided to ask me this simple question: “Can you explain in 2 minutes what quantum physics is?” Sure! You see kids, emptiness, what we call the quantum vacuum, underlies the emergence of spacetime through the build-up of correlations between disjoint degrees of freedom, which we like to call entangled subsystems. The uniqueness of the Schmidt decomposition over generic quantum states, coupled with concentration of measure estimates over unequal bipartite decompositions, gives rise to Schrödinger’s evolution and the concept of unitarity – which itself only emerges in the thermodynamic limit. In the remaining minute, let’s discuss the different interpretations of the following postulates of quantum mechanics: Let’s start with measurements…

Reaching out to elementary school kids is just one way we can make science come alive, and many of us here at IQIM look forward to sharing with kids of any age our love for adventuring far and wide to understand the world around us. In case you are an expert in anything, or just passionate about something, I highly recommend engaging the next generation through visits to classrooms and Skype sessions across state lines. Because, sometimes, you get something like this from their teacher:

Hello Dr. Michalakis,

My class was lucky enough to be able to participate in one of the Skype chats you did with Medina Elementary this morning. My students returned to the classroom with so many questions, wonderings, concerns, and ideas that we could spend the remainder of the year discussing them all.

Your ability to thoughtfully answer EVERY single question posed to you was amazing. I was so impressed and inspired by your responses that I am tempted to actually spend the remainder of the year discussing quantum mechanics. :)

I particularly appreciated your point that our efforts should focus on trying to “understand the world” rather than “save” the world. I work each day to try and inspire curiosity and wonder in my students. You accomplished more towards my goal in about 40 minutes than I probably have all year. For that I am grateful.

All the best,
A.T.

* Several of my female classmates at MIT (where I did my undergraduate degree in Math with Computer Science) had a clarity of thought and a sense of perseverance that SEAL Team Six would be envious of. So I would go to them for help with my hardest homework.

Can a game teach kids quantum mechanics?

Five months ago, I received an email and then a phone call from Google’s Creative Lab Executive Producer, Lorraine Yurshansky. Lo, as she prefers to be called, is not your average thirty-year-old. She has produced award-winning short films like Peter at the End (starring Napoleon Dynamite, aka Jon Heder), launched the wildly popular Maker Camp on Google+ and had time to run a couple of New York marathons as a warm-up to all of that. So why was she interested in talking to a quantum physicist?

You may remember reading about Google’s recent collaboration with NASA and D-Wave, on using NASA’s supercomputing facilities along with a D-Wave Two machine to solve optimization problems relevant to both Google (Glass, for example) and NASA (analysis of massive data sets). It was natural for Google, then, to want to promote this new collaboration through a short video about quantum computers, which appeared last week on Google’s YouTube channel.

This is a very exciting collaboration in my view. Google has opened its doors to quantum computation and this has some powerful consequences. And it is all because of D-Wave. But, let me put my perspective in context, before Scott Aaronson unleashes the hounds of BQP on me.

Two years ago, together with Science magazine’s 2010 Breakthrough of the Year winner, Aaron O’Connell, we decided to ask Google Ventures for $10,000,000 to start a quantum computing company based on technology Aaron had developed as a graduate student in John Martinis’s group at UCSB. The idea we pitched was that a hand-picked team of top experimentalists and theorists from around the world would prototype new designs to achieve longer coherence times and greater connectivity between superconducting qubits, faster than in any academic environment. Google didn’t bite. At the time, I thought the reason behind the rejection was this: Google wants a real quantum computer now, not just a 10-year plan of how to make one based on superconducting X-mon qubits that may or may not work.

I was partially wrong. The reason for the rejection was not a lack of proof that our efforts would pay off eventually – it was the lack of any prototype on which Google could run algorithms relevant to their work. In other words, Aaron and I didn’t have something that Google could use right away. But D-Wave did, and Google had already been dating D-Wave One for at least three years before marrying D-Wave Two this May. Quantum computation has much to offer Google, so I am excited to see this relationship blossom (whether it be D-Wave or Pivit Inc that builds the first quantum computer). Which brings me back to that phone call five months ago…

Lorraine: Hi Spiro. Have you heard of Google’s collaboration with NASA on the new Quantum Artificial Intelligence Lab?

Me: Yes. It is all over the news!

Lo: Indeed. Can you help us design a mod for Minecraft to get kids excited about quantum mechanics and quantum computers?

Me: Minecraft? What is Minecraft? Is it like Warcraft or Starcraft?

Lo: (Omg, he doesn’t know Minecraft!?! How old is this guy?) Ahh, yeah, it is a game where you build cool structures by mining different kinds of blocks in this sandbox world. It is popular with kids.

Me: Oh, okay. Let me check out the game and see what I can come up with.

After looking at the game I realized three things:
1. The game has a fan base in the tens of millions.
2. There is an annual convention (Minecon) devoted to this game alone.
3. I had no idea how to incorporate quantum mechanics within Minecraft.

Lo and I decided that it would be better to bring in some outside help if we were to design a new mod for Minecraft. Enter E-Line Media and TeacherGaming, two companies dedicated to making games that balance the educational aspect with gameplay (which influences how addictive the game is). Over the next three months, producers, writers, game designers and coder-extraordinaire Dan200 came together to create a mod for Minecraft. But we quickly came to a crossroads: make a quantum simulator based on Dan200’s popular ComputerCraft mod, or focus on gameplay and a high-level representation of quantum mechanics within Minecraft?

The answer was not so easy at first, especially because I kept pushing for more authenticity (I asked Dan200 to create Hadamard and CNOT gates, but thankfully he and Scot Bayless – a legend in the gaming world – ignored me.) In the end, I would like to think that we went with the best of both worlds, given the time constraints we were operating under (a group of us are attending Minecon 2013 to showcase the new mod in two weeks) and the young audience we are trying to engage. For example, we decided that to prepare a pair of entangled qubits within Minecraft, you would use the Essence of Entanglement, an object crafted using the Essence of Superposition (Hadamard gate, yay!) and Quantum Dust placed in a CNOT configuration on a crafting table (don’t ask for more details). And when it came to Quantum Teleportation within the game, two entangled quantum computers would need to be placed at different parts of the world, each one with four surrounding pylons representing an encoding/decoding mechanism. Of course, on top of each pylon made of obsidian (and its far-away partner), you would need to place a crystal, as the required classical side-channel. As an authorized quantum mechanic, I allowed myself to bend quantum mechanics, but I could not bring myself to mess with Special Relativity.

The mod launched two days ago, so I am not sure yet how successful it will be. All I know is that the team behind its development is full of superstars, dedicated to making sure that John Preskill wins this bet (50 years from now).

The plan for the future is to upload a variety of posts and educational resources on qcraft.org discussing the science behind the high-level concepts presented within the game, at a level that middle-schoolers can appreciate. So, if you play Minecraft (or you have kids over the age of 10), download qCraft now and start building. It’s a free addition to Minecraft.

The million dollar conjecture you’ve never heard of…

Curating a blog like this one and writing about imaginary stuff like Fermat’s Lost Theorem means that you get the occasional comment of the form: I have a really short proof of a famous open problem in math. Can you check it for me? Usually, the answer is no. But, about a week ago, a reader of the blog who had caught an omission in a proof contained within one of my previous posts asked me to do just that: check out a short proof of Beal’s Conjecture. Many of you probably haven’t heard of billionaire Mr. Beal and his $1,000,000 conjecture, so here it is:

Let a, b, c and x, y, z be positive integers with x, y, z > 2, satisfying a^x+b^y=c^z. Then gcd(a,b,c) > 1; that is, the numbers a, b, c have a common factor.
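
Before attempting a proof, it is a fun sanity check to hunt for small solutions and watch the common factor show up every time. Here is a minimal brute-force Python sketch (the search bounds are arbitrary and just keep the run fast):

from math import gcd

LIMIT, MAX_EXP = 30, 7
max_total = 2 * LIMIT ** MAX_EXP

# Tabulate perfect powers c^z with z > 2, up to the largest possible a^x + b^y.
powers = {}
for z in range(3, MAX_EXP + 1):
    c = 2
    while c ** z <= max_total:
        powers.setdefault(c ** z, []).append((c, z))
        c += 1

for a in range(2, LIMIT + 1):
    for b in range(2, LIMIT + 1):
        for x in range(3, MAX_EXP + 1):
            for y in range(3, MAX_EXP + 1):
                for c, z in powers.get(a ** x + b ** y, []):
                    print(f"{a}^{x} + {b}^{y} = {c}^{z}, gcd = {gcd(a, gcd(b, c))}")

Every solution it prints (3^3 + 6^3 = 3^5, for instance) comes with a gcd greater than 1, just as the conjecture demands.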

After reading the “short proof” of the conjecture, I realized that this was a pretty cool conjecture! Also, the short proof was wrong, though the ideas within were non-trivial. But, partial progress had been made by others, so I thought I would take a crack at it on the 10 hour flight from Athens to Philadelphia. In particular, I convinced myself that if I could prove the conjecture for all even exponents x,y,z, then I could claim half the prize. Well, I didn’t quite get there, but I made some progress using knowledge found in these two blog posts: Redemption: Part I and Fermat’s Lost Theorem. In particular, one can show that the conjecture holds true for x=y=2n and z = 2k, for n \ge 3, k \ge 1. Moreover, the general case of even exponents can be reduced to the case of x=y=p \ge 3 and y=z=q \ge 3, for p,q primes. Which makes one wonder if the general case has a similar reduction, where two of the three exponents can be assumed equal.

The proof is pretty trivial, since most of the heavy lifting is done by Fermat’s Last Theorem (which itself has a rather elegant, short proof I wanted to post in the margins – alas, WordPress has a no-writing-on-margins policy). Moreover, it turns out that the general case of even exponents follows from a combination of results obtained by others over the past two decades (see the Partial Results section of the Wikipedia article on the conjecture linked above – in particular, the (n,n,2) case). So why am I even bothering to write about my efforts? Because it’s math! And math equals magic. Also, in case this proof is not known and in the off chance that some of the ideas can be used in the general case. Okay, here we go…

Proof. The idea is to assume that the numbers a,b,c have no common factor and then reach a contradiction. We begin by noting that a^{2m}+b^{2n}=c^{2k} is equivalent to (a^m)^2+(b^n)^2=(c^k)^2. In other words, the triplet (a^m,b^n,c^k) is a Pythagorean triple (sides of a right triangle), so we must have a^m=2rs, b^n=r^2-s^2, c^k =r^2+s^2, for some positive integers r,s with no common factors (otherwise, our assumption that a,b,c have no common factor would be violated). There are two cases to consider now:

Case I: r is even. This implies that 2r=a_0^m and s=a_1^m, where a=a_0\cdot a_1 and a_0,a_1 have no factors in common. Moreover, since b^n=r^2-s^2=(r+s)(r-s) and r,s have no common factors, then r+s,r-s have no common factors either (why?) Hence, r+s = b_0^n, r-s=b_1^n, where b=b_0\cdot b_1 and b_0,b_1 have no factors in common. But, a_0^m = 2r = (r+s)+(r-s)=b_0^n+b_1^n, implying that a_0^m=b_0^n+b_1^n, where b_0,b_1,a_0 have no common factors.

Case II: s is even. This implies that 2s=a_1^m and r=a_0^m, where a=a_0\cdot a_1 and a_0,a_1 have no factors in common. As in Case I, r+s = b_0^n, r-s=b_1^n, where b=b_0\cdot b_1 and b_0,b_1 have no factors in common. But, a_1^m = 2s = (r+s)-(r-s)=b_0^n-b_1^n, implying that a_1^m+b_1^n=b_0^n, where b_0,b_1,a_1 have no common factors.

We have shown, then, that if Beal’s conjecture holds for the exponents (x,y,z)=(n,n,m) and (x,y,z)=(m,n,n), then it holds for (x,y,z)=(2m,2n,2k), for arbitrary k \ge 1. As it turns out, when m=n, Beal’s conjecture becomes Fermat’s Last Theorem, implying that the conjecture holds for all exponents (x,y,z)=(2n,2n,2k), with n\ge 3 and k\ge 1.

Open Problem: Are there any solutions to a^p+b^p= c\cdot (a+b)^q, for a,b,c positive integers and primes p,q\ge 3?

PS: If you find a mistake in the proof above, please let everyone know in the comments. I would really appreciate it!

The Feynman flower

Since I met Sean Carroll about a year ago, my life has changed for the better. In particular, I started following his wife @JenLucPiquant on Twitter and began reading her Scientific American blog, Cocktail Party Physics, a great place to get the latest news on physics – with a twist. Today, through one of Mrs. Ouellette’s RTs (re-tweets), I came across a fascinating article on Feynman, titled ‘Richard Feynman: Life, the universe and everything’.

The article goes into some detail about the unusual life Feynman led, describing some of its high points without shying away from the idiosyncratic aspects of the master physicist’s life. Described in the article is this colorful animation that a graphic designer made of a brief excerpt from the now famous BBC interview of Feynman (The pleasure of finding things out):

As the Telegraph article describes, the little animation has gone viral, spreading the message that science actually adds to our ability to appreciate beauty in the world: contrary to what popular belief would have us think, science is not dry, it is not cold, not clinical, or simply analytical, devoid of emotional impact on those who have devoted their lives to pursuing it. Science is the one honest, brave (and obviously awesome) answer humanity has come up with to the burning question:

What just happened here?

The truth is that science always begins with the following answer: I don’t know. That is the honest part. Then comes the desire to know, otherwise known as curiosity. Whereas many of us will say of certain things: We can’t possibly know this, the scientist will say: I will try anyways. That is the brave part. Because some questions lead to answers that we don’t really want to accept – we are afraid that the answer may break something inside of us, something we invested time to construct, something we cherish, something important like the feeling of accomplishment for effortlessly appreciating the sublime beauty of a flower.

And then, of course, there is that thing about science being hard to do. It is. You can try to bs your way through science, and some do try, but then you might as well be a car salesman and make some good money in the process (actually, I met an honest car salesman just three weeks ago and I am still not sure what to make of it – he works for a Mini Cooper dealership in L.A. and that is all I am allowed to say in order to protect him and his family.)

But this is only half of the story…

The question that (too many) scientists are afraid to ask themselves is the following: What if my science doesn’t speak for itself? What if my science seems super-boring to others, or simply pointless (like studying fruit flies)? What if I actually need to go out into the unknown (the world that lies outside the Ivory Tower) in order to engage the public, to get the world excited about my discoveries?

Aristotle, the granddaddy of analytical thinking, wrote the most influential treatise on Rhetoric (the art of persuasion) for good reason. So here is my thesis: Every other Sunday morning, every church hosts a scientist to give a public lecture (no boring, arcane jargon allowed) on subjects ranging from the Big Bang Theory to the Theory of Evolution and stem cell research. No need to try to reconcile science with religion during the lecture, or be combative – just a good story based on scientific findings – let the audience decide if they want more. All I am saying is: Give it a try. Like pastors, there are scientists out there who love to tell a good story and provide some food for thought (and maybe raise some money from the parish for their good work). We can even use animations like the one above, or this one from PhD Comics.

What do you think?

An unlikely love affair

Most readers of this blog already know that when it comes to physics, I am faking it. I am a mathematician, after all, and even that is a bit of a stretch. So, what force of nature could convince me to take graduate-level Quantum Mechanics during my years of pursuing a doctorate in Applied Mathematics?

After graduating from MIT with a degree in Mathematics with Computer Science (18C), I found myself in the following predicament: I was about to start doing research on Quantum Computation as a PhD candidate at UC Davis’ Department of Mathematics, but I had taken exactly two physics courses since 9th grade (instead of Chemistry, Biology and Physics, I had no choice but to take Anthropology, Sociology and Philosophy throughout high school – which I blame for starting a fashion line…) The courses are well-known to MIT undergraduates – 8.01 (Classical Mechanics) and 8.02 (Electromagnetism) – since they are part of MIT’s General Institute Requirements (GIRs). Modesty and common sense should force me to say that I found the two MIT courses hard, but that would not be true. I remember getting back my 8.01 midterm exam on rocket dynamics with a score of 101%. I didn’t even know there was a bonus question, but I remember the look on my friend’s face when he saw my score, just as Prof. Walter Lewin announced that the average was 45%. It doesn’t take much more than that to make you cocky. So when my PhD adviser suggested years later that I take graduate Quantum Mechanics with no background in anything quantum, I accepted without worrying about the details too much – until the first day of class…

Prof. Ching-Yao Fong (Distinguished Professor of Physics at UC Davis) walked in with a stack of tests that were supposed to assess how much we had learned in our undergraduate quantum mechanics courses. I wrote my name and enjoyed 40 minutes of terror as it dawned on me that I would have to take years of physics to catch up with the requirements for any advanced quantum mechanics course. But out-of-state (worse, out-of-country) PhD students don’t have the luxury of time, given that we cost three times as much as in-state students to support (every UC is a public university). So I stayed in class and slowly learned to avoid the horrified looks of the others (all Physics PhD candidates) whenever I asked an interesting question (thanks, Dr. Fong) or made a nonsensical remark during class. And then the miracle happened again. I aced the class. I have already discussed my superpower of super-stubbornness, but this was different. I actually had to learn stuff in order to do well in advanced quantum mechanics. I learned about particles in boxes, wavefunctions, equations governing the evolution of everything in the universe – the usual stuff. It was exhilarating, a whole new world, a dazzling place I never knew! In all my years at MIT, I never took notes in any of my classes, and I continued the same “brilliant” tactic throughout my PhD, except for one class: Quantum Mechanics. I even used highlighters for the first time in my life!

It was a bona fide love affair.

Thinking about it years later, comfortable in my polyamorous relationship with Paul Dirac (British), Werner Heisenberg (German), Erwin Schrödinger (Austrian) and Niels Bohr (Danish), I realize that some people may consider this love one-sided. Not true. Here is proof: Dirac himself teaching quantum mechanics like only he could.

Note: The intrepid Quantum Cardinal, Steve Flammia, scooped us again! Check out his post on the Dirac lectures and virtual hangouts for quantum computation lectures on Google+.

Largest prime number found?

Over the past few months, I have been inundated with tweets about the largest prime number ever found. That number, according to Nature News, is 2^{57,885,161}-1. This is certainly a very large prime number, and one would think that we would need a supercomputer to find a prime number larger than this one. In fact, Nature mentions that there are infinitely many prime numbers, but that the powerful prime number theorem doesn’t tell us how to find them!

Well, I am here to tell you of the discovery of the new largest prime number ever found, which I will call P_{euclid}. Here it is:

P_{euclid} = 2\cdot 3\cdot 5\cdot 7\cdot 11 \cdot \cdots \cdot (2^{57,885,161}-1) +1.

This number, the product of all prime numbers known so far plus one, is so large that I can’t even write it down in this blog post. But it is certainly (proof left as an exercise…!) a prime number (see Problem 4 in The allure of elegance) and definitely larger than the one getting all the hype. Finally, I will be getting published in Nature!

In the meantime, if you are looking for a real challenge, calculate how many digits my prime number has in base 10. Whoever gets it right (within an order of magnitude) will be my co-author on the shortest Nature paper ever written.

Update 2: I read somewhere that in order to get attention for your blog posts, you should sprinkle them with grammatical errors and let the commenters do the rest for you. I wish I were mastermind-y enough to have engineered this post in that fashion. Instead, I get the feeling that someone will run a primality test on P_{euclid} just to prove me wrong. Well, what are you waiting for? In the meantime, another challenge: What is the smallest number (ballpark it using the Prime Number Theorem) of primes we need to multiply together before adding one, in order to get a number with a larger prime factor than 2^{57,885,161}-1?

Update: The number P_{euclid} given above may not be prime itself, as was quickly pointed out by Steve Flammia, Georg and Graeme Smith. But it does contain within it the new largest prime number ever known, which may be the number itself. Now, if only we had a quantum computer to factor numbers quickly… Wait, wasn’t there a polynomial-time primality test?
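
For the skeptics, the whole story plays out at toy scale in a few lines of Python. The Euclid number built from the first six primes is not itself prime, but every one of its prime factors is bigger than any prime on the list – exactly the loophole the update above concedes. (The factorizer below is my own naive sketch, nothing standard assumed.)

```python
from math import prod

def factorize(n):
    """Prime factorization by trial division -- fine at this toy scale."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

euclid = prod([2, 3, 5, 7, 11, 13]) + 1  # the first six primes, times together, plus one
print(euclid, factorize(euclid))         # prints: 30031 [59, 509]
# Composite! But both prime factors exceed 13, so the supply of known
# primes still grows -- Euclid's argument in action.
```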

Note: The number mentioned in the Nature article is the largest known Mersenne prime. Why Mersenne primes are so crazy hard to find is an awesome problem in number theory.
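
Why do “largest known prime” records always belong to Mersenne numbers? Because 2^p-1 admits the Lucas-Lehmer test, which is vastly faster than any general-purpose primality test. Here is a minimal sketch in Python (my own toy version; the GIMPS search that found 2^{57,885,161}-1 runs this same recurrence, with FFT-based multiplication doing the heavy lifting):

```python
def is_prime(n):
    """Naive primality check -- good enough for small exponents."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def lucas_lehmer(p):
    """For an odd prime p, 2^p - 1 is prime iff s ends up at 0, where s
    starts at 4 and is squared-minus-2, p-2 times, modulo 2^p - 1."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Recovers the Mersenne prime exponents after 2: [3, 5, 7, 13, 17, 19, 31]
print([p for p in range(3, 40) if is_prime(p) and lucas_lehmer(p)])
```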

De divina proportione

As a mathematician, I often wonder if life would have been easier were I born 2,400 years ago. Back then, all you had to do to become eternally famous was show that 3^2+4^2=5^2 (I am looking at you, Πυθαγόρα!) OK, maybe I am not giving the Ancients enough credit. I mean, they didn’t have an iPhone back then, so they probably had to compute 3^2+4^2 by hand. All kidding aside, they did generalize the previous equality to other large numbers, like 12^2+5^2=13^2 (I am feeling a little sassy today, I guess.) Still, back then, mathematics did not start as an abstract subject about relations between numbers. It grew from a naive attempt to control elements of design that were essential to living, like building airplanes and plasma TVs. The Greeks didn’t succeed then and, if I am not mistaken, they still haven’t succeeded in making either airplanes or plasma TVs. But, back then at least, my ancestors made some beautiful buildings. Like the Parthenon.

[Image: The Acropolis in Athens, Greece]

The temple of Athina (the Goddess of Wisdom, who gave her name to the city we now call Athens after a fierce contest with Poseidon – imagine flying into Poseidonia every time you visited Greece, had she lost) was designed to be seen from far away and to inspire awe in those who wished to conquer the city-state of Athens. But those who were granted access to the space behind the Doric columns came face to face with the second divine woman ever to make Zeus stand at attention whenever she met her dad on legendary Mt. Olympus: Αθηνά. And so Φειδίας (Phidias), that most famous of ancient Greek sculptors, decided to immortalize Athina’s power with a magnificent statue, a tribute to the effortless grace with which she personified the wisdom of an ancient culture in harmony with the earth’s most precious gift – feta cheese.

Here she is, playing an invisible electric cello next to Yo-Yo Ma (also invisible). And yes, she liked to work out.

OK, I may be biased on this one. For Greeks, virgin olive oil and φέτα cheese go like peanut butter and jelly (I didn’t even know the last two went so well together until I left Greece for the country of America!) Oh yeah, you are probably wondering what feta cheese and olive oil have to do with the Goddess of Wisdom. Well, how do you think she won over the Athenians, against Zeus’ almighty brother, Poseidon? The olive branch, of course. The sea is good and all (actually, the sea is pretty freakin’ amazing in Greece), but you can’t eat it with feta – you can preserve feta in brine (salt water), which is why Poseidon had a fighting chance in the first place – but, yeah, not good enough. Which brings us to the greatest rival, nay – nemesis, of the first letter with an identity crisis, \pi: the letter \phi. You are most likely familiar with the letter-number \pi = 3.14159265\ldots (you may have even seen the modern classic, American Pie, a tour-de-force, honest look at the life of Pi. No pun intended.) But what about the number 1.618033\ldots? Well, I could tell you all about this number, \phi, named after the sculptor dude above, but I’d rather you figure out its history on your own through this simple math problem:

The divine proportion: Does there exist a function f: \mathbb{N} \rightarrow \mathbb{N}, such that f(1)=2, f(f(n)) = f(n)+n and f(n) < f(n+1) for all n \in \mathbb{N}?
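
If you want to experiment before committing to a proof, note that the two conditions pin f down along the orbit of 1: f(1)=2 forces f(2)=f(f(1))=f(1)+1=3, which in turn forces f(3)=5, and so on. A few lines of Python (an exploration, not a solution – and hopefully not too big a spoiler) make the pattern hard to miss:

```python
# Follow the chain of values forced by f(f(n)) = f(n) + n, given f(1) = 2.
n, fn = 1, 2
for _ in range(10):
    print(f"f({n}) = {fn}")
    n, fn = fn, fn + n  # the pair (f(n), f(n) + n) becomes the next (n, f(n))
```

The values that appear – 2, 3, 5, 8, 13, 21, … – should look familiar, and the ratios of consecutive ones creep ever closer to 1.618033…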

Καλή τύχη, μικροί μου Φιμπονάτσι! (Good luck, my little Fibonaccis!)