The group of physicists seriously engaged in studies of the “foundations” or “interpretation” of quantum theory is a small sliver of the broader physics community (perhaps a few hundred scientists among tens of thousands). Yet in my experience most scientists doing research in other areas of physics enjoy discussing foundational questions over coffee or beer.

The central question concerns quantum measurement. As often expressed, the axioms of quantum mechanics (see Sec. 2.1 of my notes here) distinguish two different ways for a quantum state to change. When the system is not being measured its state vector rotates continuously, as described by the Schrödinger equation. But when the system is measured its state “collapses” discontinuously. The Measurement Problem (or at least one version of it) is the challenge to explain why the mathematical description of measurement is different from the description of other physical processes.

My own views on such questions are rather unsophisticated and perhaps a bit muddled:

1) I know no good reason to disbelieve that all physical processes, including measurements, can be described by the Schrödinger equation.

2) But to describe measurement this way, we must include the observer as part of the evolving quantum system.

3) This formalism does not provide us observers with deterministic predictions for the outcomes of the measurements we perform. Therefore, we are forced to use probability theory to describe these outcomes.

4) Once we accept this role for probability (admittedly a big step), then the Born rule (the probability is proportional to the modulus squared of the wave function) follows from simple and elegant symmetry arguments. (These are described for example by Zurek – see also my class notes here. As a technical aside, what is special about the L2 norm is its rotational invariance, implying that the probability measure picks out no preferred basis in the Hilbert space.)

5) The “classical” world arises due to decoherence, that is, pervasive entanglement of an observed quantum system with its unobserved environment. Decoherence picks out a preferred basis in the Hilbert space, and this choice of basis is determined by properties of the Hamiltonian, in particular its spatial locality.
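The suppression of interference by entanglement with the environment can be illustrated in a few lines of linear algebra. The following is a minimal toy model of my own (the coupling and labels are assumptions for illustration, not anything specific from the literature): one system qubit in an equal superposition becomes entangled with one environment qubit, and tracing out the environment wipes out the off-diagonal terms of the system's density matrix.

```python
import numpy as np

# Minimal sketch (a toy model): a system qubit in the superposition
# (|0> + |1>)/sqrt(2) interacts with one environment qubit so that
# |0>|e0> -> |0>|e0> and |1>|e0> -> |1>|e1>, with <e0|e1> = 0.
a = b = 1 / np.sqrt(2)

# Joint state after the interaction, in the basis {|00>, |01>, |10>, |11>}
# (system qubit first, environment qubit second): a|00> + b|11>.
joint = np.array([a, 0, 0, b], dtype=complex)

# Density matrix of the joint state, reshaped to indices (s, e, s', e').
rho = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)

# Partial trace over the environment: rho_S[i, j] = sum_k rho[i, k, j, k].
rho_S = np.einsum('ikjk->ij', rho)

print(rho_S.real)
```

The off-diagonal (coherence) terms of `rho_S` vanish because the two environment states are orthogonal: an observer with access only to the system sees a classical-looking mixture of |0> and |1>, each with probability 1/2, in the basis the interaction picked out.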

I like having it both ways — to view the overall evolution of system and observer as unitary, but still use probability theory for the purpose of describing the observer’s experience. The “collapse” of the state vector is really an update taking into account the observer’s knowledge; if we wished to we could instead describe the joint evolution of system and observer coherently without any discontinuous collapse.

If we insist on sticking with the coherent description at all times, we are forced to include in our description all the possible outcomes of all measurements along the way, which may seem extravagant. In practice, for the purpose of describing one observer’s experience, we don’t normally need to do that. Instead the observer updates her description as she learns more.

A related controversy concerns whether the quantum state is “ontic” (a mathematical description of physical reality) or “epistemic” (a description of what a particular observer *knows* about reality). I don’t really understand this question very well. Why can’t there be both a fundamental ontic state for the system and observer combined, and at the same time an (arguably less fundamental) epistemic state for the system alone which is continually updated in the light of the observer’s knowledge?

The viewpoint encapsulated by (1) – (5) is a version of what is sometimes called the Everett interpretation of quantum theory. It puzzles me somewhat that physicists I respect very much, who unlike me are serious devotees of quantum foundations (among them Carl Caves, Chris Fuchs, Adrian Kent, Tony Leggett, David Mermin, Rob Spekkens, … ), seem to find this viewpoint foolish, though perhaps I should not put words in their mouths. I admit it’s less precise than one might desire, and that one can feel a bit dizzy when thinking about a description of a physical system that includes oneself. I feel pretty comfortable with the Everett interpretation, though I try not to be dogmatic about it.

Anyway, I was inspired to post on this subject today due to a recent paper by Schlosshauer, Kofler, and Zeilinger, reporting the results of a poll taken at a quantum foundations conference in 2011. The poll probes the views of the conference participants regarding the interpretation of quantum theory. We should keep in mind that the physicists among the respondents (there were also philosophers and mathematicians) may be a highly biased sample of the general physics community; those attending a conference like this one are, of course, particularly passionate about foundational questions. A broader poll of physicists might have found rather different results. It’s a small sample as well (33 participants).

Overall I find the poll results rather hard to interpret, in part because many of the questions are deliberately ambiguous. But I was intrigued by the list, at the beginning of Sec. 4, of the statements supported by a majority of those surveyed:

1. Quantum information is a breath of fresh air for quantum foundations (76%).

2. Superpositions of macroscopically distinct states are in principle possible (67%).

3. Randomness is a fundamental concept in nature (64%).

4. Einstein’s view of quantum theory is wrong (64%).

5. The message of the observed violations of Bell’s inequalities is that local realism is untenable (64%).

6. Personal philosophical prejudice plays a large role in the choice of interpretation (58%).

7. The observer plays a fundamental role in the application of the formalism but plays no distinguished physical role (55%).

8. Physical objects have their properties well defined prior to and independent of measurement in some cases (52%).

9. The message of the observed violations of Bell’s inequalities is that unperformed measurements have no results (52%).

I’m surprised to find I agree with every one of these statements! Perhaps I am less out of sync with the quantum foundations crowd than I had imagined. Or is that an illusion?

I’m not sure about the value of polls like this one. But they are kind of fun anyway.

In my view, the value of such a poll is exactly the same as the value of a poll on whether string theory is the correct route to a theory of everything. In other words, it may give some insight into what people are currently thinking, but as far as nature is concerned it does not matter one hoot. Each of these ideas is either right or wrong and the future course of physics will be the judge of that, opinion be damned. Rational argument and experiment are the way to determine these things, not voting.

You may detect that I am slightly annoyed by the fact that quantum foundations is fairly regularly subjected to such polls, whilst other areas of physics are not. This is because it undermines my conviction that quantum foundations is an area of physics, capable of rigorous results and experimental tests just like any other area of physics, and not simply a topic of abstract philosophical debate. If we have learnt anything from John Bell it should be this.

“A related controversy concerns whether the quantum state is “ontic” (a mathematical description of physical reality) or “epistemic” (a description of what a particular observer knows about reality). I don’t really understand this question very well. Why can’t there be both a fundamental ontic state for the system and observer combined, and at the same time an (arguably less fundamental) epistemic state for the system alone which is continually updated in the light of the observer’s knowledge?”

There could be, but this is not what the argument is about. It is about whether the global quantum state of the universe (or any system large enough that the state can be considered pure and unitarily evolving) is ontic. If it is ontic then no one would deny that there are also epistemic states of subsystems of the universe that are different from it. However, if the global state is ontic then this undermines the epistemic explanations of things like nondistinguishability, no cloning, and teleportation, so it seems to me to be important to question whether such a view is inevitable.

Thanks, Matt. Maybe it would be interesting to conduct some (highly unscientific) polls on other questions at a blog such as Quantum Frontiers. Perhaps readers can suggest some suitable questions where a poll could be enlightening.

I am basically “submitting” the same text with some corrections of typing and grammar errors; for the remaining grammar and syntax errors, please forgive me … and feel free to delete (or not) my previous reply! Regarding the grave (if not obvious) error of confusing time-dependent with time-independent perturbation theory, I am ashamed … though some affection for time-independence (which came to me via a foreign language) could well be understood; that’s not an excuse but an effort to explain something that seems strange to me.

“I know no good reason to disbelieve that all physical processes, including measurements, can be described by the Schrödinger equation”

I am not satisfied with this statement: the good reason might be … time-dependent perturbation theory of Quantum Mechanics, and physical processes described only by that very theory, not by the Schroedinger equation and the operator of the unitary evolution of its solutions…

Here comes the unitary evolution and the Schroedinger equation once more, that is P.A.M. Dirac’s “The Principles of Quantum Mechanics” (Ch. V, § 27, p. 108) and the following sentences: “… but in between observations causality applies … and the system is governed by equations of motion which make the state at one time determine the state at a later time. … They will apply so long as the dynamical system IS LEFT undisturbed by any observation OR SIMILAR PROCESS.” That with not enough apologies to Dirac regarding some editing of his lucid, loveliest and much inspiring text…

Well, compare with § 42, pp. 167–168 and two essentially different parts of that introductory (but crucial) text, this one: “… most quantum problems, however, cannot be solved exactly with the present resources of mathematics, as they lead to …” and that one: “… There are two distinct methods in perturbation theory … The second method must, on the other hand, be used for problems involving a consideration of time, such as those … ETC … Again, this second method MUST be used in … ETC.”

I don’t know of course what the future will bring or How The (supposedly) ALL MIGHTY (supposed) God knows everything about the future and all, however I do believe that the greatest P.A.M. Dirac was a humble but ardent Platonist…; what is “a similar process” if not some process only described by time-dependent perturbation theory (this … second … method…) like “absorption and emission” effects which, by the way, are NOT (necessarily…) observations!

Thus … in-between (!) processes only described (for the time-being) by time-dependent perturbation theory causality does apply (when?) … and the system is governed by equations of motion which make the state at one time determine…

Well, I ain’t convinced that these problems can’t be solved exactly due to insufficient (either present or future) resources of mathematics… Are you convinced about that … and why?

Sincerely yours, cunctatorg

P.S. Please think about this and then post a reply or something… Given this annoying series of copies with corrections to absurd errors (plus some abuse of the English language) feel free to delete (or not) all of these replies!

I saw Max Tegmark carry out a related poll in 1999, asking the audience which interpretation of quantum mechanics they favoured. At the end of the poll, Asher Peres got up and asked: “And who here believes the laws of physics are decided by a democratic vote?” Judging from the huge round of applause, Asher was the poll’s clear winner, by a wide margin.

(It’s possible John was present for the poll – it was at a conference at Cambridge in, I believe, 1999, at the Newton Centre.)

I don’t think I was there; I probably would have remembered that.

I do remember introducing myself to Asher in Torino in 1996. He said, “There is someone with a similar name in particle theory.”


I just received Weinberg’s new book “Lectures on Quantum Mechanics,” which has a long section (especially for the level of the book) on foundations/interpretations of quantum mechanics. In my naive view there doesn’t seem to be anything wrong with the current theory, even the orthodox interpretation, except for vagueness. Until “measurement” is defined by the theory or in some other way, and I have no idea how this would be done, there would seem to be no way to favor one interpretation over another. Compare this with special relativity, where one can easily define a time interval (by using a light clock to bounce signals), and thus easily define an interpretation.

Weinberg’s book also seems to indicate that all the “derivations” of the Born rule involve circular arguments. I will have to look again more carefully at your notes and some of the other derivations.

I’m glad to hear the book is now out, and look forward to having a look.

The author acknowledges John in the preface.

I sat in on his Austin classes, from which the book originated.

John was not at the Cambridge conference.

What is Weinberg’s conclusion? I read that in his most recent papers he no longer considers the Many Worlds view plausible and that he was showing some interest in GRW-type theories, but does he say anything beyond that?

Weinberg states “My own conclusion (not universally shared) is that today there is no interpretation of quantum mechanics that does not have serious flaws, and that we ought to take seriously the possibility of finding some more satisfactory other theory, to which quantum mechanics is merely a good approximation”.

I am ambivalent about the first part of this statement, but I strongly agree with the second part. Experimentalists should “test quantum mechanics,” and such tests are enlivened if we have serious alternative theories making different predictions.

Weinberg’s passage can be read to good effect as a manifesto for quantum systems engineers, by the slightest of adjustments to its concluding phrase: “we ought to take seriously the possibility of finding some more satisfactory other theory, which is a good approximation to quantum mechanics”.

Corollary: A sufficiently strong & beautiful approximate theory is indistinguishable from a fundamental theory!

“When the system is not being measured its state vector rotates continuously, as described by the Schroedinger equation. But when the system is measured its state “collapses” discontinuously. … Einstein’s view of quantum theory is wrong (64%).”

On his website, G. ‘t Hooft has stated, “I am led to a startling result: quantized superstrings can be mapped mathematically to classical strings living on a lattice. … The logic of Superstring Theory will be even easier than that of classical mechanics! … Superstring Theory will be based on integers living on a lattice. Only finite amounts of information are progressed, during finite time steps. No chaos.”

http://www.phys.uu.nl/~thooft website Gerard ‘t Hooft – Universiteit Utrecht

I have conjectured that ‘t Hooft’s deterministic superstring theory is empirically valid if and only if the alleged Fernández-Rañada-Milgrom effect is empirically valid. I am guessing that Einstein’s view of quantum theory is correct, but the equivalence principle is 100% correct for measured mass-energy and 100% wrong for nonmeasured mass-energy. Have most physicists underestimated Milgrom and ‘t Hooft?

‘t Hooft is one of my great heroes in physics, as I mentioned here. He has proposed that quantum mechanics is an emergent phenomenon in a theory where the fundamental dynamics is dissipative classical physics. Knowing ‘t Hooft, this may be a deep idea, and I wish I understood it. Maybe I should try harder.

John, greetings from Oxford, which as you (maybe?) know is a hotbed of Everettianism. I spent an enjoyable afternoon with some of the local philosophers agreeing that any other approach was somewhat foolish. :)

I guess the only one of the list of poll questions with which I would disagree is “Randomness is a fundamental concept in nature,” although clearly there is some vagueness there. The wave function evolves in a perfectly deterministic way in Everett. Our *knowledge* goes from being perfect to being imperfect as this evolution proceeds, but I’m not sure whether it’s appropriate to call this “randomness.”

Sean, I hope you’ll do a separate post on your discussions.

Yes, like the other questions it is subject to interpretation. I interpret it as meaning that even if we have the most complete description of a physical system that nature will allow, we are still unable to predict the outcomes of measurements with certainty. In this sense, the randomness exhibited by the measurement outcomes is intrinsic, rather than a result of our ignorance.

Have fun at Oxford among the Everettista.

I would never call John Preskill foolish (or even his viewpoints). He has never come across as dogmatic to me, just as he has hoped not to be. But maybe I *will* quote the philosopher William James, from an 1879 lecture “The Sentiment of Rationality,” on something I think relevant to the issue:

“Why does W. K. Clifford fearlessly proclaim his belief in the conscious-automaton theory, although the `proofs’ before him are the same which make Mr. Lewes reject it? Why does he believe in primordial units of `mind-stuff’ on evidence which would seem quite worthless to Professor Bain? Simply because, like every human being of the slightest mental originality, he is peculiarly sensitive to evidence that bears in some one direction. It is utterly hopeless to try to exorcise such sensitiveness by calling it the disturbing subjective factor, and branding it as the root of all evil. `Subjective’ be it called! and `disturbing’ to those whom it foils! But if it helps those who, as Cicero says, `vim naturae magis sentiunt,’ [`feel the force of nature more’] it is good and not evil. Pretend what we may, the whole man within us is at work when we form our philosophical opinions. Intellect, will, taste, and passion co-operate just as they do in practical affairs …”

Of myself, I’ll just stop short and say that I must be “peculiarly sensitive” to something that maybe John is not. But maybe it’s also worth quoting Einstein in addition to James (it’s always worth quoting Einstein). From his “Reply to Criticisms” in the 1949 Schilpp volume:

“It may appear as if all such considerations were just superfluous learned hairsplitting, which have nothing to do with physics proper. However, it depends precisely upon such considerations in which direction one believes one must look for the future conceptual basis of physics.”

If the peculiar sensitivity of those who argue that quantum *states* are in the head rather than out in nature is worth anything, then it should manifest itself in the directions it turns physical inquiry and the fruits of those questions that perhaps no one else would have been led to ask.

Chris tells me this is the first time he has ever posted a comment on a blog. Quantum Frontiers is honored to be the site of his debut.

Beyond that I am speechless, as I find nothing in your comment to disagree with. Perhaps you were too subtle.

“… ‘t Hooft … has proposed that quantum mechanics is an emergent phenomenon in a theory where the fundamental dynamics is dissipative classical physics.” THINK ABOUT HOW RADICAL AN IDEA THAT REALLY IS!

According to ‘t Hooft, “We claim that our observations add a new twist to discussions concerning the interpretation of quantum mechanics, which we call the cellular automaton (CA) interpretation.”

http://arxiv.org/pdf/1207.3612v2.pdf “Discreteness and Determinism in Superstrings” by G. ‘t Hooft, 2012

In reference to ‘t Hooft’s article “Discreteness and Determinism in Superstrings”, Motl made the following statement: “Quantum mechanics plays an essential role in string theory both on the world sheet and spacetime. Without quantum mechanics, the spectrum of particles wouldn’t be discrete, the conformal symmetry and modular invariance wouldn’t work. Dualities wouldn’t exist, unitarity would break, all hell would break loose.

The same applies to continuity (i.e. non-discreteness) of the worldsheet variables that are essential for conformal symmetry which is essential for consistency as well, and so on. …”

http://motls.blogspot.co.uk/2012/07/diversity-of-observables-in-quantum.html Motl’s blog “The Reference Frame”, Tuesday, July 17, 2012, “Diversity of observables in quantum mechanics” (comments section)

I say that Motl makes an excellent point. There are only two possibilities: (1) ‘t Hooft’s CA work is either wrong or irrelevant or (2) ‘t Hooft’s CA interpretation of quantum mechanics is correct and, at least in terms of string theory, all hell breaks loose WITH TESTABLE PREDICTIONS.

Specifically, my guess is that ‘t Hooft’s work is the ONLY way to explain dark matter and dark energy. Do most string theorists have the wrong idea about dark matter? McGaugh wrote, “To my growing incredulity, each observation that was puzzling in the context of dark matter turned out to be confirmation of one of Milgrom’s long-standing predictions.”

http://www.astro.umd.edu/~ssm/darkmatter/LCDMriff.html “Through a Universe Darkly” by S. McGaugh, University of Maryland

‘t Hooft’s proposal is indeed radical. One particularly interesting testable prediction, noted here, is:

“It will never be possible to construct a ‘quantum computer’ that can factor a large number faster, and within a smaller region of space, than a classical machine would do, if the latter could be built out of parts at least as large and as slow as the Planckian dimensions.”

This claim connects nicely with the discussion currently raging over at Shtetl Optimized.

The prediction is only testable in the narrow sense that a successful quantum computer would invalidate ‘t Hooft; failing to build a quantum computer for a long time, or even constructing plausible-looking ‘proofs’ as to why it can’t be done, would not falsify alternative theories. I am reminded of the mathematical constructivists’ dislike of proof by contradiction, though I am not sure whether the analogy (to levels of testability and falsifiability in physical theories) goes any deeper.

John — I’m not crazy enough to try to persuade anyone of the case against Everett on a blog, but you raise a good sociological question, and maybe I can help a bit with your puzzlement.

First, I think it’s important to stress that there is essentially no agreement on what people mean by “the Everett interpretation” or “the many-worlds interpretation”. Everett was scathingly abusive about attempts by DeWitt and others to flesh out his ideas. Lots of people have tried since (sometimes sympathetically, sometimes pointing out how unsatisfactory the result is): for example Bell, Albert and Loewer, Zurek, Gell-Mann and Hartle, Deutsch (at least twice in very different ways), Wallace, Vaidman, Geroch, Barbour, Papineau all give different and mostly pairwise inconsistent versions. Most of them are presumably wrong about something important, and they’re all trying to solve essentially the same problem, so it maybe shouldn’t be so surprising a priori if all of them are wrong about something important.

To flesh that out a bit, consider a couple of questions. Do we need to add extra assumptions (and if so how strong and how plausible are they) to infer a picture of many effectively independent branching mostly quasiclassical worlds from unitary quantum theory?

And, supposing we had a many-worlds picture, can we use it to explain why we expect to see relative frequencies in experiments that approximately agree with the Born rule? Many people think the answer to the first is “yes”; many of those (including me) think the assumptions need to be very strong and not at all plausible or attractive. Many people think the answer to the second is “no”, and most of those (again including me) think that, on the other hand, theories involving probability in one world *can* explain the data — in other words Everettians fail scientifically where one-world versions of quantum theory succeed.

Now, you might not be persuaded that these are the right answers, but you would probably agree these are crucial points. You should also be persuaded, I think, that they can’t be *obviously* the wrong answers. After all, if they were, Everettians wouldn’t have such trouble agreeing on how to get the answers they want (“no, or at least only modulo very weak and innocuous assumptions” and “yes”, respectively). So, if there’s any cause for puzzlement, it shouldn’t be that some people take a different view, but that the physics community as a whole hasn’t done a better job of getting to the bottom of this. Which suggests the thought that perhaps we need to train and hire more people in foundations…

It’s true that, generally speaking, we don’t teach quantum foundations to our physics students — those who are interested have to pick it up on the streets. We expect any Ph.D. in physics to know quantum field theory and general relativity, and we expect a graduate program to offer courses on advanced topics like string theory or condensed matter physics (maybe even quantum information). But few programs offer a course on quantum foundations.

One notable exception is Perimeter Institute, where Rob Spekkens has been teaching a course on foundations for several years. Last year’s lectures are available here.

Do you teach such a class at Cambridge? It would be interesting to know what is on the syllabus.

Rob’s course is excellent — I would highly recommend it to anyone interested in quantum foundations. We do have a shortish quantum foundations course in the Cambridge M. Math. The syllabus varies from year to year depending on who’s teaching it and what they choose to cover. Here’s this year’s:

http://www.qi.damtp.cam.ac.uk/node/247

“One notable exception is Perimeter Institute, where Rob Spekkens has been teaching a course on foundations for several years. Last year’s lectures are available here.”

I fell asleep halfway through reading the Spekkens syllabus. Wake me up when they find something useful.

LOL, no surprise.

I might add that I believe quantum foundations would attract much broader attention if alternative viewpoints pointed toward sharply differing experimentally verifiable predictions. Of course, people test Bell inequalities and worry about the loopholes, and Leggett deserves a lot of credit for pushing experimentalists to study macroscopic quantum coherence. But this program has been inhibited, in my view, by a lack of credible and clearly defined alternatives to the predictions of orthodox quantum mechanics.

Several thoughts on this. First, I appreciate physicists follow different motivations, and a diversity of approaches is definitely good, but personally, I’d like to understand nature and our best theories of nature, and if our current best formulations of (say) quantum theory don’t make sense or don’t explain experiment or don’t give a coherent picture of reality, I’d like to know whether we can do better, regardless of whether that leads immediately to new experiments.

Second, the different research programmes in quantum foundations often *do* point towards testably different predictions. Let’s not forget dynamical collapse models and the considerable efforts under way by experimentalists to test those, for example.

More generally, one of the reasons for thinking it’s worth resolving whether many worlds ideas make sense is that most (maybe all) of the realist alternatives naturally suggest testable generalizations of quantum dynamics.

Third, I think Matt’s point earlier about double standards holds very strongly here. A lack of experimental tests didn’t prevent a large part of a whole generation of theorists from working on string theory, for example. I’m guessing those who weren’t solely motivated by the sheer beauty of the mathematics did so because they believed the theoretical ideas were promising enough that there was a good chance they would eventually have an experimental payoff. That’s how I feel about quantum foundations (and it has produced far more interesting interactions with experiment, even though the community is much smaller).

If I can add a plug, Joseph Emerson, Rafael Sorkin, Wayne Myrvold and I are organising a conference on precisely this topic (“The Quantum Landscape: Generalizations of Quantum Theory and Experimental Tests”) at Perimeter Institute from May 27-31 this year. Registration details should go up on the Perimeter website shortly.

I think testing dynamical collapse models is interesting, and that more should be done to make these models “credible and clearly defined” as I put it. Usually such models are rather ad hoc and phenomenological; we’d prefer to have a more fundamental theory of intrinsic decoherence that we can apply to a variety of different experimental settings. I intend to write a longer post about that sometime.

One might hope that string theory research and quantum foundations research can find some common ground, with each enriching the other. Or is that too wild a dream? (I guess ‘t Hooft would say no…)

Kent, could you expand a little on the part where you ask:

“Do we need to add extra assumptions (and if so how strong and how plausible are they) to infer a picture of many effectively independent branching mostly quasiclassical worlds from unitary quantum theory?”

I was hoping for a reply by Preskill that would start an interesting back and forth regarding these issues, but that didn’t happen.

Yeah, sorry. Matt Leifer, Chris Fuchs, and Adrian Kent are experts — they have been thinking hard about foundational questions for years and I have not. If I say more about what I think I’ll probably just wind up repeating myself…

Unless and until we have contrary experimental evidence, it seems reasonable to me to suppose that quantum systems evolve unitarily as determined by some Hamiltonian. In particular, then, a measurement is subject to such a unitary description.

To understand measurement, we consider three subsystems, the quantum system S to be measured, the (possibly macroscopic) apparatus A, and a large environment E.

Under unitary evolution S becomes correlated with A and E. We may say that information about the state of S becomes recorded, in a highly redundant fashion, in A. Furthermore, interference between different possible measurement outcomes is suppressed due to the correlation with E (“decoherence”). We obtain a description of measurement in which all possible measurement outcomes are retained (no “postselection”).
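As a concrete illustration of this S-A-E story, here is a minimal three-qubit sketch (my own toy model; the CNOT couplings and qubit labels are assumptions for illustration, not anything from the discussion above): the "measurement interaction" copies the system's basis state first into the apparatus and then into the environment, and tracing out E leaves S and A perfectly correlated, with no interference between the two branches.

```python
import numpy as np

def cnot(n, control, target):
    """CNOT on an n-qubit register (qubit 0 is the most significant)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for basis in range(dim):
        bits = [(basis >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        out = sum(bit << (n - 1 - k) for k, bit in enumerate(bits))
        U[out, basis] = 1.0
    return U

# Qubits: 0 = system S, 1 = apparatus A, 2 = environment E.
# Initial state ((|0> + |1>)/sqrt(2)) |0>_A |0>_E.
psi = np.kron(np.array([1.0, 1.0]) / np.sqrt(2),
              np.kron([1.0, 0.0], [1.0, 0.0]))

# The "measurement interaction": copy S's basis state into A, then into E.
psi = cnot(3, 0, 1) @ psi   # the outcome is recorded in the apparatus
psi = cnot(3, 0, 2) @ psi   # the record leaks into the environment

# Result: (|000> + |111>)/sqrt(2). Trace out E to see what S and A look like.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)
rho_SA = np.einsum('ijkmnk->ijmn', rho)   # indices (s, a, e, s', a', e')

# Both outcomes are retained, each with weight 1/2, while the coherence
# between the |00>_SA and |11>_SA branches is destroyed by tracing out E.
print(rho_SA[0, 0, 0, 0].real, rho_SA[1, 1, 1, 1].real, rho_SA[0, 0, 1, 1].real)
```

The apparatus record agrees with the system in both branches, which is the redundancy mentioned above; scaling E up to many qubits only makes the branch coherence vanish more robustly.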

I believe that everything I have said so far is uncontroversial.

I also believe that I and other sentient observers are macroscopic quantum systems subject to the same laws as other inanimate systems. To describe what happens when I perceive a measurement outcome, I need to include my memory as part of system A. Now we have a description of the measurement in which all possible outcomes are retained, and my memory is correlated with the state of S. There is no mystery about why I perceive a unique outcome — whatever the outcome is, I feel certain about the outcome, and my perception agrees with the actual state of the system.

This part should also be uncontroversial, it seems to me, unless one believes that human observers are subject to different laws of physics than other systems.

The formalism as I described it so far does not provide deterministic predictions for measurement outcomes. So what are observers to do? Naturally, they use probability theory to quantify their ignorance about what they are about to perceive. This part seems to be highly controversial, particularly among philosophers, but I’m not sure why.

Finally, once we are willing to accept this role for probability theory, there is the question about how probabilities should be computed (the Born rule), which also seems to be quite controversial. I feel less confident about this part, but as I mentioned in my post, I like Zurek’s approach to this problem. He says that when alternative states of the system decohere due to entanglement with the environment (and only in this case can we consistently assign probabilities to the alternatives), then our probability assignments should not depend on which state of the environment is correlated with each alternative for the system. This leads to the conclusion that for a uniform coherent superposition of N alternatives we should assign probability 1/N to each alternative.
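Zurek’s symmetry argument for the uniform case can be checked in a few lines. This is an editorial sketch, not Zurek’s own code: a swap of two system alternatives is undone by the same swap applied to the environment alone (“envariance”), so an observer with access only to S cannot assign the alternatives different probabilities, and the reduced state comes out maximally mixed.

```python
import numpy as np

N = 4
basis = np.eye(N)
# Uniform branching state |Psi> = (1/sqrt(N)) * sum_i |s_i> (x) |e_i>,
# with orthonormal system states |s_i> and environment states |e_i>.
psi = sum(np.kron(basis[i], basis[i]) for i in range(N)) / np.sqrt(N)

# Swap alternatives 0 and 1 on the system only...
swap = np.eye(N)
swap[[0, 1]] = swap[[1, 0]]
psi_swapped = np.kron(swap, np.eye(N)) @ psi
# ...and undo it by the same swap acting on the environment alone:
psi_restored = np.kron(np.eye(N), swap) @ psi_swapped
assert np.allclose(psi_restored, psi)  # envariance of the uniform state

# Since a system swap is undone far away, the alternatives must be
# equiprobable; indeed the reduced state of S is I/N.
rho = np.outer(psi, psi).reshape(N, N, N, N).trace(axis1=1, axis2=3)
print(rho)  # I/N -> probability 1/N for each alternative
```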

To extend the Born rule to more general superpositions, one seems to need to accept a distinguished role for the L2 norm on Hilbert space vectors. That arouses the suspicion that the reasoning is circular, which may be a fair criticism. I feel it may be acceptable, though, to just assume that the Hilbert space inner product is part of the structure of the theory. If the experts want to take potshots, this may be where I am most vulnerable.

As I have mentioned in other comments in this thread, I am open to the possibility that nature does not follow the currently accepted rules of quantum mechanics, and I think it is valuable to think about whether there are viable alternatives and how they can be tested.

Hello John, sorry for reading your blog post so late; I only hope you’ll see my comment.

“To understand measurement, we consider three subsystems, the quantum system S to be measured, the (possibly macroscopic) apparatus A, and a large environment E.”

The environment (especially a large one) is the problematic part. The main test for the usefulness of any particular interpretation of QM is its application to quantum cosmology. Specifically, let A be a measuring apparatus of your choice, while S — the observed quantum system — is the rest of the Universe. In this setup, when A interacts with S to perform a measurement, there is no environment E available, and consequently no decoherence. Then the measurement problem blows up, and there is no suitable way of explaining why we don’t observe superpositions of various states of the Universe, but rather only one of the many possible measurement results.

In general, decoherence does not solve the measurement problem, but rather only “postpones” it. Ultimately, when you consider the whole S+E thing as your quantum system, there is no way of postponing it any more.

“To describe what happens when I perceive a measurement outcome, I need to include my memory as part of system A. Now we have a description of the measurement in which all possible outcomes are retained, and my memory is correlated with the state of S. There is no mystery about why I perceive a unique outcome — whatever the outcome is, I feel certain about the outcome, and my perception agrees with the actual state of the system.”

If you drop decoherence (which you must when measuring the whole Universe), this mystery is indeed still there: suppose you are considering a Universe which has a Schrödinger’s cat in a box. Acting as a measurement device A, you open the box and your memory gets correlated with the state of the Universe+cat. Typically, your memory of “cat is alive” would be correlated with the Universe’s state with a living cat, while your memory of “cat is dead” would be correlated with the other (orthogonal) state of the Universe+cat. However, there is nothing that prefers the (alive, dead) basis for the Universe+cat over the basis (alive+dead, alive−dead). Both bases are equivalent. Why don’t you have a memory of finding the Universe+cat in the state alive+dead? In other words, what prefers the outcome of your measurements to be one of the (alive, dead) states, rather than one of the (alive+dead, alive−dead) states?
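This basis ambiguity can be verified directly. A small editorial illustration (the state names are mine, not the commenter’s): when the two branch amplitudes are equal, the Schmidt decomposition of the cat+observer state is degenerate, so the state itself does not single out the (alive, dead) basis.

```python
import numpy as np

alive, dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mem_alive, mem_dead = alive.copy(), dead.copy()  # observer memory states

# Entangled state after the observer opens the box:
psi = (np.kron(alive, mem_alive) + np.kron(dead, mem_dead)) / np.sqrt(2)

# The very same state re-expressed in the rotated (alive +/- dead) basis:
plus = (alive + dead) / np.sqrt(2)
minus = (alive - dead) / np.sqrt(2)
mem_plus = (mem_alive + mem_dead) / np.sqrt(2)
mem_minus = (mem_alive - mem_dead) / np.sqrt(2)
psi_rotated = (np.kron(plus, mem_plus) + np.kron(minus, mem_minus)) / np.sqrt(2)

print(np.allclose(psi, psi_rotated))  # True: the Schmidt basis is not unique
```

With unequal amplitudes the Schmidt decomposition would be unique, but nothing in the equal-amplitude state alone prefers (alive, dead) over (alive+dead, alive−dead), which is exactly the point being made.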

Zurek has invented the “pointer basis” concept to explain the choice of a preferred basis, but it heavily relies on decoherence and the interaction Hamiltonian with the environment E, which are absent when you consider the whole Universe as a quantum system. So it doesn’t actually work in this case.

Therefore, I don’t see either the measurement problem or the Schrödinger’s-cat problem as solved in the Many-Worlds interpretation, once quantum cosmology is taken into account. Most of the other interpretations of QM struggle with this as well.

Historically, QM has been formulated as a theory of microscopic/small systems interacting with a macroscopic/large apparatus and an even larger environment. But if any interpretation of QM is to be taken as a serious, fundamental understanding of the world, one must stop thinking in table-top terms, and start considering the Universe as a quantum system S, to be measured by a (possibly microscopic) apparatus A, with no environment E available. I’m not sure which (if any) interpretations of QM are able to handle these kinds of situations successfully.

I’d be interested to read your comment on this. :-)

Best, :-)

Marko

I think there is a role for decoherence theory even in quantum cosmology.

We should be cautious about interpreting a global wave function of the universe describing quantum correlations that are not accessible to any conceivable observer. If we stick with a particular observer’s viewpoint, we should trace out the degrees of freedom beyond that observer’s global horizon, driving a particularly sharp and intrinsically irreversible type of decoherence.

This notion of cosmological decoherence is especially clean and unambiguous in spacetimes that admit asymptotically infinite regions with zero cosmological constant and hence asymptotically infinite entropy (where this infinite region’s “environment” is also asymptotically infinite). Some, like Bousso and Susskind here, have suggested that such asymptotically infinite regions are actually required for quantum mechanics to have a satisfactory interpretation. I’m reluctant to go that far.

Thanks for asking. I’ve written a couple of articles on this (among other problems) that are on the arxiv. The trouble with discussing this and other foundational topics in blogs is that you need to write very carefully, and that takes ages and often can’t be done succinctly. It’s difficult partly because people who think there isn’t a problem have such a variety of reasons for thinking it isn’t a problem — so, whatever argument you address you’re likely to leave many readers feeling you’re talking past them. Maybe I’ll try to produce a more popular summary at some point, though.

According to Prof. Kent, “… Everettians fail scientifically where one-world versions of quantum theory succeed.” So far, yes. Why does energy exist? I have conjectured: Energy exists because the monster group and the six pariah groups allow D-brane gravitation and D-brane charge-based force to provide symmetries for a stable, oscillating multiverse that runs on a synchronized big-bang cycle of 81.6 billion years (± 1.7 billion years). Is my conjecture meaningless insanity? I say that the alleged Fernández-Rañada-Milgrom effect and the space roar profile prediction are sharp decisive tests of my idea on the epistemology of energy. Is ‘t Hooft’s CA interpretation of quantum mechanics compatible with an infinite universe? I believe the interpretation is quite incompatible with eternal cosmological inflation. Let us assume that nature is finite and superstring snapping brings gravitational energy from the boundary of the multiverse to the interior of the multiverse. In that case, one might expect that

superstring (via snapping process) —> (mystery particle(s)) + (gravitational energy transferred from the boundary to the interior of the multiverse).

According to ‘t Hooft, “With discrete degrees of freedom one can construct Hilbert space in a quite natural way by postulating that any state of the physical degrees of freedom corresponds to an element of this Hilbert space. Reversibility in time is required if we wish to see a quantum superposition principle; the norm of all states is then preserved if they are quantum superpositions of these basis elements.”

http://arxiv.org/pdf/gr-qc/9310026v2.pdf “Dimensional Reduction in Quantum Gravity” (version 2, page 2) by G. ‘t Hooft, 2009

At the very least, does ‘t Hooft’s CA interpretation of quantum mechanics require a matter universe and an antimatter universe?

I have some problem with interpretation of the result about “Einstein’s view of quantum mechanics”:

“[…] In wording our question, we deliberately did not specify what exactly we took Einstein’s view of quantum mechanics to be. It is well known, in fact, that Einstein held a variety of views over his lifetime [3]. The overarching themes we were after — and the themes most people, we believe, would associate with Einstein — are a subtle flavor of realism, as well as the possibility of a deeper description of nature beneath quantum mechanics.

Interestingly, none of the respondents brought himself to declaring Einstein’s view as correct, although two people suggested that Einstein would ultimately be vindicated. […]”

If nobody voted for it, does that mean nobody thinks “realism + the possibility of a deeper description” may be correct? Or should the result be interpreted in some other way?

Pingback: Quantum foundations poll « Mostly physics

Pingback: Short Book Reviews | Not Even Wrong

Perhaps this is too empiricist (and I’m after the main event, having come here via the Woit link), but I have started to settle into taking QFT to be a stochastic signal-processing formalism. I take this to say that there are no objects as such, not even “quantum systems”, but there may be systematic behavior of summary statistics of records of signals, including systematic correlations between compatible recorded signals and, more generally, linear states over an algebra of observables that systematically models measurement incompatibilities. I take the idea that there are records of signals even while there are no objects to be moderately reasonable for an empiricist or, as a dangerous assertion, for anyone schooled in Copenhagen, but of course it deliberately leaves unasked realist questions about what a record of a signal is or might represent. For QM such an approach is not so natural, but in QFT the use of test functions matches very well indeed with the use of window functions in (stochastic) signal processing.

Regarding your (5), insofar as it is the statistics of discrete events (that is, of records of the times at which there were transitions between two or more metastable signal amplitudes) that allow us to narrow down the possibilities for a dynamics, perhaps we as much ought to say that the decoherence picks out the Hamiltonian instead of vice versa?

Interesting discussion above.

Very interesting post. The five propositions seem reasonable, although I’m not sure about #3. Randomness could be epistemic, not ontic.

The Everett interpretation is just superfluous once you accept a physically grounded view like the Zurek decoherence approach. The latter is clearly superior: it’s physical, includes everyone that needs to be involved, and doesn’t beg a lot of obvious questions.

Here’s an interesting point. The Copenhagen interpretation was unsatisfactory for being incomplete, and for apparently putting human consciousness at the center. However, the reverse view means that what we know objectively about quantum mechanics and the nature of observation tells us something about consciousness, human or otherwise. Say, a cat’s :)

Physicists now have the elements to ask a set of questions banished from Western science since the late Renaissance, that of intentionality and teleology. They seem conditional and contingent, linked to life and consciousness. Are the concepts reducible to unconditional, non-contingent spacetime and energy-matter elements, or irreducibly necessary on their own? They seem related to the concept of selectivity, and thus to information or entropy. And are spacetime and energy-matter themselves unconditional and non-contingent?

Will physics take over psychology and biology?

Pingback: Philosophical Sundays: Is There Free Will?

I want a quantum theory of consciousness.

Maybe you’ll be interested in Scott Aaronson’s recent post here and the linked video of his talk.

Pingback: Polls in Science | Basic Rules of Life

Maybe I can add that registration is now open for the conference I mentioned above on generalizations of quantum theory and experimental tests:

http://www.perimeterinstitute.ca/conferences/quantum-landscape-generalization-quantum-theory-and-experimental-tests

Pingback: Nobel Laureates on the QM Interpretation Mess | Wavewatching

Helmholtz observed long ago that “similar light produces, under like conditions, a like sensation of color.” Locke placed color among the “secondary qualities,” which he thought had only a mental existence. Why, then, discuss color in physics? Well, but color is simply the wavelength of light, right? Well, no, contrary to what “everyone knows,” it is not.

Feynman wrote that color “has always appealed to physicists and mathematicians,” and indeed it would be difficult to imagine a topic with a more illustrious history, populated as it is by da Vinci, Galileo, Newton, Young, Helmholtz, Riemann, Mach, Grassmann, Maxwell, Schrödinger, Weyl and Einstein.

A handful of these authors tell us quite explicitly that color behaves like a vector — a fact borne out by the technology behind our color TVs and computer monitors. A wavelength, by contrast, being a simple magnitude, is a scalar, as is frequency, which is a simple rate, like speed.
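The vector-like behavior alluded to here is Grassmann’s additivity law of color matching. As an editorial illustration only (idealized linear RGB coordinates, not a full colorimetric model):

```python
import numpy as np

# Grassmann additivity (toy version): in an additive display, superposed
# lights mix by vector addition of their tristimulus coordinates.
red = np.array([1.0, 0.0, 0.0])
green = np.array([0.0, 1.0, 0.0])
yellow = red + green  # component-wise sum, perceived as yellow

# A wavelength is a scalar; no single wavelength combines this way --
# the mixture contains no "yellow" wavelength at all, yet matches one.
print(yellow)
```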

Now that we have found sure footing, we can both broaden and tighten the observation from Helmholtz with a little help from Heisenberg, and say that the same state vector [psi], acted upon by the same operators A, B, C… produces the same spectrum of secondary qualities — variables only “hidden” in plain view.

Notice that thus far we have only recapitulated myriad familiar facts from everyday observation. Thus, we get up in the morning and things look, sound, taste and feel the way they always do—and where there are differences, we find physical causes for those differences. From a physical standpoint, it is naturally immaterial whether the relevant operator fields are located inside or outside the brain, thus obviating the putative “subjective” character of secondary sense-data.

What has been gained? Perhaps a great deal, for as the mathematician Steen reminds us, early on in the history of quantum theory, the “mathematical machinery of quantum mechanics became that of spectral analysis.”

A slightly subtler but fundamentally significant consideration attends those operators which leave color vectors invariant or symmetric, for here the subject opens out into gauge theory, as well as Lagrangian and Hamiltonian formulations of the equations of motion—and so all of physics and much of mathematics.

In adopting Heisenberg’s matrix mechanics, we need only enlarge the number of dimensions needed to incorporate these additional “elements of reality,” but we do that on a daily basis today, in any case. And, at a glance, it seems as though our use of matrices might also find a simple, intuitive place in M(atrix) theory.

_____________

Thus, the task is, not so much to see what no one has yet seen; but to think what nobody has yet thought, about that which everybody sees.

~Schrödinger

The aspects of things that are most important for us are hidden because of their simplicity and familiarity.

~Wittgenstein

The problem with decoherence is that, since it operates entirely within the quantum framework, decoherence by itself cannot supply an interpretation of QM and thus cannot explain how we obtain definite measurement results.

Even though it can show the system in a mixed state, this is an improper mixed state, not the proper mixed state that is required if one is to have definite measurement results.
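The improper/proper distinction can be made concrete with a toy example (an editorial sketch, not part of the comment): tracing out the environment of an entangled pure state yields a density matrix identical to that of a genuine statistical mixture, even though the global situations are quite different.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Improper mixture: reduce an entangled PURE state of system+environment.
psi = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
rho_joint = np.outer(psi, psi)
rho_improper = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Proper mixture: the system really IS up or down, each with probability 1/2.
rho_proper = 0.5 * np.outer(up, up) + 0.5 * np.outer(down, down)

print(np.allclose(rho_improper, rho_proper))  # True: locally identical
# Globally they differ: the joint state is pure (Tr rho^2 ~ 1), so no
# definite outcome has actually occurred -- only the reduced description
# LOOKS like an ignorance ensemble.
print(np.trace(rho_joint @ rho_joint))
```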

The problem with taking a unitary evolution of the entire universe (or the largest physical system), as I see it, is that the quantum framework is ultimately based on classical concepts where the observer is assumed to be outside of the system. To even include the observer in a quantum description is unwarranted, not to mention the entire universe. (Yes, this is just Bohr’s view.)

I very much like the idea of having it both ways — that the unitary, deterministic evolution given by the Schrodinger equation can be true as well as the probabilistic evolution with discontinuous collapse, despite the apparent contradiction between them. The first is a description of the whole physical universe as if from outside it, the second is a description of the experience of a physical subsystem (which is what we are), taking place inside the first. There appears to be a contradiction between them only if we fail to recognize that they are different types of statement. It is similar to the apparent contradiction between determinism and free will, which (I, and many others, believe) can be resolved in the same sort of way. This general type of apparent contradiction has been very effectively discussed by the philosopher Thomas Nagel, who calls the first type of statement a “view from nowhere”, while the second is a view from “now here”.

I also very much like the view that “the randomness exhibited by the measurement outcomes is intrinsic, rather than a result of our ignorance”. I think this follows from the Everettian programme of taking the Schrodinger equation seriously in all circumstances.

But there is a problem in saying that when observers adopt the Born rule, they are “using probability theory to quantify their ignorance about what they are about to perceive” (all quotes are from John Preskill’s contributions to this debate). The problem is that according to the Everettian view, there is no such thing as “what they are about to perceive”. There is nothing to connect a given component of the universal wave function at one time to any particular component at another time. So there is nothing to be ignorant about. I think this means we have to go back to ancient ideas of the future as being _open_. Statements about the future are neither true nor false (in general), but have truth values lying between 0 and 1; and these truth values are what we mean by “probabilities” in the intrinsic sense.

And now I’ve jumped into hopeless idiosyncrasy, which must look like a complete non sequitur. I’ve tried to defend these ideas at greater length in arXiv:1009.3914, 1103.4318 and 1205.1479.

John, I really liked your statement, in reply to Marko: “We should be cautious about interpreting a global wave function of the universe describing quantum correlations that are not accessible to any conceivable observer.” This seems to me one of the most promising potential avenues for resolving the quantum measurement problem, although it may need to be combined with other ideas, and I’m not sure that the “fundamental decoherence” it might give rise to will give a story in which something definite happens. Thanks too for the link to the Bousso and Susskind article.

Some people, including David Poulin and Gerard Milburn,

http://arxiv.org/abs/quant-ph/0505081

http://arxiv.org/abs/quant-ph/0505175

have described what they call “fundamental” decoherence due to the intrinsically quantum nature of the reference frames we use in making measurements. I don’t have a deep conceptual grasp of this but would like to understand it better. I dimly perceive a link, at least a loose one, between this and cosmological decoherence… in both cases it is a breakdown of the assumption “small quantum system, big environment and measuring apparatus” that is at the root of things.

Howard

Pingback: The Most Embarrassing Graph in Modern Physics | Basic Rules of Life

My current _a priori_ estimate on QM is that the ‘subspace’ aether is hot but averages well (do a hotwater wavetank experiment for example), but the outer cosmos is infinite (we’re at [0 0 0 0])… however, said aether is so low-mass that absolute motion seems undetectable but relatively any two moving particles are ‘pinching’ the open space and fields between themselves, so, relativity seems to be the ‘easy guess’… These QM guys don’t even calculate the loss of mass-energy of electrons dropping into orbitals—putting them in the zone of guessing-the-teacher’s-language….