Ever since the movie Transcendence came out, it seems like the idea of the ‘technological singularity‘ has been in the air. Maybe it’s because I run in an unorthodox circle of deep thinkers, but over the past couple of months, I’ve been roped into three conversations related to this topic. The conversations usually end with some version of: “Ah shucks, machine learning is developing at a fast rate, so we are all doomed. And have you seen those deep learning videos? Computers are learning to play 35-year-old video games?! Put this on an exponential trend and we are D00M3d!”
So what is the technological singularity? My personal translation is: are we on the verge of narcissistic flesh-eating robots stealing our lunch money while we commute to the ‘special school for slow sapiens’?
This is an especially hyperbolic view, and I want to be clear to distinguish ‘machine learning‘ from ‘artificial consciousness.’ The former seems poised for explosive growth, but the latter seems to require breakthroughs in our understanding of fundamental science. The two are often equated when defining the singularity, or even artificial intelligence, but I think it’s important to keep them separate. Without distinguishing them, people sometimes make the faulty association: machine_learning_progress => AI_progress => artificial_consciousness_progress.
I’m generally an optimistic person, but on this topic, I’m especially optimistic about humanity’s status as machine overlords for at least the next ~100 years. Why am I so optimistic? Quantum information (QI) theory has a secret weapon. And that secret weapon is obviously Scott Aaronson (and his brilliant friends+colleagues+sidekicks; especially Alex Arkhipov in this case.) Over the past few years they have done absolutely stunning work related to understanding the computational complexity of linear optics. They colloquially call this work Boson sampling.
What I’m about to say is probably extremely obvious to most people in the QI community, but I’ve had conversations with exquisitely well educated people–including a Nobel Laureate–and very few people outside of QI seem to be aware of Aaronson and Arkhipov’s (AA’s) results. Here’s a thought experiment: does a computer have all the hardware required to simulate the human brain? For a long time, many people thought yes, and they even formulated a more general conjecture called the “extended Church-Turing thesis.”
An interdisciplinary group of scientists has long speculated that quantum mechanics may stand as an obstruction to this thesis. In particular, it’s believed that quantum computers would be able to efficiently solve some problems that are hard for a classical computer. This belief led people, Roger Penrose perhaps most notably, to speculate that consciousness may leverage these quantum effects. However, for many years there was a huge gap between quantum experiments and the biology of the human brain. If I ever broached this topic at a dinner party, my biologist friends would retort: “But the brain is warm and wet; good luck managing decoherence.” And this seems to be a valid argument against the brain as a universal quantum computer. However, one of AA’s many breakthroughs is that they paved the way towards showing that a rather elementary physical system can gain speed-ups over classical computers on certain classes of problems. Maybe the human brain has a Boson sampling module?
More specifically, AA’s physical setup involves being able to: generate identical photons; send them through a network of beamsplitters, phase shifters and mirrors; and then count the number of photons in each mode through ‘nonadaptive’ measurements. This setup computes the permanent of a matrix, which is known to be a hard problem classically. AA showed that if there exists a polynomial-time classical algorithm which samples from the same probability distribution, then the polynomial hierarchy would collapse to the third level (this last statement would be very bad for theoretical computer science and therefore for humans; ergo probably not true.) I should also mention that when I learned the details of these results, during Scott’s lectures this past January at the Israel Institute for Advanced Studies’ Winter School in Theoretical Physics, there was one step in the proof which was not rigorous. Namely, they rely on a conjecture in random matrix theory, but at least they have simulations indicating the conjecture should be true.
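The hardness claim is easy to make concrete. Here is a minimal Python sketch (the function name is mine, not from AA's papers) of Ryser's formula, which is essentially the best-known style of exact algorithm for the permanent and still takes time exponential in the matrix size:

```python
from itertools import combinations

def permanent(a):
    """Permanent of an n x n matrix via Ryser's formula, O(2^n * n^2).

    No polynomial-time algorithm is known; Valiant proved that exact
    computation of the permanent is #P-hard.
    """
    n = len(a)
    total = 0
    # Inclusion-exclusion over all non-empty column subsets S:
    # per(A) = (-1)^n * sum_S (-1)^|S| * prod_i (sum_{j in S} a_ij)
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

The determinant differs from the permanent only by alternating signs, yet it falls to Gaussian elimination in O(n^3); the permanent has no known analogue of that trick, which is exactly the gap the hardness results formalize.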
Nitty-gritty details aside, I find the possibility that this simple system gains a speed-up over classical computation compelling in the conversation about consciousness. Especially considering that computing permanents is actually useful for some combinatorics problems. When you combine this with Nature’s mischievous manner of finding ways to use the tools available to it, it seems plausible to me that the brain is using something like Boson sampling for at least one non-trivial task towards consciousness. If not Boson sampling, then maybe ‘Fermion smashing’ or ‘minimal surface finding’ or some other crackpottery words I’m coming up with on the fly. The point is, this result opens a can of worms.
AA’s results have breathed new life into my optimism towards humanity’s ability to rule the lands and interwebs for at least the next few decades. Or until some brilliant computer scientist proves that human consciousness is in P. If nothing else, it’s a fun topic for wild dinner party speculation.
So what does boson sampling have to do with consciousness, and what does consciousness have to do with artificial intelligence (unless you are referring to robots like Pinocchio that have feelings and can solve seemingly intractable problems)?
Please note that the Boson Sampling setup does not compute the permanent of a matrix, it just samples from a distribution that depends on the permanent of the matrix associated with the linear optics network. If I am not mistaken, calculating the permanent is #P-hard, which I believe is a class of problems more difficult to compute than the NP-complete ones. Boson Sampling doesn’t let you solve these hard problems, it just solves the problem of sampling from a probability distribution which is hard to sample from classically.
Juan, you are absolutely correct. Thanks for catching this and my apologies!
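To make the sampling-vs-computing distinction concrete, here is a small Python sketch (all names are mine, purely illustrative) that brute-forces the output probabilities of a tiny linear optical network. The probability of an output pattern is |Per(A)|^2 divided by factorials of the input and output occupations, where A is the submatrix built by repeating rows and columns of the network's unitary. For two photons meeting at a 50/50 beamsplitter, the coincidence probability vanishes, which is the Hong-Ou-Mandel effect:

```python
import math
from itertools import combinations

def permanent(a):
    # Ryser's formula, O(2^n * n^2); fine for tiny matrices.
    n = len(a)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

def output_probability(u, inputs, outputs):
    """P(outputs | inputs) for indistinguishable photons sent through
    a linear optical network with (real) unitary u.

    `inputs` / `outputs` are photon counts per mode."""
    # Build the submatrix: repeat column j inputs[j] times,
    # and row i outputs[i] times.
    cols = [j for j, c in enumerate(inputs) for _ in range(c)]
    rows = [i for i, c in enumerate(outputs) for _ in range(c)]
    a = [[u[i][j] for j in cols] for i in rows]
    norm = 1
    for c in list(inputs) + list(outputs):
        norm *= math.factorial(c)
    return abs(permanent(a)) ** 2 / norm

# 50/50 beamsplitter; one photon enters each input port.
s = 1 / math.sqrt(2)
bs = [[s, s], [s, -s]]
p_coincidence = output_probability(bs, (1, 1), (1, 1))  # 0: HOM dip
p_bunched = output_probability(bs, (1, 1), (2, 0))      # 0.5
```

Of course this brute-force evaluation takes exponential time, which is the whole point: the optical hardware samples from this distribution directly, without ever computing a permanent.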
The idea still survives if some “conscious” task needs to sample from a distribution that is hard to generate (e.g. in game theory, mixed strategies require some source of probabilistic choice). Yet I see an opposite problem here: it is still not proved that the brain can use some quantum trick, but our technology certainly can. So the idea does not look like an argument against the singularity.
If consciousness is in P, should we not be able to read each other’s minds easily?
Penrose actually goes further, I believe—to him, consciousness isn’t ‘merely’ faster than classical computers, it’s on a different level of the arithmetical hierarchy entirely, thus equivalent to an oracle machine (or maybe it’s off the hierarchy entirely, able to solve all instances of the halting problem—some stuff I’ve read about his proposal seems to imply even something of this sort). That’s why he doesn’t merely use quantum mechanics in his arguments, but depends on his nonlinear extension of QM in which gravitational effects produce an actual physical collapse of the wave function, which he claims (I’m hazy on the details here) to be able to account for the brain’s hypercomputational capacities.
And while it’s an interesting idea to suppose that our own brains make use of some quantum speedup effects, if that’s the case, then I would expect to be able to do something that classical computers can’t do (efficiently)—but I’m not sure I see a case for that. I don’t mean that I’m unable to factor large numbers quickly, or something—I’m also not able to multiply numbers quickly, at least not beyond numbers of trivial length, but this doesn’t mean that I don’t believe the human mind to be at least equivalent to a Turing machine (if extended with some appropriate supply of resources, such as infinitely much scratch paper and ink). It’s simply not an ability we’ve had great use for in our evolutionary history.
But still, I’d like to see an argument that humans can do some task efficiently which is believed not to be efficiently performable on a classical computer—Penrose at least has such an argument (although it’s wrong) when he claims that a computer described by a formal system F could never discover the ‘truth’ of F’s Gödel sentence, while humans can, and thus the capacity of the human mind exceeds that of every computer. Up to the point we have such an argument, I think, the most parsimonious hypothesis is that the human mind is ‘nothing more’ than a conventional computer, at least regarding its analytic capacities (when it comes to other mental phenomena, such as qualia or intentionality, the story might be different, but that’s another debate).
Humans interface with the world in remarkable ways. Computers, not so much, and this makes all the difference. I would argue computers have an I/O processing bottleneck, not a memory or processing limitation.
I would argue that humans’ superior interfacing ability is due to 1) our senses, and 2) the special-purpose heuristics (https://en.wikipedia.org/wiki/Heuristics_in_judgment_and_decision-making) that have evolved over millions of years.
Jochen, thanks for your comment. It was thought provoking. Regarding “an argument that humans can do some task efficiently which is believed not to be efficiently performable on a classical computer”, I would like to separate out a few things:
1. The main point of this post is to show that there is a relatively new result in quantum information theory which lowers the bar towards physical systems doing computations outside P.
2. The biology underlying consciousness is still so poorly understood that it’s hard to point to a non-classical speed-up in any aspect of it, but I’d like to think that it is at least possible, and these Boson sampling results update my prior. They shift the bar from ‘the brain needs a mini universal quantum computer before it can do useful non-classical computation’ to ‘the brain needs a little optical circuit,’ which moves it much closer to areas of biology where there is already evidence that quantum mechanics plays a role: http://en.wikipedia.org/wiki/Quantum_biology
Thanks again for your comment.
Well, don’t misunderstand me, I’m certainly not opposed to the possibility of a quantum explanation of consciousness. It seems to me that the classical arguments against one largely miss the point—i.e. that the brain (or those structures in the brain responsible for our mental processes) are too large, warm, and wet for significant effects of quantum coherence. But I’m not sure quantum coherence is the right thing to concentrate on: many features of matter, e.g., are only explicable on the quantum level, such as its stability, or as Descartes would say, extendedness, which ultimately rests on Pauli exclusion. So here, we have a direct example of a quantum explanation being relevant to the macroscopic realm. (Another one would perhaps be the least action principle, which sheds its unsightly teleology only when considered quantum mechanically.) So I wouldn’t want to a priori rule out the possibility that quantum mechanics could do for Descartes’ res cogitans what it already has done for the res extensa (and indeed, though it’s probably more of a curiosity, quantum mechanics did exactly explain that characteristic of matter that Descartes thought to be the defining one, namely its extendedness).
But at the same time, I’m less certain about claims that any specific quantum effect suffices for consciousness. It seems to me that if we had some actual access to nonclassical resources, then it should strike us as somewhat odd that the classical description seems to match our minds so very well. What I mean is, while I can walk mentally through the steps a Turing machine carries out during a computation (and indeed, Turing machines were originally proposed to capture our mental processes when e.g. doing math), I can’t do the same thing regarding a quantum computation; that is, my mental arithmetic and Turing machines seem effectively reducible to one another, which is not the case once quantum systems become involved. I can’t get a ‘mental picture’ of a quantum computation. But this would seem to be an odd feature if my mental processes are somehow quantum at the fundamental level—why does this not seem to carry through to my ability to produce mental pictures?
Of course, a lot of mental processing occurs unconsciously, and one just might banish the quantum processes into the unconscious; but that would seem to add ad hoc hypothesis to injury by first giving us the in-principle ability to do mental quantum arithmetic, only then to hide it from us in practice.
So, what my introspection tells me about my mental capacities is that I’ve got at my disposal essentially a universal symbol-manipulating system, i.e. a Turing machine equivalent (again, if outfitted with enough paper and ink). It does not seem to me to be the case that I can do anything more than that; indeed, any consideration of that ‘anything more’ seems unfortunately to incur serious mental slowdown. So the hypothesis that underneath all that, it’s quantum after all, doesn’t appear very well motivated (beyond trying to stave off the rise of the machines).
First, there is no evidence that the human brain requires anything more than macroscopic physics. Smaller animal ‘brains’ have been successfully modeled with computer simulations.
Second, the concept of computational complexity doesn’t exactly apply here. Computational complexity describes how a problem scales under an increasing parameter; it is meant to compare instances of a problem at different scale parameters. For a single problem (modeling a human brain) there is no scale to refer to. An individual problem instance is always trivially in P.
However, just because a problem is in P doesn’t mean it is easy to solve. Determining whether a 10000-digit number is prime is in P, yet still a serious computation. Computing the best tour for the traveling salesman problem on 4 nodes is NP-hard in general, and yet trivial at that size.
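The contrast in that last paragraph can be sketched in Python (both routines and their names are illustrative, not from the thread): Miller-Rabin primality testing runs in time polynomial in the number of digits, while exhaustive TSP scales factorially yet checks only six tours at n = 4.

```python
from itertools import permutations

def is_probable_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Miller-Rabin test; with this fixed base set it is known to be
    deterministic for all n below ~3.3e24. Runtime is polynomial in
    the digit count (the rigorous 'primality is in P' result is AKS)."""
    if n < 2:
        return False
    for p in bases:
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in bases:
        x = pow(a, d, n)  # modular exponentiation: fast even for huge n
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

def tsp_brute_force(dist):
    """Exact TSP by trying all tours from node 0: O((n-1)!) time,
    which is hopeless asymptotically but only 3! = 6 tours for n = 4."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        best = min(best, sum(dist[a][b] for a, b in zip(tour, tour[1:])))
    return best
```

So asymptotic class and practical difficulty come apart in both directions: a P problem can demand real work at large sizes, and a tiny instance of a hard problem is instant.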
b, I agree that there is currently no evidence that the brain requires anything more than macroscopic physics. But on the other hand, regarding consciousness specifically, we don’t understand the underlying biology at all. In parallel we are learning that quantum mechanics may actually play a role in certain biological processes. When you combine with that, the fact that Boson sampling lowers the bar towards doing quantum computations (compared to a universal quantum computer, for example) then I think it’s at least worth considering that something quantum is important in consciousness.
Regarding “modeling animal brains.” I agree that scientists have made tremendous progress modeling certain aspects of the brain, but by definition these models are almost exclusively limited to modeling well-understood processes–not consciousness for example.
Thanks for your comment.
Perhaps I’m the pessimism to your optimism, but over (at least) the past thirty years there have been many speculative methods incorporating quantum mechanics or quantum field theory into the brain. None have shown promise in the long run.
For your particular question, before you start speculating on how recent physics developments might explain consciousness, I think you first need to clearly define what consciousness really is. Are all animals conscious? Plants? Bacteria? Define it how you like, but explore the consequences of your definition. I think this alone will start to shape the boundaries of how it may work.
As others have pointed out, so far there isn’t anything the human brain does that appears to be outside the bounds of a Turing machine. It’s like trying to understand how a computer functions by examining on one side the output and interaction of the UI, then on the other side dissecting the CPU. At first it would seem impossible to connect the CPU to the tremendous variations at the UI, but with additional understanding it becomes clear that combining a relatively small number of programming statements connects the two and accounts for the large variations in output.
This is a pretty heady monologue with a pretty simple answer. In spite of all the mental jibber-jabber that comes with years of wishful thinking, and education which leads down the proverbial rabbit hole, there is one axiom: a stream cannot rise higher than its source. You will never build a smarter-than-human machine. Humans are unstable, unreliable, easy to distract, have waves of emotions and are chemically driven. But the minds that control them have an evolved intelligence of over 350 million years. All of your years of education are but a raindrop in an entire ocean of possibilities.
A man can throw a rock to a higher point than the one to which he was born. He can even throw it so high (with a rocket) that it will never come back down. Your metaphor is no good.
As pointed out, the author’s arguments about the necessity of quantum computing for biological processing are completely unsupported by any known physics. Furthermore, if there were any physical evidence for quantum computing in biological systems, and as far as we know there is not, the author presents no arguments as to why we could not easily exploit this physics with current technology, hence drawing the “singularity” even closer. If quantum computing could be done by biological systems which operate at frequencies, energy levels, and temperatures well within our grasp to emulate with our technology, then we should have no trouble with quantum computing. With the singularity that close, Shaun, you should consider retracting your thesis.
1. You’re right that there is currently no evidence for quantum computing in the brain.
2. However, these Boson sampling results substantially lower the bar towards this (compared with a universal quantum computer, for example.)
3. There is evidence for quantum mechanics playing a role in biology; see the references at: http://en.wikipedia.org/wiki/Quantum_biology
4. If quantum effects were to play a role in the brain, consciousness is one of the leading candidates (maybe consciousness is so poorly understood because people haven’t been looking towards quantum processes?)
One of the main points of this post was to counter the argument: “machine learning is progressing at a rapid rate, so soon enough we will hit the singularity.” The argument I proposed is tantamount to: “there are still processes in the brain that are extremely poorly understood, and we are having breakthroughs in our understanding of how accessible quantum computation is, so I’m increasingly skeptical that we will hit the singularity.” Neither argument is air tight, but it’s much more common to hear the former and I wanted to push a different opinion in this debate.
Thanks for reading the post.
Just addressing 3 above, the quantum effects presented are more coincidental. One can argue that biology is completely quantum in operation because without quantum mechanics you wouldn’t have atoms.
Photosynthesis and phosphorescence are certainly present in biology and are quantum effects, but these play a very different role than the types of things you suggest. Your suggestions would require biology to internally incorporate a quantum effect for the purpose of computation, sample the quantum effect, translate the sampled quantum effect to a macroscopic state, then externalize that state.
The closest I can think to where biology would wrap a quantum effect similar to this would be the ATP-ADP energy extraction mechanism.
I have yet to read a single, coherent argument that states exactly which property of consciousness necessitates quantum computing. What is clear is that if you damage neural fields, the quality of our consciousness rapidly deteriorates. As pointed out, the neural fields in human brains are not so different from those in monkeys, dogs, birds, or fish. We can model quite well the simpler of those systems. Most of the neural mechanisms of locomotion control and sensing are not so mysterious, and don’t require quantum computing. Not only is your argument not airtight, it’s ready for the mirror fog test.
maybe astrology is so poorly understood because people haven’t been looking towards quantum processes?
I think the Technological Singularity they are looking for is the Cray 1. It is technologically singular in that it took many years for the Cray 2 to come out. This is from the oil patch in North Dakota.
The entire field of quantum biology is being ignored here. Experiments over the last decade show that quantum processes occur in “warm, wet and noisy” environments in nature, as well as in the laboratory (U. of New South Wales researchers have built a universal quantum computer out of pure silicon-28 that runs at room temperature.) Most tantalizing of all, and supportive of Penrose and Hameroff’s Orch-OR theory, is that quantum processes occur in photosynthesis, olfaction, bird navigation and…drum roll……in the microtubules in the neurons of the human brain. This means that consciousness might be caused by quantum processing after all. If so, the Technological Singularity is still near.