[NOTE – this is re-post from the original incarnation of this blog.]
Today’s Guardian is running a piece on the possibility of strong AI (also known as AGI) by physicist David Deutsch, entitled “Philosophy will be the key that unlocks artificial intelligence”.
Despite the title, Deutsch isn’t arguing that philosophy is needed to speculate how future science might unlock the hard problem of consciousness. Instead he refers to further interesting philosophical questions regarding what constitutes a person, and what rights we might confer on an AGI.
I suppose it’s no surprise that my first thoughts are usually geared towards explaining consciousness itself and the explanatory gap; after all, I’m not happy with the current physicalist position. For me, asking how the physical make-up of the world we perceive through science can be reconciled with the subjective experience we all attest to having is a priority.
However, for pragmatic people actually doing the science, who either don’t know or, for practical reasons, don’t care about the metaphysical questions (or for perfectly contented physicalists), my priority might not even count as a problem.
So I happily read and enjoyed the article for what it was supposed to be. But one comment and link did stick out to tweak my metaphysical funny bone:
“Some have suggested that the brain uses quantum computation, or even hyper-quantum computation relying on as-yet-unknown physics beyond quantum theory, and that this explains the failure to create AGI on existing computers. Explaining why I, and most researchers in the quantum theory of computation, disagree that that is a plausible source of the human brain’s unique functionality is beyond the scope of this article.”
The link is to a paper arguing that quantum effects are not needed to explain brain function. One of the co-authors is from the philosophy department, yet the paper makes several statements that seem to me simply to presuppose the physicalist view and totally ignore the hard problem. For instance:
“In our wing analogy, it is unnecessary to refer to atomic bonding properties to explain flight. We contend that information processing in the brain can similarly be described without reference to quantum theory.”
The problem with likening phenomenal consciousness to a wing is that the function and make-up of wings are completely explained via normal emergence, whereas phenomenal consciousness is not.
In other words, one can in principle deduce how wings come to exist given full disclosure of their microphysical make-up. From the binding of quarks by the strong force and the binding of electrons by the electromagnetic force, out pops the emergent solidity of matter and chemistry, and so on upwards through biology to the make-up of a wing. Similarly, one can deduce why wings exist using principles from evolution, which gives a satisfying story of how large self-replicating systems of matter interact with their environment in such a way that functions like flight emerge.
However, the qualities of subjective phenomenal consciousness – our experienced internal world – cannot be explained in this way. Nothing from current microphysics up to current neuroscience even gives us a hint of what constitutes cognition or qualia. This is what philosopher David Chalmers calls the difference between normal weak emergence and the strong emergence of phenomenal consciousness.
And again similarly, evolution doesn’t tell us why our functionality, which can in principle be perfectly simulated on a computer, additionally gives the system a felt experience. It seems an unnecessary epiphenomenon, so why did it evolve?
Now of course, one might suggest that phenomenal consciousness is an illusion or that some future discovery will solve the hard problem, but that’s not the same as simply sweeping these issues under the carpet with an analogy that doesn’t seem to work.
The paragraph continues:
“Mechanisms for brain function need not appeal to quantum theory for a full account of the higher level explanatory targets.”
Here again, the sentence is true only if one ignores the “target” of the hard problem and instead aims only at the functional aspects of consciousness.
The authors go on to explain why they feel quantum processing could not be a factor in brain function:
“…quantum-level events, in particular the superpositional coherences necessary for quantum computation, simply do not have the temporal endurance to control neural-based information processing.”
“…it could perhaps be argued that extremely short quantum events somehow “restructure” neurons or neural interactions, to effect changes at the timescale of spiking, these speculations are hampered by the significant biological plausibility problems we explore in the next section.”
I do not know enough about the subject to refute the first statement, or to press for an alternative in which more than one type of processing goes on in the brain. But the admission in the second statement allows us to substitute “control” in the first with “influence”, which makes that first claim far less plausible.
This is particularly effective because the objection to their own admission, the biological implausibility explored in their next section, turns out to be very shaky. To be fair, that’s not the fault of the authors, because the paper was written six years ago. Since then, science has discovered several real and potential biological quantum phenomena, and the field of quantum biology is burgeoning and in the news.
One argument in the paper that does strike me as interesting is this:
“Even if quantum computation in the brain were technically feasible, there is a question about the need for such massive computational efficiency in explaining the mind. It is technologically desirable that a quantum computer should factor a large number into primes in polynomial time, but there is no evidence that brains can accomplish this or any similarly complex task as quickly.”
Here the word “mind” is used, but again subjective experience seems to be ignored. Still, the point does seem to have some power against the idea that cognition requires quantum computation. This might suggest that any quantum processing is limited to qualia, which might include cognitive phenomenology, but not cognition itself.
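To make the authors’ contrast concrete: factoring is the standard example of a task where a quantum computer offers a proven speedup (Shor’s algorithm runs in polynomial time), while the obvious classical approach takes a number of steps that grows exponentially in the number of digits. A toy Python sketch (my own illustration, not from the paper):

```python
def trial_division(n):
    """Classically factor n into primes by trial division.

    The outer loop runs on the order of sqrt(n) times in the worst
    case, i.e. exponentially many steps in the number of digits of n.
    That exponential-to-polynomial gap is what quantum factoring
    would close, and what brains show no sign of exploiting.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(91))  # [7, 13]
```

Nothing in our psychology suggests we can do anything like this for large numbers quickly, which is the authors’ point.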
The final part of the paper is introduced with this:
“Moreover, as we argue next, there is no compelling evidence that we need quantum computing or related processes to explain any observed psychological behavior, such as consciousness.”
The problem here is that subjective consciousness is not synonymous with “observed psychological behaviour”.
The section goes on to attack Penrose’s Orch-OR model of quantum collapse, its references to Gödel’s incompleteness theorem, and Hameroff’s suggestions as to how Orch-OR might work in the case of the brain.
The main argument against Orch-OR seems to mostly rest on it requiring revisions to current scientific understanding, and the lack of evidence. I’d deny neither of those, but point out that it’s supposed to be a speculative idea to solve a problem, not a fully fledged theory. To take such ideas out of context by ignoring the problem they seek to solve doesn’t really show us anything.
I can’t argue the case regarding the use of Gödel’s and Hameroff’s ideas within Orch-OR, but the authors’ objections do appear to be on more solid ground there. As I say, the whole thing is highly speculative, as Penrose himself would admit: an alpha version of a model, if you will.
In summary, the authors admit the possibility of quantum computing (or quantum effects on classical computing) in the brain, but suggest it is less plausible than classical physics doing all the work.
My own summary would be that, after six years, the jury is still out, but the idea has become a lot less implausible. Additionally, it isn’t helpful to refer to phenomenal consciousness in a paper and then ignore it in favour of other senses of the word in some of your arguments.
For both reasons, I don’t think the paper should have been referenced.
UPDATE Feb 2013