The Emergent Cosmos and The Hard Problem of Consciousness

One of the many surprising ideas in modern physics is that of spacetime being an emergent phenomenon. Despite emergence being a tricky concept to nail down, we are relatively familiar with the idea when contrasting features of the world around us at different scales. It explains, for example, how the liquidity of water emerges from the interactions of H2O molecules, and how heat emerges from the random motion of particles.

However, spacetime as described by Einstein’s theory of General Relativity is supposed to be the very fabric of the cosmos itself, and common sense may understandably lead us to wonder from what exactly it is supposed to emerge.


But it gets worse for common sense.

Not only is spacetime theorized to be emergent, but its contents – mass and energy in the guise of the fields and particles described by quantum mechanics – are not immune to this reduction. Both leading contenders for a theory of quantum gravity, loop quantum gravity and string theory (together with the holographic principle), suggest that the cosmos in its entirety – the whole kit and caboodle one might say – could be emergent.

I am using the phrase “emergent cosmos” here rather than “emergent universe” to try to capture how, if spacetime and all its contents (the “cosmos”) are emergent, then the Universe consists of more than just the cosmos. It is, at its foundations, something else.

As to what that something else is, our theories of quantum gravity are unclear. Here our everyday language fails us, because without space, time, matter and energy, even words like “it” and “is” lose their usual meaning. We are left only with the language of mathematics with which to imbue difference and relation, number and geometry. It is from this position that theories like Max Tegmark’s Mathematical Universe proceed.

All talk of an emergent cosmos is, of course, still controversial. But it is at least mainstream. When it comes to the Hard Problem of Consciousness, however, things get murkier, and despite some movement from the likes of Tegmark in the direction of having science address the issue, it is for the most part still seen as a philosophical hangover of pre-scientific thinking.

Within philosophical circles the issue is taken more seriously, as evidenced by the amount of words devoted to it by those who’d like to jettison the whole thing. But it is still divisive, and the trend over the last fifty years or so seems (to my unprofessional eye at least) to be away from thinking its solution could revolutionize metaphysics, and towards treating it as an obstacle to overcome.

Here, in the realm of the armchair blogosphere, we can safely diverge from that trend, contending that like an aspirin in a collection of interacting H2O molecules, the emergent cosmos may help dissolve the hard problem of consciousness.

However, as in my previous posts on the subject, this suggestion comes with a disclaimer. We have already had to accept the controversial idea of an emergent cosmos to get here, and the divisive assertion that the hard problem is not illusory. Neither the water nor the aspirin may exist. And now we need to take the even more speculative turn of suggesting that the best place for the aspirin is in the water. But for those willing to entertain the idea that experiential consciousness may consist at the base, non-emergent, sub-Planckian scale (that we’ve previously termed the Potentiat), this does perhaps give us reason to be cheerful.

The hard problem is essentially the problem of explaining how experiential consciousness can arise from non-conscious mechanistic matter just by arranging that matter in a certain complex configuration, such as those we describe as brains. Our usual conception of emergent properties seems to many to be inadequate to explain this. Unlike the liquidity of water, which seems like a reasonable end point of interacting H2O molecules once all their interactions are understood, interacting neurons, oscillating electrical waves, or even quantum objects seem to give no hint that one of their end results will be subjective inner experience.

Traditional panpsychism and panprotopsychism seek to address the problem by granting all matter some amount of actual or potential consciousness respectively. But quite apart from any conceptual issues the schemes have, they are resisted by many for just that reason: they seem prima facie implausible based on our direct experience of the differences between conscious and unconscious systems.

In previous speculations we have instead ascribed panprotopsychism to the Potentiat alone, with subjective consciousness obtaining only in certain configurations of that non-emergent base.

By exclusively situating experiential consciousness in the Potentiat, we no longer need to explain how consciousness arises from matter. It is from certain configurations (number 3 below) of the already non-material Potentiat that consciousness obtains, and those same configurations simultaneously serve as the base from which the emergent matter of the relevant brain mechanism arises. And if one accepts downward causation from the emergent cosmos (what we have previously termed the Instantiat) to the Potentiat, the Instantiat can even be the cause of the non-emergent configuration.

[Figure: Emergent Phenomena – click to enlarge]

Additionally, non-consciousness-producing configurations obtain instantiation of emergent spacetime, matter, and energy (number 2 above). So in effect, the emergent Instantiat we are familiar with through super-Planckian physics and the special sciences a) is entirely non-conscious, b) exists (in emergent terms) objectively for all observers, and c) exists independently of any observers. In other words, it is much as traditional physicalists would have it. And indeed, there may be no reason to place any brain function other than experiential consciousness beneath that super-Planckian level.

The ontological expansion we have made is just the non-spatiotemporal Potentiat base, in which there consist (under some configurations) conscious subjects that are correlated with emergent brains because they share the same source.

We dissolve the hard problem because we no longer need to explain how consciousness arises from matter, but rather how it arises from that non-spatiotemporal base. We also need to explain how the cosmos arises from that base, but that question is already being addressed by physics.

Of course, explaining how consciousness arises in the Potentiat may be no easy task in itself, but the target, being non-material, at least seems prima facie more suitable for a panprotopsychist treatment. And I’d also suggest that it is better aligned with our own subjective sense of experiencing, remembering and imagining the world, which to me at least seems more abstract than concrete.

More to come another time, so thoughts on a postcard please.


Standard Model reference diagram

[NOTE – this is a re-post from the original incarnation of this blog.]

Don’t know your baryons from your bosons? Mixing up your mesons with your muons?

Even those who know the most common particles and can split them between the two main categories of fermions and bosons can still come unstuck when more exotic beasts are mentioned, and trying to get your head around the less familiar categories can be downright confusing.

Here’s one of the classic representations of the Standard Model that you might see on the web:

https://www.fnal.gov/pub/inquiring/timeline/images/standardmodel.gif

Click image for source

It’s great, but it only includes elementary particles, ignoring those made of bound quarks: the hadrons that put the H in LHC.

Anyway, I was digging about looking for something unrelated in my documents folder when I came across a spreadsheet I’d created when trying to learn the particles and their categories.

So for anyone just wanting to get their head around the various categories you might read about (or hear about on Star Trek!), here’s my attempt:

[Figure: Standard Model categories spreadsheet]

And if that doesn’t help visually, here’s another representation that better illustrates the crossover between categories:

[Figure: Standard Model category crossover – click image for source]
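And for anyone who prefers text to pictures, here’s a minimal sketch of the same groupings in Python. This is my own rough learning aid rather than an official taxonomy, and the example particles are illustrative, not exhaustive:

```python
# My rough learning aid for Standard Model categories (not exhaustive).
standard_model = {
    "fermions (elementary, half-integer spin)": {
        "quarks":  ["up", "down", "charm", "strange", "top", "bottom"],
        "leptons": ["electron", "muon", "tau",
                    "electron neutrino", "muon neutrino", "tau neutrino"],
    },
    "bosons (elementary, integer spin)": {
        "gauge bosons":  ["photon", "gluon", "W", "Z"],
        "scalar bosons": ["Higgs"],
    },
    "hadrons (composites of bound quarks)": {
        "baryons (three quarks; composite fermions)":        ["proton", "neutron"],
        "mesons (quark + antiquark; composite bosons)":      ["pion", "kaon"],
    },
}

for family, groups in standard_model.items():
    print(family)
    for group, members in groups.items():
        print(f"  {group}: {', '.join(members)}")
```

Note the crossover: baryons are composite fermions and mesons are composite bosons, which is exactly the overlap the diagram above captures.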

A defense of non-epiphenomenal consciousness and free will.

[NOTE – this is a re-post from the original incarnation of this blog.]

Non-epiphenomenal consciousness and free will are two different, but related, issues. Both are disputed by those of a physicalist persuasion, and both find themselves lacking any place within our current scientific understanding of the world. Indeed, they not only have no place, but also run contrary to a key precept of modern science: that there is no such thing as an uncaused cause.


In the classical Newtonian picture of physics, the processes that lead to a particular brain state are governed by deterministic laws of nature. If in principle we could perfectly describe a starting brain state, then by extrapolation using those laws, we could predict with certainty a subsequent brain state. Quantum mechanics overthrows this view, revealing that fundamentally, all processes are probabilistic in nature. Instead of predicting with certainty, we only have a probability that one result will win out over another (even if in macroscopic systems there are so many quantum elements that the law of averages makes the probability very high indeed). This introduces a random element to the possible evolution of systems over time, but doesn’t necessarily help with defending free will. A random result is not necessarily a free one.

This fundamentally random, but practically deterministic, state of affairs is what we observe in every area of nature we’ve ever cared to study. Physical processes alone are sufficient to explain the evolution of systems in time. So what role could mental processes have, if they exist at all? And even if there is a role, by what conceivable mechanism could a mental process affect a physical process? This is the problem of defending non-epiphenomenal consciousness.


Beyond questions of the efficacy of conscious systems looms the even more unlikely notion of traditional incompatibilist free will; a concept seemingly so contrary to what we know about nature that most philosophers and scientists appear to have abandoned it altogether. And it’s not difficult to see why. The suggestion appears to be that not only does the mind play a role in the evolution of brain states, but that it can also derail the chain of cause and effect by somehow tipping the probabilities in favour of otherwise vanishingly unlikely alternative options.

Given those facts, how can defenders of causally efficacious mind and free will construct a believable argument for their existence?

To be taken seriously, both non-epiphenomenal consciousness and free will are desperately in need of a viable mechanism. Without one, both are rightly open to attack as being explainable only by supernatural forces. And to be viable, I would argue that any proposed mechanism would have to both conform to our current best-fit scientific theories and be robust enough to be considered mainstream.

Some may claim that such questions are outside the scope of science altogether, since the evidence for their existence is purely subjective and therefore unverifiable by the scientific method. With most such phenomena I would agree. For instance, believers in gods may try to claim that their experience of the divine counts as evidence, while others use subjective experience to underpin all sorts of dubious pseudoscience and quackery. So right away, I should make it clear that I consider non-epiphenomenal consciousness and free will worthy of explanation for one reason alone: they are – at first blush at least – subjectively universal phenomena. Even the most ardent physicalist must admit that, without further reflection, we appear to have both. That of course is not proof – appearance often misrepresents reality – but it is, I think, at least reason to investigate as best we can with an open mind.

An axiom attributed to the ancient Greek philosopher Parmenides, and later made famous in the modern Western world by William Shakespeare in King Lear, says that “nothing comes from nothing”. Its antithesis is creation ex nihilo, or “out of nothing”. The gods of many religious traditions are supposed to have pulled off such a trick at the beginning of the universe, and – unfortunately for defenders of non-epiphenomenal consciousness and free will – it’s a trick that agents seemingly also need to perform every time they exercise free will. They have to introduce or create some new event that is neither random nor wholly dependent on prior physical causes.

However, modern science has put that axiom under pressure, leading us to question whether it really is such a self-evident truth. It’s not that science has shown that matter or energy can be created ex nihilo (indeed, that would violate another key idea in physics: the conservation of energy enshrined in the first law of thermodynamics), but rather that modern science now suggests the very concept of nothingness may be meaningless.

The quantum fields that make up the universe, such as the electromagnetic field and the Higgs field, all have a ground state – a lowest possible energy configuration – slightly above zero, making them subject to quantum fluctuations. This is the case even in a complete vacuum, hence the name vacuum energy, although the property as applied to each individual field is known as zero-point energy. But a vacuum is the only physical (i.e. non-abstract) definition of nothingness that makes sense within the bounds of the universe, so physically speaking there is no such thing as nothing.
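To get a feel for the numbers, here’s a back-of-envelope sketch of the textbook formula for the ground-state energy of a single field mode (a quantum harmonic oscillator), E₀ = ½ħω. The choice of a green-light frequency is purely illustrative:

```python
# Zero-point energy of a single field mode: E_0 = (1/2) * hbar * omega.
# Even in its ground state, the mode's energy sits above zero.
import math

HBAR = 1.054571817e-34           # reduced Planck constant, J*s
frequency = 5.6e14               # Hz -- roughly the frequency of green light
omega = 2 * math.pi * frequency  # angular frequency, rad/s

E0 = 0.5 * HBAR * omega
print(f"E_0 = {E0:.2e} J (about {E0 / 1.602e-19:.1f} eV)")  # ~1.9e-19 J, ~1.2 eV
```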


Because excitations in quantum fields are one and the same as the point particles of the standard model, this vacuum energy manifests as the creation of virtual particle/antiparticle pairs that briefly pop into existence and immediately annihilate each other. This applies not only to the vacuum of space, but to every part of the universe. Vacuum energy can be thought of like the fizzing surface of a liquid, with each bubble being a brief pair of particles that bursts into existence only to almost immediately pop out of it again. It is important to note, though, that this energy is usually both unmeasurable and unavailable to macroscopic processes – it is not some mystical energy field one can use to justify belief in dubious phenomena!

In technical terms, these particles exist only for the fleeting interval allowed by the time–energy form of Heisenberg’s uncertainty principle – the greater the energy borrowed, the shorter the permitted lifetime – which means they remain unmeasurable and never fully instantiated in the physical world. Hence the label virtual particles, as opposed to the actual particles we can measure.
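As a rough illustration (my own back-of-envelope sketch, using Δt ≈ ħ/(2ΔE)), here is how long a virtual electron/positron pair can borrow its rest energy for:

```python
# How long can a virtual electron/positron pair exist?
# Time-energy uncertainty: dt ~ hbar / (2 * dE), with dE the pair's rest energy.
HBAR = 1.054571817e-34    # J*s
C = 2.99792458e8          # m/s
M_E = 9.1093837e-31       # electron mass, kg

pair_energy = 2 * M_E * C**2              # energy "borrowed" from the vacuum
lifetime = HBAR / (2 * pair_energy)

print(f"borrowed energy:  {pair_energy:.2e} J")
print(f"allowed lifetime: {lifetime:.2e} s")  # ~3e-22 s -- far too brief to measure
```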

However, just because they are virtual, one shouldn’t imagine that they play no role in the physical world. Not only have experiments shown them to be most likely responsible for proven phenomena such as spontaneous emission, the Casimir effect, and the Lamb shift, but they are also generally thought to mediate the interaction of real particles in quantum field theory – for example, the exchange of virtual photons that underlies the electromagnetic interaction of electrons.

The only way these virtual particles can achieve actualisation and gain any kind of permanence is to draw on the energy in the surrounding environment, whilst avoiding mutual annihilation.

One situation in which this is thought to be possible is in the extreme environment of a black hole. These gravitational sink-holes bend space so severely that not even the fastest-moving objects in the universe – photons of light – can escape their clutches. This results in the formation of a boundary, or event horizon, from which no matter or energy can escape.
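For a non-rotating black hole, the horizon sits at the Schwarzschild radius, r_s = 2GM/c². A quick sketch of the numbers:

```python
# Schwarzschild radius r_s = 2*G*M/c^2: the radius of the event horizon
# of a non-rotating black hole of mass M.
G = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8      # speed of light, m/s

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / C**2

print(f"Sun's mass:   {schwarzschild_radius(1.989e30) / 1000:.1f} km")  # ~3 km
print(f"Earth's mass: {schwarzschild_radius(5.972e24) * 1000:.1f} mm")  # ~9 mm
```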


Now consider a particle/antiparticle pair that forms at the event horizon of a black hole. In simple terms, if one of the pair forms inside the event horizon and the other outside, then they will not be able to interact and annihilate, and, drawing on the gravitational energy of the black hole, they actualise. An observer outside the horizon therefore witnesses the emission of particles as radiation. This is known as Hawking radiation, after physicist Stephen Hawking, who first conjectured its existence.

As previously stated, this isn’t really ex nihilo creation of matter or energy, because the creation process is driven by the intrinsic zero-point energy of quantum fields, plus the energy of the surrounding system. Thus the principle of conservation of energy also means that the system involved must lose some of its own energy, or in the case of black holes the equivalent mass. In this way, black holes starved of infalling matter are thought to slowly but surely evaporate.

Another consequence is that the temperature of the radiation depends on the system: for black holes, the smaller the mass, the hotter the horizon and the more massive the particles that can be emitted. The Hawking radiation of large, astrophysical black holes is dominated by massless photons, which are their own antiparticles; the hypothesized micro black holes, produced primordially in the early universe and perhaps still existing today, would be hot enough to also emit low-mass particles like electron/positron pairs.
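The relevant formula is the Hawking temperature, T = ħc³/(8πGMk_B), which is inversely proportional to mass. A quick sketch:

```python
# Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B).
# Note the inverse dependence on mass: smaller black holes run hotter.
import math

HBAR = 1.054571817e-34   # J*s
C = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
K_B = 1.380649e-23       # J/K

def hawking_temperature(mass_kg):
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

print(f"1 solar mass:          {hawking_temperature(1.989e30):.2e} K")  # ~6e-8 K
print(f"1e12 kg primordial BH: {hawking_temperature(1e12):.2e} K")      # ~1e11 K
```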

But black holes are not the only situation where this type of particle creation can occur. In theory, any energetic phenomenon that forms an event horizon can perform the same trick.

One such phenomenon is known as the Unruh effect, and is a logical consequence of Einstein’s realisation that gravity is equivalent to acceleration. Here, from the point of view of an observer in the same relativistic reference frame as an accelerating system, the vacuum appears as a warm radiation bath, as particle/antiparticle pairs actualise before annihilating. And just as in the black hole case, because an accelerating system creates an event horizon (the reason for which is beyond the scope of this piece), the equivalent of Hawking radiation is also witnessed by observers outside that horizon.
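The corresponding formula is the Unruh temperature, T = ħa/(2πck_B), and plugging in the numbers shows just how feeble the effect is at everyday accelerations:

```python
# Unruh temperature T = hbar * a / (2 * pi * c * k_B) for acceleration a.
import math

HBAR = 1.054571817e-34   # J*s
C = 2.99792458e8         # m/s
K_B = 1.380649e-23       # J/K

def unruh_temperature(acceleration):
    return HBAR * acceleration / (2 * math.pi * C * K_B)

print(f"at Earth gravity (1 g): {unruh_temperature(9.81):.2e} K")  # ~4e-20 K

# Acceleration needed to feel even a 1 K radiation bath:
a_needed = 2 * math.pi * C * K_B / HBAR
print(f"for a 1 K bath: {a_needed:.2e} m/s^2")  # ~2.5e20 m/s^2
```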

In both examples, we have the formation of an event horizon creating a one-way barrier between an enclosed volume of space (the interior of the black hole and the relativistic reference frame) and the rest of the universe.

So, returning to consciousness, we have – superficially at least – an interesting parallel. In both cases, the external environment can influence the enclosed internal world via the flow of information into it, while from within that enclosed internal world one is only able to observe the external universe rather than interact directly with it. However, via a phenomenon such as Hawking radiation, the internal world is able to exert a physical influence back on its environment. By analogy, these phenomena suggest a mechanism for non-epiphenomenal consciousness.

Now, I’m certainly not suggesting that consciousness resides in microscopic black holes – I’ll leave that to Romulan starships! Nor am I saying that the Unruh effect is responsible. I simply don’t have enough knowledge of physics or mathematics to surmise or calculate whether small objects at short distances could produce the necessary acceleration, or an event horizon local enough. And I strongly doubt there is anything in mainstream neuroscience to offer as a framework for such effects in the brain.

I’m only suggesting that such seemingly ex nihilo creation would, not so long ago, have been thought impossible without supernatural intervention in the world, but that zero-point energy opens up the possibility of a variety of effects that might – at least conceivably – be exploited by evolved systems.

In that speculative light, the mere possibility of horizon-induced particle creation in connection with consciousness and the brain would provide a high-level explanatory mechanism for non-epiphenomenal consciousness. And if such creation could be directed and (perhaps chaotically) amplified, one might see how such internally-produced nudges could pave the way for free will.


At such low energies, any such creation would have to be in the form of massless particles like photons, and whilst this might bring its own problems in accounting for how they might deliver the needed nudges to existing processes, such effects should, on the other hand, in principle be measurable and therefore testable. There is already some speculation about the role of photons in the brain, though I should stress that this is not mainstream.

Of course, even if there is something in my speculation, many issues might remain unresolved, such as the hard problem of consciousness and how the mental domain might manage to muster and direct its will; not to mention under what ontology and laws consciousness itself might operate internally.

Also, there is a danger here of stepping too far with speculative ideas. Scientists and rational thinkers are wary of any non-physicalist speculation on consciousness, I suspect because it opens the door to all sorts of religious and pseudoscientific nonsense that is neither objectively testable nor subjectively universal. So it’s important not to speculate more than a single step beyond our current knowledge, and to do so without any preconceptions of where one is heading.

But with that caution in mind, I still think it’s fair to say that this class of phenomena in physics at least shines a light into the domains in which we should search for clues. And even if such speculation proves fruitless, it serves to illustrate how science continually surprises us with unexpected phenomena. So while admitting that the existence of non-epiphenomenal consciousness and free will remains improbable, we should not lose hope. Closing the door on what are our most universal and all-encompassing experiences of reality – that our minds interact with and affect the physical world – is premature.

The imaginary number at the heart of quantum mechanics.

[NOTE – this is a re-post from the original incarnation of this blog.]

I’ve no idea how many books, articles, podcasts and videos I’ve digested over the years on the subject of quantum mechanics, but it’s a lot.

What they all have in common is that they are aimed at the layperson, and therefore try to describe counterintuitive features of the theory such as superposition, the uncertainty principle, and entanglement using experimental examples and everyday analogies. Almost none of them take even the briefest of toe-dips into the actual mathematics behind the theory.

And that’s not surprising. After all, as Stephen Hawking wrote in A Brief History Of Time, “Someone told me that each equation I included in the book would halve the sales“. No-one outside academia likes to try to get their head around baffling equations, least of all those with no more tools at their disposal than high school maths. Like me.


So, some time ago when I made an attempt to dig a little deeper by watching a series of Quantum Mechanics lectures by Leonard Susskind, I wasn’t expecting to get much out of it. I couldn’t have been more wrong.

OK, I couldn’t follow all the intricacies of the maths, and I certainly wouldn’t be able to do any of the calculations myself, but what I did gain was a good overview of the subtler concepts behind the theory, and an understanding of how the maths models and corresponds to those features, including the more counterintuitive ones, like superposition and the uncertainty principle.

The biggest revelation for me was the significance of the constant i, which, confusingly, is also sometimes written as j, mainly by engineers. i is the symbol for the imaginary unit – the square root of −1, a number with no place on the real number line.


This concept is at the mathematical heart of the theory, yet is almost never mentioned in the layperson’s literature, let alone explained. I first encountered i years ago, before I started reading about quantum mechanics, when a friend who works at a nuclear site recounted how a scientist there had told him about it. We were both slightly incredulous that the maths behind quantum mechanics – the theory that underpins all our nuclear know-how and much of modern technology – is fundamentally based on something we don’t and can never know.

However, it’s not really quite that scary, and quantum mechanics remains by far the most experimentally accurate theory science has ever produced. As with most discoveries, the physical phenomena were uncovered first by experimentation, and only then were mathematical models found to fit what was being seen. These models were then used to calculate the outcomes of further experiments, confirming the theory and enabling the practical use of the phenomena. The theory isn’t founded on the maths; the maths models the phenomena to give predictive power to the theory. (This is not to say that physical reality is more fundamental than mathematics or vice versa – that’s a philosophical question for another day – but rather that this is just the way scientific discovery usually works.)

So how is i used, and what does it represent? Well, for starters, the concept was not created for use in QM, but pre-existed it and is used in classical physics. The following short clip from icedave33 (who I should also thank for clearing up my many misconceptions whilst writing this piece!) explains very well how i can be used to model 2D rotation on a 1D line:

The imaginary number i, and multiples thereof, are used in a series (i, 2i, 3i etc.) to create an extra “imaginary axis” on a graph, set at right angles to the “real axis” of ordinary numbers, as shown below:

[Figure: the real and imaginary axes of the complex plane – click image for source]

Points plotted against the real and imaginary axes then take the form of one real number (e.g. 2 or -2) and one imaginary number (e.g. 3i or -3i), making a composite that is known as a complex number (e.g. 2+3i or 2-3i or -2+3i or -2-3i). This full graph on which both real and complex numbers can be plotted is known as the complex plane.
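Incidentally, Python has complex numbers built in (using the engineers’ j), which makes it easy to play with these ideas directly:

```python
z = 2 + 3j                 # Python writes i as j -- the engineers won
print(z.real, z.imag)      # 2.0 3.0 -- the point's coordinates on the complex plane
print(abs(z))              # 3.605... -- the length of the arrow, sqrt(2^2 + 3^2)
print(1j * 1j)             # (-1+0j)  -- i squared really is -1
print(1j * (2 + 3j))       # (-3+2j)  -- multiplying by i rotates the arrow 90 degrees
```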

In quantum mechanics, a system, when observed, can only be found in one of a limited number of observable states. Before observation, the system is in superposition (a combination of these states), and each of the states is assigned a complex number.

A complex number can be represented as an arrow in the complex plane, as shown above. The “size” of the complex number is given by the length of the arrow, and the probability of finding the system in the particular state represented by that complex number is proportional to the square of that length.
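Here’s a minimal sketch of that rule (the Born rule) for a two-state superposition, with made-up amplitudes chosen purely for illustration:

```python
# Born rule sketch: probabilities are the squared lengths of the amplitudes.
import math

amplitudes = [1 / math.sqrt(2) + 0j, 1j / math.sqrt(2)]  # an equal superposition

probabilities = [abs(a) ** 2 for a in amplitudes]
print(probabilities)       # ~[0.5, 0.5] -- each outcome equally likely
print(sum(probabilities))  # 1.0 (up to rounding) -- a valid state is normalised
```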

One of the ways to represent a state in quantum mechanics is as what’s known as a quantum state vector. A state vector is just a list of numbers, each corresponding to the value of a parameter of the state – for example quantum spin. State vectors corresponding to the possible observable states are known as eigenstates.

In the maths, the equivalent of observing a parameter is to apply to the state vector a special type of mathematical operator known as a Hermitian operator. This will produce a new state vector.

If the operator is applied to a state vector already in an observable state, then the new state vector is just a multiple of the original – and that multiple is always a real number on the real axis. These multiples are known as the operator’s eigenvalues.

However, if the operator is applied to a state vector in superposition, then the new state vector will not simply be a multiple of the old one. Instead, the system collapses into an observable state, and again we have a real eigenvalue.
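A minimal sketch of this machinery, using numpy and the spin operator σ_z (the measurement outcome for the superposition is chosen at random with Born-rule probabilities, standing in for collapse):

```python
import numpy as np

sigma_z = np.array([[1, 0],
                    [0, -1]], dtype=complex)  # a Hermitian operator ("measure spin")

up = np.array([1, 0], dtype=complex)          # an eigenstate
print(sigma_z @ up)                           # [1.+0.j 0.+0.j] -- just +1 times `up`

vals, vecs = np.linalg.eigh(sigma_z)
print(vals)                                   # [-1.  1.] -- eigenvalues, always real

superposition = np.array([1, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(superposition) ** 2            # Born-rule probabilities: [0.5, 0.5]
print(np.random.choice(vals, p=probs))        # -1.0 or 1.0, never anything in between
```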

Intuitively this seems wrong, because in the maths of real numbers it is somewhat like saying that adding one to a number doesn’t always increment its value by one, but that instead the result depends on which number you begin with. To use an analogy with real whole numbers and fractions, it’s like saying that two plus one equals three, but also that any fraction between two and three (2.1, 2.5, 2.999 etc.) plus one also equals three.

Another analogy might be a clock’s second-hand that cannot stop between seconds because the mechanism doesn’t allow it. The hand is always at a whole-number value. Except that in the case of quantum mechanics, we know that the second-hand can reside between ticks; it’s just that whenever we measure the time by observing it, we always find that the second-hand has ticked into place.

So although superposition is unintuitive, the mathematics of imaginary and complex numbers and their operators models it precisely. Similarly, it’s the use of operators that allows the maths to model other features of QM such as the uncertainty principle. This states that it is impossible to measure two complementary parameters of a system’s state to a high degree of accuracy at the same time; for example position and momentum, or time and energy. This happens because in quantum mechanical systems, certain mathematical operations do not commute. With real numbers, this would be like saying that two times three does not equal three times two, but that they yield two different answers. This corresponds to the way that measuring position then momentum, or momentum then position, can result in different answers.
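You can see non-commutation directly with two of the spin operators (the Pauli matrices), for example:

```python
# Operators need not commute: applying measurements in a different order
# is a genuinely different operation.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

print(sigma_x @ sigma_y)  # [[ 1j  0], [ 0 -1j]]
print(sigma_y @ sigma_x)  # [[-1j  0], [ 0  1j]] -- the order matters
```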

So if one can see what the maths is doing, is it then true to say that one has a picture of what might be happening physically?

For instance, to me at least, the maths suggests that quantum states in superposition are unobserved in the real world because they are at least partly “off somewhere else” on the imaginary axis, and that the act of measurement seems to “snap” them into existence in the “real” observable world by applying the Hermitian operator.

This is not to suggest that by “off somewhere else” I mean somewhere supernatural. Rather, I’m suggesting that the state they are in is not realizable in the external objective world that we experience and measure. Perhaps, like virtual particles, they reside in some sub-Planckian world between the ticks of the clock in my earlier analogy.


However, this is where science meets philosophy, and interpretation is everything. In physics, the apparent properties of the maths do not always correspond to how things are physically. The use of complex numbers in classical wave mechanics described above is a good example: there, at least as I understand it, the extra dimension introduced by i is more like a shortcut to avoid doing harder, more laborious maths. In quantum mechanics I don’t believe that’s the case. The i is a mandatory part of the theory.

Additionally, there’s the possibility (or almost certainly the inevitability) that quantum mechanics will one day be superseded by an even more accurate theory that has different mathematics, and thus revises the physical picture again. One thing’s for sure: like i itself, nothing in science is set in stone. But I wish I’d had at least a little introduction to the maths in some of the popular books I’d read previously.

No god-smuggling or wizardry here.

[NOTE – this is a re-post from the original incarnation of this blog.]

I sometimes find myself having to defend my insistence on wanting to save consciousness and free will from the ravages of staunch physicalism, and thus having to explain my reasons for doing so.

One worry I have is that my views might be seen as surreptitiously trying to smuggle in pseudoscience or religion, so let me explain why I have no interest in defending either.

I am only interested in phenomena that are either backed up by traditional peer-reviewed empirical evidence, or are universally accepted by subjective agents to be real (at least at first sight).


So for me, both consciousness and free will qualify for explanation by the second method. Even Dan Dennett, before he had a good think about it, would, I’m sure, admit to believing that both were genuine phenomena.

However, the concepts of a god or gods, and the ideas of pseudoscientific disciplines like homeopathy qualify by neither method.

They are phenomena that could be real but have no convincing evidence in their favour, either objectively by scientific inquiry or subjectively by universality.

The complexity of the universe might be seen by some as evidence for god by virtue of the watchmaker argument. But this is not first-hand evidence of a phenomenon, but rather a deductive argument based on other independent phenomena.

So, no god-smuggling here. Or wizardry.

Strong AI, quantum biology and consciousness.

[NOTE – this is a re-post from the original incarnation of this blog.]

Today’s Guardian is running a piece on the possibility of strong AI (also known as AGI) by physicist David Deutsch entitled “Philosophy will be the key that unlocks artificial intelligence“.

Despite the title, Deutsch isn’t arguing that philosophy is needed to speculate how future science might unlock the hard problem of consciousness. Instead he refers to further interesting philosophical questions regarding what constitutes a person, and what rights we might confer on an AGI.

I suppose it’s no surprise that my first thoughts are usually geared towards explaining consciousness itself and the explanatory gap; after all, I’m not happy with the current physicalist position. For me, asking how the physical make-up of the world we perceive through science can be reconciled with the subjective experience we all attest to having is a priority.


However, for pragmatic people actually doing the science who either don’t know, or for practical reasons don’t care, about the metaphysical questions (or for perfectly contented physicalists) my priority might not even count as a problem.

So I happily read and enjoyed the article for what it was supposed to be. But one comment and link did stick out to tweak my metaphysical funny bone:

“Some have suggested that the brain uses quantum computation, or even hyper-quantum computation relying on as-yet-unknown physics beyond quantum theory, and that this explains the failure to create AGI on existing computers. Explaining why I, and most researchers in the quantum theory of computation, disagree that that is a plausible source of the human brain’s unique functionality is beyond the scope of this article.”

I clicked the link to find an interdisciplinary paper from the University of Waterloo, Canada, entitled “Is the brain a quantum computer”, in which they argue that it is not.

One of the co-authors is from the philosophy department, yet the paper makes several statements that seem to me to simply presuppose the physicalist view, and totally ignore the hard problem. For instance:

“In our wing analogy, it is unnecessary to refer to atomic bonding properties to explain flight. We contend that information processing in the brain can similarly be described without reference to quantum theory.”

The problem with likening phenomenal consciousness to a wing is that the function and make-up of wings are completely explained via normal emergence, whereas phenomenal consciousness is not.

In other words, one can in principle deduce how wings come to exist given full disclosure on their microphysical make-up. So up from the binding of quarks by the strong force and the binding of electrons by the electromagnetic force, out pops the emergent solidity of matter and chemistry, and on upwards through biology to the make-up of a wing. And similarly, one can deduce why they exist using principles from evolution, where we get a satisfying story of how large self-replicating systems of matter interact with the environment in such a way that functions like flight emerge.

However, the qualities of subjective phenomenal consciousness – our experienced internal world – cannot be explained in this way. Nothing from current microphysics up to current neuroscience even gives us a hint of what constitutes cognition or qualia. This is what philosopher David Chalmers calls the difference between normal weak emergence, and the strong emergence of phenomenal consciousness.

And again similarly, evolution doesn’t tell us why our functionality, which can in principle be perfectly simulated on a computer, additionally gives the system a felt experience. It seems an unnecessary epiphenomenon, so why did it evolve?

Now of course, one might suggest that phenomenal consciousness is an illusion or that some future discovery will solve the hard problem, but that’s not the same as simply sweeping these issues under the carpet with an analogy that doesn’t seem to work.

The paragraph continues:

“Mechanisms for brain function need not appeal to quantum theory for a full account of the higher level explanatory targets.”

Here again, this sentence is only true if one ignores the “target” of the hard problem, and instead only aims at the functional aspects of consciousness.

The authors go on to explain why they feel quantum effects could not play a part in neural processing:

“…quantum-level events, in particular the superpositional coherences necessary for quantum computation, simply do not have the temporal endurance to control neural-based information processing.”

“…it could perhaps be argued that extremely short quantum events somehow “restructure” neurons or neural interactions, to effect changes at the timescale of spiking, these speculations are hampered by the significant biological plausibility problems we explore in the next section.”

I do not know enough about the subject to refute the first statement, or to press for an alternative in which there is more than one type of processing going on in the brain. But the admission in the second statement surely allows us to substitute “control” (in the first) with “influence”, making that first statement far less secure.

This is particularly effective since the objection to their own admission – the biological implausibility explored in the next section – turns out to be very shaky. To be fair, that’s not the fault of the authors, because the paper was written six years ago. Since then, science has discovered several real and potential biological quantum phenomena, and the field of quantum biology is burgeoning and in the news.

From photosynthesis to the magnetic sensing of robins, nature seems to have found ways for quantum effects to cope with the “warm, wet” conditions in biological systems.

source = photons in my back garden, bouncing off Robbie

One argument in the paper that does strike me as interesting is this:

“Even if quantum computation in the brain were technically feasible, there is a question about the need for such massive computational efficiency in explaining the mind. It is technologically desirable that a quantum computer should factor a large number into primes in polynomial time, but there is no evidence that brains can accomplish this or any similarly complex task as quickly.”

Here the word “mind” is used, but again subjective experience seems to be ignored. The point does seem to have some power against cognition, though. This might suggest that any quantum processing is limited to qualia – which might include cognitive phenomenology – but not cognition itself.

The final part of the paper is introduced with this:

“Moreover, as we argue next, there is no compelling evidence that we need quantum computing or related processes to explain any observed psychological behavior, such as consciousness.”

The problem here is that subjective consciousness is not synonymous with “observed psychological behaviour”.

The section goes on to attack Penrose’s Orch-OR model of quantum collapse, its references to Gödel’s incompleteness theorems, and Hameroff’s suggestions as to how Orch-OR might work in the case of the brain.

The main argument against Orch-OR seems to rest mostly on its requiring revisions to current scientific understanding, and on the lack of evidence. I’d deny neither of those, but point out that it’s supposed to be a speculative idea to solve a problem, not a fully fledged theory. To take such ideas out of context by ignoring the problem they seek to solve doesn’t really show us anything.

I can’t argue the case regarding the use of Gödel and Hameroff’s ideas within Orch-OR, but the paper’s objections do appear to be on more solid ground there. As I say, the whole thing is highly speculative, as Penrose himself would admit. An alpha version of a model, if you will.

In summary, the authors admit the possibility of quantum computing (or quantum effects on classical computing) in the brain, but suggest that it is less plausible than classical physics doing all the work.

I’d suggest in my own summary that after six years the jury is still out, but that the idea has become a lot less implausible. Additionally, it’s not helpful to make reference to phenomenal consciousness in a paper and then ignore it in favour of other senses of the word in some of your arguments.

For both reasons, I don’t think the paper should have been referenced.


UPDATE Feb 2013

For more on the burgeoning field of quantum biology, see this collection of videos from the recent Quantum Biology workshop at the University of Surrey.