The Hard Problem

Andrew Brown for Wired magazine

Professional philosophers can turn argument into a martial art. Nervous spectators pull their beer bottles out of the way in case a backhand logic chop should sweep them off the table. In the Empire Bar in Tucson, Arizona, one night in early April, the young man defending himself dressed like a jobbing rock star: T-shirt, skinny legs in jeans, and a Ramones haircut with a fluffy fringe and then great curly masses of brown hair reaching down his back.

He was blocking points deftly with his outspread palms. His opponent was older, still wiry, with black hair and moustache: the more the young man, David Chalmers, blocked, the more the older man, Bruce Mangan, used his arms to make points, until he was arguing from his shoulders, like a boxer. Finally, he seized a beer bottle and thrust it in front of his opponent. "Look," he shouted. "There's only one thing I want to know. Do you think this beer bottle has consciousness?"

There was a pause. "Well, it might have," said Chalmers. A ripple of appreciative relaxation ran round the audience. The bout was over. One of the spectators took the bottle from Bruce Mangan and carefully tore a strip from the label, which he waved in front of Chalmers. "So what happens now?" he asked. "Where has the consciousness gone in the paper? Has the label got its own little consciousness?"

That's how professional philosophers relax. It is also extremely serious. Nearly 1,000 people, among them two or three Nobel laureates, had gathered in Tucson in early April for the second conference entitled Towards a Science of Consciousness. There were neuroscientists, philosophers, psychologists, quantum physicists and AI gurus; and every participant worth talking to had at least half a dozen good theories, mostly incompatible. All, however, were united by the belief that this is the most exciting intellectual frontier in the world today. Science, here, seems to be closing in on the essence of what makes us human.

Yet there is a paradox at the heart of this work. The more we learn, the further we get from what we wanted to know. There seems to be no room for scientific knowledge in the world that science discovers. The more we learn about how the brain works, the less we can tie up this knowledge with how it feels to have a brain - to be alive and to think. Science has always discussed the world from the outside. Can it ever encompass how the world feels from the inside? We now have brain scanners which will show in real time where in the brain something is happening when we learn. But even if these scanners could show each individual neuron firing, we still couldn't begin to understand why that should feel the way it does. We may be able to find which pattern of neuron firing makes up a thought - that is what Chalmers has named the easy problem, even though it is fiendishly difficult.

But even when all the easy problems are solved, and we know how memory works, and how brains process information, so that mental events can be mapped onto physical ones, there will still remain the apparently unanswerable question of why they should map like that. As Chalmers points out, the difference between the easy problems and the hard problem is not that the easy problems are easy, but that we have some idea of how they might in principle be solved. This contrasts with the question: why does consciousness arise from a physical process? That "why" is what Chalmers calls the hard problem: what makes it hard is not that the answer seems distant, but that we don't know where to start looking. So The Hard Problem has become a catchphrase for the rapidly growing field of consciousness studies, and Chalmers is its prophet.

David Chalmers speaks in a light, hurried voice. His words come in quick flurries, like snow, and as he talks you realise you can see less and less and are more and more deeply stuck in a world of improbable contradictions. "When it comes to the hard problem, my feeling is that you need something that goes beyond physical theory, because everything in physical theory is compatible with the absence of consciousness; my feeling is that you have to take consciousness as axiomatic, like time and space. The problem comes with constructing a theory that will link them all together. You want to get down to something that's deep and fundamental - a set of laws that's simple enough you can write them on the front of a T-shirt."

This is an extraordinarily ambitious project, and so very attractive. Chalmers says the scientific study of consciousness "is now like physics before Isaac Newton came along. No one knows what is really happening." And the Tucson conference was full of young men who fancied themselves as the next Newton. Ever since Francis Crick, one of the discoverers of the structure of DNA, announced in the Eighties that he was going to spend the rest of his life studying consciousness, a sense that this is going to be the next big thing has been spreading through the scientific community. This is odd, because for fifty years it was necessary for scientific respectability to pretend that consciousness didn't exist. Even the study of perception was academically suspect.

Now, consciousness is everywhere. The first Tucson conference, two years ago, drew 400 people; this one drew 1,000, and the next, scheduled in two years' time, will probably draw even more. "Most people have no difficulty in seeing consciousness in cats or dogs - once you get down to flies it's more difficult," says Chalmers. "But there may be some very simple form of consciousness, experience, without much in the way of thought or activity - something about consciousness that is pre-intellectual. Some of our machines may have that now."

With this remark he leaps across one of the great chasms that divide the field. Nothing has done more to sharpen the issues involved in consciousness research than the promise, or the spectre, of artificial intelligence. Chalmers, who studied for two years under the AI guru Douglas Hofstadter, professes himself agnostic on the issue. "The deep question," he says, "is why any physical system, whether machine or animal, is associated with consciousness. But brains did it, so why shouldn't machines too?"

Others are much more certain. Dan Dennett is a large, long-legged man, with a great rounded skull like an ostrich egg, and a beard like God, in whom he does not believe. Wherever he stepped into the corridors or ante-rooms of the Tucson conference he became the immediate focus of a ragged, admiring ellipse of students and disputants on whom he beamed down with sharp benevolence. He is the inventor of one of the classical thought experiments of AI: if I replaced your grey porridgy brain with a shiny new black one, cell by cell, bit by bit, cell by byte, when would you notice? Why would you care? This is not a question about whether I, the experimenter, would notice the difference, or whether you would pass the Turing test. The boldest goal of a consciousness scientist is to make a machine that knows it's a person, the same way as we do.

Cell by cell, bit by bit, your neurones would be turned to silicon. But each time each piece of silicon would have exactly the same connections and behaviour as the cell it replaced. The neighbouring cells would get exactly the same responses as if their delicate electric and chemical feelers - the dendrites - were still brushing against other cells and not plugged into the wiring. No one has yet found anything magic about the way the neurones signal to one another. It is immensely complicated, but it's only chemicals and electricity. These can be measured and replaced. In fact some of this has already been done, in the outer suburbs of the brain. There is a treatment for deafness which replaces a defective nerve with circuitry. Treatments for blindness that would replace the eye with a television camera are already thinkable.

This sort of reasoning has led Dennett to conclude that the hard problem, as defined by Chalmers, is no more than a mirage. When all the "easy" problems have been solved about how the brain processes information, we will discover that the hard problem has simply disappeared. Indeed, one of the things that makes the hard problem so difficult is that lots of people can't see that it is a problem at all. For people like Dennett - who are nicknamed, for obvious reasons, zombies - consciousness will turn out to be no more than the sum of meaningless algorithmic processes. He defends this view with enormous clarity, force, and charm. None the less, he seems to be rowing back in recent years from the very strong AI position with which he made his name. Though he still says that, in principle, any chunk of silicon (or anything else) which can perform the same functions as a brain would by definition be conscious, he now admits that some of these functions seem to be much more specialised than they did even ten years ago. The more we know about the human brain, the most complicated object in the known universe, the less it seems likely that we can ever reproduce anything like its complexity artificially.

Some people still believe it can be done, but certainly not by the methods which were fashionable when Dennett first started work. Danny Hillis, for example, the founder of Thinking Machines, one of the first successful parallel processing companies, and now vice-president of R&D for Disney, believes that the only way to get a machine complicated enough to have a possibility of consciousness is to breed it:

"Imagine something like the Internet, multiplied times 100, and imagine all the machines on it exchanging programs, and imagine using those programs to design a system which would run not on one machine but on the whole network - then I think you have the image of something that might be complicated enough to be conscious," he told the conference.

"I have a feeling that once you do that, it becomes easier to accept the idea of something like a conscious machine. I think people who have a strong intuition that machines can't be conscious have that feeling not because they overestimate the wonders of consciousness but because they underestimate the powers of machinery.

"A lot of arguments against machines thinking are made from exaggeration and distortion." Hillis takes particular aim at Sir Roger Penrose, an Oxford mathematician who was partly responsible for the conference's being in Tucson: Penrose's partner in research is Stuart Hameroff, an anaesthesiologist at the University of Arizona. Penrose believes that consciousness is fundamentally incomputable, and that it involves a sort of understanding that cannot be reduced to algorithms and instantiated in a computer. Because of this, he argues, consciousness represents one of the frontiers of randomness where the laws of science lose force.

The second frontier is the gap between the determinist world of unobserved quantum particles, happy in their law-bound wave functions, and the equally determinist world we live in, which they join once they have been observed. The point is that the moment of observation, known as the reduction of the quantum state, produces outcomes which can only be statistically predicted in our present state of knowledge, and in which each individual event seems to be as completely random and incomputable as anything could possibly be. So he believes that the mechanisms of consciousness are probably connected to this transition. The laws that explain the one will explain the other, so that if we really knew how our minds work, we could also tell whether Schroedinger's cat was alive.

Working in collaboration with Hameroff, Penrose believes he has located the place in the neurons where these quantum events take place: minute stiffening structures called microtubules. The proteins that form the walls of the microtubules can flip between two different shapes, which allows them, he believes, to function as cellular automata, as in the game of Life. But they can also exist, for an instant or two, in a state of quantum superposition, like Schroedinger's cat; and it is the flickerings through that third state that constitute conscious events. Their theory is resonant, especially when Hameroff explains that psychedelic drugs promote superposition in the microtubules, and the more superposition you have, the more consciousness; or when he adds that the consciousness event is a blister in space-time, and so must be quickly reduced to normal physics: "If not reduced, a blister in space-time would shear off into multiple universes - and we hate it when that happens."
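The cellular automaton Penrose and Hameroff invoke is worth making concrete. In the Game of Life, each cell of a grid is in one of two states, and flips according to simple rules about its neighbours - which is all the microtubule comparison claims for the two shapes of the tubule proteins. A minimal sketch (purely illustrative; nothing here models microtubules or quantum superposition):

```python
from collections import Counter

def step(alive):
    """One generation of Conway's Game of Life.

    `alive` is the set of (x, y) coordinates of live cells; everything
    else on the (unbounded) grid is dead.
    """
    # Count how many live neighbours every candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has exactly three live
    # neighbours, or if it is already alive and has exactly two.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A "blinker": three cells in a row oscillate between horizontal
# and vertical, forever, from nothing but the two flip rules.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                    # flips to a vertical bar
print(step(step(blinker)) == blinker)   # and back again: True
```

The point of the comparison is only this: rich, self-sustaining patterns can emerge from components with two states and purely local rules, which is why a lattice of two-state proteins could in principle compute.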

For all anyone knows, the Penrose-Hameroff theory may even be true. But most people reckon the chances of that are so small you could slip them between a couple of electron shells. And even if the microtubules function as they are supposed to in this theory, there is the difficulty pointed out by yet another philosopher, Patricia Churchland, that we have no hint of an idea as to why quantum events should cause consciousness. The hard problem remains untouched.

Hillis, in his lecture, asked the audience how many believed a machine could be conscious. About 30% were certain it could; about the same number certain that it couldn't. The largest party was agnostic. But it has to be said that the antis had all the best tunes.

In Jaron Lanier's case, this was literal. He opened the proceedings with a self-composed piano piece which started off strange and brilliant but became progressively less strange. This formed a counterpoint to his reasoning, which was unflaggingly brilliant, but grew stranger and stranger. As well as being a musician he is a code god: he invented the term Virtual Reality, and then the gadgetry to make it real. He has a towering physical presence, being well over six feet tall and about four feet round the waist, with a great shock of maize-coloured dreadlocks reaching almost to his waist, and a beard which gives his face a perfectly triangular jaw like an Egyptian painting's. His eyes are bright pale blue. He is certain that talk of machine consciousness veers between futile and really dangerous.

He's a genius, of course, and knows it in the same sort of way he knows he's tall. In conversation, you feel as if you are talking to the spirit of the prairies: a restless, inexhaustible wind running on forever. America, he says, loves frontiers, and always peoples them with abstractions, like freedom, or computing. But it is all a mistake. The abstractions do not exist: the frontiers are only a trick of perspective.

First he disposes of the Turing test: only a fucked-up gay Englishman being tortured with hormone injections could possibly have supposed that consciousness was some kind of social exam you had to pass, he says. There are two logically possible ways for a machine to pass the Turing test: either it gets smarter, or we get dumber, and he is in no doubt that the likelier reason is that we have got stupider and adapted our reasoning to the machine's expectations.

Then he demolishes the idea of computation. It is an arbitrary process, he says. It is not built into the structure of the universe, but only into the way we understand it. Anything can be measured in numbers, and any set of numbers can constitute a program for some thinkable computer. Even a meteor shower could be read as numbers, and so be read as a program on some computer somewhere. Would we therefore say that the meteor shower was computing, or conscious?

Then he has a go at the physical reality of the universe. We were sitting outside as he said this, facing each other across a concrete table which was one of the most solid objects you could hope to find. But, he said, on the level of Quantum Electrodynamics, which is the most reliable science we have, it turns out to be an illusion.

"I hate to tell you this, but QED does not acknowledge the existence of gross objects. From the point of view of physics, there are particles in positions, but no gross objects: the particles don't constitute a table." He gestured at the concrete table where we sat in the dusty evening, scents of sage-brush and diesel borne round us on a gentle wind. "Gross objects aren't needed by physics. They're actually extraneous. Ask a neutrino."

I felt as if the prairie wind had whisked me off to Oz. "This is genuinely hard stuff," he said. "Hard to think about and hard to talk about." Then he returned to the more mundane objections to a mechanistic view of consciousness. It narrows our vision of the world, he said. The danger of believing that consciousness is computation is not that it causes us to misunderstand machines, but that it makes us misunderstand our own nature.

Lanier places the hopes that some people have for consciousness research firmly in the American tradition of faith in some apocalyptic fix to all life's problems. Indeed, the programmatic atheism of the main conference was thrown into high relief by the simple faith of some of the attendees that technology could perform all the functions the ignorant once expected of God. This irrational longing gives the field of consciousness research a lot of its excitement, yet it is only the very naive who will admit to it. I had supper one night with a man who had made enough money in mainframes never to need to work again. He could travel the world gratifying his curiosity; and the miracle he expected from consciousness research was immortality. Humans, he felt, should be migrated onto better hardware when the old stuff wore out. This is the natural development of the classic Dennett thought experiment in which each part of my brain is replaced by a bit of silicon, thought by byte, until I wake up one day to find myself running on entirely different hardware, as if I, a mere human, were as clever and well designed as System 7.5.

If there are consequences to humans of being like machines, they are not new, and may not even be interesting, since we have always been that way. On the other hand, the consequences of thinking of ourselves as soft machines may be considerable. If consciousness works according to scientific laws, then a lot of what we have taken to be essentially human, such as free will, seems to disappear. Some people welcome this. The English scientists Colin Blakemore and Susan Blackmore both spoke against free will from their separate perspectives. Blakemore, a perceptual physiologist, best known for sewing up the eyes of new-born kittens to see how this affected their development, argues that free will must be an illusion. What happens in the mind is conditioned by what happens in the brain; what happens in the brain is determined by physical laws. Therefore, if you know the state of the brain at any one moment, you can predict its next state, and its next, and so on, presumably, to the point of death. Free will is then exposed as an illusion. If only we knew enough, we could see that all our states of mind are equally conditioned by the physical, law-bound realities of the brain.

This position looks like science. It is certainly based on a scientific research programme, and draws much of its authority from the fact, which almost all psychological research reveals, that much of the brain's activity is self-deception. We are constantly aware of what is going on in our heads, and much of what we are aware of is false. The process of turning the blooming, buzzing confusion of the world into a coherent story involves a tremendous amount of suppression, distortion and illusion. For instance, there is a well-known experiment in which the subject is wired up to an EEG machine, given a button to push, and told to push it whenever he feels like it, and to say whenever he feels the urge to push the button. That's all. It would seem an untrammelled exercise of free will. Yet the experimenters, watching, know when the subject is going to push the button before he does himself: there is a particular burst of electrical activity in his brain half a second before the subject himself realises that he wants to press the button.

Because of this and similar effects, Blakemore proposes that the distinction between voluntary and involuntary acts is delusory: it is no more than "folk psychology", to use the jargon sneer. This is a view with potentially enormous consequences. David Hodgson, an Australian judge who has taken a keen interest in the problems of consciousness research, points out that the distinction between voluntary and involuntary acts is essential to the preservation of human rights, since justice as we presently understand it demands that criminals be punished only for acts which they intended. The distinction is also essential when it is applied to the victims of crime, as well as to criminals: if the whole idea of consent and free choice is an illusion, how can rape be a crime, since it is defined as sexual intercourse without consent?

"I think this raises a really hard question for people who want to do away with the folk psychology of voluntary action, belief, and intention, and replace it with the concepts of neuroscience, which have a reputable place in the scientific world of cause and effect," Hodgson told the conference. "What do they see as replacing the consent of the woman as the crucial factor in determining whether an act of sexual intercourse is lawful or unlawful? And if they can't give a non-evasive answer to that question, then I think they should reconsider their programme."

Perhaps the answer could be supplied by the Secret Policeman's Brain Scanner. In the best traditions of philosophy and computer science, this is a device whose efficiency is unimpaired by the fact that it hasn't actually been built: I invented it halfway through the conference. But it is already a very useful tool for examining consciousness researchers. It consists of a small, portable headset, full of clever little scanning widgets, and a screen, which the experimenter studies. It gives a complete readout of the state of the brain at any one moment, just as a hardware debugger can in silicon.

The subject puts the headset on, and the screen then shows exactly which neurone is doing what. At least it shows which neurone does what on a chemical, or electrical, level. The question the machine is designed to answer is whether this would also tell us what is going on at a personal level: what thoughts, feelings, and sensations inhabit the mind whose brain we have illuminated. If the secret policeman's scanner could do that, then the hard problem would become much less interesting. It wouldn't have been answered, but answering it wouldn't increase our knowledge. It would be like the question "why is it DNA and not something else which encodes genetic information?", to which the only answer is that lots of acids or proteins might do the job; DNA just happened to be the one that does. What's interesting is how it does it.

If it worked, the secret policeman's brain scanner could answer all the "how" questions in the brain. If it worked, it would have another implication: that I could know much less about what I was thinking and who I am than the secret policeman who is scanning me. But would it work?

For some people, the answer is obvious. Colin Blakemore, when I tried the scanner on him, was confident that it would reveal what was going on in the subject's mind so an outsider could understand it. David Chalmers thought it would certainly show "the broad outline of what's going on and the likely behavioural dispositions." But he also thought it might take 500 years to produce a working version of the secret policeman's brain scanner: the human brain contains around 100,000,000,000 neurons, and each of these may be connected directly to 10,000 of its fellows and is connected indirectly to every other one, within six or seven hops.

Neuropsychologists are less sure that the secret policeman's brain scanner would be useful even if it could be built. There are two main reasons for this scepticism. The first is that everything we learn suggests that the functional unit of the brain is not a neuron but a network of neurons; and that these networks are not hard-wired but constantly shifting coalitions, with no more stability or definition than a cloud of fireflies. What is more, each neuron can be a member of several different firefly clouds at once, playing different roles in each. Even when quite precise regions of the brain are pinned down as essential to certain jobs - usually after a small stroke has stopped the work being done there - memories and ideas appear to be distributed all over the cortex, linked only by associations which are different in the case of every brain.

Douglas Watt, a neuropsychologist from Quincy, Massachusetts, says that the whole project of the secret policeman's scanner is doomed because our brains are just too complex and too individual. The neural architectures of human brains start with genetic differences (identical twins have brains which most closely resemble each other's) and continue to diverge and individuate even before we are born: there are environmental effects detectable in the womb. Once we are born, learning anything involves a physical process of modifying and extending the connections that make up our firefly neural nets. All this means that even the simplest networks, such as the one in our speech area which recognises a single syllable, may be located a couple of million neurons away in your brain from where it is in mine. What is more, the exact location of this network in my brain will shift over time, as I learn other languages and distinguish more carefully between phonemes.

One does not need a human brain to establish this. The classic experiment was done on rats. They were taught to recognise a particular smell, and a wonderfully subtle arrangement of electrodes detected the exact area of the olfactory bulb in their brains which twinkled into life when they did so. So far so good for the secret policeman's scanner: we know that when these particular neurons go off, the rat is conscious of a particular smell. It was a really fine experiment. What lifted it into the realms of genius, however, was the next part: the rats were then trained to associate this smell with some other stimulus - a flashing light, or a buzzer: the usual forms of entertainment laid on for laboratory rats. After that training, the rats were once more exposed to the same smell, without the additional stimulus - and this time an entirely different group of neurons fired off. Even to a rat, it seems, there is no such thing as a pure sensation. Everything is stored, and experienced, in relation to other sensations and to emotions. There is nothing thought but good- or badness makes it so.

"A recent project at Yale found there was nearly no such thing as an emotionally neutral word," says Watt. "Orienting information supplied by low-level emotion is essential for working memory; and working memory is essential for consciousness.

"We have two cultural icons, Data and Spock, who are clearly sentient beings, and show better purpose than most of us, and yet claim to have no emotion. I think this is just impossible and these characters are just contradictions in terms."

Despite Watt's pessimism about the secret policeman's brain scanner, there is considerable progress being made on some of the easy problems. We know a lot about where in the brain the things we don't understand are hiding. Normally this is discovered when things go wrong. Small strokes can explode in the brain like smart bombs, taking out one vital function, but no more than that. A blood vessel bursting here will rob you of the power of speech; another, in a slightly different place, will leave you able to speak and hear, but not to understand language; another will produce "blindsight", a condition in which the patient cannot consciously see anything, yet if asked to guess what would be there to see, will almost always guess right. One of the effects of this research is to make it quite clear that there is no place in the brain where "we" live. In Dennett's terms, there is no Cartesian theatre: no place where a little person sits inside our head watching a movie about what goes on in the outside world, so that what this homunculus sees is the contents of our consciousness.

The oddest stroke effects were produced by the patients of Dr Vilayanur Ramachandran, of the University of California in La Jolla. They are women who have not only been paralysed down one side by a stroke, but robbed by the same calamity of the knowledge that this has happened to them.

If someone "normally" paralysed is asked to pick up a tray of drinks, they will use their one good hand to pick it up from the middle. If one of Dr Ramachandran's patients is asked to do so, they will grasp one side of the tray as if their left hand were grasping the other, and lift confidently. "Oh, how clumsy I am," they exclaim when everything spills everywhere. They simply cannot see that a hand is missing. They are lucid in all other respects: able to tell him when and where they had a stroke, but simply unable to admit even to themselves that this stroke has paralysed them. He described one patient who was convinced that her left hand, which could not move at all, was touching his nose. "I couldn't resist the temptation ... I said, 'Mrs B, can you clap?'

"She said, 'Of course I can clap.'

"I said, 'Clap!'

"She went -" he moved his right hand in a lurching motion through the air to the point where it would have met the left hand.

"This has profound philosophical implications", he continued, as laughter rippled round the conference hall. "Because it answers the age-old Zen master's riddle - 'what is the sound of one hand clapping?' You need a damaged brain to answer this question."

The joke against philosophy here has an edge: the relations between the neuroscientists and the philosophers are not as cordial as the philosophers might believe. The "brainstabbers", as John Searle calls them, can see that they need philosophical sophistication as they close in on the mystery of consciousness. They are less sure that they need actual philosophers to provide the sophistication and claim the credit afterwards.

Dr Ramachandran believes that his discoveries provide neurological justification for Freudian theories of denial, repression, and other defence mechanisms. This is an extraordinary turn-around. For decades now, nothing could have been more unfashionable in serious academic psychology than Freud. Yet he believes that the pattern of denial shown by his stroke patients points to a continuous struggle on the part of both brain hemispheres to produce a coherent picture of the world. When isolated facts are reported which might upset this picture, the reaction of the left hemisphere is to ignore them as mere blips in the data. Most of the time, this will be the correct response. When it is not, the right hemisphere throws a reality check: it deals with new, inconsistent data by throwing over the old belief structure, in a Kuhnian paradigm shift. In stroke patients who cannot recognise their condition this mechanism stops working. The right hemisphere's messages never get through, and then, he says, "there is no limit to the delusions that the left hemisphere will engage in."

The denial is not unshakeable. Though it will reassert itself, it can be dissipated for a while by squirting ice-cold water into the ear on the unparalysed side. The effect is easy to miss, because if you squirt cold water into the wrong ear, as Dr Ramachandran did the first time he tried it, you are left with a patient who is confused, and angry that anyone should have squirted cold water without warning or reason into their ear, but still unaware that they are paralysed. But if the water is squirted into the ear of the damaged hemisphere there is a period of confusion and then about ten minutes during which the patient knows perfectly well that she has been paralysed - cannot imagine not knowing this, in fact. Six hours later, she has forgotten the whole episode, and is once more convinced that everything is working properly.

There were over 500 papers delivered at Tucson. Many of them were deliciously cranky: one of the Californian papers in the "poster sessions", where anyone could display their ideas on a pinboard, claimed that "Pioneering scientific research has demonstrated the experience of love to facilitate coherent harmonic patterns in the EKG, to improve both immune function and hormonal regulation, and enable focused attention to modify the conformation of human DNA. Love has also been shown to be causal in modulating the structure of water to provide a matrix for information storage and transformation."

Anyone who could pay could display at these poster sessions, since the organisers of the conference took the view that we don't know enough to be certain that any approach to consciousness is completely off the mark. Research is bringing two paradoxical messages: firstly, that consciousness is inescapable; it is the medium through which we apprehend even the unconscious world. We don't have access to anything outside our own consciousness. At the same time, we become conscious of a second conclusion: that consciousness has no obvious reason to evolve. Obviously an animal that deals with the world in an intelligent, co-ordinated fashion has an advantage. But most of the intelligent, co-ordinated decisions crucial for an animal's survival are made far faster than consciousness can process anything, according to Jeffrey Gray of the Institute of Psychiatry in Denmark Hill, South London. As an example of the sloth of conscious awareness, he points out that in the time it takes a tennis player at Wimbledon to become consciously aware that his opponent's serve has crossed the net towards him, he must already have struck his own return back over the net if he is to stay in the game. Even quite high-level trains of action, like commuting by car, or talking over breakfast with your spouse, can be performed without any conscious input.

Gray has his own theory about the location of consciousness in the brain. He sees it arising from a spray of feedback loops between networks, which are essentially checking whether the world is behaving as we expect it to. In his model, consciousness works rather like a newspaper: it does not report everything that happens: only the things which are unexpected (though not so unexpected as to make no sense at all). In a loop that takes about half a second to complete, he believes "the contents of consciousness perform a monitoring function to check whether the actual and predicted states of the perceptual world match or not." And this continuous activity is where we live.

The other popular large-scale view, which may well be compatible with Gray's, is the idea of a global workspace, developed by Bernard Baars, a sturdy, bearded Californian, tanned like a hazelnut. It holds that consciousness takes place in an area of the brain accessible to all the separate sub-processes of our mind, any one of which may suddenly come to dominate the public arena. The technical details, as with Gray's theory, are hugely complicated; but all these theories, along with ten or twenty others, are still attempts at the easy problems. However clearly, and in whatever detail, they map the structure of the brain alongside the workings of our mind, they still don't approach the hard problem of why the two should be related.

In the end it is only the hard problem that draws people into the field, even those who hope to solve it, as Dennett does, by proving that it is an illusion. That proof would be quite as important as a solution to the problem: as important as anything discovered by Copernicus, Newton, or Darwin. That night in the Empire bar, after Bruce Mangan and David Chalmers had finished their sparring, Patrick Wilken, one of the spectators, turned to me. He is the executive editor of Psyche, the most severe of the three journals of consciousness research to have appeared in the last five years. He also moderates the Psyche-D mailing list, one of the few places on the Internet where real work gets done in public. "You realise what all this is about," he said. "We're trying to invent a new soul."

Over the days that statement grew on me. There was plenty of time to brood on its meanings. The conference was so exciting that I found almost at once that I couldn't sleep for more than an hour and a half at a time. Instead of deep sleep, I could only dream. Instead of dreaming, I would wake up. And waking in my hotel room in the thin desert dawn, I couldn't see the sky over the mountains, blue and purple like the flank of a rainbow trout. Instead, I could only see little pack trains of cholinergic and aminergic neurons trudging up and down my brain, from its roots to its thoughts, no longer fireflies, but fire ants. One morning I woke up with a jangle in my head, too: "There's a fire that calls me miracle; that will not cause me chemical" - the phrase repeated over and over, up and down a scale like the trudging neurons. I was eavesdropping on my brain trying to make sense of itself. Whoever I was.
