
A motor to defy the mechanic

The Engine of Reason, The Seat of the Soul - Journey to the Centers of the Mind
August 18, 1995

Paul Churchland's book is concerned largely with the achievements of a particular kind of brain model involving the concept of "parallel distributed processing". This approach, rather than treating the brain as if it were a digital computer doing specific step-by-step computations or analyses, treats it as a network that can be trained to behave in particular ways by adjusting the strengths of the links between the elements of the network (the "neurons"). These networks can be simulated on a computer, allowing one to investigate what kinds of skills fall within their scope. For example, the "input neurons" can be exposed to patterns of information corresponding to faces, and the links adjusted so that the network as a whole generates in its "output neurons" the signal designated as being correct, such as codes for the names of the individuals concerned or for whether they are male or female. It turns out that, having been trained to produce the correct output from particular examples, neural networks often tend to produce the correct output in situations for which they have had no training as well; in other words the networks are able to generalise. The abilities of networks to classify and to generalise have been tested on a whole range of problems, for example pronouncing words in the English language correctly given their spelling, distinguishing between sonar echoes from underwater rocks and ones from underwater mines, and dividing words into grammatical categories given exposure to a collection of sentences in which the words appear.
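
The flavour of this kind of simulation can be conveyed in a few lines of code. The sketch below is not any of the networks mentioned above: it is a minimal illustration, on an invented toy task, of adjusting the link strengths until the network produces the designated output and then testing it on examples it has never seen (only the numpy library is assumed):

import numpy as np

rng = np.random.default_rng(0)

# Toy task: decide whether a point in the unit square lies above the line
# x + y = 1. Eighty examples are used for training; twenty are held back.
X = rng.random((100, 2))
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)
X_train, y_train = X[:80], y[:80]
X_test, y_test = X[80:], y[80:]

# One hidden layer of eight "neurons"; the weight matrices are the adjustable links.
W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(inputs):
    hidden = sigmoid(inputs @ W1 + b1)          # "input neurons" -> hidden layer
    return hidden, sigmoid(hidden @ W2 + b2)    # hidden layer -> "output neuron"

lr, n = 2.0, len(X_train)
for step in range(5000):
    hidden, out = forward(X_train)
    err = out - y_train                         # distance from the designated correct signal
    # Backward pass: adjust every link in the direction that reduces the error.
    d_out = err * out * (1.0 - out)
    d_hid = (d_out @ W2.T) * hidden * (1.0 - hidden)
    W2 -= lr * (hidden.T @ d_out) / n
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X_train.T @ d_hid) / n
    b1 -= lr * d_hid.mean(axis=0)

# The point of interest: performance on the twenty points the network never saw.
_, test_out = forward(X_test)
print("held-out accuracy:", ((test_out > 0.5) == (y_test > 0.5)).mean())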

Since artificial neural networks are unexpectedly effective at tasks of recognising and categorising (better, indeed, than earlier kinds of program that tried to categorise on the basis of explicit rules), it is natural perhaps to imagine one has stumbled on a key principle of functioning of the brain, one which might explain all human capacities if one knew how to apply it correctly.

This is evidently what Churchland wants his readers to believe. But is it a credible claim? Not, I think, in the rather extreme form that he advocates. His enthusiasm for neural networks leads him to deny that other viewpoints can have anything significant to say: in particular, he contests Noam Chomsky's claim that human language capabilities depend on processing capacities specific to language. The main basis for his attack is a simulation by Jeff Elman of a system that appears to be able to learn to distinguish between grammatical and ungrammatical structures in English simply on the basis of being presented with sufficient numbers of grammatical sentences, and without the assistance of any processing capacity specific to language.

Such achievements appear (at first sight, at any rate) to contradict an assertion of Chomsky's. It is questionable, however, whether they require an abandonment of Chomsky's essential principles. The fact that a mechanism not specifically attuned to language can acquire language eventually no more rules out the existence of a specific mechanism for language in man than the possibility of acquiring balancing skills without a special organ of balance rules out the existence of an organ of balance. Indeed, the long time that Elman's program takes to acquire grammatical competence (exposure to 10,000 examples of sentences with a vocabulary of only 21 words being needed to acquire a dozen rules of grammar) suggests that this is not the process by which human beings acquire language.
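
For readers curious about what such a simulation involves in practice, a rough sketch in the same spirit is given below. It is emphatically not Elman's model: the grammar, the six-word vocabulary and the training shortcut are all invented for illustration, and only the output connections are trained, but it shows the general shape of the exercise, in which a recurrent network is shown nothing but grammatical sentences and comes to expect grammatical continuations:

import numpy as np

rng = np.random.default_rng(1)
vocab = ["boy", "girl", "dog", "runs", "sleeps", "."]
idx = {w: i for i, w in enumerate(vocab)}
V, H = len(vocab), 12

def sentence():
    # Toy grammar, two rules: a sentence is NOUN VERB "."
    return [str(rng.choice(["boy", "girl", "dog"])),
            str(rng.choice(["runs", "sleeps"])), "."]

# Elman-style wiring: input -> hidden, hidden -> hidden (the "context"), hidden -> output.
Wx = rng.normal(0.0, 0.5, (V, H))
Wh = rng.normal(0.0, 0.5, (H, H))
Wo = rng.normal(0.0, 0.5, (H, V))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for epoch in range(3000):
    words = sentence()
    h = np.zeros(H)
    for t in range(len(words) - 1):
        x = np.zeros(V); x[idx[words[t]]] = 1.0
        h = np.tanh(x @ Wx + h @ Wh)                 # hidden state carries the context
        p = softmax(h @ Wo)                          # predicted next word
        target = np.zeros(V); target[idx[words[t + 1]]] = 1.0
        # For simplicity only the output links are trained here; the full model
        # would propagate the error back through the recurrent links as well.
        Wo -= lr * np.outer(h, p - target)

# After training the network should expect a verb, not another noun, after "boy".
h = np.tanh(np.eye(V)[idx["boy"]] @ Wx + np.zeros(H) @ Wh)
p = softmax(h @ Wo)
print("P(runs | boy) =", round(float(p[idx["runs"]]), 2),
      "  P(dog | boy) =", round(float(p[idx["dog"]]), 2))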


Churchland's ideology also leads him to suggest that there is at bottom no difference between our own linguistic skills and those of a chimpanzee. He cites in support of this the case of a chimpanzee that was able to respond correctly to requests such as "Kanzi, go and get the ball that is outdoors and bring it to Margaret". This again fails to prove his point, since comprehension of such a sentence demands only the ability to hold in memory a few key words ("get", "ball", "outdoors", "bring", "Margaret") and make use of them in a sensible way. Human linguistic skills in general cannot be reduced to such terms.

One might well wonder why it was necessary to write innateness out of the picture altogether, instead of arguing for a more plausible integration of the potentialities of innateness and neural networks. One is tempted to see this as the consequence of science being in practice not entirely the dispassionate search for the truth that it is frequently advertised as being, but sometimes also having major adversarial elements. If one disapproves strongly of a particular person's ideas, as Churchland clearly does, a balanced approach is liable to go out of the window. His enthusiasm for neural nets leads him likewise to an unbalanced account of moral understanding, over-emphasising the learning and categorisation aspects of social behaviour and ignoring the possibility of innate moral mechanisms.


In a chapter entitled "The Puzzle of Consciousness", the author tries to argue that the idea that there is a problem in fitting consciousness into one's world-view is merely the outcome of confusion in the minds of those who believe this to be the case. He suggests that the visual sensation of the colour red (an example of "first-person knowledge") differs from third-person knowledge of the same (such as a neuroscientist's understanding of the processes involved in seeing the colour red) merely in their being two different ways of knowing the same thing: first-person knowing is "self-connected knowing". Very true, perhaps, but how far does that statement get us with really understanding the difference? Such understanding Churchland postulates as the glowing promise of the future, a future where we will have a theory that will "reconstruct all known mental phenomena in neurodynamical terms" in the same kind of way that we can now analytically reconstruct the phenomena of heat in molecular terms.

The point at issue here is what we mean by reconstruction. A scientific account of consciousness, as normally understood, would end up with some kind of "account of what a person is experiencing". The difference in opinion between Churchland and someone such as the philosopher John Searle seems to be a difference in what kind of account they would consider acceptable. Searle considers a scientific account would be unacceptable because direct first-person knowledge of something such as pain is crucially different from scientific knowledge of what is happening in the brain when a person experiences pain. Churchland on the other hand sees them as in principle the same, and accuses Searle, in claiming that there is a clear difference, of making an unjustifiable statement. It is no good, according to him, saying that the difference is obvious, because we now recognise that introspection is not infallible: "We can have a false or superficial conception of . . . the character of our inner states".

The limitations in Churchland's thinking are starting to become apparent. In talking about the "essential character" of our inner states he is implicitly referring to their character according to a scientific perspective. Searle's point is that there are other perspectives, such as the experiential, from which it will not be the case that such explanations will be adequate. As Searle observes, and Churchland fails to understand, no account in terms of neurons will lead in itself to understanding why pain feels the way it does.

The argument that our introspections are fallible, and so can be ignored under all circumstances, seems to be a currently popular one, used for example by Dennett in his book Consciousness Explained. One hopes that this fashion will be short-lived.


Susan Greenfield does not fall in with this kind of thinking. Her philosophy is superior to Churchland's, perhaps because she does not try to fit her views into an ideology. In Journey to the Centers of the Mind she expresses doubts as to whether a purely scientific contemplation of the brain, reducing it to the behaviour of neurons, is adequate to the issue, and advocates that careful attention be given to the details of conscious experience in whatever ways may be appropriate. Curiously enough, this radical difference in viewpoint compared with Churchland's makes hardly any practical difference in the way the two of them approach the problem of consciousness; they both look for general features of conscious experience and both try to relate them to brain models (Greenfield, however, acknowledges the possibility that the brain may not provide the whole story). Compared to Churchland's analyses, Greenfield's are rather more detailed, and thus perhaps more interesting from the point of view of the scientific enterprise. Greenfield argues, for example, that brain states of a certain kind, associated with particular categories of situation discussed in detail, tend to trigger off coherent cognitive activity, with the original focus acting as a kind of "epicentre". She explains our apparent continuing identity in terms of a picture used by Dennett to argue away such a continuing identity: we develop relatively fixed "meta-habits" which (as postulated by Dennett) decide what varying aspect of the whole should be in charge of operations at any given time. Her detailed proposals for fitting together the psychological aspects of consciousness and the neurophysiological ones will doubtless be seized upon by the experts keen on uncovering both their faults and their merits.

Are we perhaps at last on the way to bringing consciousness properly within the scope of science? Whose view is correct, Churchland's that consciousness is no different from any other scientific problem, or Greenfield's that scientific language gives us no terms of reference for dealing with the subjective, that consciousness is in some sense too diverse for science to be able to cope with it? Perhaps the example from the physical sciences of deterministic chaos gives us a clue. There, it became apparent that some ways of thinking were inappropriate, and needed to be replaced by others. I suspect that treating consciousness properly will similarly not just be a matter of filling in details as Churchland proposes but also require a change in ways of thinking. The widespread hostility existing at the present time to ideas such as that the sharing of personal insights may be a valid way of gaining information about consciousness is perhaps a sign that the scientific community in general has some way to go in this regard. As the barriers coming from hallowed traditions start to dissolve, we may well begin to put ourselves in a position to be able to say with some justification that science can at last understand consciousness.

Brian Josephson is a Nobel laureate in physics and professor of physics, University of Cambridge.

The Engine of Reason, The Seat of the Soul: A Philosophical Journey into the Brain

Author - Paul M. Churchland
ISBN - 0 262 03224 4
Publisher - MIT Press
Price - £29.95
Pages - 329
