
Statistically significant

Neural Networks for Pattern Recognition
October 13, 1995

What is a pattern? It could be that nice design on Auntie Bettie's wallpaper, a fingerprint of a particular criminal, the behaviour of a commodity on the stock exchange or the signature on a cheque. The list is endless. What is pattern recognition? A cynic might say "that which humans do well but computers cannot do well at all".

In fact, humans are good at some kinds of pattern recognition ("the wallpaper is like the flowers in the garden") and computers are better at others ("are there any accounts that have not been paid in this month's list of 32,000 invoices?").

My first quibble with Chris Bishop's book is that it should have indicated in its title that the subject is statistical pattern recognition. This, in fact, is an important subset of the vast field to which I have been pointing.

It might tell you whether a patient's symptoms contain a healthy pattern, but it will not tell you whether a picture is of a table or a chair, as one table need not have anything in the way it looks that correlates statistically with another table.

It is a subset which is well served by the fact that there is a class of neural networks capable of learning the statistical characteristics of patterns by being shown enough of them. Accepting the subset, it can then be said that this is an excellent book on the specialised area of statistical pattern recognition with statistical neural nets.

But a health warning may be in order - this is not a book for the engineer with a pattern recognition problem. The book concentrates on deriving and explaining a variety of statistical techniques but says little about applications which may benefit from these elegant methods. The engineer can look forward to being educated by it rather than advised. I guess that this is laudable in an era in which words like "wealth creation" and "technology transfer" seem to dominate the funding of research. The fundamentals need to be understood before any attempt can be made to turn a technique to advantage. So this is a good starting point for new students in those laboratories where research into statistico-neural pattern recognition is being done.

The introductory chapter provides an entry into the world of statistical techniques as well as an inkling of the author's enjoyment of living in such a world. The examples for the reader at the end of this and every chapter are well chosen and will ensure sales as a course textbook. Galloping through the contents, there is almost everything you might wish to know about statistical systems but were probably afraid to plunge into to date. A useful set of appendices provides good mathematical support for the book.

Of particular note is the chapter on radial basis functions, which are a way of using neurons as filters of the characteristics of a pattern and then combining the effort of many such filters to achieve an overall decision. RBF techniques are now becoming very heavily used instead of a tedious, computing-intensive method known as error back-propagation, which was the scourge of most neural network laboratories fired by the renaissance of the subject in the early 1980s. I was also pleased to see the introduction of the VC measurement (no, not the vice chancellor's opinion of your work). This is the work of Vapnik and Chervonenkis which, in a broad sense, says how many categories of pattern can be discerned by a particular neural structure.

But probably the most exciting chapter is the last one, on Bayesian techniques. Based largely on the work of the neuromathematician David MacKay, these schemes appear to throw more light on structure-learning-function considerations in neural nets than has been possible to date. One particular problem in neural systems has been the question of generalisation. Say that there is a set of examples on which a net can be trained. Almost any network ought to perform quite well after training when tested on this selfsame set. The question is, how will the system perform on patterns that are in some way similar to the elements of the training set but are not included in it? Bayesian techniques are making inroads into this problem of generalisation by defining expectations for different levels of model complexity.
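For the reader who wants to see the radial-basis-function idea in concrete form, here is a minimal sketch in Python. The toy data, the choice of ten centres and the width of the Gaussian "filters" are my own illustrative assumptions, not anything prescribed in the book; the point is simply that each hidden unit responds to how close an input is to a stored prototype, and a single linear layer, fitted here in one closed-form least-squares step rather than by iterative back-propagation, combines those responses into a decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two clouds of points in the plane (illustrative only).
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(2.0, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Each "filter" is a Gaussian bump centred on a stored prototype pattern.
centres = X[rng.choice(len(X), 10, replace=False)]
width = 1.0

def rbf_features(X, centres, width):
    # Squared distance from every input to every centre, then a Gaussian response.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Combine the filters with a linear output layer fitted by least squares.
Phi = np.column_stack([rbf_features(X, centres, width), np.ones(len(X))])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = (Phi @ w > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```

The contrast with error back-propagation is visible in the single `lstsq` call: once the centres and widths are fixed, the combining weights come out in one step instead of many iterative passes over the data.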

Having decided that this is a first-class book for the researcher in statistical pattern recognition, what of its deeper implications? The number of researchers in this area is limited. Who else should read it? One opinion is offered in the foreword by Geoffrey Hinton of Toronto University, who was one of those responsible for a revival of interest in the field in the mid-1980s: "It is a sign of the increasing maturity of the field that methods which were once justified by vague appeals to their neuron-like qualities can now be given a solid statistical foundation. Ultimately, we all hope that a better statistical understanding of neural networks will help us understand how the brain actually works. . ."

Does this mean that anyone interested in how the brain works must now arm themselves with the statistical techniques in Chris Bishop's book? I think that the answer is resoundingly negative. It has been the mistaken belief of scientists whose daily work is not necessarily related to the brain that techniques of type X (where X is that scientist's major skill) will provide the key to understanding the brain.

Regrettably, statistical neural networks are just another brand X. True pattern recognition behaviour in human beings remains a mystery. It does not always follow the laws of statistics as its major mechanism. It is the result of a complex interaction between what the subject perceives and memories stored in the dynamic activity of some parts of the brain. While real neural networks are undoubtedly at work, it is likely that statistical analysis will be only one of a myriad of techniques from which understanding will emerge.

It is far better to realise that this authoritative book throws light on techniques invented by skilful mathematicians than to confuse matters by giving it interdisciplinary attributes to which the author rightly does not aspire.

Igor Aleksander is professor of neural systems engineering at Imperial College, London.

Neural Networks for Pattern Recognition

Author - Chris Bishop
ISBN - 0 19 853849 9
Publisher - Clarendon Press, Oxford
Price - £55.00
Pages - 481
