Software has become too complex and even accidentally killed people. The way ahead is to produce simpler, user-friendly technology, argues Patrick Hall.
I have practised as a software engineer for more than 30 years, and I have become increasingly concerned by the failure of many software products to support people and their activities. Things do not seem to be improving.
Over the past few years I have worked with sociologists and others, Steve Woolgar (Brunel), Janet Rachel (East London), and Fiona Hovenden and Hugh Robinson (Open University), to attempt to understand what is going on. This has proved extremely fruitful, leading to many insights from the detailed observation of practice. But we have now moved away from this to look at software and its travails as a general cultural phenomenon, a manifestation of the transition from modernism to postmodernism.
Software and its development processes are undeniably rooted in modernism, in the Enlightenment. Within the Enlightenment tradition it is held that problems can be solved through the application of rational thinking and science. We have seen this strikingly in computing viewed as mathematics, development methods made formal and grounded in mathematics, and much new mathematics developed.
The process of developing software has been divided into stages requiring specialist skills, a Taylorist or Fordist approach, with a strong emphasis on measurement (called "metrics" within computing), supported by quotations from Lord Kelvin and leading to the injunction "You cannot control what you cannot measure". Behind all this lies the doctrine of progress, that with the continued application of scientific processes things will continually improve. A key paper of the early 1980s was titled "Programming: Sorcery or Science?" In it Tony Hoare delineated the "progress" from craft to mathematical and engineering methods, from the pre-Enlightenment to the Enlightenment.
But this agenda, this grand narrative, has been ruptured, just as the grand narrative of modern architecture was ruptured by the failure of mass housing as "machines for living in". This rupture has come to be known as the "software crisis", and has been manifest in a number of ways.
A concern for safety-critical systems arose in the 1980s following a number of disasters implicating software (see, for example, "Software's Chronic Crisis" by Wayt Gibbs in Scientific American, September 1994). The most significant of these was the death of several people due to software failure in the Therac-25 radiological treatment machine. Yet, as Donald MacKenzie (Edinburgh) has pointed out, there have been remarkably few deaths attributable to software failure, given the hype. There were major debates in the mid-1980s around papers by Philip Leith (Queen's, Belfast), James Fetzer and David Parnas, who questioned many of the claims of formal methods, of logic programming, and of artificial intelligence. Recent articles by Darrel Ince (Open University) in The THES introspecting within computer science are further symptoms of this underlying rupture.
Of course the discipline has attempted to repair the rupture, often with considerable success. The mathematics underpinning development methods has been advanced and continues to be advanced, with much of that advance arising in the United Kingdom and Europe, particularly at the Programming Research Group in Oxford under Tony Hoare's leadership and the Laboratory for Foundations of Computer Science in Edinburgh under Robin Milner's leadership.
Development processes have been modified to embrace iterative methods using prototypes. A frequently noted difficulty has been understanding what a user "really" wants, recognising how difficult it is for the technically inexperienced to visualise the impact of technology that is merely available in a description. Software prototypes are intended to help this understanding.
This approach has been varied to involve users throughout the development process within "participatory development", drawing upon the socio-technical approaches of Enid Mumford (Manchester University) and others.
Object-oriented approaches have developed to become the current orthodoxy, with programming languages like C++ and Smalltalk and a host of research languages in the ascendancy. Work on design frameworks, and on design patterns derived from architecture and, some say, anthropology, is of active interest and promises much.
But this repair activity can only ameliorate the situation. Of course this modernist approach must continue, and of course I will be party to that, but what of that other side of the divide, postmodernism?
The failure of software has arisen from complexity: the very softness and flexibility of software lead us beguilingly into building larger and larger systems, adding encrustations of extra capability because each seems desirable and so easy to do. But each encrustation interacts with the others, and soon the whole system moves beyond our ability to comprehend it. Now, should we really be surprised? With Jean-François Lyotard, we should espouse "incredulity towards grand narratives". All around us, attempts by humans to create complex artefacts in architecture, in engineering, and in social and political systems have failed. This is the postmodern condition. There are a number of developments in computing which must be viewed as postmodern, and other places where postmodern developments may be anticipated.
Bennetta Jules-Rosette, observing the usual travails of software development but in Kenya and the Ivory Coast, concluded: "My research reveals that postmodernity has arrived in Africa". Muffy Thomas (Glasgow) has argued that software fits Derrida's linguistic theories of the simultaneous increase in control and indeterminism, and that we cannot expect absolutely correct or safe software.
Postmodernism encourages pluralism and recognises difference: software markets are now global. Major suppliers like Microsoft, Lotus and Claris all recognise that more than half their markets are outside the United States, and that most of these are in languages other than English. They can no longer force English on everybody, and would hope for the simultaneous launch of new products in all their major languages worldwide.
This requires that software is internationalised: prepared in a form that is as near neutral as possible with respect to the "locale" of language, culture and local convention of the intended market. Leading-edge approaches would exploit recent developments in computational linguistics.
The coding standard ISO 10646, which accommodates all scripts, has achieved wide popularity, but alas has not been adopted for Windows 95. Locale-specific issues can run quite deep: colour, for example, means different things in different cultures, and interaction metaphors like the desktop beloved of PC makers may be too Western and inappropriate elsewhere.
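To make the idea concrete, here is a minimal sketch of what internationalised code looks like, written in present-day Python purely for illustration; the "messages" catalogue and "locale" directory named below are assumptions of the sketch, not a prescription. The program carries no fixed language of its own: its text and its number formats are resolved against whatever the user's locale provides.

    # Illustrative sketch only: an "internationalised" program keeps no
    # hard-wired language; text and formats are resolved per locale.
    import gettext
    import locale

    locale.setlocale(locale.LC_ALL, "")        # adopt the user's own locale
    lang = locale.getlocale()[0] or "en"

    # The "messages" domain and ./locale catalogue directory are assumed here;
    # fallback=True shows the untranslated key if no catalogue is installed.
    t = gettext.translation("messages", localedir="locale",
                            languages=[lang], fallback=True)
    _ = t.gettext

    print(_("Welcome"))                                          # user's language
    print(locale.format_string("%.2f", 1234.5, grouping=True))   # local number format

The point of the pattern is that the final shaping, the choice of language and convention, is deferred to the locale in which the software finds itself, rather than fixed by the supplier.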
We must ask what view of the technology would be appropriate in other parts of the world, but when I have paraphrased this and asked colleagues "What would computing technology be like had it been invented in Asia or Africa?" I have been met with incredulity that it could be anything other than the way it is.
The participatory approach has not worked: Andrew Clement and Peter van den Besselaar have pointed out isolated successes, but concluded that permanent changes to practice have not been produced. In my view the problem is that the intended recipients and beneficiaries of the technology have not been empowered; what is needed is technology shaped so that users can assert complete control. Shirin Madon (LSE) has reported on installations in India where the introduction of purpose-built packages has failed, but the introduction of generic office packages of spreadsheets and wordprocessors has worked. As a technologist I must shape the technology so that I can hand it over, walk away, and leave the user to undertake the final shaping to his or her specific purpose. This requires developments in software architectures and software componentry, but we are almost there.
Before we can get there, something must be done about that modernist practice of planned obsolescence. Suppliers of office products bring out new releases once or twice a year, containing many new features, many of which we do not want, with existing features achieved differently, requiring that we relearn their use, and with a set of defects that differs from the previous release's but is not necessarily less numerous.
The one thing we can be sure about is that the new software will require more hardware. For developing countries this is disastrous in its financial implications and threatens to perpetuate the division of the world into technology haves and technology have-nots.
At a recent meeting of the Commonwealth working group COMNET-IT, I suggested to representatives of Commonwealth countries that they should use their purchasing power to demand feature-light but defect-free software that does not make huge demands on hardware - I hope that something can be achieved.
One potential source of user empowerment comes, interestingly, from neural networks. Here the technology is generic, and the function is not programmed in by an expert but is learned from samples of the intended behaviour acquired from the user.
The motivation behind neural networks is entirely modernist, but their success is postmodern.
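As an illustration of the idea, and nothing more than a toy, here is a minimal perceptron in Python whose behaviour comes entirely from examples supplied by the user rather than from rules written by an expert; the example demonstrated is an assumed one (a logical AND), chosen only to keep the sketch short.

    # Toy illustration: the "program" is nothing but examples of the
    # intended behaviour; the function is learned, not written by an expert.
    import numpy as np

    def train_perceptron(samples, labels, epochs=100, lr=0.1):
        """samples: (n, d) array of user demonstrations; labels: (n,) array of 0/1."""
        w = np.zeros(samples.shape[1])
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                pred = 1.0 if x @ w + b > 0 else 0.0
                w += lr * (y - pred) * x      # nudge the weights towards the example
                b += lr * (y - pred)
        return w, b

    # The user demonstrates the behaviour they want (here, a logical AND):
    samples = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    labels = np.array([0, 0, 0, 1])
    w, b = train_perceptron(samples, labels)
    print([1 if x @ w + b > 0 else 0 for x in samples])   # -> [0, 0, 0, 1]

The expert supplies only the generic learning machinery; the specific function belongs to whoever provides the examples.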
With multimedia and CD-ROMs there is a marked move away from "productivity" software like spreadsheets towards content such as encyclopaedias. A significant aspect of content delivered on multimedia is the shift from linear to non-linear presentation. Linear presentation, as in books, films and music, keeps the author in control of the sequence in which the recipient experiences the content, and though readers might "rewrite the text" they only do so in the sense of constructing particular meanings.
With multimedia and non-linearity there is a profound change, for readers now resequence and reorganise the material, and could in principle add their own material. The distinction between writer and reader becomes blurred.
Postmodernism recognises uncertainty and chaos, and perhaps this is what we see in the Internet. It is unregulated, and causes governments concern about national and governmental security, frequently displaced into concern for morals. It enables citizens to communicate freely, and with them dissident groups and criminal gangs alike. People can lodge information on the Internet under pseudonyms and we may not know who they are. We no longer know who the authority is, or whether they will be there next week.
The modernist tradition has been internalised by computing technologists. It needs to be externalised and through that the technology liberated. Perhaps all these postmodern views simply emphasise what we already knew - that the technology may be produced by technologists, but it is not owned by them, and is only legitimated by its benefit to the recipient and consumer, who is potentially every citizen of the world and who must ultimately assert control over it.
Patrick Hall is a professor in the computing department at the Open University.