
Humanists and social scientists must help shape the future of AI

Scientists and technologists must participate in a genuinely collective project to regulate technological development, says Nicholas Dirks
July 2, 2023

Can AI be regulated for the public good? The European Union is the first body to give it a try. But it won't be easy.

The highly anticipated EU AI Act is an attempt to set a precedent about the "who, what, when, where and why" of the AI tools at our current disposal. Technologies that pose a "clear threat to the safety, livelihoods and rights of people" will be banned, and other "high-risk" technologies, such as CV-scanning tools, will be tightly regulated.

As long ago as 1951, Alan Turing, who is credited with laying the foundations for AI, said: "It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers…[Intelligent machines] would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control."

Although it has taken much longer than Turing predicted, we now find ourselves facing the prospect of having to deal with precisely that scenario. The open letter published in March by the Future of Life Institute calling for a pause in the development of AI, signed by such leading tech figures as Jaan Tallinn and Gary Marcus, signalled a new consensus about the possible dangers of AI. The subsequent warning about the unchecked development of AI from Geoffrey Hinton, a major pioneer in the field who stepped down from his post at Google to speak freely, has only confirmed the extent to which the creators of this new technology have come to fear its potential impact.


We already know that AI will increasingly shape decisions in business, finance and government, while potentially upending the creative arts. We also know it will exert additional pressures on the reliability of news and media given its total agnosticism with respect to matters of fact and truth. Such agnosticism is also why algorithms risk not only replicating but exacerbating existing human biases. And AI raises some of the same questions as new gene-editing technologies do, poised as they are to test the boundaries between therapeutic treatments and designer enhancements.

So who should be held accountable for the possible harms caused by these technologies? How do we even define harm? And who will set ethical norms, practical boundaries or operational protocols? Despite the EU's best efforts, the fast pace of technological change and its inherent complexity mean that policy and regulation will inevitably fail to deliver significant controls in a timely manner. But should these decisions, then, be left simply to those who are developing the technologies themselves?


AI certainly has the potential to enhance productivity, efficiency and accuracy across business, healthcare and government and to increase our human potential in many still uncharted ways. And might it be said that some of the worries about AI are more the stuff of science fiction than genuine threats, at least in the foreseeable future? Yet we have to take these worries seriously, not least in acknowledgement of the fact that, while scientific discoveries and innovations have done much to improve our lives, science more broadly has not always made the world a better place.

The invention of the automobile created jobs and provided mobility and opportunities for humanity, but it has also led to increased air pollution and exacerbated climate change. Albert Einstein revolutionised our understanding of physics, but his work also provided the foundation for the development of the atomic bomb. And, most recently, Vivek Murthy, the US Surgeon General, issued an advisory based on research conducted by both tech companies and behavioural social scientists showing that social media's attention-demanding algorithms can have devastating effects on young people. AI will not only exacerbate some of these effects: it also has the capacity to do even greater harm.

Big tech companies did not create the problems of AI by themselves, and they certainly cannot solve them on their own. Humanists, social scientists and community leaders of diverse backgrounds should work with scientists and engineers to bring fundamentally new perspectives to bear on the risks that such extraordinary advances bring with them.

Humanistic disciplines have been cut and denigrated within universities over the past few years, but we need anthropologists and philosophers – among many others – to work with organisations inside and outside government to chart the frameworks within which basic regulatory and governance protocols might be developed. The scientists and technologists must participate in a genuinely collective and fundamentally humanist project.


The EU's attempt to put some restraints on the use and application of AI is a good start, but it should be seen as just that – a start. AI has the potential to do much good, but to ensure that its coexisting potential to do harm is not realised, policy leaders need to engage thought leaders and experts on the societal impacts of new technologies.

We are courting disaster if we leave regulation to those who are inventing AI technologies and those who stand to benefit enormously from their unconstrained success.

Nicholas B. Dirks is president and CEO of the New York Academy of Sciences and professor of history and anthropology at the University of California, Berkeley.
