
Philosopher: more thinking required on role of AI in education

The broad trends in AI may be unstoppable, but we must find better ways of directing and regulating them, argues Toronto's Mark Kingwell
February 4, 2018
[Image: child and robot. Source: Getty]

Developments in artificial intelligence are now "a runaway train" and urgently require the attention of philosophers and other humanists, a professor has argued.

Mark Kingwell, professor of philosophy at the University of Toronto, spoke on "Humans and Artificial Intelligence – What Happens Next?" at an alumni event in London.

"A 'singularity' is demonstrably coming when our attempts to enhance our own intelligence are outstripped by artificial entities," he told the audience. "Yet there are no such things as neutral technologies. We have to ask what interests they serve."

Speaking to Times Higher Education after his talk, Professor Kingwell spelled out the implications for universities.

Developments in AI such as "the consumer-tailored version of an algorithm that helps you find your learning style" and the delivery of courses through "personal connections to modules on computers" were already threatening to "put a lot of lecturers out of work, because you wouldn't need them to be replicating the same material to physical audiences… The endgame would be a scenario where you don't have to speak to a human at all: you just order your modules online and get your results. Global programmes could be created in some sweatshop."

Although he was "persuaded by the balance-of-outcomes argument about driverless cars", Professor Kingwell said that other recent trends in AI were "a bit like a runaway train", which, "on balance, I feel negative about". He was particularly concerned about applications in medical diagnostics, where "the algorithms are in effect used for triage and control of access to complex and expensive surgery. It seems to me very tricky to allow that to happen without reflection."


Here, philosophers and other humanists, Professor Kingwell went on, could draw on a long tradition of asking ethical and political questions about changing technologies, "nuclear weapons, drugs, even handguns. Many, if not most, hospitals now have bioethics committees. [However unstoppable present trends seem] you can still have a conversation and push back." Such humanistic thinking could also contribute to the regulatory framework governing "the necessary limitations on certain kinds of programmes and their applications, just as we would ask similar questions about a new pharmaceutical".

Yet Professor Kingwell, who is writing an academic handbook about the ethics of AI, often found himself frustrated by funding priorities within universities.

"The technical AI project at UT has been given millions of dollars," he explained. "We asked for thousands of dollars for our ethics project, and we were given nothing. It's familiar: the humanities are perceived to be both costless and useless, but we've got something to contribute, too; and we might need some money, because we want smart people to come and talk to us."

matthew.reisz@timeshighereducation.com
