Using artificial intelligence to decide which students to admit or researchers to hire risks creating “self-fulfilling prophecies” that could simply reinforce which kind of people win opportunities, an expert in digital ethics has warned universities.
AI in admissions has recently emerged on universities’ agenda: the president of Imperial College London, Alice Gast, said last year that she expected AI would “augment” the process, while several Hong Kong universities have said that they are using the technology to find the student characteristics that predicted future success.
But speaking at a major conference on AI hosted by the University of Oxford on 18 September, Carissa Véliz, a research fellow at Oxford’s Uehiro Centre for Practical Ethics, said that she had several worries about the technology being deployed in education.
It is a “big concern” that AI is being used to “assess people, whether it’s professors, teachers, students, and to filter candidates”, she told Times Higher Education.
For example, an AI system might analyse data on researcher career trajectories and find that people who did a PhD at certain universities had more success in the future.
On this basis, the AI might conclude that it made sense to award grants to applicants from those universities over others.
But this risked simply reinforcing patterns that already exist, Dr Véliz warned.
“We will never know whether the postdoc who did not receive a grant might have become a successful academic,” she argued. “As long as predictions are used to allocate resources and opportunities, the risk of self-fulfilling prophecies seems inevitable.”
Another concern is that algorithms use proxy data – which could be as arbitrary as where people live, or their Facebook friends – to predict how well someone will fare in the future, she said.
Advocates for using AI in admissions argue that machines are less prone to the cognitive biases that could unfairly sway the decision of a human admissions officer.
But some types of AI, such as those based on so-called neural networks that mimic the human brain, are seen as “black boxes” because it can be unclear why they have made a particular decision. “When the algorithm recommends someone, or brands someone as risky, we may not know why that is,” Dr Véliz said.
Humans have to justify their decisions with reasons, she pointed out, but with algorithms, “reasons are missing”.
Just as with new drugs, there should be randomised controlled trials to see what impact AI systems have on the distribution of opportunities, she argued, before they are “let loose on the world”.
Dr Véliz also took aim at universities introducing what in some cases might be “tech for the sake of it”.
She asked: “When we introduce tech into universities and education, are we doing it for the benefit of students, and are the benefits really worth the risk? And what are the alternatives?”
“Sometimes, low tech is surprisingly robust, and cheaper, and safer. If you think about books as a technology, they are incredibly robust, and much more so than any kind of digital tech that is glitchy, and has security issues and so on,” Dr Véliz said.
For example, the filming and recording of lectures is a form of “surveillance” that “diminishes creativity and independent thinking”, she added. “When I lecture in university classrooms where there are cameras and microphones, there is typically less debate on sensitive issues, for instance. No one likes to be on record exploring tentative ideas.”
Print headline: AI in admissions is a ‘big concern’