
How can universities meet the ethical challenges of AI?

Source: iStock

A panel at THE Live furthered the debate about the best applications of AI within the higher education sector

A two-day event with the remit to “change the story” about higher education, THE Live 2019 featured a range of panels that debated and discussed the challenges facing the sector.

To address the issues of delivering ethics in the era of artificial intelligence, THE’s digital editor, Sara Custer, was joined by Nathan Lea, senior research associate at UCL Institute of Health Informatics, which works with Huawei, and Kate Devlin, senior lecturer in social and cultural artificial intelligence at King’s College London.

Dr Lea began the discussion by acknowledging that ethics committees have struggled to get to grips with computing and tech issues. Similarly, those departments responsible for tech and computing don’t necessarily understand the subject area they are handling data for.

Describing AI algorithms as autonomous – not yet sentient but sophisticated – Dr Lea emphasised: “We are not programming something; we are educating something that is, to date, unpredictable.”

The challenge, as both speakers acknowledged, is to take into account the potential bias of the data and the potential “prejudice” of the decision-making algorithms, particularly in recruitment, where discriminatory factors can be embedded in the models themselves. Video surveillance and facial recognition are two other areas notably fraught with concerns around privacy and bias.
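A minimal sketch of that mechanism, purely illustrative and not something presented at the panel: a classifier trained on synthetic, historically biased hiring decisions rediscovers the bias through a correlated proxy feature (here a stand-in such as postcode, our assumption), even when the protected attribute is withheld from training.

```python
# Illustrative sketch only: a model trained on biased historical hiring
# decisions reproduces that bias via a proxy feature, even though the
# protected attribute itself is excluded from the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)           # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)             # genuinely job-relevant signal
proxy = group + rng.normal(0, 0.5, n)   # e.g. postcode: correlates with group

# Historical labels: past recruiters favoured group 1 regardless of skill.
hired = (skill + 1.5 * group + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([skill, proxy])     # protected attribute deliberately omitted
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2f}")
# The rates differ sharply: the proxy lets the model rediscover the bias.
```

The sketch underlines why simply deleting a protected attribute is not enough: as long as proxies for it remain in the data, the learned decision rule can still discriminate.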

“We have to appreciate that engineering can dehumanise problems when it is breaking them down into manageable chunks,” Dr Lea added. “There are so many of those chunks now that we can’t manage them in the traditional paradigm.”

It was agreed that some of what is considered AI is, in fact, “fancy statistics”, but also that public understanding of the field is key. Many people, Dr Lea noted, get their knowledge via “the media, science fiction or entertainment”.
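To make the “fancy statistics” point concrete, here is an illustration of our own, not the panel’s: a logistic model, a statistical workhorse dating back decades, is just a weighted sum passed through a sigmoid, yet systems built on it are routinely marketed as AI. All numbers below are arbitrary.

```python
# Illustrative only: much of what is sold as "AI" is classical statistics.
# A logistic model is a weighted sum of features squashed by a sigmoid.
import math

def predict(features, weights, bias):
    """Probability estimate from a plain logistic model."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(predict([2.0, 0.5], [1.2, -0.7], -0.3))  # ~0.85
```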

Kate Devlin added that AI is often overhyped and falls short of expectations, citing a number of “AI winters”, periods in which progress towards anticipated breakthroughs stalled. Hype is an important consideration for public perception, but even more so for businesses taking up AI uncritically, something that Dr Devlin said happens regularly.

The application of AI or machine learning in higher education is focused on learning analytics and tracking student progress, plagiarism detection, and testing hypotheses in research. The first of these has a number of implications for privacy, as Dr Devlin pointed out. For example, sensitive information about mental health could be collected and passed on throughout a student’s learning journey. Dr Devlin was particularly concerned about who is accountable for information gathered by AI, and she again mentioned the uncritical nature of AI take-up, citing the example of a performance-tracking venture with no peer-reviewed papers.

Transparency and accountability are, of course, crucial in this area. Dr Lea made the point that GDPR is relatively new and is evolving. This underscored his belief that we have the frameworks to try to tackle the challenges that AI brings. He cited various examples, such as academic journals not generally publishing articles unless they have undergone an ethics review. “Don’t throw more oversight at the problem. We know we have one – we need to try to understand it to regulate it properly,” he said.

Europe has a strong reputation for the promotion of ethics within AI, and Dr Lea gave an example of how helpful information can be disseminated. It arose from an NHS review of information security, which said that neither good nor bad practice in the field of tech was being shared. “It’s a fundamental tenet of my work to say ‘share what you like’ – you learn from it. It’s so important.”

No discussion of AI comes without mentioning threats to employment. Ms Custer flagged the THE 2019 survey of university presidents and AI experts, which suggested an expectation that AI would create jobs rather than steal them. Both speakers broadly agreed and pointed out that various other economic factors could cause uncertainty. Rather than let the prospect of a robot takeover paralyse us, Dr Lea echoed the sentiments of the THE report: “We need to offer enriching jobs, careers and pathways for as many people as possible.”

Find out more about Huawei and higher education.


Brought to you by Huawei