ChatGPT 'experimentation' puts universities in 'dangerous position'

Experts tell THE and Leeds event that institutions are facing risks they don't understand
April 18, 2023

The frenzy of excitement surrounding ChatGPT has led to "enthusiastic experimentation" that might be opening universities up to risks they are not in a position to fully understand, experts in digital learning have warned.

The potential uses and abuses of generative AI tools have provoked lively debate within higher education in recent months, with a particular focus on how the new technology will challenge and change modes of assessment.

However, speaking at the Digital Universities UK event, held by Times Higher Education in partnership with the University of Leeds, Stuart Allan, director of digital learning at Arden University, said that this was "only a problem if your assessment is designed for recall and summarisation".

Of greater concern, he argued, was that "the legal and policy framework hasn't really caught up with the technology yet. We are in quite a dangerous position at the moment, where technologies are being adopted without the implications being thought through. What we will see in the next six to 12 months is a rowing back on some aspects of that."

This warning was echoed by Melissa Highton, director of learning, teaching and web services and assistant principal for online services at the University of Edinburgh, who said that "at Edinburgh we welcome our robot colleagues, and for many years we have had robots doing considerable work for us – helping with things like [converting] audio and video to text, for example.

"But I think one of the interesting questions is whether those robots belong to you when they are working on a set of data that you own, or whether you are putting your data into someone else's service."

She observed that there had been "some very enthusiastic experiments going on with things like ChatGPT" in recent months, but cautioned that this entailed "a lot of university data going somewhere, and we don't know where".

Referencing the Microsoft-backed developer of ChatGPT, she said: "OpenAI isn't open in any way – that name is a bit of 'open-washing', making it sound virtuous."

Highlighting the business imperatives at play, she predicted that "if [such tools] are free now, they won't be for long", and added that "there is also enormous power consumption involved in AI – so this enthusiasm for robots might have to feature in conversations about sustainability too".

Simon Thompson, professor of hybrid learning and director of flexible learning at the University of Manchester, also highlighted the risk that an AI arms race could emerge if issues of academic integrity and assessment were not handled in an appropriate way.

He asked the packed room of delegates who among them had not experimented with ChatGPT, and not a single person raised their hand.

"This will be similar to your classroom," he said. "So it's naive for us to assume that your students will not be using these systems.

"But we need to avoid getting into a cat-and-mouse game where we have a new product that produces text generatively through AI, then someone else releases a product that catches that, then a new version of the product is released that can circumnavigate that detection tool, and then here's an update of the product that identifies the AI text again. The only winners in that game are the vendors, who will sell us a product on a product on a product."

Instead, he advised universities to take the opportunity first "to critique generative AI systems – it is absolutely our role to critique and talk about that with our students – but then also to have a shift in what I call human-centred effort.

"The invention of the calculator was an example of that happening: allowing us to do more complex maths because of a new invention.

"What can we now do that is more complex because of generative AI tools, but still with a human at the heart of that process, allowing us to move four, five or six steps ahead of where we were before? Trying to stop generative AI would be a lost cause and a waste of effort; we need instead to think about how we use them as humans and educators."

Dr Highton cautioned against making assumptions about students' use of emerging AI technologies: "We have to be very careful not to discriminate against students who most need assistive technologies by suggesting that their use is always cheating," she said.

john.gill@timeshighereducation.com

Reader's comments (3)
Some very good points here.
True.
Universities should be places where students are encouraged to use every tool available to enhance their learning and ability to operate in the real world. Yes, this means learning to use AI tools. What needs to change is the role of assessment. Get rid of it. What you need is a better system that certifies whether students have achieved the learning outcomes or not. Leave the sorting by ability to those who want to employ them (most good employers do that anyway with interviews, aptitude and psychometric tests, group discussions, setting real-life problems to solve, demonstrating skills, etc.). It really isn't a big deal; get rid of it.