
Blind faith in tech bros driving cheating, say ChatGPT critics

University of Glasgow philosophers behind viral ‘ChatGPT is bullshit’ paper claim student AI use is linked to dubious techno-optimism of billionaire Silicon Valley moguls
July 2, 2024

Misguided techno-optimism – driven by the enormous media profile of billionaire “tech bros” such as Elon Musk – could explain why more students are asking artificial intelligence to write their essays despite the mediocre results it returns, the authors of a viral research paper have argued.

The paper, “ChatGPT is bullshit”, has been read more than 400,000 times since it was published in Ethics and Information Technology in early June.

The paper frames large language models using Princeton University philosopher Harry Frankfurt’s influential 2005 book On Bullshit and takes issue with the term “AI hallucinations”, suggesting that the outright falsehoods often generated by AI should be understood as “bullshit” rather than by a more flattering metaphor that humanises AI.

This would correct a growing view that these machines “are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived”, explains the paper by University of Glasgow philosophers Michael Hicks, James Humphries and Joe Slater.

The paper’s runaway success comes amid growing reports during this year’s marking season of widespread AI use by students. Some academics have dubbed the bland AI-written content found in student scripts “botshit”.

Dr Hicks said he hoped the paper’s suggested terminology might dissuade students from using the untrustworthy technology.

“If students are unprepared for class, they might feel that it’s easier to outsource their thinking to a large language model that is powered by an expensive and hugely hyped supercomputer – particularly when it acts like it understands things. In fact, students probably understand things better than they think,” said Dr Hicks, adding that he had recently seen “many more C and D marks” among first- and second-year students, most likely as a result of AI.

Dr Humphries suggested students’ misguided faith in ChatGPT to write essays was linked to a wider belief that technology represents a panacea for most of society’s ills.

“Over the last five or so years we have been told that tech bros will solve our problems, and large language models should be understood in this context,” said Dr Humphries, who claimed Elon Musk’s proposed solution to California’s public transport problems showed that this faith was not always well earned.

“The world has problems but too often we’re told it’s better to give the power to someone who has done a bit of computer programming,” he said.

jack.grove@timeshighereducation.com

Reader's comments (2)
Spot on. Glad to see that I am not alone in seeing bland regurgitated AI content in students' essays that score low grades. Brace yourself for the appeals!
Interesting comment, Happy. When you say "low grades" do you mean you fail students' essays if you detect AI content, or do you pass them to minimise appeals? Just wondering how you and others handle such discovery.