
One in three researchers now using ChatGPT at work: survey

Academics tell Elsevier that they recognise the potential of generative AI but want to see their concerns addressed
July 9, 2024

Nearly one in three researchers have started using ChatGPT for work purposes, according to a major survey.

In a survey of more than 2,200 researchers globally, conducted by academic publisher Elsevier, 31 per cent say that they have used the generative artificial intelligence tool as part of their work.

In the survey, 41 per cent of researchers questioned say that they feel positively about the development of AI tools including generative AI, while 48 per cent say they have mixed feelings, and 10 per cent say they are unsure. Only 1 per cent say that they feel negatively about AI.

Seventy-four per cent of researchers think that AI will have a “transformative” or “significant” impact on their work in the near future.


And the vast majority of researchers feel that AI could have at least some benefits across teaching, research and academic publishing.

However, scholars acknowledge that AI tools have disadvantages. When asked to name the top three disadvantages, AI’s inability to replace human creativity, judgement and empathy is cited by 39 per cent of respondents, as is a perceived lack of regulation and governance. Thirty-two per cent express concern over a lack of accountability over the use of generative AI outputs, while 25 per cent complain of factually incorrect or nonsensical outputs.


Overall, 26 per cent of researchers say that they have “fundamental” or “significant” concerns about the ethical implications of AI, while 50 per cent say that they have some concerns.

Kieran West, Elsevier’s executive vice-president for strategy, said: “AI has the potential to transform many aspects of our lives, including research, innovation and healthcare, all vital drivers of societal progress. As it becomes more integrated into our everyday lives and continues to advance at a rapid pace, its adoption is expected to rise.

“Researchers and clinicians worldwide are telling us they have an appetite for adoption to aid their profession and work, but not at the cost of ethics, transparency and accuracy.

“They have indicated that high-quality, verified information, responsible development and transparency are paramount to building trust in AI tools, and alleviating concerns over misinformation and inaccuracy. This report suggests some steps that need to be taken to build confidence and usage in the AI tools of today and tomorrow.”


The Elsevier survey comes after a separate survey of more than 2,300 researchers, conducted by Oxford University Press, found that 76 per cent use some form of AI tool in their research, with machine translations and chatbots cited as the most popular tools, followed by AI-powered search engines or research tools. AI is most used for discovering, editing and summarising existing research, the OUP report found.

chris.havergal@timeshighereducation.com
