Peer reviewers assessing grant applications for one of Australia’s major funding bodies have been accused of using artificial intelligence chatbots to produce their feedback.
Applicants for grants of up to A$500,000 (£262,000) awarded under the Australian Research Council’s Discovery Projects scheme alleged spotting the “tell-tale” signs of ChatGPT in the feedback they received from assessors, according to ARC Tracker.
One assessor had even forgotten to remove the “regenerate response” prompt that appears at the bottom of ChatGPT-generated text, it was claimed.
Applicants said the reports were a “generic regurgitation” of their applications with little evidence of critique, insight or assessment, ARC Tracker said.
It added that the practice was “entirely predictable” given the time pressures on researchers, and that part of the problem was that the ARC had “done nothing” to prevent the use of AI chatbots in assessing grants, which were not mentioned in the guidance issued to assessors.
ARC Tracker said the funder should consider banning those found to be using ChatGPT for this purpose and reporting them to their universities.
Philip Dawson, an academic integrity researcher and co-director of the Centre for Research in Assessment and Digital Learning at Deakin University, said the behaviour “should be treated worse than just an inappropriate review”.
“Research grants are meant to be confidential and sharing them with [ChatGPT creator] OpenAI is a significant IP breach,” he added.
Responding to the concerns, the ARC said it was “considering a range of issues regarding the use of generative artificial intelligence (AI) that use algorithms to create new content (such as ChatGPT) and that may present confidentiality and security challenges for research and for grant programme administration”.
It reminded all peer reviewers “of their obligations to ensure the confidentiality of information received as part of National Competitive Grants Programme processes”.
It said the Australian Code for the Responsible Conduct of Research set out that “individuals are to participate in peer review in a way that is fair, rigorous and timely and maintains the confidentiality of the content”.
The ARC said it had robust processes in place to consider concerns about how confidentiality had been managed during a review.
It added: “Release of material that is not your own outside the closed research management system, including into generative AI tools, may constitute a breach of confidentiality. As such, the ARC advises that peer reviewers should not use AI as part of their assessment activities.”
Guidance on this area would be updated “in the near future”, the ARC added.