The problem of “negative” citations undermining the accuracy of metrics-based approaches to research assessment could be overcome by dividing citations into six different categories, a researcher has suggested.
Bibliometric approaches to research assessment use the number of times a paper is cited as a proxy for its quality.
However, critics point out that some papers are cited precisely because they are believed to be flawed. That point is made many times in the published responses to the Higher Education Funding Council for England’s current independent review into the use of metrics in research assessment.
In a paper, “What lies within: superscripting references to reveal research trends”, published last month in the Sage journal Perspectives on Psychological Science, Eric Anicich, a PhD student at Columbia University’s business school, describes the citation-counting approach as “inherently flawed” and likens it to a marketing mindset that believes “any publicity is good publicity”. He says the current research record is like “a very long and expanding book with no table of contents”.
His proposed solution is that every citation of a paper should be superscripted according to which of six broad categories it falls into. These would indicate whether the cited paper is conceptually consistent or inconsistent with the authors’ findings, and whether the original results have or have not been replicated. The remaining categories would indicate whether theories or methods from the cited paper were used.
Mr Anicich says that online indexing services could then calculate summary statistics for papers, individuals, subject areas or institutions, indicating how many citations to them fell into each category.
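To illustrate the kind of tally an indexing service might produce, here is a minimal sketch in Python. The six category labels are paraphrased from the description above (conceptually consistent/inconsistent, replicated/failed to replicate, theory used, method used) and are not Mr Anicich’s exact terms; the function name and data format are likewise assumptions for illustration only.

```python
from collections import Counter

# Illustrative labels for the six superscript categories described in the article.
CATEGORIES = (
    "consistent", "inconsistent",
    "replicated", "failed_to_replicate",
    "theory_used", "method_used",
)

def summarise_citations(citations):
    """Tally superscripted citations for one paper, author, field or institution.

    `citations` is an iterable of (cited_work_id, category) pairs, as an
    indexing service might record them.
    """
    counts = Counter()
    for work_id, category in citations:
        if category not in CATEGORIES:
            raise ValueError(f"unknown superscript category: {category!r}")
        counts[category] += 1
    # Report every category, including those with zero citations.
    return {category: counts[category] for category in CATEGORIES}

# Example: three citations to the same paper, each carrying a different superscript.
print(summarise_citations([
    ("paper-123", "consistent"),
    ("paper-123", "method_used"),
    ("paper-123", "failed_to_replicate"),
]))
```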
He suggests that the system could be trialled first in psychology, which has had particular problems with reproducibility, and believes it would improve transparency around that issue.
Mr Anicich concedes that implementing such a system would be a challenge in financial and organisational terms, but suggests that it would not be impossible if a “task force” of interested parties – such as “leading scholars, prominent editors, established publishers, and powerful citation indexing platforms” – were to pledge their support early on.
He believes that citations superscripted as “inconsistent” would be fairly common while “failure to replicate” would be rare because journals tend not to publish replication failures.
Ton van Raan, professor of quantitative studies of science at Leiden University in the Netherlands, said he was unaware of any research into the frequency of negative citations.
But in his long experience, he said, they were relatively rare in the sciences because scientists largely “ignore bad and incorrect work”. However, he added that the social sciences and humanities often contain “wars of schools” and might therefore be more prone to negative citation of papers from opposing schools.