Trickery by editors to boost their journal impact factor means that the widely used metric “has now lost most of its credibility”, according to the journal Research Policy.
With many editors now engaged in “ingenious ways” of boosting their impact factor, “one of the main bastions holding back the growing scourge of research misconduct” has been “breached”, the publication warns in an editorial.
In the past two decades, the reliance on impact factors when deciding which academics are promoted or granted tenure has grown.
One of the most widely used impact factors is calculated by Thomson Reuters by dividing the number of citations received by articles in a journal by the total number of papers the journal published. Normally the figure is calculated for articles published over the previous two years.
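Expressed as a formula, with figures invented purely for illustration rather than taken from the editorial, the two-year calculation looks like this:

\[
\mathrm{JIF}_{2023} \;=\; \frac{\text{citations received in 2023 by items published in 2021 and 2022}}{\text{citable items published in 2021 and 2022}} \;=\; \frac{600}{200} \;=\; 3.0
\]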
“Editors’ JIF-boosting stratagems – Which are appropriate and which not?”, by Ben Martin, a professor of science and technology policy studies at the University of Sussex, lists a number of potentially suspect ways journals manipulate this figure.
For example, editors may try to “coerce” authors into citing papers from the editor’s own journal in return for having their manuscript accepted.
Also criticised is a relatively new stratagem, the “online queue”, in which journals make a number of papers available online without formally publishing them in an issue. This allows them to push up the number of citations they receive, but because an article does not enter the calculation until it is formally published, the queued papers add nothing to the denominator by which the citations are divided.
Another reason this strategy is effective is that citations typically accumulate more rapidly in the third and fourth years after a paper first appears than in the first two, so by the time a queued paper is formally published it is already being cited at close to its peak rate.
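A minimal arithmetic sketch of the effect, using an invented per-year citation profile chosen only to reflect the peak in years three and four described above, might look like this:

    # Hypothetical illustration of the "online queue" effect on a two-year impact factor.
    # All figures are invented for this sketch; they do not describe any real journal.

    citations_by_year_since_release = [1, 2, 4, 5, 3]  # citations in years 1..5 after first release

    def window_citations(years_queued: int) -> int:
        """Citations a single paper contributes during the two years in which it
        counts towards the impact factor, i.e. the two years after formal publication."""
        start = years_queued  # year of formal publication, counted from first online release
        return sum(citations_by_year_since_release[start:start + 2])

    print(window_citations(0))  # published immediately: 1 + 2 = 3 citations fall in the window
    print(window_citations(2))  # held online for two years first: 4 + 5 = 9 citations fall in the window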
One “leading” management journal, which the editorial does not name, had an online queue of 160 papers stretching back nearly three years at the time of writing.
Because of these problems, “the JIF would now seem to have little credibility as an indicator of the academic standing of a journal”, the article warns.
Thomson Reuters describes its journal citation reports as a “systematic, objective means to critically evaluate the world’s leading journals, with quantifiable, statistical information based on citation data”. The tool has in the past delisted journals with suspiciously high levels of self-citation.