For those who long for scientific assessment to get beyond journal impact factors, the road to Mendeley could well also lead to redemption.
The discontent with the status quo was eloquently demonstrated last week in a widely read blog post by Stephen Curry, professor of structural biology at Imperial College London, which began: "I am sick of impact factors and so is science."
Professor Curry railed against what he termed the "statistical illiteracy" of treating impact factors - a measure of the average number of citations garnered by research papers in a particular journal over the previous two years - as a proxy for the quality of all papers in the journal, and of their authors.
One topical example of the growing reliance on impact factors is the redundancy programme being undertaken by Queen Mary, University of London. The performance of academics at two schools has been assessed according to a number of metrics, with journal impact factor being used to judge the quality of individuals' own publications.
The perceived statistical illiteracy of such practices - which, according to Professor Curry, are spreading like a "cancer" through the academy - stems from the fact that impact factors are typically skewed by a small number of papers with a large number of citations. Most of the other papers in a journal will have had fewer citations than the impact factor figure.
Some respondents to Professor Curry's article suggested that it would be more statistically literate to base impact factors on the median (middle-ranked) or modal (most frequent) citation count in the distribution. Others argued that only the number of citations garnered by individual articles should be recorded.
However, Professor Curry pointed out that citations accrue only very slowly. He said he preferred metrics that tapped into the kinds of views and expertise typically exchanged during coffee breaks at academic conferences. And, for him, the closest measurable approximations to such interactions occur on social networking sites.
Rich harvest of online data
Apart from mainstream sites such as Facebook and Twitter, several networking services aimed specifically at academics have been established in recent years.
Because their primary function is typically to allow scientists to share and discuss papers, online resources such as Zotero, ResearchGate, CiteULike, BibSonomy and Mendeley particularly lend themselves to the development of what are often known as alternative - or "alt" - metrics.
Of these, Mendeley - named after genetics pioneer Gregor Mendel and Dmitri Mendeleev, creator of the first periodic table of the elements - tends to elicit the most excitement on account of the size of its database and the richness of its data.
According to Mendeley's co-founder and chief executive Victor Henning, the 65 million unique documents it contains - uploaded by nearly 2 million users - make it about 30 per cent larger than even the mainstream citation databases Scopus and the Web of Knowledge. Three recent studies estimated that its coverage of current peer-reviewed research papers, whether via abstracts or full documents, was between 93 and 98 per cent.
Some publishers have even begun to supply the site with abstracts and previews of their articles. Springer is one such, and according to Wim van der Stelt, its executive vice-president for corporate strategy, the move has already boosted traffic to the full versions of articles on its own site. He said Springer would soon be experimenting with displaying alt-metrics for its own articles.
Mendeley statistics are drawn on - with the site's blessing - by alt-metrics providers such as altmetric.com and total-impact.org. The latter measures how many public Mendeley groups - formed around specific topics - reference a paper, as well as how many people have added it or its abstract to their personal library, and how many of those people are students or from developing countries.
Dr Henning said Mendeley also tracks which documents and pages are read in its PDF viewer, and for how long. This information is not yet made available to metrics providers, but will be eventually.
But Mendeley does not always leave it to others to exploit its data. Earlier this month, it launched what it billed as its own version of journal impact factors - metrics that, according to recent studies, show a significant positive correlation with traditional impact factors.
However, Dr Henning emphasised that Mendeley's "Institutional Edition" - to which Stanford University and the Harvard-Smithsonian Center for Astrophysics have already signed up - was aimed at fulfilling the original (and, in Professor Curry's view, unobjectionable) purpose of impact factors - namely, helping librarians decide which journals to subscribe to.
Real-time statistical updates
The key advantage of Mendeley statistics, according to Dr Henning, was that they could be collected and updated in real time.
Mendeley's figures also related to readership and publication destinations that were specific to the subscribing institution, he said. But Dr Henning said it would eventually either produce global journal usage statistics itself or release the data so that others could.
He acknowledged the danger that such data could be misused in the same way as impact factors. "However, our hypothesis is that by making the data publicly available, there will be more transparency and independent validation," he added.
He also insisted that, for all its popular association with article-level metrics, Mendeley was "agnostic" about what others did with its data.
"If ReaderMeter.org wants to aggregate on an author level, or someone else wants to aggregate on a journal level, that's fine with us too," he said.
As well as Mendeley data, total-impact.org also presents an array of other article-level metrics, including numbers of tweets, Facebook "likes" and Wikipedia citations.
Newer but not better, say critics
But David Colquhoun, professor of pharmacology at University College London, said he would object just as stridently to Queen Mary's redundancy programme if it were based on alternative rather than standard metrics because there was no evidence that any metrics were accurate predictors of future scientific success.
"The people who propose things like alt-metrics are kids playing with computers, with no appreciation of what real science is and, worse, no appreciation that hypotheses need to be tested," he said.
For his part, he was highly sceptical that alt-metrics would ever be able meaningfully to measure a paper's quality, not least because they could be more easily "gamed" than standard metrics.
"The idea that Twitter will substitute for reading a paper is just ludicrous beyond words. Can you imagine the buzz around Peter Higgs' 1964 papers, or any other serious bit of basic research? If [alt-metrics] were taken seriously for selection and promotion, it would kill serious science," he said.
But for Cameron Neylon, director of advocacy at the Public Library of Science, alt-metrics aspire not to assess a paper's "universal quality" but to indicate whether it is useful for a particular purpose.
"Sometimes I need data, sometimes methods, and sometimes I want the argument well laid out," he said.
"What I really want to know is...whether [the paper was] useful for people like me doing things like those I'm doing. [Mendeley] bookmarks are one tool that can help with that, but we need the full suite of measures to really build discovery tools that will solve this problem."