
Formula for fallibility

7 October 2010

We found your summary of Jim Tomlinson's observations on the Byzantine processes employed by research councils particularly apposite given our recent experience with the Economic and Social Research Council ("Grant award? It could be you", 30 September). Perplexed by its decision not to fund a grant application for follow-on funds, which received three glowing reviews and one more measured but positive one, we sought further information about the decision-making process. We had to file a Freedom of Information Act request as the ESRC does not routinely provide such feedback because it is too "complex and costly".

We were puzzled to find that our proposal had been awarded three A+ scores and one A- but had nevertheless received an overall grading of A-. Further FoI requests clarified the decision-making rules and elicited an anonymised list of all project gradings. We can report that no qualitative judgement had been made in deciding the overall scores: the mechanism is entirely algorithmic.

In effect, four A+ grades give an overall A+; three A+ grades and one A translate to an overall A; and, as in our case, a single A- produces an overall A-, despite three A+ scores.
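So far as we can infer from these cases, the rule is simply that the overall grade is the worst grade awarded by any single reviewer. The sketch below illustrates that inferred rule; the grade ordering and names are our own illustration, as the ESRC has not published its actual algorithm.

    # Inferred grading rule: the overall grade is the minimum (worst) of the
    # reviewers' grades. The scale below is an assumption for illustration;
    # the ESRC has not published its actual ordering or algorithm.
    GRADE_ORDER = ["A+", "A", "A-"]  # best to worst (illustrative)

    def overall_grade(reviewer_grades):
        """Return the worst grade among the reviewers' grades."""
        return max(reviewer_grades, key=GRADE_ORDER.index)

    # The cases described above:
    print(overall_grade(["A+", "A+", "A+", "A+"]))  # A+
    print(overall_grade(["A+", "A+", "A+", "A"]))   # A
    print(overall_grade(["A+", "A+", "A+", "A-"]))  # A-  (our case)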

Of the 36 proposals in this call, seven "made the cut" and proceeded to the committee stage for full adjudication; just under half (16) were in the A- category and were automatically screened out at this point. Thus, with one slightly out-of-line reviewer, all is lost, without - and this is the important point - anyone reading the proposal or the reviews and making an informed human judgement.

This process may be administratively efficient and convenient, but it is a lottery indeed: it appears to depend on the infallibility of reviewers, a premise contradicted, of course, by the very presence of disagreement.

We will not dwell on the self-evident deficiencies of this process and the obvious ironies, but it is hard to see how the ESRC's claim that its peer-review system is an "international benchmark of excellence" providing a "guarantee of quality" can be supported by these data. Rather, the process appears to fall well short of the standards of academic peer review used in journals, where referees' comments are always read, discrepancies always noted and carefully evaluated, and justification for editorial decisions always given.

David Wastell and Sue White, Bramhall, Stockport.
