Last year, a splashy headline in USA Today caught my attention. Science journalism being what it is, the article links not to the meta-analysis that drew its conclusions but to Stanford Medicine's advertisement of it. Still, the original article does indeed contend that "erect penile length increased 24 per cent over the past 29 years".
Hmm. If you're sceptical, so was I – and, sure enough, looking over the meta-analysis and checking the original studies, I found a few problems. First, while the authors claim to have included only studies in which investigators did the measurements, at least three of the largest they draw on were based on self-report – which, for obvious reasons, often proves unreliable. Second, there was no consistent method of measurement, with most studies not even noting the method used, rendering comparisons impossible. Finally, the authors inflated the total number of members measured.
In case you're wondering, I'm not a scientist. I'm an English professor at a liberal arts college.
I sent my concerns to the corresponding author and then to the journal's editor. The rhetoric of their response was fine: the authors acknowledged the problems and even thanked me for pointing them out, which must have been hard. Nonetheless, though they vowed to revise the article, neither they nor the journal editor has yet published a correction eight months on.
What distinguishes this case from the raft of flawed studies that critics have exposed in recent years is that this study is a meta-analysis, the supposed gold standard in science. If meta-analyses, which are designed to weed out poorly conducted experiments, are themselves riddled with rudimentary mistakes, science is in deeper trouble than we thought.
The humanities, naturally, are even worse. Historians and literary scholars wrest quotes from context with abandon and impunity. Paraphrase frequently proves inaccurate. Textual and quoted passages are amputated at the most convenient joint.
One lesson to draw, of course, is caveat lector: readers should be vigilant, taking nothing on faith. But if we all need to exercise rigorous peer review every time we read a scholarly journal, then the original peer review process becomes redundant. The least that reviewers should do is to check that authors are using their sources appropriately. If an English professor could see the penis paper's grave errors, how on earth did the peer reviewers not see them?
Some have proposed abandoning pre-publication review in favour of open post-publication "curation" by the online crowd. But this seems a step too far, even in a digital environment, likely leaving us awash in AI-generated pseudo-scholarship.
Better to re-establish a reliable filter before publication. Good refereeing does not mean skimming a manuscript so you can get on with your own work. Neither does it mean rejecting a submission because you don't like the result. It means embracing the role of mentor, checking the work carefully and providing copious suggestions for revision, both generous and critical. In essence, it is a form of teaching.
The problem is that it is little regarded on the tenure track. Conducting rigorous peer review is unglamorous and unheralded labour; one earns many more points for banging out articles with eye-popping titles, even though a healthy vetting process is necessary for individual achievement to be meaningful.
We need to raise the stakes for reviewers by insisting on publishing their names and, ideally, their reports, too. Anonymous referees get no recognition for their labours, but, contrariwise, their reputations remain untarnished when they approve shabby work. Neither encourages careful review. Anonymity should remain available only exceptionally: for reviewers worried about being harassed by third parties when the topic is especially contentious, and for junior scholars concerned about retaliation from seniors.
Optimistically, two natural consequences of public reviewing would be thoroughness and civility. What's more, peer reviewers would enter into a reputation economy that drew on the power of the networked public sphere. Journals should offer post-publication commenting, including on the published referee reports, helping to sort strong referees from weak ones.
Editors would also have at their disposal a wide swathe of signed referee reports from across their field on which to draw when deciding whom to task with vetting new submissions. As it stands, aside from the habit of tapping personal and professional acquaintances, editors tend to rely on scholarly reputation, handing a few "star" academics disproportionate control over what is published – even though such figures are not necessarily good editors of others' work, any more than they are necessarily good teachers. Generating and critiquing scholarship require different skill sets.
Editors should not extend invitations to peer reviewers who have repeatedly overlooked flagrant mistakes, as determined by post-publication review. On the positive side, high-quality reviews should count as scholarship, not just service to the profession, as they form an integral part of scholarly production. And if book reviews merit a distinct CV section, so do peer reviews.
No doubt plenty of scholars continue to offer valuable peer review, but plenty do not. And it is clear that, in this case, too, it will take more than self-reporting to identify who genuinely falls into which category.
The author is associate professor in the department of English and creative writing at Susquehanna University, Pennsylvania.