
Yes, peer review sucks. But attention-economy hellscapes would be worse

Obliging everyone to undertake post-publication review would aid discoverability in a world without traditional journals, says Robert de Vries

February 10, 2023

Peer review sucks. That is the conclusion of a recent essay by the American psychologist Adam Mastroianni. He’s not the first person to say this, of course; other academics have made similar arguments. But Mastroianni has struck a chord with his compellingly unabashed argument that peer review should be abandoned.

It helps that he’s right – peer review really does suck. It does a terrible job of weeding out bad science, but a surprisingly great job of slowing down and tripping up good science. But I want to focus on what comes next. If we scrap peer-reviewed journals, what on earth do we replace them with? Is it simply the case that peer review is the worst system of publication – except for all the others?

The key issue that any alternative system has to grapple with is discoverability. I’ll use one of my own papers as an example. This paper followed the traditional publishing model. I submitted to a peer-reviewed journal, and, after more than a year and two rounds of revisions, it was published. It didn’t set the world on fire, but a steady trickle of citations over the years suggests that at least some people working in my field are reading it.

What would I have done with this paper in a world without peer-reviewed journals? I could have followed Mastroianni’s example and simply posted it on my own website. Except I didn’t have a website. And even if I did, no one would have visited. I could have used social media to promote it, but I don’t use social media because, to adapt an old Stewart Lee joke, “the internet is a flood of sewage that comes unbidden into your home. Social media is like you constructed a sluice to let it in”.

This is a point made by several commentators on Mastroianni’s proposal. In a world without journals, a paper’s visibility will be determined largely by its authors’ ability and willingness to generate attention. A paper by a second-year PhD student with zero social media game would almost certainly sink without trace.

So without peer review, how will we avoid being swamped by an ocean of dreck? How will we prevent the devolution of scientific publishing into a YouTube-style attention-economy hellscape?

A good place to start has got to be the existing system of preprint publishing. Preprint repositories, such as the physics arXiv and its counterparts in other disciplines, are in essence minimally filtered databases of research papers in various states of completion. We could simply abolish journals and ask researchers to upload their papers to these repositories instead; however, the result would be a discoverability nightmare for the reasons we’ve already covered. Instead, I believe that a truly viable system would need at least three additional features.

First, there must be a way to assess and communicate research quality. The obvious way to do this would be to allow readers to publicly comment on and rate research papers. This is a form of post-publication peer review, which allows readers to easily see how a paper has been received by other scientists (unlike traditional pre-publication reviews, which typically disappear after a paper is published). This is not a new idea, but it is likely to play a much larger role in a world without journals.

Incidentally, post-publication review also limits the power of hostile reviewers. In the current journal system, these reviewers can block a paper from being published at all. But under the post-publication model, they can only leave a negative public review (the merits of which other readers may judge for themselves).

Second, we will need to fall back to a much older conception of the academic journal – not as a venue for finished research products, but as a forum for scientists to talk to each other. These forums could be implemented as separate community-run “channels” on central repositories (distinct from overlay journals, which involve editorial oversight). Each would ideally be quite niche – formed by a community of scientists as a venue for discussing a single topic, or even a single hypothesis. This would help keep the flood of new papers manageable.

Finally, we need a way to break the link between the visibility of research and the ability to grab attention. Quality metrics derived from post-publication review would help: positively reviewed papers would float to the top of their respective forums (and those with rave reviews could be escalated to a more generalist channel – replicating the function of journals such as Science and Nature).

But authors would still have to hustle to get any reviews in the first place (a situation familiar to any Amazon seller or YouTube creator). To solve this problem, every new paper should be sent to random forum members for review. To retain posting privileges, forum members would have to review a small number of submissions, say every few months. These “reviews” could be as simple as a thumbs up, to signal to other community members that a paper is worth their time. These mandatory reviews would provide crucial visibility to those least able or willing to play the attention game.
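To make the mechanics concrete, the assignment-and-quota idea could be sketched in a few lines of code. This is only an illustration of the logic described above; the class, parameter names, and numbers (two reviewers per paper, a quota of three reviews per period) are my own assumptions, not part of any existing system.

```python
import random

REVIEWERS_PER_PAPER = 2  # random members asked to look at each new paper
REVIEWS_REQUIRED = 3     # reviews a member owes per period to keep posting


class Forum:
    """A topic-specific channel with mandatory post-publication review."""

    def __init__(self, members):
        self.members = list(members)
        self.reviews_done = {m: 0 for m in self.members}

    def submit(self, paper, author):
        """Route a new paper to random forum members other than its author."""
        pool = [m for m in self.members if m != author]
        return random.sample(pool, min(REVIEWERS_PER_PAPER, len(pool)))

    def record_review(self, member):
        """Count a completed review (even a simple thumbs up) towards the quota."""
        self.reviews_done[member] += 1

    def can_post(self, member):
        """Posting privileges persist only while the review quota is met."""
        return self.reviews_done[member] >= REVIEWS_REQUIRED
```

Even this toy version captures the key property: every paper is guaranteed at least some eyes on it regardless of the author’s profile, and the cost of that guarantee is spread thinly and randomly across the community.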

I am not claiming that this is a perfect system – there will inevitably be problems I’ve not thought of. But the question we should ask of any new publishing model is not “does it have flaws?” but rather “are the consequences of those flaws worse than those of the system we already have?”. As Mastroianni so persuasively showed, this is a much lower bar than many people realise.

Robert de Vries is senior lecturer in quantitative sociology at the University of Kent.

Reader's comments (7)
This might or might not "fit" quantitative sociology, but I have doubts that it applies or reflects the rest of sociology, let alone the humanities or most of the academy.
Thanks for your comment. As I said, I'm sure this is not a perfect system! But I would be interested to hear how you think it would not work for non-quantitative sociology, or for the humanities or other disciplines. Could you expand on what you mean a little bit?
I fully acknowledge I may have misunderstood, but surely such repositories have, or will have, a search function to enable discoverability. And in any case, discoverability is a prerequisite of post-publication review - isn't it? So I am a bit confused as to how this proposal will work.
I think perhaps I should have offered a definition of 'discoverability'... An analogy with YouTube might help. YouTube has a search function, and if you know exactly what video you are looking for ahead of time, then you can easily find it. That is not what 'discoverability' means in this context. Instead, think about the chances of coming across any particular video among the hundreds of thousands posted every day. Or, to flip it around to the creator's perspective, the chances of anyone coming across the video you just posted amongst the flood are basically nil. I.e. despite being technically findable through the search function, your video still has very low 'discoverability'.

In a laissez-faire post-publication peer review system, papers would only attract reviews if people happened to read them and felt moved to leave a review. Hence if people were simply to upload their papers to a preprint repository, the vast majority would never be seen or receive a review. This is a real issue for basically any online content creator or seller: YouTube video creators, Amazon sellers, people who sell video games on Steam, etc. To address this, creators/sellers have to engage in a whole host of strategies to try to get their creations/products seen by anyone - including hustling on social media (e.g. trying to get accounts with large followings to mention you) and trying to 'game' platform algorithms by manipulating keywords.

This is the 'attention economy hellscape' I refer to in the article. I DO NOT want a future where scientists have to engage in these kinds of antics to get their papers seen. It would reward people who are good at this sort of thing (which is unlikely to be positively correlated with scientific competence) and people who already have large online followings.

My proposed solution to this is twofold. First, separate the single firehose of new papers into discrete, topic-specific forums. And second, make sure that any new paper posted to a given forum is sent to at least one or two forum members, who will be required to give a review (or at least a thumbs up/thumbs down) on pain of losing their posting privileges. This is not something that e.g. YouTube could get away with, but these individual forums are supposed to be where scientists go to air and debate their ideas, so some kind of minimal engagement with other people's papers wouldn't seem too high a cost to bear.
The prestigious Journal of Truth is published by the learned Society for the Discovery of Truth. In a world where authors of research articles simply post their articles to preprint repositories, what can the society and the readers of their journal do to promote truth and expose falsehood? The obvious answer is for the Society to set up a reviewing organisation which would scour the internet for suitable articles and then get their trusted reviewers to review them. Then potential readers of articles which would, in the old days, have gone to the Journal of Truth for carefully vetted reading on Truth, would simply switch their allegiance to Truth Reviews. This would have several big advantages over the old system. There would be no publication and distribution costs for the Society to meet. They could also review papers which might not have been submitted to the Journal of Truth so their range could be far wider. And, perhaps most importantly, papers on Truth could be reviewed by reviewers from other disciplines – not peers but experts in other areas: the Relativist Review platform, for example, might review some of the papers on truth and perhaps encourage Truth seekers to see things from another perspective. So, yes, I agree peer review should die, and I think it would almost inevitably be replaced by a more flexible system of reviewing organisations. I have posted a more detailed account of this idea on the preprint platform arxiv.org.