Academia is beset with divisions. To name but three, there are academics versus professional staff, teaching versus research staff, and those on permanent contracts versus those who live with the fear of the imminent end to their employment.
In my opinion, one of the most destructive is that between those who conduct research and those who are recognised for conducting research. This division represents a gross unfairness in the academic system and it severely limits the research we can conduct. Accordingly, recognition of research success must change. But for this to happen, we need to understand the full range of people who are vital to the conduct of research. That’s why we’d like UK-based academics to take part in the Hidden REF.
The age of the lone genius is over. Modern research is built on diverse teams of people, with traditional research roles – principal investigators, postdocs, PhD students – working alongside a range of specialists in everything from writing software to managing participants in medical trials. Our institutions have been slow to recognise the work of these specialists.
Regardless of how important your team consider your work to be, if you lack formal recognition within your institution for your research contribution, your position is likely to remain precarious and you will be unable to advance your career. After a few years of this, many people choose to find a home for their in-demand skills outside academia.
The research excellence framework (REF) provides 20 categories recognising research outputs, from musical compositions to databases. It is a genuine attempt to understand the broad impact of research, and you might hope and expect that the huge effort invested by managers in understanding their institutions’ research while preparing submissions would help broaden their interpretation of academic success. However, history indicates the opposite.
With each assessment round, we’ve become ever more focused on publications; in the last REF, around 97 per cent of submitted outputs were publications. Yet people in both the emerging research roles and the older unsung ones that have sustained academia, such as technicians, are highly unlikely to be named in publications, even if they have made a contribution to the work that is as important as any of the named researchers’.
In other words, our inherent bias towards publications as the only success metric means that we’ve taken a perfectly acceptable framework for understanding the breadth of research and broken it.
It’s the same with the way that institutions gauge individual success. Anyone who has faced a promotion panel will understand that the publications you write and the funding you secure remain the overwhelming object of their attention. Everything else feels like window dressing. But people in non-traditional research roles are typically not allowed to apply for funding, either.
We could push for more people to be named in publications, but why cling to an already broken metric? Plus, it’s unlikely to make much difference; technicians have existed for as long as there’s been science, but the research community is still reluctant to recognise them. In my opinion, the argument for including more people in publications is the equivalent of that for trickle-down economics: throwing those at the bottom a few crumbs (a few lowly authorship positions, if not mere mentions in the acknowledgments) justifies the vast disparity between their rewards and those of the individuals at the top. But it doesn’t. What is needed is a wholly different approach to gauging success.
Such a change would not be easy to enact, but there’s a phenomenally good reason for doing it anyway. My initial experience of this problem came from running the campaign to recognise research software engineers: the people who develop the software that is a cornerstone of almost all modern research. We now have thousands of people around the world working in this role, whose success is judged on the software they create rather than the publications they don’t write or the funding they’re not allowed to apply for. This is obviously good for them personally, but the beneficial impact on research is much greater because it makes these rare people who combine an understanding of research with expertise in software engineering far more likely to stay in academia. If we can do this for more roles, we will produce an environment in which research will thrive.
No one knows how many vital-but-unrecognised roles exist, which is where the Hidden REF comes in. It’s a competition that recognises all contributions to research, bar one: publications. Of the many categories, from training materials to citizen science, possibly the most important is that of the “hidden role”: a person who helps conduct research but who generally isn’t recognised by conventional metrics. It includes librarians, research managers, research software engineers, technicians, university administrators – and anyone else who wishes to be included.
By taking part in the Hidden REF, you will help us understand how much work is hidden and how many people go unrecognised. It’s only once we know the size of the community that we can understand the impact of neglecting it, and then lobby institutions to broaden their definition of success. Entry into the competition is straightforward: a simple summary of the output or person, of no more than 300 words. The deadline for submission is 14 May.
Everyone in research knows someone without whom they could not conduct their research. The Hidden REF is the perfect opportunity to recognise these people in the short term, and to provide the evidence needed to fight for their recognition in the future.
Simon Hettrick is chair of the Hidden REF and deputy director of the Software Sustainability Institute at the University of Southampton.