For many academics today, research is not about pushing intellectual boundaries. It is not about investigating a fascinating issue so much as it is about churning out publications, demonstrating impact and generating revenue in order to meet the performance targets upon which institutional reputation and individual careers depend.
The temptation to cut corners is immense. Tricks include getting your name on a paper that you contributed little towards, or “salami-slicing” the same research across several publications. More seriously, some researchers falsify – misrepresent – their data, or even fabricate them entirely. Some universities tacitly encourage such behaviour, and the boundary between academic integrity and malpractice is becoming blurred.
The absence of shared understandings, and the risks to career and reputation, make the nature and extent of academic misconduct a delicate issue to investigate. Research that I conducted with my colleague David Roberts, reader in biodiversity conservation at the University of Kent, aimed to give participants maximum protection by combining focus group interviews with an online survey that used an “unmatched count” technique: an approach designed to elicit responses on sensitive topics by allowing respondents to indicate malpractice without directly implicating themselves.
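To see how an unmatched-count design (also known as a list experiment) yields a prevalence estimate without any individual confessing, here is a minimal simulation. All of the numbers – the group size, the item probabilities and the assumed 15 per cent prevalence – are invented for the sketch and are not taken from the study:

```python
import random

random.seed(0)

TRUE_PREVALENCE = 0.15   # hypothetical rate of the sensitive behaviour
N = 1000                 # respondents per group

def count_response(in_treatment):
    """Each respondent reports only HOW MANY items apply, never which ones."""
    # Four innocuous items, each true with a made-up probability.
    innocuous = sum(random.random() < p for p in (0.5, 0.3, 0.7, 0.4))
    # The treatment group's list also contains the sensitive item.
    sensitive = in_treatment and random.random() < TRUE_PREVALENCE
    return innocuous + int(sensitive)

control = [count_response(False) for _ in range(N)]    # list without the sensitive item
treatment = [count_response(True) for _ in range(N)]   # list with it added

# Prevalence estimate: the difference between the two group means.
estimate = sum(treatment) / N - sum(control) / N
print(f"estimated prevalence: {estimate:.3f}")
```

Because the estimate is a difference of means rather than a direct answer, no single count reveals whether a given respondent ticked the sensitive item, but the estimator carries the wide margin of error the article mentions.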
Although this technique has quite a wide margin of error (see graph, below), the results of the study, which focuses on the UK and was funded by the Society for Research into Higher Education, are very worrying. Roughly one in seven of the more than 200 respondents to our survey reported having knowingly plagiarised someone else’s work. Focus group participants were clear that quoting directly from a publication without acknowledgement was unethical, but there was less certainty around using other people’s ideas. As one researcher put it: “The extent to which you acknowledge ideas that have influenced you as a footnote, or simply just by referencing them, is a very vexed question.”
Issues to do with who should be included as an author on a paper are a source of frustration to many of our interviewees. One told us: “What particularly annoys me is when people get themselves on papers who haven’t had anything to do with the research.”
There was a sense that the research excellence framework had made issues of authorship take on a far greater significance. As one interviewee put it: “I do this because it’s a game that we’re playing. I ask the question: ‘Is it possible for me to be a joint last author because then I can include that in my REF submission?’ And sometimes people say yes and sometimes people say no, but when it’s your career on the line, you need to get something into the REF.”
Even more worryingly, almost a fifth of those we surveyed reported fabricating research data. Our interviews suggest that attitudes towards fabrication vary considerably according to discipline. One “quantitative researcher” was very alert to the issue: “When people have [data] that’s perfect, you can tell there’s something wrong.” On the other hand, one social scientist discussed lacking interview data for a particular project and deciding to interview herself to compensate. She did not see this as straightforward fabrication but as something more akin to critical reflection.
Many of the academics we interviewed suggested that they and colleagues felt pushed into acting in ways that were, if not unethical, then at least lacking in integrity because of the pressures put upon them. This is seen most clearly in attitudes towards self-plagiarism. More than a third of those surveyed reported having published extracts from the same piece in more than one location. But, for some, this was not unethical but simply an efficient and common-sense means of maximising publications. As one academic explained: “I don’t think that self-plagiarism is the unethical thing; the unethical thing is the structural over-production that forces these things.”
Prevalence and perpetrators of research misconduct
Notes: Figures refer to the 215 fully completed responses to an online survey of UK academics conducted in March and April 2016, reported in “Academic Integrity: Exploring Tensions Between Perception and Practice in the Contemporary University” by Joanna Williams and Dave Roberts, published by the Society for Research into Higher Education on 30 June. “Fabrication” refers to inventing data. “Falsification” refers to misrepresenting data. “Ethics form misuse” refers to “completing forms in such a way as to ‘complete the process’, rather than fully disclosing all possible ethical issues”. “Reference misuse” refers to “using references to support predetermined arguments rather than illuminate debate”. “Authorship abuse” refers to having obtained authorship on a paper “despite having done little to deserve it”. “Salami slicing” refers to knowingly splitting results to maximise the number of publications. The “unmatched count” technique tends to elicit higher positive responses to sensitive questions than direct questioning because it allows respondents to indicate malpractice without specifically implicating themselves. For further details, see p.11 of the report. The error bars reflect 95 per cent confidence intervals.
Academics at newer, less research-intensive universities were more likely to locate the lack of integrity in the “system” – especially the REF and university league tables. As one participant told us: “If you want to talk about integrity in research, you need to start at the top and not with individual researchers.” Academics at older universities were more likely to see malpractice as an issue of “rogue” individuals reacting to institutional pressure to maximise metrics. “You’re going to be guided by the norms of the institution you’re in,” one interviewee told us. “In an environment that’s not very conducive to research, there are temptations to do things that are cheap and cheerful, or to cut a corner here or there: not manipulating data, not doing anything wrong, but just slicing a little bit because you know it’s economical.”
Many others expressed a similar sense of injustice that academics were expected to compete in the REF as equals despite working in vastly different institutional circumstances.
Having to operate within a system perceived to be disreputable has led some academics to alter their practice. Many are careful to cite their own previous publications because, as one interviewee put it, “I want reviewers to know that’s my [previous] paper coming through... So is that a lack of integrity on my part because I decide to play a game which everyone else is playing?”
Others describe an institutional emphasis on quantity pushing them into “salami-slicing” their research: reflecting that, rather than making a point in the publication they are currently working on, they “could strategically… keep it for a different piece of writing”, as one interviewee put it.
The pressure to publish can also lead to the misuse or misrepresentation of research data. One scenario described by a psychologist working in a research-intensive university was that “you do a great big study. Say you include 20 questionnaire measures. Only five of them work out the way you wanted [so] you just say you’ve measured [those] five... You haven’t written anything that’s incorrect… You measured those five things and these were the relationships between them. But you [don’t say that you] also measured 15 other things that didn’t work out.”
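The statistical hazard in that scenario is easy to demonstrate. In the toy simulation below, all 20 “measures” are pure coin-flip noise, yet the conventional p < 0.05 threshold will still flag roughly one of them on average. This is an illustrative sketch using only the standard library, not a reconstruction of any real study:

```python
import math
import random

random.seed(1)

def two_sided_p(heads, n=100):
    """Normal-approximation p-value for the null hypothesis 'this coin is fair'."""
    z = (heads - n / 2) / math.sqrt(n / 4)
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))); two-sided p = 2 * (1 - Phi(|z|))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Twenty 'questionnaire measures', every one of them pure chance (the null is true).
p_values = []
for _ in range(20):
    heads = sum(random.random() < 0.5 for _ in range(100))
    p_values.append(two_sided_p(heads))

significant = [p for p in p_values if p < 0.05]
print(f"{len(significant)} of 20 null measures look 'significant'")
```

Reporting only whichever measures clear the threshold, while staying silent about the other trials, is exactly the selective reporting the psychologist describes: nothing written is false, but the published record is badly biased.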
Many focus group participants were unconvinced that systems designed to take greater account of ethics in academic work, such as ethics committees, were successful. There was a sense that formalisation can turn integrity into a box-ticking exercise. As one interviewee put it: “It’s a bit like quality: the more unsure of it we are, the more processes we put into place to give the appearance of [it].”
Different attitudes to academic integrity emerged in relation to career stage. Those at the beginning of their careers tended to be far more open about suggesting that academia was “just a game”. It was only towards the end of their careers that people were more likely to see integrity as a personal responsibility and to question the institutional expectations placed upon them. As one interviewee put it: “Digging your heels in because of, I don’t know, principles, is something that is more easily done at different times of your life... I can open my mouth… because not much can happen to me now really.”
Another took the view that it was age more than career stage that was the key driver of evolving attitudes: “Even though I’ve got a reasonable amount of time left teaching, I’ve got to the stage where I’ve decided you can’t breach your moral standards for someone else. I think that does come with middle age.” Either way, it can appear as if academic integrity is now a luxury reserved for senior academics.
My own view is that more academic freedom and less interference from institutional managers and government directives would give people a stronger sense of ownership of their research. This, in turn, would give them a greater urge to make sure it is carried out rigorously and honestly. In other words, integrity cannot be legislated into existence: it can come only with autonomy.
Joanna Williams is a part-time senior lecturer in higher education at the University of Kent and is the author of Consuming Higher Education: Why Learning Can’t Be Bought and Academic Freedom in an Age of Conformity: Confronting the Fear of Knowledge.
The research integrity expert
Having positive and preferably spectacular research findings is wonderful. It helps you to get a publication in a journal with a high impact factor, which will be cited often and may attract a lot of media attention. This is not only a pleasant ego boost but may also be instrumental in getting your next grant or strengthening your academic position. So, in an ever more competitive and metrics-driven scientific environment, it is tempting to make such results occur by any means necessary. And while downright fabrication or falsification of data is probably rare, the more subtle forms of sloppy science are not.
One common tactic is to conduct a number of statistical analyses and publish only the one that you like most. If you torture your data enough, they will always confess. Giving a strong positive spin to your results by selective citation of earlier publications may also do the trick. Arguably the largest evil in the “sloppy science” category is to simply ignore negative results – either by publishing no paper at all, or, more subtly, by cherry-picking and reporting only positive findings. Either way, the absence of negative results severely distorts the accumulated body of evidence in the scientific literature and means that most of the published positive results are likely to be false positives.
This was explained as long ago as 2005 by John Ioannidis in his landmark Plos Medicine article “Why Most Published Research Findings Are False”. More recently, it has been shown that only between 10 and 40 per cent of published findings are reproducible: most recently by a large-scale attempt to replicate major findings in psychology, published in Science. This is the price we pay for selective reporting. The reproducibility crisis, as the phenomenon is called, implies an enormous waste of resources. But it can also lead to ethical issues when animal studies or clinical research are based on false positives in earlier work.
It is not easy to find out how large the unpublished body of evidence is, and the extent to which it differs from the published record. The only real solution is full prospective transparency. Only by making available study protocols, lab journals, data analysis plans and all study results will it be possible to identify and adjust for the magnitude of distortion due to selective reporting. These principles were implemented more than a decade ago for clinical trials, but are still largely non-existent in other research traditions.
Another major countermeasure to sloppy science would be to foster a research culture that promotes open discussion of dilemmas, constructive criticism and internal audits. Good supervision and inspirational role models should guide young scientists to strengthen their moral compass and resist the temptation to cut corners.
Action is already being taken, but more is needed. Academic leaders should diversify evaluation criteria, and make clear that science is not only about being cited as much as possible. Funders should demand full transparency and refuse to pay if scientists do not comply. And journals should make sure that manuscripts are judged for the relevance of the research question and the soundness of the methods, but not on the findings they report. Furthermore, editors should demand the publication of full datasets and encourage post-publication peer review.
Doing all that would require more resources per project, entailing that fewer could be conducted. But that is exactly what we need: slow, sound, transparent science. Less, carried out in this way, would undoubtedly be more.
Lex Bouter is professor of methodology and integrity at the University Medical Center at VU Amsterdam, the Netherlands. He is co-chair of the World Conference on Research Integrity, which will take place in Amsterdam in May 2017.
The view from economics
Integrity is important and that is why I work in the world¡¯s most truthful industry.
The reason to value integrity is that universities provide the granite that stands below a happy society. No other institution can be trusted to be honest. Companies, churches, charities, conservation groups, anti-conservation groups, political parties – all by their nature are obliged to wield axes and, whether they know it or not, are unable to stand up for the entire, disinterested truth. If university researchers fail to be honest, the checks on what might be called social truth, and the reliability of our lives, will disintegrate. That is why university researchers should not align themselves with the political Right, Left, green or whatever.
Let¡¯s keep things in perspective. I was once employed on a building site. In 10 weeks there, I saw more corruption than I have in 40 years in academia. Bad behaviour, yes: I have witnessed some. But it usually came from academics who were convinced that they knew the complete truth and believed themselves ethical. Their crime was not corruption but just the human one of feeling insecure and shutting their minds to ideas that unsettled their worldview. The great statistician Ronald A. Fisher went to his grave protesting that it was safe to smoke cigarettes and that the young medical statisticians around him had been fiddling their calculations.
Yes, I have seen some strange things in my career. I have had failing students in my office asking me if I know how rich their father is: “Surely something can be arranged, professor?” I have seen one or two selfish scholars do things against rivals in the grey area of life. A few even fabricate data, but they are typically psychologically unwell. The normal scientific process of replication can be trusted to restore the truth. Believe me: any important intellectual claim you make will be checked by competitors. You fabricate at your peril.
Humans are not perfectible, so surveys claiming to uncover academic corruption will always be with us. As we grow older, however, we learn that what often looks like conscious bias and crime is usually not fiddling or lack of integrity so much as simple human error. I have learned to be doubtful of infamy, infamy. The university industry is fundamentally the most honourable one in the world.
Andrew Oswald is professor of economics at the University of Warwick.
<ÁñÁ«ÊÓƵ>The view from the life sciencesÁñÁ«ÊÓƵ>
Plagiarism, in any form, is unacceptable. It is cheating and it is fraudulent. Self-plagiarism may not be as bad as stealing someone else’s intellectual work, but it’s also incredibly lazy. Do you disrespect your audience so much that it’s too much effort to re-evaluate your thoughts using different words, metaphors or examples? Or was your prior wording so perfect that it couldn’t possibly be improved upon – or referred to merely in a citation?
I don’t have a problem with self-citation where it is warranted – you should cite the most relevant prior work and if that includes some of your own, so be it. Self-cites can be readily identified by citation databases: a rate greater than 5 to 10 per cent suggests that either you invented the field or you are incurably narcissistic. It’s easy to determine which is applicable.
How big a problem is fraud in science? The numbers in this survey are a huge concern. I’ve encountered evidence of misconduct myself, but at a far lower frequency. That said, it is sometimes remarkable how little effort some scientific fraudsters put into their fictional constructions. One case I saw as an editor involved three manipulations across two figures, in which a portion was rotated, cropped or duplicated. When I rejected the paper, the authors asked to correct and resubmit!
Science is expensive, technically complex and requires significant skill, so it¡¯s not surprising that, in a high-pressure and hyper-competitive world in which publications bring prestige and grant money, there are fraudsters. They need to be identified and expelled from research.
Science is inherently vulnerable to uncertainty. Much of the best science can turn out later to be incorrect for good and innocent reasons, such as missing information or honest misinterpretation. That is why integrity is paramount. Pollution of scientific “knowledge” with purposefully forged, duplicated or made-up data is a waste of precious resources and justifiably rattles public confidence and support.
It is a privilege to conduct science. While there are bad apples in all professions, intolerance of any form of fraud in science is demanded not only by ethics but also by sheer self-preservation.
Jim Woodgett is director of research and senior investigator at the Lunenfeld-Tanenbaum Research Institute in Toronto.
The view from physics
If it genuinely is the case that nearly 20 per cent of researchers fabricate research data then we have a much, much bigger problem than I ever imagined. I find that figure shocking.
I’d have previously said that the proportion of scientists involved in fabrication of data was of the order of a few per cent – I’d certainly not have estimated that it was anywhere near a fifth. That estimate was based on previous studies of research integrity, including “How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data”, published in Plos One in 2009.
If the figure of 18 per cent is a more accurate representation of the extent of fabrication, then websites and blogs that report on scientific misconduct, such as Retraction Watch and Leonid Schneider’s For Better Science, are really going to have their work cut out for them in future.
The 14 per cent that admit to plagiarising others’ work unfortunately does not come as a big surprise. That more than a third of respondents also excuse self-plagiarism is similarly irritating but hardly surprising: there has long been a culture of recycling and republishing work. In my area of condensed matter physics, for example, it was not unusual for many years to find large chunks of a given paper (including the same figures and data) republished as a conference proceedings manuscript.
Fortunately, this now appears to be rapidly going out of fashion and there is a much greater awareness that “churnalism” of this type is not the way to do science. Whether the tide has also turned against less obvious but more serious forms of misconduct, however, is very much open to question.
Philip Moriarty is professor of physics at the University of Nottingham.
The view from mathematics
In my experience, behaviour in mathematics has always been pretty good. Instances of a little arm-twisting to get articles out in books and highly ranked journals will always occur, but enjoying the fruits of networking or taking advantage of one¡¯s reputation is simply part of life.
The rather dubious practice of breaking up a piece of work into smaller publishable units is increasing, but no more so than in lots of other scientific subjects. The culprit here is unquestionably the imposition of personal and departmental performance targets, working in tandem with the panic-inducing “publish or perish” syndrome from which we all suffer nowadays.
However, if you are looking for a field that is rife with episodes of infamy and thriving characters bereft of conscience, my suggestion would be to go elsewhere. A mathematician who acts poorly at a serious level will inevitably gain a bad name, and struggle because of it.
We mathematicians cherish our subject and its reputation is important to us. You can’t “wing it” in mathematics – it’s not that sort of animal. You need to be seriously bright and dedicated to forge anything like a solid research profile post-PhD. As a result, there is a good deal of mutual respect for technical work – both within and across the core divisions of pure and applied mathematics – and it is this that fundamentally underpins and shapes the subject.
The levels of creativity and persistence expended in the pursuit of original research are such that the subject has developed an inbuilt integrity. Without going overboard, virtue is a natural part of our mindset because disreputable behaviour somehow devalues the very things we are working with. I am convinced that there is a causal link between the rightful prestige of mathematics and the honesty and decency of the vast majority of practitioners.
Some areas of mathematics sit at the very top of the intellectual tree, and most of the rest reside on branches nearby. Because of this, mathematicians tend to pursue research primarily for the love of it. We are not afraid to share our work informally for fear of professional theft, and are usually happy to help each other with problems. Any self-serving outliers who deviate from normative propriety are, thankfully, the exceptions.
Peter J. Larcombe is professor of discrete and applied mathematics in the department of computing and mathematics at the University of Derby.
Print headline: First, principles