
The grey zone: How questionable research practices are blurring the boundary between science and misconduct

<ÁñÁ«ÊÓƵ class="standfirst">Infamous cases of misconduct such as that of Paolo Macchiarini are just the extremes on a long spectrum of dubious research practices, say Nick Butler, Helen Delaney and Sverre Spoelstra
October 14, 2016

Earlier this year, Paolo Macchiarini – former star surgeon and professor at Stockholm's Karolinska Institute – was dismissed from his post following a high-profile investigation prompted by a documentary broadcast on Swedish national television. Macchiarini was found guilty of failing to secure ethical approval for experimental transplant techniques and of misrepresenting results in journal publications. The scandal has reverberated throughout the scientific community and has so far led to resignations at the top of Sweden's most prestigious medical university and within the Nobel Assembly.

Such egregious breaches of scientific protocol are serious, but mercifully rare. Far more prevalent – and therefore arguably more damaging – are research practices that fall into an ethical "grey zone" between overt misconduct and scholarly best practice. Academic misconduct refers to forms of fabrication, falsification and plagiarism (FFP) – in other words, the terrain of fraudsters, con artists and cheats. Questionable research practices (QRPs), however, are more difficult to pin down but typically involve the selective reporting or distortion of data and results. Recent research suggests that academics are becoming more adept at operating in this grey zone, like athletes who optimise their performance with artificial enhancements without technically breaking the rules. To put it into perspective, one meta-analysis found that only 2 per cent of scientists admit to FFP, while almost a third admit to engaging in QRPs.

One prominent example of a QRP is "HARKing", standing for "hypothesising after the results are known". Normally, researchers follow the standard scientific practice of developing a hypothesis and then testing it against the facts. But HARKing involves constructing or changing a hypothesis after the data have been collected and analysed. If this is concealed from journal editors, the integrity of the scientific process is compromised. Yet, strictly speaking, HARKing is not considered academic misconduct, even if it is frowned upon by many researchers.
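To make the distinction concrete, here is a minimal Python sketch using entirely synthetic data; the dataset, variable names and half-and-half split are illustrative assumptions, not anything described in this article. It separates exploration (finding a pattern) from confirmation (testing it on data not used to find it) – HARKing, in effect, is reporting the first step as if it had been the second.

```python
# Hypothetical sketch with synthetic data: keeping exploratory and
# confirmatory analysis apart.
import numpy as np

rng = np.random.default_rng(0)
predictors = rng.normal(size=(200, 5))   # 5 candidate variables, 200 observations
outcome = rng.normal(size=200)

# Split once, up front: one half for exploring, one half for confirming.
X_explore, X_confirm = predictors[:100], predictors[100:]
y_explore, y_confirm = outcome[:100], outcome[100:]

# Exploration: notice which variable looks most related to the outcome.
corrs = [abs(np.corrcoef(X_explore[:, j], y_explore)[0, 1]) for j in range(5)]
best = int(np.argmax(corrs))
print(f"Exploratory pick: variable {best}, |r| = {corrs[best]:.2f}")

# Confirmation: test that single, now-explicit hypothesis on the held-out half.
r = np.corrcoef(X_confirm[:, best], y_confirm)[0, 1]
print(f"Confirmatory check: r = {r:.2f}")

# HARKing would amount to skipping the split entirely and presenting the
# exploratory pick as though it had been hypothesised before seeing the data.
```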

In a recent study, we highlight the problem of questionable research practices in the business school. Management scholars tend to publish articles that use the hypothetico-deductive method to show the effects of, say, leadership styles on organisational performance. Here, the study of management is seen as a scientific discipline that adheres to rigorous methodological standards. By the same token, journal editors will reject papers if they fall short of these standards.


However, we found evidence that QRPs are widespread in this field. For example, it is common for researchers to play with numbers to get the best (read: most publishable) outcome. This could involve removing outliers to confirm their hypothesis or fishing within the data to find unanticipated results. Such practices fall into the grey zone if they are hidden from editors and reviewers during the peer-review process. Unlike FFP, QRPs are more difficult to detect. As one respondent acknowledged, "I can just delete like 100 data points [and] you would never know it. How would you know? How would anybody find out?"
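The respondent's point is easy to demonstrate. The following is a hedged sketch using made-up numbers rather than any real study; the groups, sample sizes and the SciPy t-test are illustrative assumptions, not the article's data. It shows how silently trimming inconvenient observations shifts a test statistic.

```python
# Illustrative sketch with synthetic data: quietly dropping "outliers" that
# weaken an effect will, by construction, push the p-value downwards.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=0.0, scale=1.0, size=50)   # control
group_b = rng.normal(loc=0.2, scale=1.0, size=50)   # small, noisy difference

# Honest analysis: report the two-sample t-test on everything collected.
_, p_full = stats.ttest_ind(group_a, group_b)
print(f"Full sample:    p = {p_full:.3f}")

# Questionable analysis: remove the observations least favourable to the
# hypothesis (the highest control values, the lowest treatment values).
trimmed_a = np.sort(group_a)[:-3]
trimmed_b = np.sort(group_b)[3:]
_, p_trimmed = stats.ttest_ind(trimmed_a, trimmed_b)
print(f"After trimming: p = {p_trimmed:.3f}")

# Only the second number need ever reach the journal, and, as the respondent
# says, a reviewer has no way of seeing the six deleted points.
```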

Consequently, highly ranked journals may be flooded with papers with results that are simply too good to be true. This was aptly shown by a recent study that identified a "chrysalis effect" in business research – that is, how subpar results in doctoral dissertations miraculously transform into beautiful peer-reviewed publications. The implication is that researchers manipulate their hypotheses or misrepresent their data to meet the exacting standards found in academic journals.


The most common explanations for the prevalence of QRPs include inadequate methodological training and the pressure to publish. However, the chrysalis effect points to another explanation: the demands and expectations of highly ranked journals encourage researchers to engage in such practices. This is highlighted by one of our respondents, who told us that journals sometimes insist that "you come up with a hypothesis that you didn't have before, and then test it and then report that as a confirmatory analysis in the paper – which is actually not allowed". There is, of course, a paradox here. To live up to the unrealistic ideal of science promoted by top-tier journals, scholars may find themselves transgressing this ideal.

The question is what should be done to discourage QRPs. The standard response is for journals to improve the ethical guidelines for authors and strengthen the peer-review process. Others have called for greater transparency in science, such as publicly registering hypotheses or establishing central data repositories. But this does nothing to address the role that journals play in fostering QRPs.

One alternative is to develop a "transparency index", as proposed by the founders of the Retraction Watch website. This would provide a numerical metric of the journal's transparency in a number of areas, such as peer review, retraction record, mechanisms for detecting misconduct and procedures for dealing with questionable research practices. The transparency index could also determine whether journals compel authors to acknowledge changes to their method or hypothesis during the research process. Establishing such an index would refocus attention on how journals often serve to reproduce – or even foster – bad academic habits.
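As a rough sketch of what such a metric might look like in practice: the criteria below mirror the areas listed above, but the weights, field names and 0–1 sub-scores are purely hypothetical assumptions, not a scheme proposed by Retraction Watch or anyone else.

```python
# Hedged sketch only: one way a "transparency index" might be rolled up
# into a single 0-100 score per journal. All weights are assumptions.
from dataclasses import dataclass

@dataclass
class JournalTransparency:
    open_peer_review: float            # 0-1: how visible the review process is
    retraction_record: float           # 0-1: completeness of retraction notices
    misconduct_screening: float        # 0-1: checks in place for detecting misconduct
    qrp_procedures: float              # 0-1: stated policy on questionable practices
    requires_change_disclosure: float  # 0-1: must authors flag changed hypotheses/methods?

WEIGHTS = {
    "open_peer_review": 0.25,
    "retraction_record": 0.20,
    "misconduct_screening": 0.20,
    "qrp_procedures": 0.20,
    "requires_change_disclosure": 0.15,
}

def transparency_index(j: JournalTransparency) -> float:
    """Weighted average of the sub-scores, scaled to 0-100."""
    total = sum(getattr(j, name) * weight for name, weight in WEIGHTS.items())
    return round(100 * total, 1)

example = JournalTransparency(0.6, 0.8, 0.4, 0.3, 0.0)
print(transparency_index(example))  # 45.0 for this hypothetical journal
```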

Of course, most journals would be unlikely to adopt this index if it clashed with their much-prized impact factor. So we may continue to rely on post-publication peer-review websites to hold researchers and journals to account.


Nick Butler is assistant professor in the Stockholm Business School at Stockholm University, Sweden. Helen Delaney is senior lecturer in management and international business at the University of Auckland, New Zealand. Sverre Spoelstra is senior lecturer in the department of business administration at Lund University, Sweden.



<ÁñÁ«ÊÓƵ class="pane-title"> Related articles
<ÁñÁ«ÊÓƵ class="pane-title"> Related universities
<ÁñÁ«ÊÓƵ class="pane-title"> Reader's comments (2)
"Normally, researchers follow the standard scientific practice of developing a hypothesis and then testing it against the facts." <- Isn't that a myth? I think you'll find that people like Gay-Lussac did experiments. Got data and noticed a pattern in the data. They called it a scientific law. Research proceeded hypothesis. The hypothesis was deduced from the data. A replication will test a hypothesis against facts. Even someone as intellectual as Einstein developed his theories in response to discoveries (AKA data). Other people's data. One prominent example of a QRP is ¡°HARKing¡±, standing for ¡°hypothesising after the results are known¡± <- Sounds exactly what Gay-Lussac and Einstein did! In contrast to Gay-Lussac deducing his law from the data, we have a proposal to outlaw that! Researchers are not expected to provide a hypothesis beforehand! Surely that's the very definition of bias. I put it to you that too many researchers already have too much bias. Too many preconceived ideas. Researchers should listen to what the data tells them. They should by more like J.M. Keynes: "When the facts change, I change my mind. What do you do, sir?"
Ooops. Anti-HARKing: Researchers are _now_ expected to provide a hypothesis beforehand!
<ÁñÁ«ÊÓƵ class="pane-title"> Sponsored
<ÁñÁ«ÊÓƵ class="pane-title"> Featured jobs