Good science requires better teaching

We must change the way we educate students to counter rising scientific misconduct, say Tim Birkhead and Bob Montgomerie
August 7, 2014

These days you can hardly pick up an issue of Times Higher Education, a science magazine or a newspaper without reading about a case of scientific misconduct. Twenty years ago, such reports were extraordinary. Is this an epidemic of misconduct or an epidemic of reporting? A study published in the Proceedings of the National Academy of Sciences of the USA in 2012 reported a 10-fold increase in the incidence of scientific misconduct since the mid-1970s. Our experience also leads us to believe that such misconduct is proliferating in various guises. High-profile cases of individuals found to have fabricated data, such as social psychologist Diederik Stapel, and scientists Hwang Woo-suk and Marc Hauser, tend to obscure the profusion of less sensational transgressions among researchers, undergraduates and school pupils.

Scientific misconduct is generally said to comprise falsification, fabrication and plagiarism (FFP). It also includes a wide array of other unethical and unsavoury activities that scientists sometimes engage in to promote themselves and their work. Even within the purview of FFP, the scope and severity of wrongdoing vary widely, ranging from inappropriate authorship to what are euphemistically referred to as "questionable research practices".

We should care about scientific misconduct because it damages society's confidence in science, even though science is the basis for so much medical progress, technological innovation and economic prosperity. Such damage gives traction to climate change deniers, creationists, homeopaths and many other quacks, as well as weakening the financial support for science from the public purse. While the high-profile cases inflict major damage on public perceptions, all sorts of misconduct undermine scientists' trust in the literature that forms the basis for research progress.

Among academics, scientific misconduct probably occurs because of two interlinked phenomena: competition and cynicism. Competition in academia is ubiquitous, but is also important for scientific progress. It has always been that way, although competition now seems to be more intense than ever because of government-imposed performance assessment, such as the research excellence framework in the UK, and because resources to fund research are tight. The sad truth is that the science enterprise has been almost too successful for its own good, and there are now large numbers of scientists fighting over a diminishing pot of funds.

Academics compete for research funding, for publications in high-profile journals and for academic appointments. When a single publication in an eminent journal such as Nature or Science can make the difference between a job and no job, it isn't difficult to see why some are tempted to fake or fudge data. Journal editors and reviewers are sometimes complicit, too, favouring clean-cut stories about trendy, high-profile topics over results that more accurately reflect the often messy reality of scientific research.

Cynicism in academia is also widespread. When the research funding system is unfair or even just seems to be – and that appears increasingly to be the case – some academics become disaffected and treat research as a game to be won at any cost. If established academics cut corners to secure funds or to publish in high places, it is likely – virtually inevitable – that their research students will follow their lead. Our own experience as both scientists and teachers suggests to us that the problem begins with, and may be partially solved by, education.

Among undergraduates, scientific misconduct is also driven by competition, but by competition for grades rather than for publications. Again, this is an indirect consequence of government policy: grades determine qualifications and credentials, and students and their institutions alike stand or fall on the number and quality of their degrees. As a consequence, real education plays second fiddle to grade acquisition. Over the past 15 years, we have informally surveyed several hundred graduates and almost three-quarters of them report that they know of someone who fudged data for a final year assessment.

What about graduate students? At a workshop that we ran in April this year, involving mainly PhD students and early career postdocs from a wide range of universities, we asked participants to complete a questionnaire on misconduct. Sixty-eight of them answered 51 questions to rate their perception of the severity of different kinds of questionable, unethical and fraudulent research practices, from zero (not really a problem) to three (severe, deserving censure and punishment).

We were relieved to find that most (96 per cent) of them rated "deliberately making up some or all of the data in a manuscript submitted for publication" as three (don't ask about those 4 per cent who didn't!). However, we were dismayed that only 54 per cent gave a three to "knowingly selecting only those data that support a hypothesis" and 42 per cent to "deleting some data to make trends clearer". The naivety is staggering.

On the other side of the coin, research students and postdocs have often commented on the difficulty of speaking out against what they consider to be the questionable research practices of their supervisors. At one recent bioethics meeting, a research student told us that she attended in the hope of being able to raise the issue of misconduct, but then was too frightened to do so because her supervisor – the perpetrator – was present.

Further discussion with our own undergraduate research students uncovered what they considered to be the main cause of such misconduct: the way science is taught at school. A major culprit is the obsession with box-ticking, in which assessment rewards only the right answer rather than the process of research and the integrity of reporting. Students told us of teachers who encouraged them to make up results (the right ones, of course) when a particular experiment had not "worked". The problem is obvious: teachers have not been given sufficient time by governments and curriculum developers to teach the scientific process properly and to do experiments carefully. If an experiment or demonstration fails, pupils need to understand why. It is ludicrous that pupils should ever be encouraged to fake results when their experiments do not turn out as expected, or be punished with lower marks when they do not get the "right" answer. We expect that for some ambitious young scientists, the mis-training they received at school sets the agenda for the rest of their career.

The bizarre thing about scientific misconduct among undergraduates is that, while most universities have strict policies about plagiarism, there is typically no information or guidance about dealing with data fudging and faking. Many institutions now employ Turnitin to look for plagiarism, and the threat of detection must reduce its incidence. But most universities seem to have their heads in the sand over the other – arguably more important – forms of misconduct.

In a similar way, scientific journals have usually been very good at publishing strict warnings about plagiarism. The Committee on Publication Ethics (Cope), which has 9,000 members (journals, editors, publishers), has excellent guidelines for dealing with this problem. But plagiarism has no effect on the veracity of scientific research, and is probably the least important misdemeanour in the FFP triumvirate. In our own experience, scientists (but all too often not journal editors) tend to feel that some honest mistake may have been made when they see results that appear to be too good to be true. Misconduct is rarely on their radar.

What can be done? For established scientists, a clear set of guidelines, under the auspices of Cope or a similar organisation, about what constitutes misconduct in all its manifestations (not just plagiarism) and how to deal with it would help. We recently had to deal with a case of blatant plagiarism in a manuscript submitted for publication, and the Cope guidelines and procedures turned out to be very useful. But neither the journal nor the miscreant's institution took the recommended action. We might all benefit from journals and institutions having some official accreditation and agreeing to take a specific course of action to deal with misconduct of all types.

But perhaps it's too late to expect poorly trained scientists to know how to avoid misconduct, or to deal with it once detected. At school we need to start teaching science properly. We need to stop bucket-filling and fact regurgitation. The current box-ticking and assessment culture leaves no place for the creative and ethical processes of doing science. Teachers need to be allowed to teach what science is. This isn't easy, but it needs to be done. Equally important, teachers need sufficient time to teach effectively, to do experiments carefully, more than once if necessary, and to assess why the "right" results aren't always obtained.

At the undergraduate level there is insufficient emphasis on the process of science. Universities have been slow to respond to the current reality that facts and information are so readily available on the internet. Sadly, the emphasis in most university courses is still on using lectures to transmit factual information, and yes, some of that is essential; but equally important is understanding how science works as a process. There is not enough emphasis on what it means to be a scientist: on thinking creatively and behaving with integrity.

Some undergraduate degree courses already include modules on ethics, which is excellent, but simply presenting students with a list of dos and don'ts isn't enough. In our experience, a much more effective approach is to present the issue of scientific integrity within a broad framework of the history of science. It has been argued, for example, that Gregor Mendel's results on the inheritance of seed colour and shape in pea plants were too good to be true. Examining and discussing the evidence with students and asking them to assess whether or not Mendel might have been guilty of scientific misconduct helps to illustrate many of the issues.
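
The Mendel case also lends itself to a simple classroom computation. The sketch below (assuming Python with SciPy available) is one minimal way such an exercise might look: it applies a chi-square goodness-of-fit test to Mendel's published F2 counts for seed shape and seed colour against his predicted 3:1 ratio. This per-experiment framing is a simplification of R. A. Fisher's original argument, which aggregated the fit across all of Mendel's experiments, so it is illustrative rather than a verdict on Mendel.

    # Illustrative sketch: chi-square goodness-of-fit for Mendel's F2 counts
    # against the predicted 3:1 dominant:recessive ratio.
    from scipy.stats import chisquare

    # Mendel's reported F2 counts (dominant, recessive) from his 1866 paper.
    experiments = {
        "seed shape (round : wrinkled)": (5474, 1850),
        "seed colour (yellow : green)": (6022, 2001),
    }

    for name, (dom, rec) in experiments.items():
        total = dom + rec
        expected = [0.75 * total, 0.25 * total]  # predicted 3:1 ratio
        stat, p = chisquare([dom, rec], f_exp=expected)
        print(f"{name}: chi-square = {stat:.3f}, p = {p:.2f}")

Each experiment, taken alone, simply fits the prediction well: a large p-value here means close agreement with the 3:1 ratio, not evidence of fraud. Fisher's point was that the combined fit across every experiment Mendel reported is closer than chance sampling would plausibly produce, which is exactly the kind of nuance students can be asked to weigh.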

Who will teach such values? It is a common complaint among academics that students arriving at university do not know how to think. The usual rebuttal by educators in the UK is that there is an A-level course in critical thinking. But few students take that course, and we believe the reason for this is that few teachers feel competent enough to teach it, probably because it requires rather more work than transmitting a list of government-approved facts. We suspect that, to many academics, teaching science as a process and including the need for integrity within a framework of the history of science might also be perceived as difficult, alien or simply unnecessary. One solution might be to establish high-quality online lectures and workshops, taught by experts on misconduct and the history of science. Given the widespread culture of scientific misconduct among undergraduates, teaching about misconduct is more a necessity than a luxury, but such teaching has to be both uniform and of high quality. Every degree course should include a module on the everyday practice of science, on detecting and dealing with misconduct and on ethical practices in research.

We discovered recently that, at one university, all science PhD students are required to take a course in ethics, but that this course was taught by postdoctoral researchers because "no academic would want to waste their time doing that". In our opinion, such an approach is a continuation of the ineffectual box-ticking that devalues so much of education. Scientific misconduct is too important a part of training to be left solely to postdocs.

In addition to telling undergraduates about the practice of science, there are several specific things we could do. We suggest that in practical classes or projects, students should be assessed on the data collection process before presenting their results. Perhaps most importantly, we should avoid providing opportunities for students to cheat. To do that, we should abandon the usual approach to coursework and replace it with questions or tasks whose results cannot be fudged or faked, and which require real knowledge and understanding. As teachers we have been, and continue to be, completely limp about this.

Finally – and this is both obvious and worth repeating – the emphasis in school and university should be far less about grades and far more about education.
