Academics have continued to trade blows over the state of psychology research following the release of a paper questioning the results of a major project that cast doubt on reproducibility in the field.
In August 2015, an attempt by the Center for Open Science to reproduce 100 prominent studies found that only 36 per cent of the replications produced statistically significant results, stoking concerns about scientific reliability that have also engulfed biomedicine.
But today saw a group of researchers from Harvard University and the University of Virginia respond with claims that the study contained several statistical errors and failed to repeat the experiments properly.
The critique lists what it claims are discrepancies between the original studies and the attempted replications.
“An original study that measured Americans’ attitudes toward African-Americans was replicated with Italians, who do not share the same stereotypes; an original study that asked college students to imagine being called on by a professor was replicated with participants who had never been to college,” says the critique, published in Science.
It also criticises the project for attempting to reproduce each study only once, which it argues led to a much lower replication rate.
The study “seriously underestimated the reproducibility of psychological science”, it concludes.
But the authors of last year’s replication study have hit back with a response of their own, calling today’s critical paper a “very optimistic assessment” that is “limited by statistical misconceptions and by causal inferences from selectively interpreted, correlational data”.
The corresponding author from last year’s study, Brian Nosek, a psychology professor at Virginia, also took aim at some of the reporting on today’s paper attacking his findings.
One journalist “talks to me at 9.20p saying she only read the press release – not original article, comment or response. Files story @ 9:30p,” he wrote on Twitter.
Other commentators have questioned whether today’s rebuttal means that all is well in psychology.
“The Reproducibility Project is far from the only line of evidence for psychology’s problems. There’s the growing list of failures to replicate textbook phenomena,” wrote the science journalist Ed Yong.
“There’s publication bias – the tendency to only publish studies with positive results, while dismissing those with negative ones. There’s evidence of questionable research practices that are widespread and condoned,” he argued.