
Four ways to make research more open and robust

Metadata can be used to improve research integrity, but first major changes to research design and practice need to be made, argues Neil Jacobs
October 26, 2019

Scientific research is finding new ways of diffusing knowledge. Digital technologies and collaborative tools are affecting the whole research cycle, and they call on everyone in research to consider how emerging techniques, including automated experiment selection and natural language processing, can help improve research.

We need to move away from opaque research, conducted on PCs, using closed software and reported in PDF documents, towards more open research created in a digital environment designed for that purpose. Open-source scripting languages and the open sharing of data and code, for example, support this trend.

The research sector is in a transition marked by renewed interest in the metadata used in research. The use of metadata is one of the key pillars of a recently launched initiative that will analyse research systems and experiment with decision and evaluation data, tools and frameworks.

Metadata are data that describe other data, which can make finding and working with particular data sets easier. For example, metadata tools can gather information on authors, creation date, modification date and file size.

Metadata tools give the research community the opportunity to gather insights into previously unexplored territory. Here are four examples of research projects that focus on metadata, showing the potential of technology to improve research integrity.

Pre- and post-outcome comparison

A study reported in 2019 found that the third of clinical trials whose primary outcome differed from the one preregistered were also more likely to report a higher (16 per cent) intervention effect. This significant finding leads me to wonder whether a tool to compare preregistered with published outcomes might provide useful feedback to ensure that the intervention effect is not skewed by selective reporting, research bias or any other variable that could compromise research integrity.
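The core of such a comparison tool could be very simple once outcomes have been extracted from registry entries and published papers. A minimal sketch (the function and field names are illustrative assumptions, not a real tool):

```python
def outcome_switched(preregistered: set, published: set) -> dict:
    """Compare preregistered primary outcomes with published ones.

    Returns which outcomes were dropped or added -- a crude flag for
    possible selective outcome reporting (hypothetical helper).
    """
    return {
        "dropped": sorted(preregistered - published),
        "added": sorted(published - preregistered),
        "switched": preregistered != published,
    }
```

The hard part in practice is the upstream step: reliably extracting outcome descriptions from free text, which is where natural language processing would come in.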

Reporting compliance dashboard

Similarly, a study showed that there is a real lack of reporting of basic metadata. Information about basic reproducibility and ethical practices, such as blinding, sample size calculation, control group allocation and compliance with guidelines such as ARRIVE (Animal Research: Reporting of In Vivo Experiments), is often missing. It could be useful for institutions, researchers, funders and publishers to access a dashboard that shows gaps in reporting.
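At its core, the back end of such a dashboard reduces to a completeness check over the items a study reports. A sketch under assumed field names (any real checklist, such as ARRIVE, would define its own items):

```python
# Illustrative checklist; a real dashboard would use a published guideline.
REQUIRED_ITEMS = ["blinding", "sample_size_calculation", "control_group_allocation"]

def compliance_report(study: dict) -> dict:
    """Flag which basic reproducibility items a study report leaves blank."""
    missing = [item for item in REQUIRED_ITEMS if not study.get(item)]
    return {
        "missing": missing,
        "compliance": 1 - len(missing) / len(REQUIRED_ITEMS),
    }
```

Aggregated across a portfolio of studies, scores like this could populate the dashboard view for funders and institutions.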

Data availability tracking tool

Another example of where technology might offer greater transparency is a study that Jisc was involved with. It showed that only a minority of the studies examined included useful data availability statements. We are now exploring whether we can develop a tool to identify where these data availability statements are and whether they really do point to data that can be accessed and reused.
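A first pass at locating such statements could be a simple pattern match over article text. The phrases below are assumptions for illustration; a real tool would need a curated phrase list plus link resolution to verify that the data are actually accessible:

```python
import re

# Common phrasings that often open a data availability statement
# (an illustrative, deliberately incomplete list).
PATTERNS = [
    r"data availability statement",
    r"data (?:are|is) available",
    r"available (?:from|at|upon request)",
]

def find_availability_statements(text: str) -> list:
    """Return sentences that look like data availability statements."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [
        s for s in sentences
        if any(re.search(p, s, flags=re.IGNORECASE) for p in PATTERNS)
    ]
```

Checking whether the statement "really does point to data" would then mean extracting any URLs or accession numbers from the matched sentences and testing that they resolve.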

Reliability and confidence through AI

The US Defense Advanced Research Projects Agency (Darpa) is working on the Systematising Confidence in Open Research and Evidence project. Darpa is using AI and machine learning to estimate the reliability of social and behavioural studies and to assign each a confidence score. These AI tools are intended to assign confidence scores with a reliability that is equal to, or better than, the best current human expert methods. The scores will inform the way the US military uses social and behavioural science research to guide its investments and its models of human social behaviour to safeguard national security.

Digital collaborative environments

Tools and applications that improve existing research communication practices are paving the way towards a more robust and open science culture. But, more fundamentally, we need changes upstream: digital collaborative environments that embed academic norms and practices, such as preregistration and open code, into research design and practice.

We need to make it easy for researchers to do the right thing during the research process as well as when reporting it afterwards.

For example, we’re in conversations with researchers at the universities of Bristol and Bath who conduct interdisciplinary research into the built environment, the physical environment and the ways that human beings interact within them. This work involves civil engineers, scientists, psychologists and others.

Those conversations are about what would be an appropriate digital environment in which all these disciplines can bring together data from the internet of things, from sensor networks and from mobile networks – one that enables hypotheses to be preregistered and embedded into software agents that could then interrogate these data in a responsible and reproducible way.

There are already providers that deliver such technology, including a free, open platform supporting open research and collaboration.

Another helpful resource provides a set of principles, concrete guidance for practice, and actions towards including diverse perspectives from around the globe.

We also need changes in research assessment that reward these best practices, including the publication of interactive models and more imaginative ways of reporting science that are truer to the research process. And we need changes in research study design and funding, for example to recognise longer study set-up times, and changes in research teams and skills, so that coding becomes as mainstream as authoring papers and bids.

Neil Jacobs is head of open science and research lifecycle at Jisc.
