FWF: What is more important in research, transparency or output?

Brian Nosek: The credibility of scientific output depends on the process used to acquire it. Transparency and reproducibility are core values of science. Scientific claims do not become credible because a scientist is an authority. Scientific claims become credible when anyone can see the evidence supporting the claim, when anyone can see how that evidence was obtained, and when anyone can reproduce the evidence themselves.

FWF: The Center for Open Science (COS) develops incentives and practices to promote “truth over publishability”, as you call it. What has been your greatest success so far?

Nosek: So far, the biggest success is the TOP Guidelines (http://cos.io/top/). TOP are guidelines that journals and funders can adopt to promote transparency and reproducibility. More than 700 journals and 60 organizations have already become signatories to the guidelines, including Science, Nature, Wiley, and other major publishers.
FWF: Your first research project at COS analyzed the reproducibility of psychological science. What did you learn?

Nosek: We found that reproducing published results is more difficult than most people would have anticipated. We successfully replicated approximately 40 per cent of the 100 published results that we tried to reproduce. What we don’t know yet is why it was so difficult to reproduce those results. It could be that the original results were false, or that the replications were erroneous, or that there are subtle differences between the original and replication studies that are very important for obtaining the result. All of these are challenges for reproducibility.

FWF: In recent years, several declarations such as the San Francisco Declaration on Research Assessment (DORA), The Metric Tide, and the Leiden Manifesto have opposed the use of simplified metrics in research evaluation. But what are the alternatives for assessing a still-increasing research output?

Nosek: This is a very hard problem. Humans need heuristics, that is, rules and methods. The world is too complicated to expect that everything will be evaluated in depth, so we cannot expect that heuristics will go away. The best approach is to maximize the quality of the methods we use, recognize the limitations of those methods transparently, and ensure that they are not overapplied when decisions are important and can be made with a more systematic review.

FWF: Journals and funding agencies observe an ongoing overload of the peer review system. How could we sustain the merits of peer review on the one hand and avoid degeneration on the other?
Nosek: For journals, I believe that the solution is to remove peer review as a gatekeeper to publication. We wrote about this in Nosek and Bar-Anan (2012). If it is up to researchers to decide when to publish, publication is no longer an achievement for them. Instead, what they need in order to succeed is to generate interest in their research and get it evaluated. Seen from this perspective, peer review is a benefit to researchers rather than something that gets in the way of their goal. Also, by giving researchers control over when they publish, the biggest risk is being ignored, so researchers may actually be more careful before posting their work.

For funders, it is a harder problem. One easy intervention is to eliminate submission deadlines. A division at the National Science Foundation (NSF) tried this and submission rates fell dramatically. Without a deadline, researchers will submit when they are ready – it is up to them to decide. With a deadline, researchers will submit whatever they have because otherwise they cannot be considered.
Nosek: One reasonable concern about open science is how the incentives for researchers will shift as we move toward it. For example, if researchers are required to share their data but do not get credit for others’ use of that data, then researchers who generate valuable data will be disadvantaged. So, any change toward openness has to simultaneously consider how to provide credit to researchers for being open.

FWF: What do you think about patenting publicly funded research?

Nosek: Personally, I think there are many occasions on which patenting publicly funded research is a good thing for society. From my point of view, the most important thing is to be clear about the goals up front. Some funded work will lead to public knowledge; other funded work will lead to commercialization. I think that is fine, as long as the funders are aware of and support the intended direction of the work that they are funding.

FWF: Finally, what can a funding institution like the Austrian Science Fund do to support Open Science effectively?

Nosek: Adopt the TOP Guidelines to encourage or require greater openness and reproducibility in the funded research. Set aside a small portion of research funds (2–3 per cent) to support replication studies of important, ground-breaking research. And support metascience – investigations of the scientific process itself – to increase the quality and efficiency of knowledge accumulation.
Brian Nosek is the Director of the Center for Open Science and a professor of psychology at the University of Virginia. He is one of the key contributors to the debate on Open Science in general and on the reproducibility of research results in particular. Among other publications, he published the widely recognized study “Estimating the reproducibility of psychological science” and the Transparency and Openness Promotion (TOP) Guidelines in Science in 2015. Most recently, he was co-author of “How open science helps researchers succeed”, published in eLife.
The Austrian Science Fund FWF and IST Austria invited Brian Nosek to give a lecture on “Scientific Utopia – Improving Transparency in Scholarly Communication” on 21 September 2016 in Vienna, within the series “New Trends in Scholarly Communications”. Pictures and a video of the event are available.