How Open Access Builds Trust in Your Work

The STEM fields have been plagued by a replication crisis that threatens to erode public trust in the value of research. To solve it, we need to foster a more open and collaborative culture around research and publication.

Written by Edward Hearn
Published on Nov. 03, 2020

In 2005, John Ioannidis, a professor of medicine and statistics at Stanford University, published an article in which he claimed that many, or even most, published research findings were false. That is, most medical, scientific, or empirical studies did not reliably produce the same results when re-run with new data. Ioannidis went on to conclude that, “for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.”

How had the entire research edifice devolved into this state? A lively debate about the major contributing factors continues to this day. At present, though, the more important question is what the solution might be. Trust in new information is paramount, and the replication crisis has eroded that public trust. Regaining it will require rigorous review and collaboration, along with an openness to scrutiny on the part of researchers to facilitate both. These are critical elements not only for re-establishing public buy-in to the research process but also for expanding how people can produce high-quality information by working collaboratively at scale. Luckily, a movement toward openness in research, data and tech has grown in popularity in the years since Ioannidis published his article.

Related Reading: Science Needs a Software Upgrade

 

Open-Access Practices in Science

One of the first scientific journals founded on the premise of openness in inquiry was PLoS One. Published by the Public Library of Science, the first openly accessible library of research findings across scientific, medical and technological fields, PLoS One became the standard-bearer for open-access journal articles that had passed peer review. What made PLoS One unique was that the journal’s review process didn’t ask the review committee to determine the importance of a study’s results. Unlike traditional academic reviews, PLoS One’s initial reviewers evaluated only the methodology of a scholarly work, not its conclusions. The study’s techniques determined whether the paper was published or rejected. If the review committee decided the work’s technical merits were of sufficient quality, PLoS One published it immediately. This step made the article available for post-publication peer review by anyone who wished to engage with it. Post-publication reviewers could suggest corrections, point out errors, discuss the merits of the piece and up- or downvote the work’s importance.

More recently, the Center for Open Science’s Open Science Framework has taken PLoS One’s model and expanded upon it. The OSF has equipped researchers and reviewers with centralized access to the reports, data and computer code behind every scientific study published under its aegis. Rather than only hosting completed and methodologically verified reports, the OSF functions as a repository for research in progress and even hosts research ideas that have yet to be fully fleshed out. Interested individuals can publish as much or as little of their previously reviewed materials as they like. They can even search for public projects to review and comment on before any formal review or publication takes place. A good way to think about the OSF is as a GitHub-like repository for works in progress at all stages of a research project’s life cycle.

In terms of the review process, the OSF also takes PLoS One’s idea of a methodology-only review followed by immediate publication even further. The OSF employs a preregistration process in which researchers submit a study design for review by a participating journal committee. This submission presumably happens after the design has already been listed on the OSF site and perhaps commented on by other OSF members. The review committee then grants publication status before the research team carries out any of the proposed analyses. This process not only alleviates the infamous file-drawer problem, in which research that fails to confirm its hypotheses may never see publication, but it also gives researchers an incentive to truly commit to a research plan before carrying out any substantial work. Thus, the OSF goes one step further than PLoS One’s review process in ensuring accuracy: it formalizes a direct policy under which researchers know they don’t need to cherry-pick only the most publishable (and therefore more likely false) results from a study. PLoS One has a similar policy but doesn’t structure it in such a directly beneficial manner. The OSF practice is part of a wider drive toward study preregistration, a process that most government-funded, randomized controlled trials in the United States must already follow.

The central question to ask of the shift from the closed-door, small-group review commissions of prior years to more open, collaborative and methodologically focused review processes is simple: do the new processes produce research whose results replicate more reliably? A recent article preprint examined the replicability rates of 16 studies whose researchers adhered to an open-review framework combined with preregistration of the study design. The authors found a replicability rate of 86 percent. By contrast, studies that did not follow this review framework have historically replicated at rates closer to 50 percent. For reference, the “gold standard” of replicability in scientific research is commonly understood to be 80 percent. Granted, the preprint rested on a small sample and, ironically, further work is needed to ensure these results replicate. But, given their promise, open-review forums like the OSF and PLoS One provide the most promising method yet for solving the replicability crisis.
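For a sense of what a sample that small can and cannot establish, consider a quick back-of-the-envelope check, sketched below in Python. The 14-of-16 split is an assumption chosen to approximate the reported 86 percent rate, not a figure taken from the preprint itself.

```python
from math import comb

# Assumed split: 14 of 16 studies replicating (~86 percent, to match the
# reported rate; the preprint's exact counts may differ).
n, k = 16, 14

# Probability of seeing at least k replications out of n if the true
# replication rate were the historical 50 percent baseline.
p_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(at least {k} of {n} replicate at a 50% base rate) = {p_tail:.4f}")
# -> 0.0021
```

Even at this sample size, a result so far above the 50 percent baseline is unlikely to be a fluke, although the calculation says nothing about whether those 16 studies are representative of research at large, which is why further replications are still needed.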

 

Opening Up Software Development

Not only are process control and collaborative innovation critical for the replicability of academic research, but they are also important for innovation in software development. Indeed, a formalized system of distributed version control had already come along when PLoS was first getting going in the mid-2000s. Git was the brainchild of Linus Torvalds. Torvalds, who is better known as the chief creator of Linux, needed another method by which he and his development team could distribute, check and maintain the Linux kernel’s source code after the company behind their previous source-control management tool withdrew free use of its proprietary platform. So, they wrote their own freely accessible version.

Since Torvalds and his team introduced the Git system for distributed collaboration into mainstream use, its decentralized, open-source nature has caught on. Quasi-proprietary and open-source platforms like GitHub, Bitbucket and GitLab now offer user-friendly versions of non-linear version control software based on Git. These platforms are available not only to most major software development teams but also, depending on the specific platform, to anyone with access to the internet. An increasingly broad network of programmers around the world, both amateur and professional, can engage in pulling and pushing version updates to any coding project a host wishes to make public. Thus, Git introduced most of the world to an all-in-one repository, version-control and development system that thrives on open internal or external access and collaboration.
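To make the mechanics a little more concrete, here is a minimal Python sketch, not Git’s actual implementation, of the content-addressed hashing at the heart of every Git repository. The hash_object helper is a simplified illustration: real Git stores objects in this general way but adds compression, trees and commits on top.

```python
import hashlib

def hash_object(data: bytes, obj_type: str = "blob") -> str:
    """Name content by its hash, Git-style: a type/length header plus the bytes."""
    header = f"{obj_type} {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

# Two collaborators who have never met can verify they hold identical
# copies of a file just by comparing IDs -- no central authority required.
alice_copy = b"print('hello, open science')\n"
bob_copy = b"print('hello, open science')\n"

assert hash_object(alice_copy) == hash_object(bob_copy)
print(hash_object(alice_copy))  # the same ID on every machine, anywhere
```

Because every version of every file gets a globally consistent name, any contributor’s copy of a repository is as authoritative as any other, which is part of what makes the open pull-and-push collaboration described above trustworthy at scale.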

A final example of how an openly accessible platform for collaborative work ensures informational veracity is also the most famous: Wikipedia. Jimmy Wales and Larry Sanger’s brainchild began as an offshoot of the web-based encyclopedia Nupedia, which relied on a select group of writers to contribute articles subject to a review process for content verification. Eventually, Wales and Sanger concluded that a web-based encyclopedia would be better if it were publicly editable, relying on its audience to do the editing in real time and using hypertext to link to other sources of information. This practice, known as a “wiki,” provided the other piece of the website’s title. Though Wikipedia began as a completely open platform, its core team has implemented quality-control rules over the past 20 years, increasing the validity of the website’s content. Through its adherence to open accessibility and collaborative editing, Wikipedia had become the most expansive encyclopedia in world history within six years of its founding. It has largely succeeded in maintaining and improving upon its open-source nature, and it stands as a success story for this type of framework.

 

Embrace Openness

Openness to public access and to collaborative enterprise, rather than detracting from information quality, builds greater trust in it. With simple, transparent mechanisms for quality control and general enforcement of community-agreed-upon standards, open processes of inquiry and development produce highly trustworthy information. The veracity of results produced in this manner over the past 20 years speaks for itself. Systematized openness and collaborative methods of engagement are proving to be humanity’s best means of safeguarding the validity of its findings as we continue to uncover the unknown.

Related Reading: The Internet Should Be More Like Wikipedia
