Dickersin & Min define publication bias as the failure to publish the results of a study “on the basis of the direction or strength of the study findings.” This non-publication introduces a bias that impairs the ability to accurately synthesize and describe the evidence in a given area. Publication bias is a type of reporting bias and is closely related to dissemination bias, although dissemination bias generally applies to all forms of results dissemination, not simply journal publications. A variety of distinct biases are often grouped under the overall definition of publication bias.
A number of reasons for publication bias have been identified in the literature. Research has identified causes ranging from trialist motivation, past experience, and competing commitments, to perceived or real lack of interest in results from editors, reviewers, or other colleagues, to conflicts of interest that lead to the suppression of results not aligned with a specific agenda.
In his 1986 piece on publication bias in clinical research, Robert John Simes compared data reported to a cancer trial registry with data from the published literature on the survival impact of two cancer therapies. Simes found that in both cases the survival benefit of the therapies either disappeared or was substantially smaller when the subset of data published in the academic literature was compared against the more complete data from the registry.
Publication bias is commonly assessed in cohort studies, such as the Simes example, where publication status is ascertained for a group of known completed trials. Research into treatments for depression provides a more recent example. Turner and colleagues reported that 31% of a cohort of antidepressant drug studies registered and reported to the FDA were never published. The published literature comprised 91% positive studies, while the larger FDA cohort contained only 51% positive studies. Driessen and colleagues reviewed all NIH grants for psychological treatments for depression from 1972 to 2008. When publications could not be found, data were requested from the grant recipients. Thirteen of the 55 trials (23.6%) arising from this cohort were never published. The effect size of psychological treatments was reduced by 25% when unpublished data were included in the pooled analysis alongside the published data.
Results of cohort studies such as these have been collected in systematic reviews. A 2013 systematic review by Dwan and colleagues reviewed 20 cohort studies on publication bias in randomized controlled trials and showed “statistically significant outcomes had a higher odds of being fully reported compared to non-significant outcomes (range of odds ratios: 2.2 to 4.7).” A 2014 systematic review by Schmucker and colleagues examined studies of publication and dissemination bias conducted using research approved by ethics committees or registered on a trial registry. They found that across 23 cohort studies “statistically significant results were more likely to be published than those without (pooled OR 2.8; 95% CI 2.2–3.5).”
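To make the pooled odds ratios reported in these reviews concrete, the following is a minimal sketch of fixed-effect inverse-variance pooling on the log odds ratio scale, the standard approach behind figures like the pooled OR of 2.8 above. The per-study odds ratios and confidence intervals here are invented for illustration and are not the data from either review.

```python
import math

# Hypothetical per-study odds ratios with 95% CIs (illustrative only,
# not the actual Dwan or Schmucker data).
studies = [
    {"or": 2.5, "ci": (1.8, 3.5)},
    {"or": 3.2, "ci": (2.1, 4.9)},
    {"or": 2.4, "ci": (1.5, 3.8)},
]

def pool_fixed_effect(studies):
    """Fixed-effect inverse-variance pooling on the log-OR scale."""
    num = den = 0.0
    for s in studies:
        log_or = math.log(s["or"])
        lo, hi = s["ci"]
        # Recover the standard error from the width of the 95% CI.
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2          # inverse-variance weight
        num += w * log_or
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

or_, lo, hi = pool_fixed_effect(studies)
print(f"pooled OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note that larger studies (narrower confidence intervals) dominate the pooled estimate, which is why the pooled OR lands between the individual study values rather than at their simple average.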
The above examples help illustrate the impact of publication bias, which can vary from the non-publication of a single notable study to the compromised assessment of an entire therapeutic area. However, as with many biases, large-scale quantitative research has tended to focus on documenting the prevalence of publication bias rather than its impact, and assessing the direction and magnitude of the bias can be difficult. Schmucker and colleagues conducted a systematic review of studies on publication bias that additionally estimated the impact of unpublished studies on pooled effects. They identified only seven such studies; these showed an increase in the precision of the pooled effect when unpublished data were considered, and in two of them the unpublished data had a statistically significant effect on the pooled estimates.
Prevention of publication bias can take many forms. Certain journals have made the solicitation and publication of null results a part of their core mission. However, many of the documented barriers to publication cannot be addressed merely by the existence of journals receptive to null results.
The past decade has seen various initiatives in the US and EU requiring certain trials to report results directly to clinical trial registries in a structured data format within 12 months of completion, providing an additional data source free of the barriers to publication in academic journals. Unfortunately, there is growing evidence that these laws and guidelines are undermined by loopholes and poor compliance.
Authors of systematic reviews and meta-analyses can also take steps to reduce the impact of non-publication on their work. The search for evidence should not be limited to journal articles indexed in repositories such as PubMed or Ovid. Authors can and should search for results through other routes, including trial registries, regulatory documents, and contacting the trialists of known or suspected unpublished work. They can also use statistical methods to estimate whether their sample of studies is likely affected by publication bias. Funnel plots are a common way to visualise skew in published findings but should be interpreted carefully. More rigorous statistical methods for assessing publication bias exist and should be considered for use in meta-research when appropriate.
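One of the more rigorous methods alluded to above is Egger's regression test, which formalises what a funnel plot shows visually: in an unbiased literature, study effect size should not depend on study precision. A sketch of the test is below; the effect sizes and standard errors are invented to mimic a biased literature in which small studies (large standard errors) report larger effects, and this is a simplified illustration rather than a full implementation with significance testing.

```python
import math

# Hypothetical (effect size, standard error) pairs for a meta-analysis.
# Smaller studies (larger SE) show larger effects, the pattern a funnel
# plot would display as asymmetry.
studies = [(0.90, 0.40), (0.70, 0.30), (0.55, 0.22),
           (0.45, 0.15), (0.40, 0.10), (0.38, 0.07)]

def egger_intercept(studies):
    """Egger's regression test: regress the standardized effect
    (effect / SE) on precision (1 / SE) by ordinary least squares.
    An intercept far from zero suggests funnel-plot asymmetry,
    one possible signal of publication bias."""
    xs = [1.0 / se for _, se in studies]
    ys = [eff / se for eff, se in studies]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx  # the intercept is the asymmetry statistic

print(f"Egger intercept: {egger_intercept(studies):.2f}")
```

In practice the intercept is reported with a confidence interval and p-value, and the test has low power with few studies, which is one reason funnel-based methods should complement, not replace, searching registries and regulatory documents for unpublished results.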