Publication bias

When the likelihood of a study being published is affected by the findings of the study.

Background

Dickersin & Min define publication bias as the failure to publish the results of a study “on the basis of the direction or strength of the study findings.” This non-publication introduces a bias which impacts the ability to accurately synthesize and describe the evidence in a given area. Publication bias is a type of reporting bias and closely related to dissemination bias, although dissemination bias generally applies to all forms of results dissemination, not simply journal publications. A variety of distinct biases are often grouped into the overall definition of publication bias.

A number of reasons for publication bias have been identified in the literature, including trialist motivation, past experience, and competing commitments; perceived or real lack of interest in the results from editors, reviewers, or other colleagues; and conflicts of interest that lead to the suppression of results not aligned with a specific agenda.

Example

In his 1986 paper on publication bias in clinical research, Robert John Simes compared data reported to a cancer trial registry with data from the published literature on the survival impact of two cancer therapies. Simes found that in both cases the survival impact of the therapies either disappeared or was substantially smaller when the subset of data published in the academic literature was compared against the more complete data from the registry.

Publication bias is commonly assessed in cohort studies, such as the Simes example, where publication status is ascertained for a group of known completed trials. Research into treatments for depression provides a more recent example. Turner and colleagues reported that 31% of a cohort of antidepressant drug studies registered and reported to the FDA were never published. The published literature contained 91% positive studies, while the larger FDA cohort contained only 51% positive studies. Driessen and colleagues reviewed all NIH grants for psychological treatments for depression from 1972 to 2008; when publications could not be found, data were requested from the grant recipients. 13 of the 55 trials (23.6%) arising from this cohort were never published, and the pooled effect size of psychological treatments was reduced by 25% when the unpublished data were included alongside the published data.

Results of cohort studies such as these have been collected in systematic reviews. A 2013 systematic review by Dwan and colleagues covered 20 cohort studies on publication bias in randomized controlled trials and found that “statistically significant outcomes had a higher odds of being fully reported compared to non-significant outcomes (range of odds ratios: 2.2 to 4.7).” A 2014 systematic review by Schmucker and colleagues examined studies of publication and dissemination bias conducted on research approved by ethics committees or registered on a trial registry. Across 23 cohort studies, they found that “statistically significant results were more likely to be published than those without (pooled OR 2.8; 95% CI 2.2–3.5).”

Impact

The examples above illustrate the impact of publication bias, which can range from the non-publication of a single notable study to the compromised assessment of an entire therapeutic area. However, as with many biases, large-scale quantitative research has tended to focus on documenting the prevalence of publication bias rather than its impact, and assessing the direction and magnitude of the bias can be difficult. Schmucker and colleagues conducted a systematic review of studies on publication bias that additionally estimated the impact of unpublished studies on pooled effects. They identified only seven such studies, which showed an increase in the precision of the pooled effect when unpublished data were considered, and two studies that showed a statistically significant effect of unpublished data on the pooled estimates.

Preventive steps

Prevention of publication bias can take many forms. Certain journals have made the solicitation and publication of null results part of their core mission. However, many of the documented barriers to publication cannot be addressed simply by the presence of journals receptive to null results.

The past decade has seen various initiatives in the US and EU requiring certain trials to report results directly to clinical trial registries in a structured data format within 12 months of completion, providing an additional data source free of the barriers to publication in academic journals. However, there is growing evidence that these laws and guidelines are undermined by loopholes and poor compliance.

Authors of systematic reviews and meta-analyses can also take steps to reduce the impact of non-publication on their work. The search for evidence should not be limited to journal articles indexed in repositories such as PubMed or Ovid. Authors can and should search for results through other routes, including trial registries, regulatory documents, and contacting the trialists of known or suspected unpublished work. They can also use statistical methods to assess whether their sample of studies is likely to be affected by publication bias. Funnel plots are a common way to visualise skew in the publication of findings but should be interpreted carefully. More rigorous statistical methods for assessing publication bias exist and should be considered for use in meta-research when appropriate.
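As an illustration of one such statistical method, the sketch below implements Egger's regression test for funnel-plot asymmetry (Egger et al. 1997) on simulated data. This is a minimal demonstration, not part of any study described above: the simulated effect sizes and the crude selection rule that mimics publication bias are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy import stats

def egger_test(effects, se):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision (1 / SE).
    An intercept far from zero suggests small-study effects, one possible
    signature of publication bias.
    """
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(se, dtype=float)
    precision = 1.0 / se
    standardized = effects / se
    res = stats.linregress(precision, standardized)
    # Two-sided t-test on the intercept with n - 2 degrees of freedom.
    t_stat = res.intercept / res.intercept_stderr
    p_value = 2.0 * stats.t.sf(abs(t_stat), df=len(effects) - 2)
    return res.intercept, p_value

# Illustrative example: simulate 40 studies with a true log odds ratio of
# 0.3, then mimic publication bias by "publishing" only studies that are
# either statistically significant or large (small SE) -- an assumed rule.
rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.6, 40)
effects = rng.normal(0.3, se)
published = (effects / se > 1.96) | (se < 0.2)

print(egger_test(effects, se))                        # full cohort
print(egger_test(effects[published], se[published]))  # "published" subset
```

In a real review the inputs would be the extracted study effects and their standard errors rather than simulated values, and, as noted above, such tests can have low power and should be interpreted alongside the funnel plot itself rather than in isolation.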

Sources

Chan A-W. Out of sight but not out of mind: how to search for unpublished clinical trial evidence. BMJ 2012;344:d8013.

Cowley AJ et al. The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction: an example of publication bias. Int J Cardiol 1993;40:161–66.

Dickersin K, Min YI. Publication bias: the problem that won’t go away. Ann N Y Acad Sci 1993;703:135–46; discussion 146–48.

Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA 1990;263:1385–9.

Driessen E et al. Does publication bias inflate the apparent efficacy of psychological treatment for major depressive disorder? A systematic review and meta-analysis of US National Institutes of Health-funded trials. PLoS One 2015;10:e0137864.

Dwan K et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias – an updated review. PLoS One 2013;8:e66844.

Egger M et al. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997;315:629.

Goldacre B et al. Compliance with requirement to report results on the EU Clinical Trials Register: cohort study and web resource. BMJ 2018;362:k3218

Ioannidis JPA, Trikalinos TA. The appropriateness of asymmetry tests for publication bias in meta-analyses: a large survey. CMAJ 2007;176:1091–6.

Lexchin J et al. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ 2003;326:1167–70.

Müller KF et al. Defining publication bias: protocol for a systematic review of highly cited articles and proposal for a new framework. Syst Rev 2013;2:34.

Murad MH et al. The effect of publication bias magnitude and direction on the certainty in evidence. BMJ Evid Based Med Published Online First: 12 April 2018. doi:10.1136/bmjebm-2018-110891

Schmucker C et al. Extent of non-publication in cohorts of studies approved by research ethics committees or included in trial registries. PLoS One 2014;9:e114023.

Schmucker et al. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research. PLoS One 2017;12:e0176210.

Schneck A. Examining publication bias-a simulation-based evaluation of statistical tests on publication bias. PeerJ 2017;5:e4115.

Simes RJ. Publication bias: the case for an international registry of clinical trials. J Clin Oncol 1986;4:1529–41.

Song F et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 2010;14(8):iii, ix–xi, 1–193.

Sterne JAC et al. Chapter 10: Addressing reporting biases. In: Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions. The Cochrane Collaboration 2011.

Sterne JA, Egger M. Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis. J Clin Epidemiol 2001;54:1046–55.

Tang JL, Liu JL. Misleading funnel plot for detection of bias in meta-analysis. J Clin Epidemiol 2000;53:477–84.

Turner EH et al. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252–60.

Zarin D et al. Trial reporting in ClinicalTrials.gov – the Final Rule. N Engl J Med 2016;375:1998–2004.
