Spin bias

The intentional or unintentional distorted interpretation of research results, unjustifiably suggesting more favourable or unfavourable findings than the data warrant, which can lead to misleading conclusions


The best available clinical evidence should inform decisions in healthcare, so the results of clinical research should be reported and presented accurately. As Fletcher and Black have pointed out, “the data should speak for themselves”.

However, researchers may be tempted to distort the interpretation of their (or others’) results by adding “spin”, presenting findings in a more favourable (or unfavourable) light than is justified and thereby misleading readers.

Such actions can be tempting. For example, spin may suggest that a hypothesis was correct when it was not (or vice versa), demonstrate “impact” by attracting media attention, or act as a marketing tool to influence research users.

Spin bias in health research is manifested in many ways, including (but not limited to):
● Attribution of causality when study design and analysis do not justify it
● Unjustified focus on a secondary endpoint
● Stressing a statistically significant primary endpoint and ignoring statistically non-significant primary endpoints
● Claiming non-inferiority/equivalence for a statistically non-significant endpoint
● Implying significance by referring to trends
● Inferring significance from statistical differences in subgroups
● Stressing per protocol rather than intention to treat analysis
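Several of these practices are statistical in nature. The risk behind “inferring significance from statistical differences in subgroups” can be illustrated with a short simulation (a minimal sketch in Python; the function name and parameters are hypothetical, chosen only for illustration): in a trial with no true treatment effect at all, slicing the data into many subgroups will reliably yield some “significant” differences by chance alone.

```python
import random

def subgroup_false_positives(n_subgroups=20, n_per_arm=50, z_crit=1.96):
    """Simulate a trial with NO true treatment effect, analysed as many
    independent subgroups, and count how many appear 'significant' by chance."""
    hits = 0
    for _ in range(n_subgroups):
        # Both arms are drawn from the same distribution: the null is true.
        treatment = [random.gauss(0, 1) for _ in range(n_per_arm)]
        control = [random.gauss(0, 1) for _ in range(n_per_arm)]
        diff = sum(treatment) / n_per_arm - sum(control) / n_per_arm
        se = (2 / n_per_arm) ** 0.5  # SE of the difference in means (known SD = 1)
        if abs(diff / se) > z_crit:  # two-sided test at alpha = 0.05
            hits += 1
    return hits

random.seed(0)
print(subgroup_false_positives())  # under the null, roughly 5% of subgroups "hit"
```

With 20 subgroups, about one spurious “significant” subgroup is expected even when the treatment does nothing, which is why a highlighted subgroup finding in an otherwise null trial deserves scepticism.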

The EQUATOR Network deems such practices to be misleading reporting that reduces the completeness, transparency, and value of reports of health research.


Boutron and colleagues were among the first investigators to study the presence of spin in reports of randomized trials. They defined ‘spin’ in the context of a trial with statistically nonsignificant primary outcomes as follows:

‘the use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results.’

They identified 72 randomized controlled trials in which there were no statistically significant between-group differences in the primary outcomes. They examined both the abstract and the main text of each report and found that more than 68% of abstracts and 61% of main texts contained some form of spin.

The analysis identified many strategies for spinning (see Table 2 in their paper). Among the most common techniques were focusing reports on statistically significant differences in secondary outcomes, or focusing on another study objective to distract readers’ attention from a statistically non-significant difference.

A more recent systematic review explored the nature, prevalence and implications of spin in 35 reports that met the inclusion criteria. The review included studies that investigated the association of spin with the following factors:

● conflicts of interest and study funding;
● author characteristics;
● journal characteristics; and
● study design and/or quality.

However, the authors pointed out that the heterogeneity and the small number of studies limited the strength of the conclusions.


Spin may influence how evidence users interpret information; however, few studies have explored this. A randomised controlled trial allocated 150 clinicians to assess a sample of cancer-related report abstracts containing spin and another 150 clinicians to evaluate the same abstracts with the spin removed. Although the absolute size of the difference observed was small, the study found that the presence of spin made clinicians more likely to report that the treatment was beneficial. Paradoxically, the study also showed that spin led clinicians to rate the study as less rigorous, and they were more likely to want to review the full text of the article.

Two articles on statins containing elements of spin, followed by wide debate in the media, were associated with an 11% and 12% increase in the likelihood of existing users stopping their treatment for primary and secondary prevention, respectively. Such effects could result in more than 2,000 extra cardiovascular events across the United Kingdom over a 10-year period.

Preventive steps

As pointed out by Chiu and colleagues, there are several ways to prevent spin. These include peer reviewers and journal editors checking that manuscripts, particularly abstracts, results and discussion sections, are consistent with the overall main findings of the research.

But the onus for preventing spin lies first and foremost with researchers themselves. They must resist the temptation to play up (or play down) their research findings, but also resist the pressure of other parties seeking to do the same, such as the publicity department of an institution.

The results of a 2019 randomised trial suggested that refraining from spinning findings did not harm news and media interest.

Evidence used for decision making should not be based on single studies, but instead, be informed by systematic reviews of relevant research. However, as highlighted in another analysis, evidence users should be aware of the spin that can infect systematic reviews. Research submissions to journals should be accompanied by full data sets (where ethically appropriate), to allow subsequent reanalysis and promote replication. Some journals are already asking for this.

Evidence users can also consider using tools to help detect spin. Nine of the included studies in the Chiu et al. (2017) review used a framework for identifying spin first described by Boutron and colleagues. Twenty-three reports used a bespoke framework, although less than half had any evidence of previous piloting or validation of the tool.


Adams RC et al. Claims of causality in health news: a randomised trial. BMC Medicine 2019; 17:91.

Boutron I et al. Impact of spin in the abstracts of articles reporting results of randomized controlled trials in the field of cancer: the SPIIN randomized controlled trial. Journal of Clinical Oncology 2014; 32(36): 4120–26.

Boutron I et al. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA 2010; 303(20): 2058–64.

Chiu K, Grundy Q, Bero L. “Spin” in published biomedical literature: A methodological systematic review. PLOS Biology 2017; 15(9): e2002173.

Fletcher RH, Black B. Spin in scientific writing: scientific mischief and legal jeopardy. Med Law 2007; 26(3): 511-25.

Mahtani KR. “Spin” in reports of clinical research. BMJ Evidence-Based Medicine 2016; 21: 201-2.

Matthews A et al. Impact of statin related media coverage on use of statins: interrupted time series analysis with UK primary care data. BMJ 2016; 353: i3283.

Yavchitz A et al. A new classification of spin in systematic reviews and meta-analyses was developed and ranked according to the severity. J Clin Epidemiol 2016; 75: 56-65.
