Lack of blinding

The lack of concealment of an intervention or control treatment received by participants in a clinical trial.

Background

The aim of blinding is to reduce bias arising from study participants, or those involved in the trial, knowing which intervention or control is being received. Blinding in a trial can be single-, double-, or triple-blind; however, because these terms are easily confused, what matters most is defining exactly who was blinded.

Who: Study participants
Why: To ensure participants do not change their behaviour as a result of knowing their group assignment, and do not report subjective outcome measures differently as a result. A participant who knows they are receiving the placebo might be disappointed: they might attend more doctor's appointments in order to try to get additional treatment, and they might be more likely to be lost to follow-up.
Source: Schulz KF, Grimes DA. Blinding in randomised trials: hiding who got what. Lancet. 2002;359:696–700

Who: Data collectors
Why: Study staff collecting data might record it differently for different participants if they know which group they are in.

Who: Outcome assessors
Why: Outcomes could be assessed differently if the assessors know which intervention each participant received. In a trial of treatments for multiple sclerosis, when blinded neurologists performed the disease assessment there was no difference between intervention and placebo, but when unblinded neurologists performed the assessment there was an apparent benefit of the intervention over the control treatment.
Source: Noseworthy JH, Ebers GC, Vandervoort MK, Farquhar RE, Yetisir E, Roberts R. The impact of blinding on the results of a randomized, placebo-controlled multiple sclerosis clinical trial. Neurology. 1994 Jan;44(1):16-20

Who: Data analysts
Why: To ensure the data analysis is not influenced during or after the trial until analyses are complete; for example, by conscious or unconscious selection of statistical tests and reporting.

Who: Clinicians caring for study participants
Why: To prevent clinicians from treating the intervention and control groups differently, and to prevent them from conveying their views on the allocation to the participants.
Source: Schulz KF, Grimes DA. Blinding in randomised trials: hiding who got what. Lancet. 2002;359:696–700


In most studies, blinding should be maintained until data analysis is complete. However, for some interventions blinding is not achievable: for instance, it is not possible to blind participants to dietary intake in a free-living experiment, and blinding can be more difficult to achieve in trials of procedures such as surgery. Treatment effects or associated adverse events might also be distinctive enough to reveal the allocation to a certain intervention.

The ethical argument supporting blinding is that, although the participant gives up knowledge of their treatment, clinical equipoise makes this acceptable. There are arguments that challenge the ethics of blinding study participants: for example, not knowing their allocation could make it more difficult for a participant in a clinical trial to receive individualised treatment.

Example

A well-documented example of lack of blinding is an early study of vitamin C in the prevention of the common cold. The participants could determine whether they were taking the intervention pill or the placebo pill because the two types of pill did not match in taste:


“A randomized trial published in 1975 (Karlowski 1975) was a classic example of how lack of adequate blinding in a trial results in serious bias.”

In a systematic review of Tamiflu, one reason for the high risk of bias was that, in many cases, the placebo capsule had a different coloured cap from that of the active capsule; this was not mentioned in the published papers and was only discovered through analysis of the clinical study reports.

Impact

Quantitative evidence demonstrates that blinding in clinical trials affects the reported results: Schulz and co-workers analysed data from 250 randomized controlled trials that had been included in 33 meta-analyses. Studies that did not report double-blinding showed treatment effect estimates (odds ratios) that were, on average, exaggerated by 17% compared with studies that did.

Objective outcomes, such as death, are less at risk of bias from lack of blinding, particularly lack of blinding of outcome assessors. Trials with subjective outcomes (those based on feelings, such as pain, rather than objectively measured outcomes such as diagnostic records) might be more at risk of bias from lack of blinding. For example, osteoarthritis trials tend to use patient-reported pain and function outcomes. A study examining the influence of lack of blinding in 122 such trials, including 27,452 participants, found that estimated treatment effects were smaller in trials with adequate blinding than in trials with inadequate blinding.

A systematic review of meta-analyses looked at the average bias and heterogeneity associated with reported methodological aspects of randomized controlled trials:

“Lack of/unclear double blinding (versus double blinding, where both participants and personnel/assessors are blinded) was associated with a 23% exaggeration of intervention effect estimates in trials with subjective outcomes (ROR 0.77, 95% CI 0.61 to 0.93). In contrast, there was little evidence of such bias in trials of mortality or other objective outcomes, or when all outcomes were analysed (ROR 0.92, 95% CI 0.74 to 1.14; I² = 33%).”
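A ratio of odds ratios (ROR) compares pooled effect estimates from inadequately blinded trials with those from adequately blinded ones; with benefit expressed as an odds ratio below 1, an ROR of 0.77 corresponds to roughly a 23% exaggeration. A minimal sketch of that arithmetic (the function name and the example odds ratios are illustrative assumptions, not figures from the review):

```python
def ratio_of_odds_ratios(or_unblinded: float, or_blinded: float) -> float:
    """Ratio of odds ratios (ROR): the pooled odds ratio from
    inadequately blinded trials divided by the pooled odds ratio
    from adequately blinded trials. With benefit expressed as an
    OR below 1, an ROR below 1 means the unblinded trials report
    exaggerated benefits."""
    return or_unblinded / or_blinded

# Hypothetical meta-analysis (made-up numbers): adequately blinded
# trials pool to OR 0.80, inadequately blinded trials to OR 0.62.
ror = ratio_of_odds_ratios(0.62, 0.80)
exaggeration_pct = (1 - ror) * 100  # an ROR of 0.77 corresponds to ~23%
print(f"ROR = {ror:.2f}; exaggeration of about {exaggeration_pct:.0f}%")
```

An ROR close to 1, as in the objective-outcome result quoted above, indicates little difference between the blinded and unblinded trials.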

Preventive steps

Blinding is a preventive procedure and should be used where ethically appropriate and feasible. For drug trials, this includes matching the placebo to the active treatment in colour, taste and dosing schedule. It is also important that trial reports state who was blinded, including whether the study investigators were blinded.

Allocation concealment is a related but separate procedure: it prevents foreknowledge of a participant's allocation and the deduction of subsequent allocations. Concealment of allocation aims to reduce allocation bias, a form of selection bias that occurs during recruitment and randomization.

Sources

Balk EM, Bonis PA, Moskowitz H, Schmid CH, Ioannidis JP, Wang C, Lau J. Correlation of quality measures with estimates of treatment effect in meta-analyses of randomized controlled trials. JAMA. 2002 Jun 12;287(22):2973-82

Hellman S, Hellman DS. Of mice but not men: problems of the randomized clinical trial. N Engl J Med. 1991;324(22):1585–9. doi: 10.1056/NEJM199105303242208

Jefferson T, Jones MA, Doshi P, et al. Oseltamivir for influenza in adults and children: systematic review of clinical study reports and summary of regulatory comments. BMJ. 2014;348:g2545

Jüni P, Altman DG, Egger M. Systematic reviews in health care: Assessing the quality of controlled clinical trials. BMJ. 2001 Jul 7; 323(7303):42-6

Noseworthy JH, Ebers GC, Vandervoort MK, Farquhar RE, Yetisir E, Roberts R. The impact of blinding on the results of a randomized, placebo-controlled multiple sclerosis clinical trial. Neurology. 1994 Jan;44(1):16-20

Nüesch E, Reichenbach S, Trelle S, Rutjes AW, Liewald K, Sterchi R, Altman DG, Jüni P. The importance of allocation concealment and patient blinding in osteoarthritis trials: a meta-epidemiologic study. Arthritis Rheum. 2009 Dec 15;61(12):1633-41. doi: 10.1002/art.24894.

Page MJ, Higgins JPT, Clayton G, Sterne JAC, Hróbjartsson A, Savović J. Empirical evidence of study design biases in randomized trials: systematic review of meta-epidemiological studies. PLoS One. 2016; 11(7): e0159267. doi:  10.1371/journal.pone.0159267

Porta M, Greenland S, Hernán M, dos Santos Silva I, Last M, editors. A dictionary of epidemiology. 6th edition. New York: Oxford University Press; 2014

Sackett DL. Bias in analytic research. J Chron Dis 1979; 32: 51-63

Savović J, Jones HE, Altman DG, Harris RJ, Jüni P, Pildal J, Als-Nielsen B, Balk EM, Gluud C, Gluud LL, Ioannidis JP, Schulz KF, Beynon R, Welton NJ, Wood L, Moher D, Deeks JJ, Sterne JA. Influence of reported study design characteristics on intervention effect estimates from randomized, controlled trials. Ann Intern Med. 2012 Sep 18;157(6):429-38

Schulz KF, Grimes DA. Blinding in randomised trials: hiding who got what. Lancet. 2002 Feb 23;359(9307):696-700

Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995 Feb 1; 273(5):408-12
