Attrition occurs when participants leave during a study. It almost always happens to some extent.
Different rates of loss to follow-up between the exposure groups, or losses of different types of participants (whether at similar or different rates), may change the characteristics of the groups independently of the exposure or intervention. Losses may be driven by factors such as poor treatment efficacy or intolerable adverse events.
When participants leave, it may not be known whether they continue or discontinue an intervention; there may be no data on outcomes for these participants after that time.
Systematic differences between people who leave the study and those who continue can introduce bias into a study’s results – this is attrition bias. However, the results may not necessarily be biased, despite different drop-out rates in the groups. We discuss below how to assess the impact of different amounts of attrition.
In some cases, those who leave a study are likely to be different from those who continue. For instance, in an intervention study of diet in people with depression, those with more severe depression might find it harder to adhere to the diet regimen and might therefore be more likely to leave the study.
A study of psychosocial factors among patients with cardiac conditions showed that those who fully completed the study differed in clinical and psychosocial features from those who dropped out before the study ended. Such differential attrition could have biased the study’s results.
A trial investigating quality of life among patients randomised to aggressive treatment of renal cancer had high rates of attrition owing to toxicity, disease progression, and deaths (64% in the control group; 70% in the intervention group). Analysis of those still in the trial showed no difference in quality of life. However, similar drop-out rates in the two groups do not rule out attrition bias: with losses this large, the participants remaining may no longer be representative of those randomised, so the estimate may still be biased.
A systematic review assessed the reporting, extent, and handling of loss to follow-up, and its potential impact on treatment effects, in randomised controlled trials published in the five top medical journals. The authors calculated the percentage of trials in which the relative risk would no longer be statistically significant when the assumed event rates among participants lost to follow-up were varied. Across 160 trials, with an average loss to follow-up of 6%, between 0% and 33% of trials lost statistical significance, depending on the event rates assumed for those lost in the intervention groups relative to the control groups.
Potential impact on estimated treatment effects of information lost to follow-up in randomised controlled trials (LOST-IT): a systematic review. BMJ 2012;344:e2809.
Techniques for preventing losses to follow-up include good communication between study staff and participants, accessible clinics, effective channels for contacting participants, incentives to continue, and ensuring that the study is relevant to the participants.
However, for many studies, complete follow-up is unlikely. In such cases, the reasons for attrition should be carefully considered. After the study has been completed, a number of analysis methods can be used to reduce the impact of attrition bias.
Intention-to-treat analysis: Because anything that happens after randomisation can affect the chance that a study participant has the outcome of interest, it is important that all patients (even those who fail to take their medicine or accidentally or intentionally receive the wrong treatment) are analysed in the groups to which they were allocated.
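The contrast between intention-to-treat and "as-treated" grouping can be sketched with a small, entirely hypothetical data set (all participant records and variable names below are invented for illustration):

```python
# Hypothetical trial data: each participant's assigned arm, the treatment
# actually received, and a binary outcome (1 = improved).
participants = [
    {"id": 1, "assigned": "drug",    "received": "drug",    "improved": 1},
    {"id": 2, "assigned": "drug",    "received": "placebo", "improved": 0},  # crossed over
    {"id": 3, "assigned": "drug",    "received": "drug",    "improved": 1},
    {"id": 4, "assigned": "placebo", "received": "placebo", "improved": 0},
    {"id": 5, "assigned": "placebo", "received": "drug",    "improved": 1},  # crossed over
    {"id": 6, "assigned": "placebo", "received": "placebo", "improved": 0},
]

def event_rate(group, key):
    """Proportion improved among participants matching `group` on `key`."""
    rows = [p for p in participants if p[key] == group]
    return sum(p["improved"] for p in rows) / len(rows)

# Intention-to-treat: analyse by ASSIGNED arm, ignoring what was received.
itt_drug = event_rate("drug", "assigned")        # 2/3
itt_placebo = event_rate("placebo", "assigned")  # 1/3

# As-treated (for contrast): analyse by the treatment actually received.
at_drug = event_rate("drug", "received")         # 3/3
at_placebo = event_rate("placebo", "received")   # 0/3
```

Grouping by assignment preserves the comparability created by randomisation; grouping by treatment received does not, which is why the two approaches can give different answers.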
It is important not only to look for the term 'intention-to-treat analysis' in the methods section but also to check the results to ensure that the analysis was actually done.
Methods for dealing with missing data include last observation (or baseline value) carried forward, mixed models, imputation and sensitivity analysis using ‘worst case’ scenarios (assuming that those with no information all got worse) and ‘best case’ scenarios (assuming that all got better). Analysing data only from participants remaining in the study is called complete case analysis.
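Two of the approaches above, last observation carried forward and complete case analysis, can be sketched in a few lines. The data and names here are hypothetical, and real analyses would use a statistical package rather than hand-rolled code:

```python
# Hypothetical longitudinal outcome: one score per visit; None marks
# values missing after a participant dropped out.
scores = {
    "p1": [10, 12, 14],
    "p2": [10, 11, None],   # dropped out after visit 2
    "p3": [9, None, None],  # dropped out after visit 1
}

def locf(series):
    """Last observation carried forward: replace each missing value
    with the most recent observed value."""
    filled, last = [], None
    for v in series:
        last = v if v is not None else last
        filled.append(last)
    return filled

# Complete case analysis: keep only participants with no missing data.
complete_cases = {pid: s for pid, s in scores.items() if None not in s}

locf_final = [locf(s)[-1] for s in scores.values()]  # final visit, imputed
cc_final = [s[-1] for s in complete_cases.values()]  # final visit, completers only
```

Note how the complete case analysis discards two of the three participants, while LOCF keeps them in but assumes their scores stopped changing at drop-out; both assumptions can bias results, which is why sensitivity analyses are recommended.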
A rule of thumb states that <5% attrition leads to little bias, while >20% poses serious threats to validity. While this is useful, it is important to note that even small proportions of patients lost to follow-up can cause significant bias. One way to determine whether losses to follow-up can seriously affect results is to assume a worst-case scenario for the outcomes in those with missing data and look to see if the results would change. If this method doesn’t change the study’s conclusions, the loss to follow-up is likely not a threat to the study’s validity.
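The worst-case check described above amounts to simple arithmetic. A minimal sketch, using invented trial numbers purely for illustration:

```python
# Hypothetical trial: 100 randomised per arm, 10 lost to follow-up in each.
# Observed events (e.g. treatment failures) among the 90 followed up per arm.
n_per_arm, lost = 100, 10
events_treat, events_control = 18, 30

# Relative risk among those with complete follow-up.
observed_rr = (events_treat / (n_per_arm - lost)) / (events_control / (n_per_arm - lost))

# Worst case against the treatment: assume every participant lost from the
# treatment arm had the event, and none of those lost from the control arm did.
worst_rr = ((events_treat + lost) / n_per_arm) / (events_control / n_per_arm)

print(round(observed_rr, 2), round(worst_rr, 2))  # 0.6 vs 0.93
```

Here the apparent benefit (relative risk 0.6) shrinks towards no effect (0.93) under the worst-case assumption, so in this invented example a loss of only 10% per arm could threaten the conclusion; if the worst-case estimate had stayed close to the observed one, the losses would be less of a concern.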
Regardless of the mechanisms used to obtain estimates of outcome data, the reasons that participants leave the study should be carefully considered: if people leave for reasons unrelated to the exposure (treatment) or the outcome this may have little or no impact on the results.