The ideal diagnostic test would have both high sensitivity (the proportion of people with the disease who test positive) and high specificity (the proportion of people without the disease who test negative). However, the sensitivity and specificity of diagnostic tests vary across settings. One contributing factor is spectrum bias: because each setting (e.g. primary care, emergency care, hospital care) sees a different mix of patients, the sensitivity and specificity of a test can be affected by differences in patient characteristics between settings.
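As a minimal sketch of these definitions, sensitivity and specificity can be computed from a 2×2 table of test results against disease status. The counts below are invented, purely illustrative numbers:

```python
# Hypothetical 2x2 table (illustrative counts, not from any study)
TP, FN = 90, 10   # people with the disease: test positive / test negative
FP, TN = 20, 180  # people without the disease: test positive / test negative

sensitivity = TP / (TP + FN)  # proportion of diseased who test positive
specificity = TN / (TN + FP)  # proportion of non-diseased who test negative

print(sensitivity)  # 0.9
print(specificity)  # 0.9
```

Note that the positive and negative predictive values (the proportions of test-positive and test-negative people who do or do not have the disease) are different quantities, computed down the columns of the same table rather than across its rows.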
A systematic review assessed the diagnostic accuracy of anti-citrullinated peptide antibodies (ACPA) in diagnosing rheumatoid arthritis and included 151 primary studies of different study designs. The case-control studies, which included individuals with (cases) and without rheumatoid arthritis (controls), over-estimated the sensitivity of the ACPA test.
A review of carpal tunnel syndrome diagnostic accuracy studies showed that both sensitivity and specificity were overestimated in studies using a case-control design. Test performance can also vary with the characteristics of the affected population, as illustrated, for example, by a study assessing the performance of an exercise test for coronary artery disease.
Spectrum bias can have varying effects on sensitivity and specificity. For example, diagnostic accuracy studies that use a case-control design tend to overestimate both sensitivity and specificity.
A review of meta-analyses of diagnostic accuracy studies found that the largest overestimation of accuracy occurred in studies that included severe cases and healthy controls (relative diagnostic odds ratio 4.9, 95% confidence interval 0.6 to 37). Severe cases are easier to detect, increasing the estimated sensitivity, while healthy controls yield fewer false-positive results, overestimating the specificity. Studies that used other case-control designs produced estimates of accuracy similar to those of diagnostic cohort studies.
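The diagnostic odds ratio combines sensitivity and specificity into a single measure, and a relative diagnostic odds ratio compares the value under one design with another. The sketch below uses hypothetical accuracy values (not figures from the review) to show how modest inflation of both sensitivity and specificity multiplies the odds ratio:

```python
def dor(sens, spec):
    # Diagnostic odds ratio: odds of a positive test in the diseased
    # divided by odds of a positive test in the non-diseased.
    return (sens / (1 - sens)) / ((1 - spec) / spec)

# Hypothetical values: a cohort study vs a case-control study of
# severe cases and healthy controls (chosen purely for illustration).
dor_cohort = dor(0.80, 0.85)        # ≈ 22.7
dor_case_control = dor(0.92, 0.95)  # ≈ 218.5
relative_dor = dor_case_control / dor_cohort  # ≈ 9.6
```

Because both the sensitivity odds and the specificity odds are inflated, their product grows quickly, which is why spectrum bias can produce such large relative diagnostic odds ratios.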
Studies that do not recruit patients consecutively are associated with an overestimation of the diagnostic odds ratio of about 50% compared with those that use a consecutive series of patients.
Primary studies of diagnostic test accuracy should recruit individuals who represent the population in which the test will typically be used, in line with the STARD guidelines and PRISMA for diagnostic test accuracy. Ideally, individuals should be recruited consecutively. Failing that, the case mix in the study sample should be explicitly stated.
Systematic reviews of diagnostic accuracy should include prespecified subgroup analyses of all clinically relevant patient characteristics.
When using a particular diagnostic test, clinicians should be aware of the differences in characteristics between the patient or population in front of them and those individuals included in the research. This information may be used to consider potential changes in sensitivity or specificity of the test, and the effect on the predictive value.
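To illustrate the effect on predictive value, the positive and negative predictive values depend on disease prevalence as well as on sensitivity and specificity. This sketch applies Bayes' theorem with invented accuracy and prevalence figures:

```python
def predictive_values(sens, spec, prevalence):
    # Bayes' theorem: probability of disease given a positive test (PPV)
    # and of no disease given a negative test (NPV).
    ppv = (sens * prevalence) / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = (spec * (1 - prevalence)) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# The same hypothetical test (sensitivity 0.90, specificity 0.90)
# applied in two settings with different (invented) prevalences:
ppv_primary, _ = predictive_values(0.90, 0.90, 0.02)   # low-prevalence primary care
ppv_hospital, _ = predictive_values(0.90, 0.90, 0.30)  # higher-prevalence hospital
# PPV rises sharply with prevalence (≈ 0.16 vs ≈ 0.79 here)
```

Even with sensitivity and specificity held fixed, the same positive result carries very different weight in the two settings, which is why clinicians should compare their own patient population with the one studied.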