Novelty bias has been observed in several therapeutic areas. The mechanisms by which an intervention appears better when it is new, or newly applied to a particular purpose, are not known and may involve other forms of bias having a greater effect while the intervention is new. Novelty bias may comprise selection bias (e.g. participants in early trials of a medicine are more carefully selected than in later trials), positive result bias (e.g. positive results of a treatment are selectively reported when it is new and less selectively reported later) and other forms of bias such as outcome reporting bias, confirmation bias, and hot stuff bias.
Novelty bias refers to the mere appearance that a treatment is better when it is new; there are other circumstances where a new treatment actually is better when it is new (e.g. an antimicrobial treatment may genuinely perform better when first introduced due to lower rates of resistance).
A network meta-analysis of 522 trials (116 477 participants) found that a medicine appeared more effective when it was novel (by a factor of 1.18) than when the very same medicine was later trialled against a newer one.
Similarly, a meta-regression of 37 trials of fluoxetine found that, after adjusting for possible confounders, its observed efficacy was associated with whether fluoxetine was the experimental drug or the comparator.
A meta-regression of 61 trials of hepatitis C treatments found that sustained response, assessed by transaminases or virus-RNA counts, was 11.9 % greater when a treatment was labelled experimental than when the same treatment served as the control.
A multiple-treatments meta-analysis model of 229 trials of chemotherapeutic agents for ovarian, colorectal and breast cancer found that treatment effects were exaggerated by 6 % due to the novelty effect (95 % credible interval: 2 to 16 %).
Based on meta-analyses of clinical trials of medicines, novelty bias can cause an intervention to appear between 2 and 27 % better when the treatment is novel. There may be some clinical areas where novelty bias has little or no effect on the literature and other areas where it has a large effect. This remains an important area for future research.
Authors of studies of novel interventions should explicitly state whether the difference observed is likely to be explained by novelty bias. Journal editors could insist that authors mention novelty bias when it may well be at play, such as when a trial of a novel intervention makes it appear slightly better than an older intervention. Similarly, authors of later studies with less favourable results for the formerly novel intervention should try to explain the differences between the studies that may account for the different results.
Clinicians and patients should take novelty bias into account when making decisions based on the results of studies. For example, results suggesting that a new antidepressant is 10 % better than an old one are consistent with the two being equivalent and the new one appearing better only because of novelty bias.
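The arithmetic behind this example can be sketched as follows. This is a hypothetical illustration only: the 10 % apparent advantage and the bias factors are illustrative numbers spanning the 2 to 27 % range reported above, not estimates from any specific study, and it assumes novelty bias acts as a simple multiplicative inflation of the effect ratio.

```python
# Illustrative sketch: deflating an observed effect ratio by an assumed
# multiplicative novelty-bias factor. All numbers are hypothetical.

def adjusted_ratio(observed_ratio: float, novelty_bias_factor: float) -> float:
    """Remove an assumed multiplicative novelty-bias inflation
    from an observed effect ratio."""
    return observed_ratio / novelty_bias_factor

observed = 1.10  # new antidepressant appears 10 % better than the old one
for bias in (1.02, 1.10, 1.27):  # assumed bias factors spanning 2-27 %
    adjusted = adjusted_ratio(observed, bias)
    print(f"bias factor {bias:.2f} -> adjusted ratio {adjusted:.2f}")
```

With a bias factor of 1.10, the adjusted ratio is 1.00, i.e. the observed advantage is fully consistent with the two treatments being equivalent; with a larger assumed bias the new treatment could even be slightly worse.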