Hypothetical bias

A distortion that arises when an individual’s stated behaviour or valuation differs from their real behaviour or valuation.

Background

Hypothetical bias occurs when individuals report behaviours or values to researchers, in surveys or experimental studies, that do not match their real behaviours or values. In other words, what individuals say they would do hypothetically is not necessarily what they would do in reality.[1] The bias arises in stated preference studies (studies of individuals’ stated choices or valuations of goods and services), such as discrete choice experiments (DCEs), which are widely used across the health sciences. Hypothetical bias undermines the validity of a study’s results. It is considered particularly prevalent in healthcare because many treatments and services are ones that individuals may only experience in the future, or may never experience at all.

Hypothetical bias is thought to be linked to several factors. First, responses in stated preference settings are non-binding, so the implications to the individual of their responses are inconsequential (and respondents may not in fact agree with the policy implications of their own choices; see [2]). Second, the settings in which experiments or surveys are completed (e.g. online surveys) may be far removed from the settings in which the corresponding real-world behaviours take place (e.g. making decisions about treatment options in clinical settings). Lastly, respondents may answer strategically for a variety of reasons (e.g. reporting that they would use primary care services more often than they really would, in the belief that a new service might be opened closer to them on the basis of this [strategic] response).[3]

Although hypothetical bias potentially arises in any stated preference study, its presence is difficult to detect. It is commonly overlooked in health settings for a variety of reasons, such as the absence of real-world data with which to detect or correct for it.[4]

Example

Buckell and Hess (2019) use an online DCE in the US tobacco market [5], together with US tobacco market data, to show the presence of (and correct for) hypothetical bias.[10] Their findings suggest that hypothetical bias can affect the predicted market shares of tobacco products; that is, the predicted proportions of smokers who purchase cigarettes or e-cigarettes appear to be distorted by hypothetical bias. Moreover, both the direction and magnitude of the predicted effects of tobacco policy changes appear to be distorted by hypothetical bias.
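
One standard way to combine stated and revealed preference data in this manner (a sketch of the general technique, not necessarily the exact procedure of [10]) is to adjust the alternative-specific constants (ASCs) of a logit model estimated on stated preference data until the model’s predicted shares reproduce the observed market shares. A minimal sketch in Python, where the utilities and market shares are invented placeholders:

    import numpy as np

    def calibrate_constants(v_attributes, market_shares, tol=1e-10, max_iter=1000):
        """Shift alternative-specific constants (ASCs) until a logit model's
        predicted shares match observed market shares.

        v_attributes : (n_individuals, n_alternatives) systematic utilities
                       from an SP-estimated model, excluding the ASCs.
        market_shares: (n_alternatives,) observed real-world shares.
        """
        asc = np.zeros(market_shares.shape)
        for _ in range(max_iter):
            v = v_attributes + asc                          # add current ASCs
            p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
            predicted = p.mean(axis=0)                      # model's market shares
            if np.max(np.abs(predicted - market_shares)) < tol:
                break
            asc += np.log(market_shares / predicted)        # standard contraction step
        return asc - asc[0]                                 # normalise first ASC to zero

    # Illustrative example: 3 tobacco products, invented utilities and shares.
    rng = np.random.default_rng(0)
    v = rng.normal(size=(500, 3))                           # placeholder utilities
    observed = np.array([0.55, 0.30, 0.15])                 # placeholder market shares
    print(calibrate_constants(v, observed))

Adding a common constant to every ASC leaves logit probabilities unchanged, so the first constant is normalised to zero at the end without affecting the calibrated shares.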

Impact

Empirical evidence shows how hypothetical bias can affect the results of health-based stated preference studies:

  • Özdemir et al. (2009) show that estimates of willingness to pay (WTP) for rheumatoid arthritis treatment are inflated by hypothetical bias: respondents in the “cheap talk” arm reported much lower WTP for a four-week onset of treatment than those in the control arm ($35 vs. $255).[6]
  • Mark and Swait (2004) report differences between experimental and real-world preference estimates for physicians’ prescribing of alcohol treatments, where “the stated preference and revealed preference data do not yield identical preference estimates.” For example, the estimate for efficacy was significantly lower for revealed preference (estimated parameter = 0.22; t-ratio = 2.00) than for stated preference (estimated parameter = 0.46; t-ratio = 3.10).[7]
  • Quaife et al. (2018) demonstrate discrepancies between health behaviours predicted by DCEs (including uptake of treatments for sleep apnea, tuberculosis treatments, screening for Chlamydia, and pharmacy-based health checks) and the corresponding actual health behaviours in the real world: “Pooled estimation suggests that the sensitivity of DCE predictions was relatively high (0.88, 95% CI 0.81, 0.92), whilst specificity was substantially lower (0.34, 95% CI 0.23, 0.46). These results suggest that DCEs can be moderately informative for predicting future behavior.”[8] (A toy calculation of these two measures follows this list.)
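
To make the pooled figures above concrete: sensitivity is the share of people who actually took up a behaviour whom the DCE predicted would take it up, and specificity is the share of actual non-adopters whom the DCE predicted would not. A toy calculation with invented predictions and outcomes:

    import numpy as np

    # Hypothetical data: DCE-predicted uptake vs. observed real-world uptake
    # (1 = takes up the health behaviour, 0 = does not). Values are made up.
    predicted = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 0])
    observed  = np.array([1, 1, 0, 0, 1, 0, 0, 1, 1, 1])

    tp = np.sum((predicted == 1) & (observed == 1))  # correctly predicted uptake
    tn = np.sum((predicted == 0) & (observed == 0))  # correctly predicted non-uptake
    fp = np.sum((predicted == 1) & (observed == 0))  # predicted uptake, none occurred
    fn = np.sum((predicted == 0) & (observed == 1))  # missed actual uptake

    sensitivity = tp / (tp + fn)   # share of actual adopters predicted correctly
    specificity = tn / (tn + fp)   # share of actual non-adopters predicted correctly
    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")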

Preventive steps

Many approaches are available to mitigate the impact of hypothetical bias. These are typically categorised as ex-ante approaches (implemented before respondents make their choices) or ex-post approaches (applied after the responses have been collected) and are detailed below. It is worth noting that, “it is likely that a number of factors affect hypothetical bias and therefore no single technique will be the magic bullet that eliminates this bias”.[9]

Ex-ante approaches:

  • Cheap talk [10]: explicitly telling respondents about hypothetical bias and/or that their responses will feed into research that may affect current clinical practice or policy. This approach aims to induce realistic behaviour by linking respondents’ answers to consequences (terms such as “consequentiality scripts” and “honesty pledges” have been used for similar approaches).
  • Honesty priming [11]: a technique from psychology in which respondents are required, prior to the experimental task, to make sentences from scrambled words, and the words are those associated with honesty, truthfulness, etc. Respondents are then said to be primed, meaning that they are subliminally encouraged to give truthful responses in the experimental tasks that follow.
  • Inferred valuation [12]: asking respondents to estimate others’, rather than their own, value of a good or service. This method removes an individual’s sense of agency in their valuation and as a consequence is thought to reduce self-related biases in valuations.
  • Incentive compatibility [13]: conditioning a reward (typically a financial reward), or the chance of a reward, on respondents’ choices. In this case, respondents’ choices are linked to a payoff, and hypothetical bias is said to be reduced (a sketch of one common mechanism follows this list).
  • Pivot designs [1]: embedding information on respondents’ own choices in the design of the experimental tasks to make the tasks more realistic and so to reduce hypothetical bias (see also “SP-off-RP” designs [14]).
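
A widely used incentive-compatible payment mechanism, sketched below under invented payoffs, is to announce in advance that one choice task will be drawn at random and made binding. Since any task may be the one that counts, respondents do best by answering every task according to their true preferences:

    import random

    def realise_payment(choices, payoffs, seed=None):
        """Randomly select one choice task and make it binding.

        choices : the alternative the respondent chose in each task,
                  e.g. [0, 2, 1, 1] for four tasks.
        payoffs : payoffs[t][a] = reward if alternative a is chosen in task t.
        Because any task may be drawn, truthful answers in every task are
        the respondent's best strategy (incentive compatibility).
        """
        rng = random.Random(seed)
        t = rng.randrange(len(choices))      # draw the binding task
        return t, payoffs[t][choices[t]]     # pay out the chosen alternative

    # Illustrative use: 3 tasks, 2 alternatives each, invented dollar payoffs.
    choices = [1, 0, 1]
    payoffs = [[2.0, 5.0], [4.0, 1.0], [3.0, 6.0]]
    task, reward = realise_payment(choices, payoffs, seed=42)
    print(f"Task {task} was drawn; respondent receives ${reward:.2f}")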

Ex-post approaches:

  • Certainty calibration [15]: asking respondents to indicate how certain they are that they would make their experimental choices in real-world settings. This information is then used to adjust models in the analysis (a process termed calibration) so as to reduce hypothetical bias (one illustrative weighting scheme is sketched after this list).
  • Revealed preference calibration [10]: obtaining available market (i.e. real-world) data, in which individuals actually made choices, and adjusting, or calibrating, models using these data. Since uncalibrated models are based on experimental data alone, using real-world behaviour to adjust them is thought to reduce hypothetical bias (see the calibration sketch in the Example section above).
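
As an illustration of certainty calibration, the sketch below assumes respondents report a 0-10 certainty score after each choice task and uses that score to weight each observation’s contribution to a multinomial logit log-likelihood, so that uncertain (and presumably more bias-prone) choices count for less. The linear weighting is an assumption for illustration, not the specific method of [15]:

    import numpy as np

    def weighted_logit_loglik(beta, X, y, certainty):
        """Certainty-weighted multinomial logit log-likelihood.

        X         : (n_obs, n_alternatives, n_attributes) design array.
        y         : (n_obs,) index of the chosen alternative in each task.
        certainty : (n_obs,) self-reported certainty on a 0-10 scale.
        """
        w = certainty / 10.0                     # rescale 0-10 scores to [0, 1] weights
        v = X @ beta                             # systematic utilities, (n_obs, n_alts)
        logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
        chosen = logp[np.arange(len(y)), y]      # log-probability of the chosen option
        return np.sum(w * chosen)                # down-weight uncertain choices

    # Illustrative data: 100 tasks, 3 alternatives, 2 attributes (all made up).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 3, 2))
    y = rng.integers(0, 3, size=100)
    certainty = rng.integers(4, 11, size=100)
    print(weighted_logit_loglik(np.array([0.5, -0.2]), X, y, certainty))

Maximising this weighted log-likelihood (e.g. with a numerical optimiser) yields parameter estimates in which confident choices carry more influence than doubtful ones.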

Sources

  1. Hensher, D. A., Rose, J. M., & Greene, W. (2015). Applied Choice Analysis. Cambridge: Cambridge University Press.
  2. Shah, K. K., Tsuchiya, A., & Wailoo, A. J. (2018). Valuing health at the end of life: A review of stated preference studies in the social sciences literature. Social Science & Medicine, 204, 39-50.
  3. Carson, R. T., & Groves, T. (2007). Incentive and informational properties of preference questions. Environmental and Resource Economics, 37(1), 181-210.
  4. Lancsar, E., & Burge, P. (2014). Choice modelling research in health economics. In S. Hess & A. Daly (Eds.), Handbook of Choice Modelling. Cheltenham: Edward Elgar Publishing.
  5. Buckell, J., & Sindelar, J. L. (2019). The impact of flavors, health risks, secondhand smoke and prices on young adults’ cigarette and e-cigarette choices: a discrete choice experiment. Addiction, 114(8), 1427-1435.
  6. Özdemir, S., Johnson, F. R., & Hauber, A. B. (2009). Hypothetical bias, cheap talk, and stated willingness to pay for health care. Journal of Health Economics, 28(4), 894-901.
  7. Mark, T. L., & Swait, J. (2004). Using stated preference and revealed preference modeling to evaluate prescribing decisions. Health Economics, 13(6), 563-573.
  8. Quaife, M., Terris-Prestholt, F., Di Tanna, G. L., & Vickerman, P. (2018). How well do discrete choice experiments predict health choices? A systematic review and meta-analysis of external validity. The European Journal of Health Economics, 19(8), 1053-1066.
  9. Murphy, J. J., Allen, P. G., Stevens, T. H., & Weatherhead, D. (2005). A Meta-analysis of Hypothetical Bias in Stated Preference Valuation. Environmental and Resource Economics, 30(3), 313-325.
  10. Buckell, J., & Hess, S. (2019). Stubbing out hypothetical bias: improving tobacco market predictions by combining stated and revealed preference data. Journal of Health Economics, 65, 93-102.
  11. De Magistris, T., Gracia, A., & Nayga, R. M., Jr. (2013). On the Use of Honesty Priming Tasks to Mitigate Hypothetical Bias in Choice Experiments. American Journal of Agricultural Economics, 95(5), 1136-1154.
  12. Lusk, J. L., & Norwood, F. B. (2009). An Inferred Valuation Method. Land Economics, 85(3), 500-514.
  13. Smith, V. L. (1982). Microeconomic systems as an experimental science. The American Economic Review, 72(5), 923-955.
  14. Train, K. E., & Wilson, W. W. (2009). Monte Carlo analysis of SP-off-RP data. Journal of Choice Modelling, 2(1), 101-117.
  15. Beck, M. J., Fifer, S., & Rose, J. M. (2016). Can you ever be certain? Reducing hypothetical bias in stated choice experiments via respondent reported choice certainty. Transportation Research Part B: Methodological, 89, 149-167.

PubMed feed

https://www.ncbi.nlm.nih.gov/pubmed/clinical/?term=%22hypothetical%20bias%22
