Health data, actuarial fairness and the hidden risk of discrimination in insurance pricing
The use of health data in actuarial pricing is increasingly situated at the intersection of technical necessity and societal expectations. Insurers must rely on accurate data to assess and price risks appropriately, while regulators and consumer organisations remain alert to issues of fairness, potential discrimination and unequal access to insurance. As health information becomes more digitalised and regulatory frameworks evolve, the challenge of preventing both intentional and unintentional discrimination grows in complexity.
A central theme emphasised by the Actuarial Association of Europe (AAE) is the distinction between legitimate risk differentiation and unlawful discrimination. Actuarial pricing principles rest on the understanding that customers presenting similar levels of risk should be treated consistently. Yet health data contains sensitive variables, some of which cannot be used under legal, ethical or societal constraints. These boundaries require careful navigation to preserve pricing adequacy while ensuring equitable treatment.
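To make the principle concrete, the following minimal sketch (illustrative only, with hypothetical figures) shows a pure-premium calculation in which the price depends solely on permitted risk factors, so two applicants with the same expected claim cost receive the same premium regardless of any protected characteristic.

```python
# Illustrative sketch of risk-based pricing: the premium is driven only by
# permitted risk factors (expected claim frequency and severity), never by
# protected attributes. All figures are hypothetical.

def pure_premium(expected_frequency: float, expected_severity: float,
                 loading: float = 0.15) -> float:
    """Pure premium = expected claim cost, plus a loading for expenses and margin."""
    expected_cost = expected_frequency * expected_severity
    return expected_cost * (1.0 + loading)

# Two applicants with identical risk profiles: consistent treatment means
# they pay the same premium, whatever else may differ between them.
applicant_a = {"frequency": 0.05, "severity": 8_000.0}  # 5% annual claim probability
applicant_b = {"frequency": 0.05, "severity": 8_000.0}

premium_a = pure_premium(applicant_a["frequency"], applicant_a["severity"])
premium_b = pure_premium(applicant_b["frequency"], applicant_b["severity"])

assert premium_a == premium_b  # similar risk, identical price
print(f"Premium for both applicants: {premium_a:.2f}")
```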
Although the debate has become increasingly linked to the broader use of AI, the underlying issue predates advanced analytics. Nevertheless, the Geneva Association has rightly noted that AI-driven modelling may reveal correlations that appear actuarially relevant but could, in practice, lead to discriminatory outcomes. Such risks include indirect associations with genetic predispositions, environmental factors or socioeconomic signals embedded within health-related datasets.
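One way such indirect associations can be surfaced in practice is a simple proxy screen: before a candidate rating factor enters a model, measure how strongly it predicts a protected attribute that must not be priced on. The sketch below is a hedged illustration using synthetic data and hypothetical variable names and thresholds, not a prescribed method.

```python
# Hedged illustration: screening a candidate rating factor for proxy effects
# before it enters a pricing model. Data and variable names are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Synthetic protected attribute (e.g. a sensitive health condition) that the
# insurer is not permitted to use in pricing.
protected = rng.binomial(1, 0.2, size=n)

# Synthetic candidate rating factor that happens to be correlated with the
# protected attribute (e.g. an area-level socioeconomic score).
candidate_factor = 0.8 * protected + rng.normal(0, 1, size=n)

# Point-biserial (Pearson) correlation as a crude proxy signal.
proxy_corr = np.corrcoef(candidate_factor, protected)[0, 1]

THRESHOLD = 0.3  # hypothetical governance threshold set by internal policy
if abs(proxy_corr) > THRESHOLD:
    print(f"Flag for review: correlation with protected attribute = {proxy_corr:.2f}")
else:
    print(f"No proxy flag raised (correlation = {proxy_corr:.2f})")
```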
Within the AAE, we carried out considerable work a couple of years ago supporting the European Commission and stakeholders in assessing the implications of a Europe-wide Right to Be Forgotten (RTBF). The initiative aimed to allow cancer survivors, after a specified period, to refrain from disclosing previous diagnoses when seeking insurance. While this objective clearly supported improved access, it simultaneously challenged established underwriting structures. The AAE has highlighted how fairness objectives may conflict with existing regulatory frameworks and the practicalities of actuarial risk assessment, underscoring the need for balanced policy design that considers both societal protections and technical feasibility.
The use of AI in health-related underwriting introduces additional sources of potential discrimination. Models trained on historical claims and medical data may unintentionally replicate past inequalities or infer sensitive health conditions from indirect variables. These effects are particularly acute for lower-income groups, who may already face structural disadvantages. Public concerns over factors such as BMI-based underwriting or genetic data further illustrate the complexity of aligning technological developments with consumer trust. Ensuring fairness requires robust governance, transparent modelling and continuous monitoring to prevent health data from reinforcing hidden biases within pricing decisions.
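Continuous monitoring can take many forms; one simple check, sketched below under stated assumptions, compares average modelled premiums across groups and flags unexplained disparities for actuarial review. The group labels, figures and tolerance are hypothetical.

```python
# Hedged sketch of one possible monitoring check: comparing average modelled
# premiums across two groups to spot disparities that warrant investigation.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic predicted premiums for two groups of policyholders.
premiums_group_a = rng.gamma(shape=2.0, scale=300.0, size=2_000)
premiums_group_b = rng.gamma(shape=2.0, scale=330.0, size=2_000)

mean_a = premiums_group_a.mean()
mean_b = premiums_group_b.mean()
disparity_ratio = max(mean_a, mean_b) / min(mean_a, mean_b)

TOLERANCE = 1.05  # hypothetical: differences beyond 5% trigger a review
print(f"Mean premium A: {mean_a:.0f}, B: {mean_b:.0f}, ratio: {disparity_ratio:.2f}")
if disparity_ratio > TOLERANCE:
    print("Disparity exceeds tolerance: check whether risk factors explain it.")
```

A check like this does not by itself establish discrimination; its purpose is to route material differences to actuarial and governance review, where they can be explained by legitimate risk factors or corrected.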
New possibilities for using more health data in insurance pricing must be approached with careful consideration of fairness and regulatory coherence. The RTBF debate illustrates the inherent tension between supporting vulnerable groups and maintaining sustainable risk pooling. As data sources expand and analytical techniques grow more sophisticated, insurers must reinforce governance standards, strengthen transparency and ensure that underwriting decisions remain both technically justified and socially acceptable. Ultimately, maintaining trust in the insurance system requires policies that balance societal expectations with actuarial soundness, safeguarding both consumer protection and market stability.