AI, insurance, discrimination and unfair differentiation. An overview
and research agenda
- URL: http://arxiv.org/abs/2401.11892v1
- Date: Mon, 22 Jan 2024 12:39:36 GMT
- Title: AI, insurance, discrimination and unfair differentiation. An overview
and research agenda
- Authors: Marvin S. L. van Bekkum, Frederik J. Zuiderveen Borgesius
- Abstract summary: We distinguish two situations in which insurers use AI: (i) data-intensive underwriting, and (ii) behaviour-based insurance.
While the two trends bring many advantages, they may also have discriminatory effects.
We focus on two types of discrimination-related effects: discrimination and other unfair differentiation.
- Score: 0.951828574518325
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Insurers increasingly use AI. We distinguish two situations in which insurers
use AI: (i) data-intensive underwriting, and (ii) behaviour-based insurance.
(i) First, insurers can use AI for data analysis to assess risks:
data-intensive underwriting. Underwriting is, in short, calculating risks and
amending the insurance premium accordingly. (ii) Second, insurers can use AI to
monitor the behaviour of consumers in real-time: behaviour-based insurance. For
example, some car insurers give a discount if a consumer agrees to being
tracked by the insurer and drives safely. While the two trends bring many
advantages, they may also have discriminatory effects. This paper focuses on
the following question. Which discrimination-related effects may occur if
insurers use data-intensive underwriting and behaviour-based insurance? We
focus on two types of discrimination-related effects: discrimination and other
unfair differentiation. (i) Discrimination harms certain groups who are
protected by non-discrimination law, for instance people with certain
ethnicities. (ii) Unfair differentiation does not harm groups that are
protected by non-discrimination law, but it does seem unfair. We introduce four
factors to consider when assessing the fairness of insurance practices. The
paper builds on literature from various disciplines including law, philosophy,
and computer science.
Related papers
- Generative Discrimination: What Happens When Generative AI Exhibits Bias, and What Can Be Done About It [2.2913283036871865]
The chapter explores how genAI intersects with non-discrimination laws.
It highlights two main types of discriminatory outputs: (i) demeaning and abusive content and (ii) subtler biases due to inadequate representation of protected groups.
It argues for holding genAI providers and deployers liable for discriminatory outputs and highlights the inadequacy of traditional legal frameworks to address genAI-specific issues.
arXiv Detail & Related papers (2024-06-26T13:32:58Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Non-discrimination law in Europe: a primer for non-lawyers [44.715854387549605]
We aim to describe the law in such a way that non-lawyers and non-European lawyers can easily grasp its contents and challenges.
We introduce the EU-wide non-discrimination rules which are included in a number of EU directives.
The last section broadens the horizon to include bias-relevant law and cases from the EU AI Act and related statutes.
arXiv Detail & Related papers (2024-04-12T14:59:58Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous
Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
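The counterfactual safety margin can be illustrated with a minimal sketch under strong simplifying assumptions: deviations from the nominal behaviour are parameterised by a single magnitude, and collisions are monotone in that magnitude (once a deviation collides, every larger one does too). The `collides` callback stands in for a hypothetical simulator and is not part of the paper.

```python
def counterfactual_safety_margin(collides, upper=10.0, iters=40):
    """Smallest deviation magnitude in [0, upper] for which collides() is True.

    Assumes collides(d) is monotone in d, so the boundary can be found
    by bisection; returns infinity if no deviation in range collides.
    """
    if not collides(upper):
        return float("inf")  # no collision within the searched range
    lo, hi = 0.0, upper
    for _ in range(iters):   # bisect on the collision boundary
        mid = (lo + hi) / 2
        if collides(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

A larger returned margin would then indicate lower riskiness of the behaviour, since a bigger perturbation is needed before a collision occurs.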
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - AI and ethics in insurance: a new solution to mitigate proxy
discrimination in risk modeling [0.0]
Driven by the growing attention of regulators on the ethical use of data in insurance, the actuarial community must rethink pricing and risk selection practices.
Equity is a philosophical concept with many different definitions across jurisdictions; these definitions influence each other but have not yet reached consensus.
We propose an innovative method, not yet found in the literature, to reduce the risk of indirect discrimination using mathematical concepts from linear algebra.
arXiv Detail & Related papers (2023-07-25T16:20:56Z) - A Discussion of Discrimination and Fairness in Insurance Pricing [0.0]
Group fairness concepts are proposed to 'smooth out' the impact of protected characteristics in the calculation of insurance prices.
We present a statistical model that is free of proxy discrimination, thus, unproblematic from an insurance pricing point of view.
We find that the canonical price in this statistical model does not satisfy any of the three most popular group fairness axioms.
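One of the popular group fairness axioms mentioned here can be checked mechanically. As a hedged illustration (the choice of demographic parity as the axiom, the function names, and the tolerance are all assumptions, not the paper's method), a pricing rule satisfies demographic parity when the mean premium does not differ between protected groups:

```python
# Illustrative check of demographic parity on insurance prices:
# the average premium should be (near-)equal across protected groups.
def demographic_parity_gap(prices, groups):
    """Absolute difference in mean price between group 0 and group 1."""
    g0 = [p for p, g in zip(prices, groups) if g == 0]
    g1 = [p for p, g in zip(prices, groups) if g == 1]
    return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

def satisfies_demographic_parity(prices, groups, tol=1e-6):
    return demographic_parity_gap(prices, groups) <= tol
```

A risk-based price that legitimately varies with group-correlated features will typically fail such a check, which is exactly the tension the paper examines.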
arXiv Detail & Related papers (2022-09-02T07:31:37Z) - Using sensitive data to prevent discrimination by artificial
intelligence: Does the GDPR need a new exception? [0.0]
In Europe, an organisation runs into a problem when it wants to assess whether its AI system accidentally discriminates based on ethnicity.
In principle, the GDPR bans the use of certain 'special categories of data'.
This paper asks whether the rules on special categories of personal data hinder the prevention of AI-driven discrimination.
arXiv Detail & Related papers (2022-05-17T07:39:25Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Why Fairness Cannot Be Automated: Bridging the Gap Between EU
Non-Discrimination Law and AI [10.281644134255576]
The article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness.
We show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate.
We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement.
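A hypothetical sketch of how a CDD-style measure could be computed (the data layout, key names, and weighting are illustrative assumptions, not the authors' specification): within each stratum of a legitimate conditioning attribute, compare the protected group's share among negative outcomes with its share among positive outcomes, then average the per-stratum disparities weighted by stratum size.

```python
from collections import defaultdict

def conditional_demographic_disparity(records, group_key, outcome_key, stratum_key):
    """records: dicts with a 0/1 protected-group flag, a 0/1 outcome,
    and a legitimate stratifying attribute (e.g. job type)."""
    strata = defaultdict(list)
    for r in records:
        strata[r[stratum_key]].append(r)
    total = len(records)
    cdd = 0.0
    for rows in strata.values():
        rejected = [r for r in rows if r[outcome_key] == 0]
        accepted = [r for r in rows if r[outcome_key] == 1]
        if not rejected or not accepted:
            continue
        # within-stratum disparity: protected share among rejections
        # minus protected share among acceptances
        dd = (sum(r[group_key] for r in rejected) / len(rejected)
              - sum(r[group_key] for r in accepted) / len(accepted))
        cdd += len(rows) / total * dd  # weight by stratum size
    return cdd
```

A value near zero would indicate that, once the legitimate attribute is accounted for, negative outcomes are not concentrated in the protected group.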
arXiv Detail & Related papers (2020-05-12T16:30:12Z) - Discrimination of POVMs with rank-one effects [62.997667081978825]
This work provides an insight into the problem of discrimination of positive operator valued measures with rank-one effects.
We compare two possible discrimination schemes: the parallel and adaptive ones.
We provide an explicit algorithm which allows us to find this adaptive scheme.
arXiv Detail & Related papers (2020-02-13T11:34:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.