AI and ethics in insurance: a new solution to mitigate proxy
discrimination in risk modeling
- URL: http://arxiv.org/abs/2307.13616v1
- Date: Tue, 25 Jul 2023 16:20:56 GMT
- Title: AI and ethics in insurance: a new solution to mitigate proxy
discrimination in risk modeling
- Authors: Marguerite Sauce, Antoine Chancel, and Antoine Ly
- Abstract summary: Driven by the growing attention of regulators to the ethical use of data in insurance, the actuarial community must rethink pricing and risk selection practices.
Equity is a philosophical concept whose many definitions vary across jurisdictions and influence one another without having yet reached a consensus.
We propose an innovative method, not yet found in the literature, to reduce the risks of indirect discrimination using mathematical concepts from linear algebra.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of Machine Learning is attracting growing interest from the general public, and in recent years there have been numerous press articles questioning its objectivity: racism, sexism, and so on. Driven by the growing attention of regulators to the ethical use of data in insurance, the actuarial community must rethink pricing and risk selection practices for fairer insurance. Equity is a philosophical concept whose many definitions vary across jurisdictions and influence one another without having yet reached a consensus. In Europe, the Charter of Fundamental Rights defines guidelines on discrimination, and the use of sensitive personal data in algorithms is regulated. While simply removing the protected variables prevents any so-called 'direct' discrimination, models can still 'indirectly' discriminate between individuals through latent interactions between variables, which bring better performance (and therefore a better quantification of risk, segmentation of prices, and so on). After introducing the key concepts related to discrimination, we illustrate the complexity of quantifying them. We then propose an innovative method, not yet found in the literature, to reduce the risk of indirect discrimination using mathematical concepts from linear algebra. This technique is illustrated in a concrete case of risk selection in life insurance, demonstrating its simplicity of use and its promising performance.
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - On the Societal Impact of Open Foundation Models [93.67389739906561]
We focus on open foundation models, defined here as those with broadly available model weights.
We identify five distinctive properties of open foundation models that lead to both their benefits and risks.
arXiv Detail & Related papers (2024-02-27T16:49:53Z) - Prejudice and Volatility: A Statistical Framework for Measuring Social Discrimination in Large Language Models [0.0]
This study investigates why and how inconsistency in the generation of Large Language Models (LLMs) might induce or exacerbate societal injustice.
We formulate the Prejudice-Volatility Framework (PVF) that precisely defines behavioral metrics for assessing LLMs.
We mathematically dissect the aggregated discrimination risk of LLMs into prejudice risk, originating from their system bias, and volatility risk.
arXiv Detail & Related papers (2024-02-23T18:15:56Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Mitigating Discrimination in Insurance with Wasserstein Barycenters [0.0]
The insurance industry relies heavily on predictions of risk based on the characteristics of potential customers.
Discrimination based on sensitive features such as gender or race is often attributed to biases in historical data.
We propose to ease these biases through the use of Wasserstein barycenters instead of simple scaling (see the sketch after this list).
arXiv Detail & Related papers (2023-06-22T14:27:17Z) - A Discussion of Discrimination and Fairness in Insurance Pricing [0.0]
Group fairness concepts are proposed to 'smooth out' the impact of protected characteristics in the calculation of insurance prices.
We present a statistical model that is free of proxy discrimination, thus, unproblematic from an insurance pricing point of view.
We find that the canonical price in this statistical model does not satisfy any of the three most popular group fairness axioms.
arXiv Detail & Related papers (2022-09-02T07:31:37Z) - Developing a Philosophical Framework for Fair Machine Learning: Lessons
From The Case of Algorithmic Collusion [0.0]
As machine learning algorithms are applied in new contexts, the harms and injustices that result are qualitatively different.
The existing research paradigm in machine learning, which develops metrics and definitions of fairness, cannot account for these qualitatively different types of injustice.
I propose an ethical framework for researchers and practitioners in machine learning seeking to develop and apply fairness metrics.
arXiv Detail & Related papers (2022-07-05T16:21:56Z) - SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z) - Marrying Fairness and Explainability in Supervised Learning [0.0]
We formalize direct discrimination as a direct causal effect of the protected attributes on the decisions.
We find that state-of-the-art fair learning methods can induce discrimination via association or reverse discrimination.
We propose to nullify the influence of the protected attribute on the output of the system, while preserving the influence of remaining features.
arXiv Detail & Related papers (2022-04-06T17:26:58Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual
Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
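For intuition on the Wasserstein-barycenter entry above, here is a minimal sketch of the standard one-dimensional construction, where the barycenter's quantile function is the weighted average of the groups' quantile functions. The function and variable names are illustrative only and may differ from that paper's actual procedure.

```python
# Minimal sketch of a one-dimensional Wasserstein-barycenter repair of predicted
# scores, in the spirit of "Mitigating Discrimination in Insurance with
# Wasserstein Barycenters". Names and weighting choices are assumptions, not
# the paper's code.
import numpy as np

def barycenter_repair(scores: np.ndarray, group: np.ndarray) -> np.ndarray:
    """Map each group's score distribution onto their common Wasserstein barycenter.

    For 1D distributions the barycenter's quantile function is the weighted
    average of the groups' quantile functions, so each score is replaced by the
    weighted mean of the group-wise quantiles at its within-group rank.
    """
    repaired = np.empty_like(scores, dtype=float)
    groups, counts = np.unique(group, return_counts=True)
    weights = counts / counts.sum()
    for g in groups:
        idx = np.where(group == g)[0]
        # Within-group mid-ranks mapped to the open interval (0, 1).
        ranks = (scores[idx].argsort().argsort() + 0.5) / len(idx)
        # For each rank, take every group's quantile at that rank...
        quantiles = np.array([
            [np.quantile(scores[group == h], r) for h in groups] for r in ranks
        ])
        # ...and average them with the group weights to get the barycenter value.
        repaired[idx] = quantiles @ weights
    return repaired
```

Applied to model scores before any pricing or selection threshold, this equalizes the score distributions across groups while preserving each individual's rank within their own group.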
This list is automatically generated from the titles and abstracts of the papers on this site.