Mitigating Discrimination in Insurance with Wasserstein Barycenters
- URL: http://arxiv.org/abs/2306.12912v1
- Date: Thu, 22 Jun 2023 14:27:17 GMT
- Title: Mitigating Discrimination in Insurance with Wasserstein Barycenters
- Authors: Arthur Charpentier and François Hu and Philipp Ratz
- Abstract summary: The insurance industry relies heavily on predictions of risk based on the characteristics of potential customers.
Discrimination based on sensitive features such as gender or race is often attributed to historical data biases.
We propose to ease these biases through the use of Wasserstein barycenters instead of simple scaling.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The insurance industry is heavily reliant on predictions of risks based on
characteristics of potential customers. Although the use of said models is
common, researchers have long pointed out that such practices perpetuate
discrimination based on sensitive features such as gender or race. Given that
such discrimination can often be attributed to historical data biases, an
elimination or at least mitigation is desirable. With the shift from more
traditional models to machine-learning based predictions, calls for greater
mitigation have grown anew, as simply excluding sensitive variables in the
pricing process can be shown to be ineffective. In this article, we first
investigate why predictions are a necessity within the industry and why
correcting biases is not as straightforward as simply identifying a sensitive
variable. We then propose to ease the biases through the use of Wasserstein
barycenters instead of simple scaling. To demonstrate the effects and
effectiveness of the approach we employ it on real data and discuss its
implications.
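To make the mechanism concrete, below is a minimal sketch of the standard univariate Wasserstein-barycenter repair from the fairness literature: each group's predicted scores are pushed through their own empirical CDF and then through the barycenter's quantile function, which for one-dimensional distributions is the group-weighted average of the per-group quantile functions. This is a sketch under the assumption that the prediction (e.g. a pure premium) is a one-dimensional score; the function name `barycenter_repair` and the toy data are illustrative and not taken from the paper.

```python
import numpy as np

def barycenter_repair(scores, groups):
    """Map group-conditional score distributions to their Wasserstein-2
    barycenter via quantile averaging (univariate case).

    scores : 1-D array of model predictions (e.g. pure premiums)
    groups : 1-D array of sensitive-attribute labels, same length

    Returns repaired scores that share a common distribution across
    groups while preserving within-group rank order.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    repaired = np.empty_like(scores)

    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()  # group proportions p_a
    sorted_scores = {a: np.sort(scores[groups == a]) for a in labels}

    for a, n_a in zip(labels, counts):
        mask = groups == a
        s_a = scores[mask]
        # empirical CDF rank of each score within its own group
        u = np.searchsorted(sorted_scores[a], s_a, side="right") / n_a
        u = np.clip(u, 0.0, 1.0)
        # barycenter quantile = weighted average of the group quantiles
        bary = np.zeros_like(s_a)
        for b, w in zip(labels, weights):
            bary += w * np.quantile(sorted_scores[b], u)
        repaired[mask] = bary
    return repaired

# Toy illustration (hypothetical data): two groups whose raw scores differ
# in location; after repair both share the barycenter distribution.
rng = np.random.default_rng(0)
g = rng.integers(0, 2, size=1000)
s = rng.normal(loc=1.0 + 0.5 * g, scale=1.0)
s_fair = barycenter_repair(s, g)
```

In contrast to simple scaling (shifting or rescaling one group's scores), this quantile-based transport aligns the whole score distribution of each group, which is what makes the barycenter approach attractive for pricing applications.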
Related papers
- Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework, which leverages adversarial debiasing, is evaluated on four case studies, showing a significant reduction in the contribution of biased variables to the predicted value.
arXiv Detail & Related papers (2024-10-03T15:56:03Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Debiasing Machine Learning Models by Using Weakly Supervised Learning [3.3298048942057523]
We tackle the problem of bias mitigation of algorithmic decisions in a setting where both the output of the algorithm and the sensitive variable are continuous.
Typical examples are unfair decisions made with respect to age or financial status.
Our bias mitigation strategy is a weakly supervised learning method which requires that a small portion of the data can be measured in a fair manner.
arXiv Detail & Related papers (2024-02-23T18:11:32Z) - Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Towards Assumption-free Bias Mitigation [47.5131072745805]
We propose an assumption-free framework to detect the related attributes automatically by modeling feature interaction for bias mitigation.
Experimental results on four real-world datasets demonstrate that our proposed framework can significantly alleviate unfair prediction behaviors.
arXiv Detail & Related papers (2023-07-09T05:55:25Z) - Simultaneous Improvement of ML Model Fairness and Performance by Identifying Bias in Data [1.76179873429447]
We propose a data preprocessing technique that can detect instances ascribing a specific kind of bias that should be removed from the dataset before training.
In particular, we claim that in problem settings where instances exist with similar features but different labels, caused by variation in protected attributes, an inherent bias is induced in the dataset.
arXiv Detail & Related papers (2022-10-24T13:04:07Z) - Controlling Bias Exposure for Fair Interpretable Predictions [11.364105288235308]
We argue that a favorable debiasing method should use sensitive information 'fairly' rather than blindly eliminating it.
Our model achieves a desirable trade-off between debiasing and task performance along with producing debiased rationales as evidence.
arXiv Detail & Related papers (2022-10-14T01:49:01Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Unfairness Discovery and Prevention For Few-Shot Regression [9.95899391250129]
We study fairness in supervised few-shot meta-learning models that are sensitive to discrimination (or bias) in historical data.
A machine learning model trained based on biased data tends to make unfair predictions for users from minority groups.
arXiv Detail & Related papers (2020-09-23T22:34:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.