A Discussion of Discrimination and Fairness in Insurance Pricing
- URL: http://arxiv.org/abs/2209.00858v1
- Date: Fri, 2 Sep 2022 07:31:37 GMT
- Title: A Discussion of Discrimination and Fairness in Insurance Pricing
- Authors: Mathias Lindholm, Ronald Richman, Andreas Tsanakas, Mario V. Wüthrich
- Abstract summary: Group fairness concepts are proposed to 'smooth out' the impact of protected characteristics in the calculation of insurance prices.
We present a statistical model that is free of proxy discrimination and is thus unproblematic from an insurance pricing point of view.
We find that the canonical price in this statistical model does not satisfy any of the three most popular group fairness axioms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Indirect discrimination is an issue of major concern in algorithmic models.
This is particularly the case in insurance pricing where protected policyholder
characteristics are not allowed to be used for insurance pricing. Simply
disregarding protected policyholder information is not an appropriate solution
because this still allows for the possibility of inferring the protected
characteristics from the non-protected ones. This leads to so-called proxy or
indirect discrimination. Though proxy discrimination is qualitatively different
from the group fairness concepts in machine learning, these group fairness
concepts are proposed to 'smooth out' the impact of protected characteristics
in the calculation of insurance prices. The purpose of this note is to share
some thoughts about group fairness concepts in the light of insurance pricing
and to discuss their implications. We present a statistical model that is free
of proxy discrimination and is thus unproblematic from an insurance pricing
point of view. However, we find that the canonical price in this statistical model does
not satisfy any of the three most popular group fairness axioms. This seems
puzzling and we welcome feedback on our example and on the usefulness of these
group fairness axioms for non-discriminatory insurance pricing.
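The three group fairness axioms in question are commonly taken to be independence (statistical parity), separation, and sufficiency. As a minimal sketch, assuming a synthetic portfolio with a binary protected attribute D, claim counts Y, and a best-estimate price, empirical gap statistics for each axiom could be computed as follows (the data-generating process and function names are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical portfolio: protected attribute D, claim count Y, price.
n = 10_000
D = rng.integers(0, 2, size=n)                      # protected group label
Y = rng.poisson(0.1 + 0.05 * D, size=n)             # claims depend on D
price = 0.1 + 0.05 * D + rng.normal(0, 0.01, n)     # best-estimate price

def statistical_parity_gap(price, D):
    # Independence: the price distribution should not differ by group.
    return abs(price[D == 0].mean() - price[D == 1].mean())

def separation_gap(price, Y, D):
    # Separation: price should be independent of D given the outcome Y.
    gaps = []
    for y in np.unique(Y):
        m = Y == y
        if (m & (D == 0)).sum() and (m & (D == 1)).sum():
            gaps.append(abs(price[m & (D == 0)].mean()
                            - price[m & (D == 1)].mean()))
    return max(gaps)

def sufficiency_gap(price, Y, D, bins=10):
    # Sufficiency: E[Y | price, D] should not depend on D;
    # here checked within empirical price deciles.
    q = np.quantile(price, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(price, q[1:-1]), 0, bins - 1)
    gaps = []
    for b in range(bins):
        m = idx == b
        if (m & (D == 0)).sum() and (m & (D == 1)).sum():
            gaps.append(abs(Y[m & (D == 0)].mean()
                            - Y[m & (D == 1)].mean()))
    return max(gaps)

print(statistical_parity_gap(price, D))
print(separation_gap(price, Y, D))
print(sufficiency_gap(price, Y, D))
```

Because the simulated price genuinely depends on D, the statistical parity gap is clearly nonzero here; this mirrors the paper's point that a price can be actuarially canonical while violating such group-level criteria.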
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Pretraining (CLIP)
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- AI and ethics in insurance: a new solution to mitigate proxy discrimination in risk modeling [0.0]
Driven by the growing attention of regulators on the ethical use of data in insurance, the actuarial community must rethink pricing and risk selection practices.
Equity is a philosophical concept with many different definitions across jurisdictions; these definitions influence one another without having reached a consensus.
We propose a method, not previously found in the literature, that uses concepts from linear algebra to reduce the risk of indirect discrimination.
arXiv Detail & Related papers (2023-07-25T16:20:56Z)
- Group fairness without demographics using social networks [29.073125057536014]
Group fairness is a popular approach to prevent unfavorable treatment of individuals based on sensitive attributes such as race, gender, and disability.
We propose a "group-free" measure of fairness that does not rely on sensitive attributes and, instead, is based on homophily in social networks.
arXiv Detail & Related papers (2023-05-19T00:45:55Z)
- Proportional Fairness in Obnoxious Facility Location [70.64736616610202]
We propose a hierarchy of distance-based proportional fairness concepts for the problem.
We consider deterministic and randomized mechanisms, and compute tight bounds on the price of proportional fairness.
We prove existence results for two extensions to our model.
arXiv Detail & Related papers (2023-01-11T07:30:35Z)
- A multi-task network approach for calculating discrimination-free insurance prices [0.0]
In insurance pricing, indirect or proxy discrimination is an issue of major concern.
We propose a multi-task neural network architecture for claim predictions, which can be trained using only partial information on protected characteristics.
We find that its predictive accuracy is comparable to that of a conventional feedforward neural network trained on full information.
arXiv Detail & Related papers (2022-07-06T16:36:27Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
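A prediction-sensitivity metric of this kind can be illustrated with a toy classifier. The sketch below uses central finite differences to accumulate how strongly a logistic model's output reacts to perturbations of each input feature; the weights, data, and function names are hypothetical, and this is a simplified reading of the metric rather than the paper's exact definition:

```python
import numpy as np

# Toy logistic classifier; the weights are illustrative only.
w = np.array([0.5, -1.2, 2.0])

def predict(x):
    # Probability output of the logistic model.
    return 1.0 / (1.0 + np.exp(-x @ w))

def prediction_sensitivity(X, eps=1e-4):
    # Central finite differences approximating |df/dx_j|, accumulated
    # per feature and averaged over examples.
    sens = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        e = np.zeros(X.shape[1])
        e[j] = eps
        sens[j] = np.mean(np.abs(predict(X + e) - predict(X - e)) / (2 * eps))
    return sens

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
sens = prediction_sensitivity(X)
print(sens)
```

For a logistic model the per-feature sensitivity scales with |w_j|, so the third feature (weight 2.0) dominates; a fairness audit would flag the case where a protected or proxy feature carries such a large sensitivity.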
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- A Fair Pricing Model via Adversarial Learning [3.983383967538961]
At the core of insurance business lies classification between risky and non-risky insureds.
The distinction between a fair actuarial classification and "discrimination" is subtle.
We show that debiasing the predictor alone may be insufficient to maintain adequate accuracy.
arXiv Detail & Related papers (2022-02-24T10:42:20Z)
- Robust Allocations with Diversity Constraints [65.3799850959513]
We show that the Nash Welfare rule that maximizes product of agent values is uniquely positioned to be robust when diversity constraints are introduced.
We also show that the guarantees achieved by Nash Welfare are nearly optimal within a widely studied class of allocation rules.
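The Nash Welfare rule maximizes the product of agents' values. For small instances it can be illustrated by brute force; the valuations below are hypothetical and the sketch ignores the diversity constraints studied in the paper:

```python
from itertools import product

# Hypothetical valuations: each agent's value for each of 4 indivisible goods.
values = [
    [3, 1, 2, 5],   # agent 0
    [2, 4, 1, 3],   # agent 1
]

def nash_welfare(alloc):
    # Product of the agents' total values under an assignment
    # of goods to agents (alloc[g] is the agent receiving good g).
    totals = [0, 0]
    for good, agent in enumerate(alloc):
        totals[agent] += values[agent][good]
    return totals[0] * totals[1]

# Enumerate all 2^4 assignments and keep the Nash-welfare maximizer.
best = max(product(range(2), repeat=4), key=nash_welfare)
print(best, nash_welfare(best))
```

In this instance the maximizer gives goods 0 and 1 to agent 1 and goods 2 and 3 to agent 0, balancing the two totals (7 and 6) rather than maximizing either agent's value alone, which is the kind of implicit fairness the robustness results build on.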
arXiv Detail & Related papers (2021-09-30T11:09:31Z)
- Black Loans Matter: Distributionally Robust Fairness for Fighting Subgroup Discrimination [23.820606347327686]
Algorithmic fairness in lending relies on group fairness metrics for monitoring statistical parity across protected groups.
This approach is vulnerable to subgroup discrimination by proxy, carrying significant risks of legal and reputational damage for lenders.
We motivate this problem against the backdrop of historical and residual racism in the United States polluting all available training data.
arXiv Detail & Related papers (2020-11-27T21:04:07Z)
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.