A Fair Pricing Model via Adversarial Learning
- URL: http://arxiv.org/abs/2202.12008v3
- Date: Mon, 26 Dec 2022 15:07:41 GMT
- Title: A Fair Pricing Model via Adversarial Learning
- Authors: Vincent Grari, Arthur Charpentier, Marcin Detyniecki
- Abstract summary: At the core of the insurance business lies the classification between risky and non-risky insureds.
The distinction between a fair actuarial classification and "discrimination" is subtle.
We show that debiasing the predictor alone may be insufficient to maintain adequate accuracy.
- Score: 3.983383967538961
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: At the core of the insurance business lies the classification between risky
and non-risky insureds; actuarial fairness means that risky insureds should
contribute more and pay a higher premium than non-risky or less-risky ones.
Actuaries, therefore, use econometric or machine learning techniques to
classify, but the distinction between a fair actuarial classification and
"discrimination" is subtle. For this reason, there is growing interest in
fairness and discrimination in the actuarial community (Lindholm, Richman,
Tsanakas, and Wuthrich, 2022). Presumably, non-sensitive characteristics can
serve as substitutes or proxies for protected attributes. For example, the
color and model of a car, combined with the driver's occupation, may lead to an
undesirable gender bias in the prediction of car insurance prices.
Surprisingly, we will show that debiasing the predictor alone may be
insufficient to maintain adequate accuracy (1). Indeed, the traditional pricing
model is currently built in a two-stage structure that considers many
potentially biased components such as car or geographic risks. We will show
that this traditional structure has significant limitations in achieving
fairness. For this reason, we have developed a novel pricing model approach.
Recently, some approaches (Blier-Wong, Cossette, Lamontagne, and Marceau
(2021); Wuthrich and Merz (2021)) have shown the value of autoencoders in
pricing. In this paper, we will show that (2) this can be generalized to
multiple pricing factors (geographic, car type), and (3) it is perfectly
adapted to a fairness context (since it allows the set of pricing components
to be debiased). We extend this main
idea to a general framework in which a single whole pricing model is trained by
generating the geographic and car pricing components needed to predict the pure
premium while mitigating the unwanted bias according to the desired metric.
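As a rough illustration of the framework described in the abstract, the sketch below (PyTorch) trains a single pricing model that encodes geographic and car features into latent pricing components, predicts the pure premium from them, and alternates with an adversary that tries to recover a binary sensitive attribute from those components. The architecture, the MSE stand-in for a claims-cost deviance, and all variable names are assumptions for illustration, not the authors' exact model.

```python
# Minimal adversarial fair-pricing sketch (illustrative assumptions, not the paper's exact model).
import torch
import torch.nn as nn

class FairPricingModel(nn.Module):
    """Encodes geographic and car features into latent pricing components, then
    predicts the pure premium from those components plus remaining rating factors."""
    def __init__(self, geo_dim, car_dim, other_dim, latent_dim=8):
        super().__init__()
        self.geo_encoder = nn.Sequential(
            nn.Linear(geo_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
        self.car_encoder = nn.Sequential(
            nn.Linear(car_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
        self.premium_head = nn.Sequential(
            nn.Linear(2 * latent_dim + other_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Softplus())  # premiums must be positive

    def forward(self, x_geo, x_car, x_other):
        z = torch.cat([self.geo_encoder(x_geo), self.car_encoder(x_car)], dim=1)
        premium = self.premium_head(torch.cat([z, x_other], dim=1))
        return premium.squeeze(1), z

# Adversary: tries to predict the sensitive attribute from the 2*latent_dim components.
adversary = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

def training_step(model, adversary, batch, opt_model, opt_adv, lam=1.0):
    """One alternating min-max step. y: observed claim cost, s: binary sensitive attribute."""
    x_geo, x_car, x_other, y, s = batch
    bce = nn.functional.binary_cross_entropy_with_logits

    # (1) adversary step: learn to recover s from the (detached) pricing components
    _, z = model(x_geo, x_car, x_other)
    opt_adv.zero_grad()
    adv_loss = bce(adversary(z.detach()).squeeze(1), s)
    adv_loss.backward()
    opt_adv.step()

    # (2) pricing step: fit the pure premium while making s hard to recover
    opt_model.zero_grad()
    premium, z = model(x_geo, x_car, x_other)
    fit_loss = nn.functional.mse_loss(premium, y)   # stand-in for a Poisson/gamma deviance
    leak_loss = bce(adversary(z).squeeze(1), s)
    (fit_loss - lam * leak_loss).backward()
    opt_model.step()
    return fit_loss.item(), leak_loss.item()
```

The min-max objective (fit loss minus lambda times the adversary's loss) expresses the usual adversarial-fairness trade-off: increasing lambda pushes the pricing components toward carrying less information about the sensitive attribute, typically at some cost in accuracy.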
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - Mind the Gap: A Causal Perspective on Bias Amplification in Prediction & Decision-Making [58.06306331390586]
We introduce the notion of a margin complement, which measures how much a prediction score $S$ changes due to a thresholding operation.
We show that under suitable causal assumptions, the influences of $X$ on the prediction score $S$ are equal to the influences of $X$ on the true outcome $Y$.
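Reading the summary literally, a margin complement can be computed as the difference between the thresholded decision and the score. A minimal sketch follows; the threshold value and the exact formula are assumptions based on the summary, not the paper's text.

```python
# Margin complement as the change a thresholding operation applies to a score:
# M = D - S with decision D = 1{S >= t}. Definition inferred from the summary above.
import numpy as np

def margin_complement(scores, threshold=0.5):
    decisions = (scores >= threshold).astype(float)  # thresholded decision D
    return decisions - scores                        # how much thresholding moved the score

scores = np.array([0.05, 0.45, 0.55, 0.95])
print(margin_complement(scores))  # [-0.05 -0.45  0.45  0.05]
```

Scores near the threshold are moved the most, which is one way to see how a thresholding step can amplify disparities already present in the scores.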
arXiv Detail & Related papers (2024-05-24T11:22:19Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Mitigating Discrimination in Insurance with Wasserstein Barycenters [0.0]
The insurance industry relies heavily on predictions of risk based on characteristics of potential customers.
Discrimination based on sensitive features such as gender or race is often attributed to historical data biases.
We propose to ease the biases through the use of Wasserstein barycenters instead of simple scaling.
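A minimal sketch of a one-dimensional Wasserstein-barycenter repair of predicted premiums, in the spirit of the summary: each premium is mapped to the weighted average of the group quantile functions at its within-group rank, so every group ends up with the same repaired distribution. This is the generic univariate barycenter construction, not necessarily the exact procedure of the cited paper.

```python
# Wasserstein-barycenter repair of premiums across sensitive groups (illustrative).
import numpy as np

def barycenter_repair(premiums, group):
    premiums, group = np.asarray(premiums, float), np.asarray(group)
    groups, counts = np.unique(group, return_counts=True)
    weights = counts / counts.sum()
    repaired = np.empty_like(premiums)
    for g in groups:
        idx = np.where(group == g)[0]
        # within-group rank of each premium, expressed as a quantile level in (0, 1)
        ranks = (np.argsort(np.argsort(premiums[idx])) + 0.5) / len(idx)
        # barycenter quantile = weighted average of every group's quantile function
        repaired[idx] = sum(w * np.quantile(premiums[group == h], ranks)
                            for h, w in zip(groups, weights))
    return repaired

prem = np.array([100., 120., 140., 90., 95., 105.])
sex  = np.array([0, 0, 0, 1, 1, 1])
print(barycenter_repair(prem, sex))
```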
arXiv Detail & Related papers (2023-06-22T14:27:17Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Bayesian CART models for insurance claims frequency [0.0]
Classification and regression trees (CARTs) and their ensembles have gained popularity in the actuarial literature.
We introduce Bayesian CART models for insurance pricing, with a particular focus on claims frequency modelling.
Some simulations and real insurance data will be discussed to illustrate the applicability of these models.
arXiv Detail & Related papers (2023-03-03T13:48:35Z) - A Discussion of Discrimination and Fairness in Insurance Pricing [0.0]
Group fairness concepts are proposed to 'smooth out' the impact of protected characteristics in the calculation of insurance prices.
We present a statistical model that is free of proxy discrimination, thus, unproblematic from an insurance pricing point of view.
We find that the canonical price in this statistical model does not satisfy any of the three most popular group fairness axioms.
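For context, the three group fairness criteria usually meant here are independence, separation and sufficiency. A crude empirical diagnostic for a continuous price could look like the sketch below; the binning and the pandas-based checks are illustrative assumptions, not the paper's formal analysis.

```python
# Quick empirical checks of independence (price vs. S), separation (price vs. S given
# the true outcome Y) and sufficiency (Y vs. S given the price). Illustrative only.
import pandas as pd

def fairness_diagnostics(price, y, s, bins=5):
    df = pd.DataFrame({"price": price, "y": y, "s": s})
    # independence: average price per sensitive group
    independence = df.groupby("s")["price"].mean()
    # separation: average price per group, within bins of the true outcome
    df["y_bin"] = pd.qcut(df["y"], bins, duplicates="drop")
    separation = df.groupby(["y_bin", "s"], observed=True)["price"].mean().unstack()
    # sufficiency: average outcome per group, within bins of the price
    df["p_bin"] = pd.qcut(df["price"], bins, duplicates="drop")
    sufficiency = df.groupby(["p_bin", "s"], observed=True)["y"].mean().unstack()
    return independence, separation, sufficiency
```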
arXiv Detail & Related papers (2022-09-02T07:31:37Z) - The Fairness of Credit Scoring Models [0.0]
In credit markets, screening algorithms aim to discriminate between good-type and bad-type borrowers.
In doing so, they may also discriminate on protected attributes; this can be unintentional and originate from the training dataset or from the model itself.
We show how to formally test the algorithmic fairness of scoring models and how to identify the variables responsible for any lack of fairness.
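One generic way to identify the variables responsible for a fairness gap is a permutation-style diagnostic: recompute a parity metric after neutralizing each feature in turn. The sketch below assumes a scikit-learn-style classifier and an approval-rate gap as the metric; it is an illustration, not necessarily the testing procedure developed in the cited paper.

```python
# Permutation-style attribution of an approval-rate gap to individual features (illustrative).
import numpy as np

def approval_gap(model, X, s, threshold=0.5):
    approved = model.predict_proba(X)[:, 1] >= threshold   # scikit-learn-style classifier assumed
    return abs(approved[s == 0].mean() - approved[s == 1].mean())

def gap_attribution(model, X, s, rng=np.random.default_rng(0)):
    base = approval_gap(model, X, s)
    contributions = {}
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # break the feature's information content
        contributions[j] = base - approval_gap(model, Xp, s)
    return base, contributions                 # large positive values point at likely proxies
```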
arXiv Detail & Related papers (2022-05-20T14:20:40Z) - Better Together? How Externalities of Size Complicate Notions of
Solidarity and Actuarial Fairness [4.94950858749529]
Two notions of fairness might be to a) charge each individual the same price or b) charge each individual according to the cost that they bring to the pool.
We show that it is possible for both groups (high and low risk) to strictly benefit by joining an insurance pool where costs are evenly split, as opposed to being in separate risk pools.
We build on this by producing a pricing scheme that maximally subsidizes the high risk group, while maintaining an incentive for lower risk people to stay in the insurance pool.
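A stylized numeric illustration of the pooling argument, under an assumed premium rule where the per-capita premium is the expected per-capita cost plus a loading proportional to the standard deviation of the pool's per-capita cost; the rule, the loading factor k and all parameter values are invented for illustration and are not taken from the paper.

```python
# Both risk groups can pay less in one evenly-split pool than in separate pools,
# because the per-capita risk loading shrinks with pool size. Illustrative numbers only.
import math

def per_capita_premium(sizes_probs, loss=1000.0, k=2.0):
    """sizes_probs: list of (n, p) with n Bernoulli(p) insureds, each risking `loss`."""
    n = sum(m for m, _ in sizes_probs)
    mean = sum(m * p for m, p in sizes_probs) * loss / n
    var = sum(m * p * (1 - p) for m, p in sizes_probs) * loss ** 2
    return mean + k * math.sqrt(var) / n

low, high = (1000, 0.100), (50, 0.105)
print(per_capita_premium([low]))        # low-risk group alone   ~118.97
print(per_capita_premium([high]))       # high-risk group alone  ~191.71
print(per_capita_premium([low, high]))  # merged, evenly split   ~118.77 -- cheaper for both
```

Under this assumed rule the loading falls roughly like 1/sqrt(n), so the low-risk group's saving on the loading can outweigh the small cross-subsidy it pays on the mean, while the high-risk group benefits from both terms.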
arXiv Detail & Related papers (2021-02-27T22:55:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.