A multi-task network approach for calculating discrimination-free
insurance prices
- URL: http://arxiv.org/abs/2207.02799v1
- Date: Wed, 6 Jul 2022 16:36:27 GMT
- Title: A multi-task network approach for calculating discrimination-free
insurance prices
- Authors: Mathias Lindholm, Ronald Richman, Andreas Tsanakas, Mario V. Wüthrich
- Abstract summary: In insurance pricing, indirect or proxy discrimination is an issue of major concern.
We propose a multi-task neural network architecture for claim predictions, which can be trained using only partial information on protected characteristics.
We find that its predictive accuracy is comparable to that of a conventional feedforward neural network (on full information).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In applications of predictive modeling, such as insurance pricing, indirect or proxy discrimination is an issue of major concern. Namely, there exists the possibility that protected policyholder characteristics are implicitly inferred from non-protected ones by predictive models and thus have an undesirable (or illegal) impact on prices. A technical solution to this problem relies on building a best-estimate model using all policyholder characteristics (including protected ones) and then averaging out the protected characteristics when calculating individual prices. However, such approaches require full knowledge of policyholders' protected characteristics, which may in itself be problematic. Here, we address this issue with a multi-task neural network architecture for claim predictions, which can be trained using only partial information on protected characteristics and which produces prices that are free from proxy discrimination. We demonstrate the use of the proposed model and find that its predictive accuracy is comparable to that of a conventional feedforward neural network trained on full information. However, the multi-task network has clearly superior performance in the case of partially missing policyholder information.
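To make the averaging-out step concrete, the discrimination-free price used in the authors' related work can be written as follows, with Y the claim, X the non-protected and D the protected characteristics (a minimal formulation, stated here for a discrete D):

```latex
% Best-estimate price using all characteristics, including the protected ones:
\mu(x, d) = \mathbb{E}\left[ Y \mid X = x,\, D = d \right]
% Discrimination-free price: average out D with its marginal distribution
% (not with P(D = d | X = x), which would reintroduce proxying):
h^{*}(x) = \sum_{d} \mu(x, d)\, \mathbb{P}(D = d)
```

The sketch below shows one way such a multi-task network could be set up, assuming a PyTorch implementation with a shared body on the non-protected covariates and one Poisson-rate head per level of a single protected characteristic. The layer sizes, the toy data, the 60% observation rate for D, and the decision to simply drop policies with unknown D from the loss are illustrative simplifications, not the authors' exact design.

```python
import torch
import torch.nn as nn

n_features, n_levels = 10, 2              # illustrative problem sizes

class MultiTaskClaims(nn.Module):
    """Shared body on non-protected covariates X, one claims head per level of D."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, 64), nn.Tanh(),
            nn.Linear(64, 32), nn.Tanh(),
        )
        self.heads = nn.Linear(32, n_levels)   # one expected-claims output per level d

    def forward(self, x):
        # exponential (log-link) output so each head is a positive Poisson rate
        return torch.exp(self.heads(self.body(x)))

model = MultiTaskClaims()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy data; the protected attribute D is observed for only ~60% of policies.
torch.manual_seed(0)
n = 1000
X = torch.randn(n, n_features)
d = torch.randint(0, n_levels, (n,))
y = torch.poisson(torch.ones(n))
observed = torch.rand(n) < 0.6
mask = nn.functional.one_hot(d, n_levels).float() * observed[:, None].float()

for epoch in range(100):
    opt.zero_grad()
    mu = model(X)                                   # shape (n, n_levels)
    # Poisson negative log-likelihood, evaluated only on the head that matches
    # the observed D; policies with unknown D are simply dropped from this
    # simplified loss (the paper handles them more carefully).
    nll = mu - y[:, None] * torch.log(mu + 1e-8)
    loss = (mask * nll).sum() / mask.sum()
    loss.backward()
    opt.step()

# Discrimination-free price: average the per-level predictions with the
# empirical *marginal* distribution of D (estimated where D is observed).
with torch.no_grad():
    p_d = mask.sum(dim=0) / mask.sum()              # marginal P(D = d)
    prices = model(X) @ p_d                         # h*(x) for every policy
```

The practical point of the construction is that training only uses the protected attribute where it happens to be observed, while pricing only requires an estimate of the marginal distribution of D.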
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - A Discussion of Discrimination and Fairness in Insurance Pricing [0.0]
Group fairness concepts are proposed to 'smooth out' the impact of protected characteristics in the calculation of insurance prices.
We present a statistical model that is free of proxy discrimination, thus, unproblematic from an insurance pricing point of view.
We find that the canonical price in this statistical model does not satisfy any of the three most popular group fairness axioms.
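For orientation, the three group fairness axioms usually meant in this context are independence, separation and sufficiency, stated here in standard fairness-in-ML notation for a price h(X), response Y and protected attribute D (this is the common formulation, not necessarily that paper's exact notation):

```latex
% Independence (statistical / demographic parity):
h(X) \;\perp\; D
% Separation:
h(X) \;\perp\; D \mid Y
% Sufficiency:
Y \;\perp\; D \mid h(X)
```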
arXiv Detail & Related papers (2022-09-02T07:31:37Z) - Model Transparency and Interpretability : Survey and Application to the
Insurance Industry [1.6058099298620423]
This paper discusses the importance of model interpretation and the notion of model transparency.
Within an insurance context, it illustrates how some tools can be used to enforce the control of actuarial models.
arXiv Detail & Related papers (2022-09-01T16:12:54Z) - Improved Generalization Guarantees in Restricted Data Models [16.193776814471768]
Differential privacy is known to protect against threats to validity incurred due to adaptive, or exploratory, data analysis.
We show that, under this assumption, it is possible to "re-use" privacy budget on different portions of the data, significantly improving accuracy without increasing the risk of overfitting.
arXiv Detail & Related papers (2022-07-20T16:04:12Z) - SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is to enable the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z) - Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity [10.144058870887061]
We argue that individuals can be harmed when one predictor is chosen ad hoc from a group of equally well performing models.
Our findings suggest that such unfairness can be readily found in real life and it may be difficult to mitigate by technical means alone.
arXiv Detail & Related papers (2022-03-14T14:33:39Z) - Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
arXiv Detail & Related papers (2021-08-25T15:49:10Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual
Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual
Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints; a generic sketch of the dual mechanism is given after this entry.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
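As an illustration of the Lagrangian-dual idea mentioned in the last entry (and leaving aside its differential-privacy component), a fairness constraint can be folded into training with a multiplier that is updated by dual ascent on the constraint violation. The following generic sketch uses a statistical-parity-style constraint and is not that paper's exact method:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(500, 5)                    # features
D = torch.randint(0, 2, (500,))            # protected attribute
y = torch.randn(500)                       # response

f = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
lam, lam_lr, eps = 0.0, 0.1, 0.05          # multiplier, its step size, tolerance

for step in range(200):
    opt.zero_grad()
    pred = f(X).squeeze(-1)
    # constraint: mean predictions may differ across D by at most eps
    violation = (pred[D == 0].mean() - pred[D == 1].mean()).abs() - eps
    loss = ((pred - y) ** 2).mean() + lam * violation      # primal step on f
    loss.backward()
    opt.step()
    lam = max(0.0, lam + lam_lr * violation.item())        # dual ascent on lam
```

The multiplier grows while the constraint is violated and relaxes back toward zero once it is satisfied, which is what accommodating fairness constraints via duality amounts to in practice.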
This list is automatically generated from the titles and abstracts of the papers on this site.