Autocalibration and Tweedie-dominance for Insurance Pricing with Machine
Learning
- URL: http://arxiv.org/abs/2103.03635v1
- Date: Fri, 5 Mar 2021 12:40:30 GMT
- Title: Autocalibration and Tweedie-dominance for Insurance Pricing with Machine
Learning
- Authors: Michel Denuit and Arthur Charpentier and Julien Trufin
- Abstract summary: It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale.
This new method to correct for bias adds an extra local GLM step to the analysis.
The convex order appears to be the natural tool to compare competing models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Boosting techniques and neural networks are particularly effective machine
learning methods for insurance pricing. Often in practice, there are
nevertheless endless debates about the choice of the right loss function to be
used to train the machine learning model, as well as about the appropriate
metric to assess the performance of competing models. Also, the sum of fitted
values can depart from the observed totals to a large extent, and this often
confuses actuarial analysts. The lack of balance inherent in training models by
minimizing deviance outside the familiar GLM-with-canonical-link setting has
been empirically documented by Wüthrich (2019, 2020), who attributes it to the
early stopping rule in gradient descent methods for model fitting. The present
paper aims to further study this phenomenon when learning proceeds by
minimizing Tweedie deviance. It is shown that minimizing deviance involves a
trade-off between the integral of weighted differences of lower partial moments
and the bias measured on a specific scale. Autocalibration is then proposed as
a remedy. This new method to correct for bias adds an extra local GLM step to
the analysis. Theoretically, it is shown that it implements the autocalibration
concept in pure premium calculation and ensures that balance also holds on a
local scale, not only at portfolio level as with existing bias-correction
techniques. The convex order appears to be the natural tool to compare
competing models, shedding new light on the diagnostic graphs and associated
metrics proposed by Denuit et al. (2019).
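For reference, the unit Tweedie deviance minimized during training is the standard expression below, for power parameter p outside {1, 2}; the Poisson and gamma deviances arise as the limits p → 1 and p → 2:

```latex
d_p(y,\mu) \;=\; 2\left(\frac{y^{2-p}}{(1-p)(2-p)} \;-\; \frac{y\,\mu^{1-p}}{1-p} \;+\; \frac{\mu^{2-p}}{2-p}\right), \qquad p \notin \{1,2\}.
```

The sketch below illustrates the two-step idea on synthetic data. It is not the authors' implementation: the crude compound Poisson-gamma simulation, the choice of LightGBM as the Tweedie-deviance learner, and a Gaussian-kernel local average standing in for the extra local GLM step are all illustrative assumptions.

```python
# Minimal sketch, not the paper's code: boosting under Tweedie deviance,
# then an autocalibration step that re-estimates E[Y | score] with a
# kernel-weighted local average (the simplest local-GLM-style correction).
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(42)
n, n_train = 20_000, 15_000
X = rng.normal(size=(n, 5))

# Crude stand-in for compound Poisson-gamma (Tweedie) insurance losses.
true_mean = np.exp(0.1 + 0.5 * X[:, 0] - 0.3 * X[:, 1])
claim_counts = rng.poisson(true_mean)
y = rng.gamma(shape=2.0, scale=0.5, size=n) * claim_counts

# Step 1: minimize Tweedie deviance by gradient boosting. Early stopping
# and regularization typically leave the kind of bias the paper analyzes.
model = lgb.LGBMRegressor(objective="tweedie", tweedie_variance_power=1.5,
                          n_estimators=300, learning_rate=0.05)
model.fit(X[:n_train], y[:n_train])
score = model.predict(X)

# Step 2: autocalibration. Regress observed losses on the model's own score
# so the corrected premium estimates E[Y | score].
def autocalibrate(new_scores, train_scores, train_y, bandwidth=0.25):
    h = bandwidth * train_scores.std()
    corrected = np.empty(len(new_scores))
    for i, s in enumerate(new_scores):
        w = np.exp(-0.5 * ((train_scores - s) / h) ** 2)  # Gaussian kernel
        corrected[i] = np.average(train_y, weights=w)     # local mean of Y
    return corrected

premium = autocalibrate(score[n_train:], score[:n_train], y[:n_train])
print("global bias before:", score[n_train:].mean() - y[n_train:].mean())
print("global bias after: ", premium.mean() - y[n_train:].mean())
```

Because the correction regresses the response on the score itself, balance holds within each neighborhood of score values and not merely in aggregate; the paper's correction is a local GLM fit, of which the local-constant smoother above is the simplest instance.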
Related papers
- Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium
Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness.
We propose a novel methodology grounded in bilevel optimization principles.
Our deep learning-based approach concurrently optimizes for both accuracy and fairness objectives.
arXiv Detail & Related papers (2024-10-21T18:53:39Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach
We develop a simple Logits Retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Selective Learning: Towards Robust Calibration with Dynamic Regularization
Miscalibration in deep learning refers to a discrepancy between a model's predicted confidence and its actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z)
- Bias Mitigating Few-Shot Class-Incremental Learning
Few-shot class-incremental learning (FSCIL) aims at recognizing novel classes continually with limited novel class samples.
Recent methods somewhat alleviate the accuracy imbalance between base and incremental classes by fine-tuning the feature extractor in the incremental sessions.
We propose a novel method to mitigate model bias in FSCIL during both training and inference.
arXiv Detail & Related papers (2024-02-01T10:37:41Z)
- Fairness Uncertainty Quantification: How certain are you that the model is fair?
In modern machine learning, Stochastic Gradient Descent (SGD)-type algorithms are almost always used for training, implying that the learned model, and consequently its fairness properties, are random.
In this work we provide confidence intervals (CIs) for test unfairness when a group-fairness-aware linear binary classifier, specifically one aware of Disparate Impact (DI) or Disparate Mistreatment (DM), is trained using online SGD-type algorithms.
arXiv Detail & Related papers (2023-04-27T04:07:58Z)
- Distributionally Robust Models with Parametric Likelihood Ratios
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Learning to Estimate Without Bias
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints (BCE); a minimal sketch of such a loss follows this entry.
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
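The following is a minimal sketch of what a bias-constrained objective of this kind could look like; the squared-mean-residual penalty and its weight are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical bias-constrained loss: mean squared error plus a penalty
# that drives the average residual (the bias) toward zero.
import numpy as np

def bias_constrained_loss(y_true, y_pred, lam=10.0):
    resid = y_true - y_pred
    mse = np.mean(resid ** 2)            # usual accuracy term
    bias_penalty = np.mean(resid) ** 2   # squared average residual
    return mse + lam * bias_penalty
```

As the penalty weight grows, the minimizer is pushed toward zero average residual on the training data, echoing the unbiasedness property the entry cites for linear models.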
- Self-Damaging Contrastive Learning
Unlabeled data in reality is commonly imbalanced and shows a long-tail distribution.
This paper proposes a principled framework called Self-Damaging Contrastive Learning (SDCLR) to automatically balance representation learning without knowing the classes.
Our experiments show that SDCLR significantly improves not only overall accuracies but also balancedness.
arXiv Detail & Related papers (2021-06-06T00:04:49Z)
- Positive-Congruent Training: Towards Regression-Free Model Updates
In image classification, sample-wise inconsistencies appear as "negative flips": a new model incorrectly predicts the output for a test sample that was correctly classified by the old (reference) model (see the counting sketch after this entry).
We propose a simple approach for PC training, Focal Distillation, which enforces congruence with the reference model.
arXiv Detail & Related papers (2020-11-18T09:00:44Z)
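A negative flip as defined above is straightforward to measure; the sketch below, with made-up arrays, counts test samples that the old model classified correctly but the new model gets wrong.

```python
# Counting negative flips between an old and a new classifier.
import numpy as np

def negative_flip_rate(y_true, old_pred, new_pred):
    """Fraction of samples correct under the old model but wrong under the new one."""
    flips = (old_pred == y_true) & (new_pred != y_true)
    return flips.mean()

y_true   = np.array([0, 1, 1, 2, 0])
old_pred = np.array([0, 1, 1, 2, 1])
new_pred = np.array([0, 1, 2, 2, 0])
print(negative_flip_rate(y_true, old_pred, new_pred))  # 0.2: one flip out of five
```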
We introduce a "whitening" cost function, the Ljung-Box statistic, which not only minimizes the error but also minimizes the correlations between errors.
The results show significant improvement in generalization for recurrent neural networks (RNNs) and image autoencoders (2d)
arXiv Detail & Related papers (2020-08-08T19:20:32Z)
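One way such a whitening penalty could be realized is sketched below; treating the Ljung-Box statistic of the residuals as an additive penalty on top of the mean squared error is an assumed reading of the entry, not the paper's exact objective.

```python
# Ljung-Box Q statistic of the residuals, added to the MSE as a penalty
# so that training discourages autocorrelated errors.
import numpy as np

def ljung_box_q(resid, max_lag=10):
    """Q = n (n + 2) * sum_k rho_k^2 / (n - k), rho_k = lag-k autocorrelation."""
    r = resid - resid.mean()
    n = len(r)
    denom = np.sum(r ** 2)
    q = sum((np.sum(r[k:] * r[:-k]) / denom) ** 2 / (n - k)
            for k in range(1, max_lag + 1))
    return n * (n + 2) * q

def whitening_loss(y_true, y_pred, lam=1e-3):
    resid = y_true - y_pred
    return np.mean(resid ** 2) + lam * ljung_box_q(resid)
```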
- Counterfactual fairness: removing direct effects through regularization
We propose a new definition of fairness that incorporates causality through the Controlled Direct Effect (CDE).
We develop regularizations to tackle classical fairness measures and present a causal regularization that satisfies our new fairness definition.
Our approach was found to mitigate unfairness in the predictions with only small reductions in model performance.
arXiv Detail & Related papers (2020-02-25T10:13:55Z)