Model Transparency and Interpretability : Survey and Application to the
Insurance Industry
- URL: http://arxiv.org/abs/2209.00562v1
- Date: Thu, 1 Sep 2022 16:12:54 GMT
- Title: Model Transparency and Interpretability : Survey and Application to the
Insurance Industry
- Authors: Dimitri Delcaillau, Antoine Ly, Alizé Papp and Franck Vermet
- Abstract summary: This paper introduces the importance of model interpretation and tackles the notion of model transparency.
Within an insurance context, it illustrates how some tools can be used to enforce the control of actuarial models.
- Score: 1.6058099298620423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of models, even efficient ones, must be accompanied by an
understanding at every level of the process that transforms data (upstream and
downstream). Thus, there is a growing need to define the relationships between
individual data and the choices an algorithm could make based on its analysis
(e.g. the recommendation of one product or one promotional offer, or an
insurance rate representative of the risk). Model users must ensure that models
do not discriminate and that their results can be explained. This paper
introduces the importance of model interpretation and tackles the notion of
model transparency. Within an insurance context, it specifically illustrates
how some tools can be used to enforce the control of actuarial models, which
nowadays can leverage machine learning. On a simple example of loss frequency
estimation in car insurance, we show the value of several interpretability
methods for adapting explanations to the target audience.
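As a concrete companion to the paper's running example, here is a minimal sketch of a car insurance claim-frequency model inspected with a generic interpretability method (permutation importance). The synthetic portfolio, feature names, and coefficients are assumptions made for this illustration; they are not the paper's data, and permutation importance stands in for the broader toolbox the paper surveys.

```python
# Sketch: fit a Poisson claim-frequency model on a synthetic car insurance
# portfolio, then rank rating factors with permutation importance.
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 10000

# Synthetic policyholders: driver age, vehicle power, urban indicator.
age = rng.uniform(18, 80, n)
power = rng.uniform(1, 10, n)
urban = rng.integers(0, 2, n)
X = np.column_stack([age, power, urban])

# Assumed true frequency: young urban drivers with powerful cars claim more.
lam = np.exp(-2.0 - 0.02 * (age - 18) + 0.15 * power + 0.4 * urban)
y = rng.poisson(lam)

model = PoissonRegressor(alpha=1e-4).fit(X, y)

# Permutation importance: average drop in the model's score (deviance
# explained) when each feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["age", "vehicle_power", "urban"], result.importances_mean):
    print(f"{name}: {imp:.4f}")
```

The method choice can then be adapted to the audience, as the abstract suggests: a single model-agnostic importance score per rating factor for a general audience, richer local explanations for actuaries.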
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
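To make the Shapley side of this idea concrete, below is a minimal sketch that attributes a model's accuracy change under feature shift to individual features with Monte Carlo Shapley values. It is an illustrative stand-in under strong assumptions (synthetic data, known labelling function), not the paper's Optimal-Transport-based estimator.

```python
# Sketch: Shapley attribution of a performance change to shifted features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000

def label(X):
    """Ground-truth labels for the synthetic task (known by construction)."""
    return (X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=len(X)) > 0).astype(int)

# Source data and model.
X_src = rng.normal(size=(n, 2))
model = LogisticRegression().fit(X_src, label(X_src))

# Target data: feature 0 has shifted, feature 1 has not.
X_tgt = X_src.copy()
X_tgt[:, 0] += 1.5

def value(shifted):
    """Model accuracy when only the features in `shifted` take target values."""
    X = X_src.copy()
    for j in shifted:
        X[:, j] = X_tgt[:, j]
    return model.score(X, label(X))

# Monte Carlo Shapley: average marginal effect of shifting each feature.
d = X_src.shape[1]
phi = np.zeros(d)
n_perm = 50
for _ in range(n_perm):
    shifted, prev = [], value([])
    for j in rng.permutation(d):
        shifted.append(j)
        cur = value(shifted)
        phi[j] += cur - prev
        prev = cur
phi /= n_perm

print("accuracy, nothing shifted:", value([]))
print("accuracy, all shifted:    ", value(list(range(d))))
print("per-feature Shapley attribution of the change:", phi)
```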
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity [10.144058870887061]
We argue that individuals can be harmed when one predictor is chosen ad hoc from a group of equally well-performing models.
Our findings suggest that such unfairness can be readily found in real life and it may be difficult to mitigate by technical means alone.
arXiv Detail & Related papers (2022-03-14T14:33:39Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
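Below is a simplified sketch of gradient-based counterfactual search with proximity and diversity-enforcing terms. Unlike the method summarized above, the perturbations are learned directly in input space rather than in a disentangled latent space, and the toy classifier, weights, and loss coefficients are assumptions made for the example.

```python
# Sketch: K diverse counterfactuals for a toy logistic classifier.
import torch

torch.manual_seed(0)

# Toy binary classifier: p(y=1 | x) = sigmoid(w.x + b).
w = torch.tensor([1.5, -2.0])
b = torch.tensor(0.5)

def predict(x):
    return torch.sigmoid(x @ w + b)

x0 = torch.tensor([-1.0, 1.0])   # an input currently predicted as class 0
K = 3                            # number of counterfactuals to generate
deltas = (0.01 * torch.randn(K, 2)).requires_grad_(True)
opt = torch.optim.Adam([deltas], lr=0.05)

for _ in range(500):
    opt.zero_grad()
    p = predict(x0 + deltas)                 # shape (K,)
    flip = -torch.log(p + 1e-8).sum()        # push predictions toward class 1
    proximity = (deltas ** 2).sum()          # keep counterfactuals close to x0
    diversity = -torch.pdist(deltas).sum()   # push counterfactuals apart
    loss = flip + 0.5 * proximity + 0.1 * diversity
    loss.backward()
    opt.step()

print("p(y=1) at x0:", predict(x0).item())
for k in range(K):
    xk = (x0 + deltas[k]).detach()
    print(f"counterfactual {k}: x = {xk.tolist()}, p(y=1) = {predict(xk).item():.3f}")
```

The diversity term is what separates this from plain counterfactual search: without it, all K perturbations collapse onto the single cheapest boundary crossing.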
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
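The following minimal sketch illustrates the premise: among models with near-identical accuracy, group-level disparities can still vary noticeably. The synthetic data, the randomized-tree model family, and the epsilon threshold are assumptions for illustration; this is not the paper's algorithm for computing the exact attainable range.

```python
# Sketch: range of demographic parity gaps over a set of "good" models.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)                # protected attribute
x1 = rng.normal(size=n) + 0.8 * group             # feature correlated with group
x2 = rng.normal(size=n)
X = np.column_stack([x1, x2])
y = (x1 + x2 + 0.5 * rng.normal(size=n) > 0).astype(int)

# A family of near-equivalent models: randomized trees of the same depth.
models = [
    DecisionTreeClassifier(max_depth=5, splitter="random", random_state=s).fit(X, y)
    for s in range(50)
]
accs = np.array([m.score(X, y) for m in models])

# Keep the "good set": models within epsilon of the best accuracy.
eps = 0.01
good = [m for m, a in zip(models, accs) if a >= accs.max() - eps]

def dp_gap(m):
    """Demographic parity gap: difference in positive-prediction rates."""
    pred = m.predict(X)
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

gaps = [dp_gap(m) for m in good]
print(f"{len(good)} good models; demographic parity gap ranges "
      f"from {min(gaps):.3f} to {max(gaps):.3f}")
```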
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Information Laundering for Model Privacy [34.66708766179596]
We propose information laundering, a novel framework for enhancing model privacy.
Unlike data privacy that concerns the protection of raw data information, model privacy aims to protect an already-learned model that is to be deployed for public use.
arXiv Detail & Related papers (2020-09-13T23:24:08Z)
- Interprétabilité des modèles : état des lieux des méthodes et application à l'assurance [1.6058099298620423]
Data is the raw material of many models that today make it possible to increase the quality and performance of digital services.
Model users must ensure that models do not discriminate and that their results can be explained.
The widening panel of predictive algorithms leads scientists to be vigilant about the use of models.
arXiv Detail & Related papers (2020-07-25T12:18:07Z)
- Towards Explainability of Machine Learning Models in Insurance Pricing [0.0]
We discuss the need for model interpretability in property & casualty insurance ratemaking.
We propose a framework for explaining models, and present a case study to illustrate the framework.
arXiv Detail & Related papers (2020-03-24T05:51:30Z)
- Asking the Right Questions: Learning Interpretable Action Models Through Query Answering [33.08099403894141]
This paper develops a new approach for estimating an interpretable, relational model of a black-box autonomous agent that can plan and act.
Our main contributions are a new paradigm for estimating such models using a minimal query interface with the agent, and a hierarchical querying algorithm that generates an interrogation policy for estimating the agent's internal model.
arXiv Detail & Related papers (2019-12-29T09:05:06Z)