Explaining Adverse Actions in Credit Decisions Using Shapley
Decomposition
- URL: http://arxiv.org/abs/2204.12365v1
- Date: Tue, 26 Apr 2022 15:07:15 GMT
- Title: Explaining Adverse Actions in Credit Decisions Using Shapley
Decomposition
- Authors: Vijayan N. Nair, Tianshu Feng, Linwei Hu, Zach Zhang, Jie Chen and
Agus Sudjianto
- Abstract summary: This paper focuses on credit decisions based on a predictive model for probability of default and proposes a methodology for adverse action explanation.
We consider models with low-order interactions and develop a simple and intuitive approach based on first principles.
Unlike other Shapley techniques in the literature for local interpretability of machine learning results, B-Shap is computationally tractable.
- Score: 8.003221404049905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When a financial institution declines an application for credit, an adverse
action (AA) is said to occur. The applicant is then entitled to an explanation
for the negative decision. This paper focuses on credit decisions based on a
predictive model for probability of default and proposes a methodology for AA
explanation. The problem involves identifying the important predictors
responsible for the negative decision and is straightforward when the
underlying model is additive. However, it becomes non-trivial even for linear
models with interactions. We consider models with low-order interactions and
develop a simple and intuitive approach based on first principles. We then show
how the methodology generalizes to the well-known Shapley decomposition and the
recently proposed concept of Baseline Shapley (B-Shap). Unlike other Shapley
techniques in the literature for local interpretability of machine learning
results, B-Shap is computationally tractable since it involves just function
evaluations. An illustrative case study is used to demonstrate the usefulness
of the method. The paper also discusses situations with highly correlated
predictors and desirable properties of fitted models in the credit-lending
context, such as monotonicity and continuity.
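To make the tractability point concrete, here is a minimal sketch (not the authors' implementation) of Baseline Shapley attribution for a small probability-of-default model. The model predict_pd, the predictor names, and the baseline applicant are illustrative assumptions; the sketch only shows that each B-Shap value is a weighted average of differences of plain function evaluations between the declined applicant and a baseline, so no sampling or refitting is needed.

```python
# Minimal B-Shap sketch: exact enumeration over feature subsets.
# predict_pd, the predictors, and the baseline are hypothetical examples.
from itertools import combinations
from math import factorial

def predict_pd(x):
    # Toy PD model with one low-order interaction (illustrative only).
    debt_ratio, utilization, delinquencies = x
    return (0.05 + 0.3 * debt_ratio + 0.2 * utilization
            + 0.1 * debt_ratio * utilization + 0.05 * delinquencies)

def baseline_shapley(f, x, x_base):
    """B-Shap value for each predictor: features in the coalition take the
    applicant's values, all others are held at the baseline's values."""
    n = len(x)
    def value(subset):
        z = [x[j] if j in subset else x_base[j] for j in range(n)]
        return f(z)
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for s in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (value(set(s) | {i}) - value(set(s)))
        phi.append(total)
    return phi

# Declined applicant versus a baseline (e.g., an "ideal" applicant profile).
x_applicant = [0.8, 0.9, 2.0]
x_baseline = [0.2, 0.1, 0.0]
phi = baseline_shapley(predict_pd, x_applicant, x_baseline)
# The phi values sum to predict_pd(x_applicant) - predict_pd(x_baseline);
# the largest ones flag the predictors most responsible for the decline.
print(phi)
```

Because only function evaluations are involved, and credit models typically use a modest number of predictors with low-order interactions, the enumeration above stays cheap; ranking the resulting values then suggests which predictors to cite in the adverse-action explanation.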
Related papers
- Shapley Marginal Surplus for Strong Models [0.9831489366502301]
We show that while Shapley values might be accurate explainers of model predictions, machine learning models themselves are often poor explainers of the true data-generating process (DGP).
We introduce a novel variable importance algorithm, Shapley Marginal Surplus for Strong Models, that samples the space of possible models to produce an inferential measure of feature importance.
arXiv Detail & Related papers (2024-08-16T17:06:07Z)
- Variational Shapley Network: A Probabilistic Approach to Self-Explaining Shapley Values with Uncertainty Quantification [2.6699011287124366]
Shapley values have emerged as a foundational tool in machine learning (ML) for elucidating model decision-making processes.
We introduce a novel, self-explaining method that simplifies the computation of Shapley values significantly, requiring only a single forward pass.
arXiv Detail & Related papers (2024-02-06T18:09:05Z)
- Manifold Restricted Interventional Shapley Values [0.5156484100374059]
We propose ManifoldShap, which respects the model's domain of validity by restricting model evaluations to the data manifold.
We show, theoretically and empirically, that ManifoldShap is robust to off-manifold perturbations of the model and leads to more accurate and intuitive explanations.
arXiv Detail & Related papers (2023-01-10T15:47:49Z)
- Direct Advantage Estimation [63.52264764099532]
We show that the expected return may depend on the policy in an undesirable way, which could slow down learning.
We propose Direct Advantage Estimation (DAE), a novel method that can model the advantage function and estimate it directly from data.
If desired, value functions can also be seamlessly integrated into DAE and be updated in a similar way to Temporal Difference Learning.
arXiv Detail & Related papers (2021-09-13T16:09:31Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Accurate and Intuitive Contextual Explanations using Linear Model Trees [0.0]
Local post hoc model explanations have gained massive adoption.
Current state-of-the-art methods rely on rudimentary techniques to generate synthetic data around the point to be explained.
We use a Generative Adversarial Network for synthetic data generation and train a piecewise linear model in the form of Linear Model Trees.
arXiv Detail & Related papers (2020-09-11T10:13:12Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Explaining predictive models with mixed features using Shapley values and conditional inference trees [1.8065361710947976]
Shapley values stand out as a sound method to explain predictions from any type of machine learning model.
We propose a method to explain mixed dependent features by modeling the dependence structure of the features using conditional inference trees.
arXiv Detail & Related papers (2020-07-02T11:25:45Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations by robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)