PermuteAttack: Counterfactual Explanation of Machine Learning Credit
Scorecards
- URL: http://arxiv.org/abs/2008.10138v2
- Date: Fri, 28 Aug 2020 18:06:46 GMT
- Title: PermuteAttack: Counterfactual Explanation of Machine Learning Credit
Scorecards
- Authors: Masoud Hashemi, Ali Fathi
- Abstract summary: This paper is a note on new directions and methodologies for validation and explanation of Machine Learning (ML) models employed for retail credit scoring in finance.
Our proposed framework draws motivation from the field of Artificial Intelligence (AI) security and adversarial ML.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is a note on new directions and methodologies for validation and
explanation of Machine Learning (ML) models employed for retail credit scoring
in finance. Our proposed framework draws motivation from the field of
Artificial Intelligence (AI) security and adversarial ML where the need for
certifying the performance of the ML algorithms in the face of their
overwhelming complexity poses a need for rethinking the traditional notions of
model architecture selection, sensitivity analysis and stress testing. Our
point of view is that the phenomenon of adversarial perturbations, when detached
from the AI security domain, has purely algorithmic roots and falls within the
scope of model risk assessment. We propose a model criticism and explanation
framework based on adversarially generated counterfactual examples for tabular
data. A counterfactual example to a given instance in this context is defined
as a synthetically generated data point sampled from the estimated data
distribution which is treated differently by a model. The counterfactual
examples can be used to provide a black-box instance-level explanation of the
model behaviour as well as studying the regions in the input space where the
model performance deteriorates. Adversarial example generating algorithms are
extensively studied in the image and natural language processing (NLP) domains.
However, most financial data come in tabular format and naive application of
the existing techniques on this class of datasets generates unrealistic
samples. In this paper, we propose a counterfactual example generation method
capable of handling tabular data including discrete and categorical variables.
Our proposed algorithm uses a gradient-free optimization based on genetic
algorithms and therefore is applicable to any classification model.
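
The abstract describes a gradient-free counterfactual search driven by a genetic algorithm over tabular records with discrete and categorical variables. Below is a minimal illustrative sketch of that kind of search, not the authors' PermuteAttack implementation: the function names (`generate_counterfactual`, `predict_proba`), the mutation scheme, and the fitness weighting are assumptions made for this example. Categorical features are resampled from levels observed in a reference dataset so that candidates stay close to the estimated data distribution.

```python
import numpy as np

def generate_counterfactual(predict_proba, x, X_ref, cat_idx, target_class,
                            pop_size=50, n_gen=100, mut_rate=0.3, seed=0):
    """Gradient-free counterfactual search for one tabular instance (sketch).

    x        : 1D float array, the instance to explain
    X_ref    : 2D array of reference data used to keep candidates realistic
    cat_idx  : indices of categorical columns (integer-encoded)
    """
    rng = np.random.default_rng(seed)
    n_feat = x.shape[0]
    num_idx = [j for j in range(n_feat) if j not in cat_idx]
    num_std = X_ref[:, num_idx].std(axis=0) + 1e-9

    def mutate(cand):
        child = cand.copy()
        for j in range(n_feat):
            if rng.random() < mut_rate:
                if j in cat_idx:
                    # resample a categorical level observed in the reference data
                    child[j] = rng.choice(X_ref[:, j])
                else:
                    child[j] += rng.normal(0.0, num_std[num_idx.index(j)])
        return child

    def fitness(cand):
        # reward flipping the decision, penalize distance from the original
        p_target = predict_proba(cand.reshape(1, -1))[0, target_class]
        dist = np.abs((cand[num_idx] - x[num_idx]) / num_std).sum() \
             + sum(cand[j] != x[j] for j in cat_idx)
        return p_target - 0.1 * dist  # 0.1 is an illustrative trade-off weight

    pop = np.array([mutate(x) for _ in range(pop_size)])
    for _ in range(n_gen):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep fittest half
        children = np.array([mutate(parents[rng.integers(len(parents))])
                             for _ in range(pop_size - len(parents))])
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best if predict_proba(best.reshape(1, -1)).argmax() == target_class else None
```

Because the search only queries `predict_proba`, it can be wrapped around any classification model, as the abstract notes; the distance penalty keeps the counterfactual close to the original record.
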
Related papers
- Towards a framework on tabular synthetic data generation: a minimalist approach: theory, use cases, and limitations [0.7227323884094953]
The framework is applied to high-dimensional simulated credit scoring data which parallels real-life financial applications.
We show that the method is simplistic, guarantees interpretability all the way through, does not require extra tuning, and provides unique benefits.
arXiv Detail & Related papers (2024-11-17T06:37:54Z) - Generalizing Backpropagation for Gradient-Based Interpretability [103.2998254573497]
We show that the gradient of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics.
arXiv Detail & Related papers (2023-07-06T15:19:53Z) - CLIMAX: An exploration of Classifier-Based Contrastive Explanations [5.381004207943597]
We propose a novel post-hoc, model-agnostic XAI technique that provides contrastive explanations justifying the classifications of a black box.
Our method, which we refer to as CLIMAX, is based on local classifiers.
We show that we achieve better consistency as compared to baselines such as LIME, BayLIME, and SLIME.
arXiv Detail & Related papers (2023-07-02T22:52:58Z) - A prediction and behavioural analysis of machine learning methods for
modelling travel mode choice [0.26249027950824505]
We conduct a systematic comparison of different modelling approaches, across multiple modelling problems, in terms of the key factors likely to affect model choice.
Results indicate that the models with the highest disaggregate predictive performance provide poorer estimates of behavioural indicators and aggregate mode shares.
It is also observed that the MNL model performs robustly in a variety of situations, though ML techniques can improve the estimates of behavioural indices such as Willingness to Pay.
arXiv Detail & Related papers (2023-01-11T11:10:32Z) - Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual
Explanation [132.77005365032468]
We propose a novel Model-Agnostic Counterfactual Explanation (MACE) framework.
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - Generative Counterfactuals for Neural Networks via Attribute-Informed
Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z) - On the Transferability of Adversarial Attacks against Neural Text
Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z) - Goal-directed Generation of Discrete Structures with Conditional
Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
arXiv Detail & Related papers (2020-04-22T15:15:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.