Transparency, Auditability and eXplainability of Machine Learning Models
in Credit Scoring
- URL: http://arxiv.org/abs/2009.13384v1
- Date: Mon, 28 Sep 2020 15:00:13 GMT
- Title: Transparency, Auditability and eXplainability of Machine Learning Models
in Credit Scoring
- Authors: Michael Bücker and Gero Szepannek and Alicja Gosiewska and
Przemyslaw Biecek
- Abstract summary: This paper works out different dimensions that have to be considered for making credit scoring models understandable.
We present an overview of techniques, demonstrate how they can be applied in credit scoring and how results compare to the interpretability of score cards.
- Score: 4.370097023410272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A major requirement for credit scoring models is to provide a maximally
accurate risk prediction. Additionally, regulators demand these models to be
transparent and auditable. Thus, in credit scoring, very simple predictive
models such as logistic regression or decision trees are still widely used and
the superior predictive power of modern machine learning algorithms cannot be
fully leveraged. Significant potential is therefore missed, leading to higher
reserves or more credit defaults. This paper works out different dimensions
that have to be considered for making credit scoring models understandable and
presents a framework for making "black box" machine learning models
transparent, auditable and explainable. Following this framework, we present an
overview of techniques, demonstrate how they can be applied in credit scoring
and how results compare to the interpretability of score cards. A real world
case study shows that a comparable degree of interpretability can be achieved
while machine learning techniques keep their ability to improve predictive
power.
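Model-agnostic explanation techniques such as permutation feature importance are typical building blocks of the kind of framework the paper describes. As a minimal, hypothetical sketch (the toy scorer, feature names, and weights below are invented for illustration and are not from the paper):

```python
import random

# Hypothetical toy scorer standing in for a trained credit model:
# a linear score over three applicant features. Weights are illustrative.
def score(row):
    income, debt_ratio, age = row
    return 0.5 * income - 2.0 * debt_ratio + 0.1 * age

def permutation_importance(model, rows, labels, col, trials=20, seed=0):
    """Mean increase in squared error when one feature column is shuffled."""
    rng = random.Random(seed)
    base = sum((model(r) - y) ** 2 for r, y in zip(rows, labels)) / len(rows)
    increases = []
    for _ in range(trials):
        shuffled = [r[col] for r in rows]
        rng.shuffle(shuffled)
        perturbed = [tuple(s if i == col else v for i, v in enumerate(r))
                     for r, s in zip(rows, shuffled)]
        err = sum((model(r) - y) ** 2 for r, y in zip(perturbed, labels)) / len(rows)
        increases.append(err - base)
    return sum(increases) / trials

rows = [(3.0, 0.2, 35), (5.0, 0.6, 42), (2.0, 0.9, 29), (4.0, 0.1, 51)]
labels = [score(r) for r in rows]  # perfectly fitted, so base error is zero

for col, name in enumerate(["income", "debt_ratio", "age"]):
    print(name, round(permutation_importance(score, rows, labels, col), 3))
```

Features whose shuffling increases the error most are the ones the model relies on; this is what lets the technique rank feature relevance without inspecting the model's internals.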
Related papers
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model
Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Predicting Credit Risk for Unsecured Lending: A Machine Learning Approach [0.0]
This paper builds a contemporary credit scoring model to forecast credit defaults for unsecured lending (credit cards).
Our research indicates that the Light Gradient Boosting Machine (LGBM) model delivers higher learning speed and efficiency and handles larger data volumes.
We expect that deployment of this model will enable better and timely prediction of credit defaults for decision-makers in commercial lending institutions and banks.
arXiv Detail & Related papers (2021-10-05T17:54:56Z)
- Look Who's Talking: Interpretable Machine Learning for Assessing Italian SMEs Credit Default [0.0]
This paper relies on a model-agnostic approach to model firms' default prediction.
Two Machine Learning algorithms (eXtreme Gradient Boosting and FeedForward Neural Network) are compared to three standard discriminant models.
Results show that eXtreme Gradient Boosting achieves the highest classification power in our analysis of the Italian Small and Medium Enterprise manufacturing industry.
arXiv Detail & Related papers (2021-08-31T15:11:17Z)
- Enabling Machine Learning Algorithms for Credit Scoring -- Explainable Artificial Intelligence (XAI) methods for clear understanding complex predictive models [2.1723750239223034]
This paper compares various predictive models (logistic regression, logistic regression with weight of evidence transformations, and modern artificial intelligence algorithms) and shows that advanced tree-based models give the best results in predicting client default.
It also shows how to augment advanced models with techniques that make them interpretable and more accessible to credit risk practitioners.
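As a hedged illustration of the weight-of-evidence (WoE) transformation mentioned above (the bin labels and counts below are invented, not from the paper):

```python
import math

# Minimal sketch of a weight-of-evidence transformation for one binned
# feature: WoE = ln( share of good payers in bin / share of bad payers ).
def weight_of_evidence(bins):
    """bins: {bin_label: (n_good, n_bad)} -> {bin_label: WoE}."""
    total_good = sum(g for g, b in bins.values())
    total_bad = sum(b for g, b in bins.values())
    return {
        label: math.log((g / total_good) / (b / total_bad))
        for label, (g, b) in bins.items()
    }

# Hypothetical income bins with (good, bad) counts.
income_bins = {"low": (80, 40), "mid": (300, 60), "high": (220, 20)}
woe = weight_of_evidence(income_bins)
print(woe)
```

Positive WoE marks bins with a higher share of good payers than bad ones; replacing raw feature values with their bin's WoE keeps logistic regression scorecards monotone and readable.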
arXiv Detail & Related papers (2021-04-14T09:44:04Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
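The paper's own algorithm is not reproduced here; as a rough, hypothetical sketch of the idea of distilling a fit into human-readable code, one can brute-force a single breakpoint, fit ordinary least squares on each side, and emit the result as source text:

```python
# Illustrative sketch (not the paper's algorithm): one-breakpoint
# piecewise-linear fit, emitted as a readable Python function.
def ols(pts):
    """Closed-form least squares: returns (slope, intercept, sse)."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    denom = n * sxx - sx * sx
    slope = (n * sxy - sx * sy) / denom if denom else 0.0
    intercept = (sy - slope * sx) / n
    sse = sum((y - (slope * x + intercept)) ** 2 for x, y in pts)
    return slope, intercept, sse

def fit_two_segments(pts):
    """Try every split point, keep the one with the lowest total error."""
    pts = sorted(pts)
    best = None
    for k in range(2, len(pts) - 1):  # each side needs >= 2 points
        a, b = ols(pts[:k]), ols(pts[k:])
        err = a[2] + b[2]
        if best is None or err < best[0]:
            best = (err, pts[k][0], a, b)
    _, split, (s1, i1, _), (s2, i2, _) = best
    return (f"def f(x):\n"
            f"    if x < {split:.3g}:\n"
            f"        return {s1:.3g} * x + {i1:.3g}\n"
            f"    return {s2:.3g} * x + {i2:.3g}")

# Toy data: y = x up to x = 3, then flat at 3.
pts = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 3), (5, 3), (6, 3)]
print(fit_two_segments(pts))
```

The emitted function is itself valid Python, so the "distilled" model can be reviewed, audited, and executed independently of the fitting code.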
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
- Explainable AI for Interpretable Credit Scoring [0.8379286663107844]
Credit scoring helps financial experts make better decisions regarding whether or not to accept a loan application.
Regulations have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent.
We present a credit scoring model that is both accurate and interpretable.
arXiv Detail & Related papers (2020-12-03T18:44:03Z)
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix applicability is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models.
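As a hypothetical, text-only sketch of that matrix metaphor (the rules below are invented, and this is not the actual ExMatrix tool):

```python
# Rows are rules extracted from a forest, columns are features, and each
# cell holds the rule's predicate on that feature ("-" if unused).
rules = [  # hypothetical rules from a toy credit-risk forest
    {"income": "< 3.0", "debt_ratio": ">= 0.5"},
    {"income": ">= 3.0", "age": "< 30"},
    {"debt_ratio": "< 0.2"},
]
features = ["income", "debt_ratio", "age"]

def rule_matrix(rules, features):
    header = ["rule"] + features
    body = [[f"r{i}"] + [r.get(f, "-") for f in features]
            for i, r in enumerate(rules)]
    return [header] + body

for row in rule_matrix(rules, features):
    print(" | ".join(f"{cell:>10}" for cell in row))
```

Laying the rule set out this way makes it easy to scan which features a rule touches and which rules constrain a given feature, which is the global/local interpretability the visualization targets.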
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models poses unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish plausible attacks on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
- A Hierarchy of Limitations in Machine Learning [0.0]
This paper attempts a comprehensive, structured overview of the specific conceptual, procedural, and statistical limitations of models in machine learning when applied to society.
Modelers themselves can use the described hierarchy to identify possible failure points and think through how to address them.
Consumers of machine learning models can know what to question when confronted with the decision about if, where, and how to apply machine learning.
arXiv Detail & Related papers (2020-02-12T19:39:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.