Explainable AI for Interpretable Credit Scoring
- URL: http://arxiv.org/abs/2012.03749v1
- Date: Thu, 3 Dec 2020 18:44:03 GMT
- Title: Explainable AI for Interpretable Credit Scoring
- Authors: Lara Marie Demajo, Vince Vella and Alexiei Dingli
- Abstract summary: Credit scoring helps financial experts make better decisions regarding whether or not to accept a loan application.
Regulations have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent.
We present a credit scoring model that is both accurate and interpretable.
- Score: 0.8379286663107844
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the ever-growing achievements in Artificial Intelligence (AI) and the
recent boosted enthusiasm in Financial Technology (FinTech), applications such
as credit scoring have gained substantial academic interest. Credit scoring
helps financial experts make better decisions regarding whether or not to
accept a loan application, such that loans with a high probability of default
are not accepted. Apart from the noisy and highly imbalanced data challenges
faced by such credit scoring models, recent regulations such as the `right to
explanation' introduced by the General Data Protection Regulation (GDPR) and
the Equal Credit Opportunity Act (ECOA) have added the need for model
interpretability to ensure that algorithmic decisions are understandable and
coherent. An interesting concept that has been recently introduced is
eXplainable AI (XAI), which focuses on making black-box models more
interpretable. In this work, we present a credit scoring model that is both
accurate and interpretable. For classification, state-of-the-art performance on
the Home Equity Line of Credit (HELOC) and Lending Club (LC) Datasets is
achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then
further enhanced with a 360-degree explanation framework, which provides
different explanations (i.e. global, local feature-based and local
instance-based) that are required by different people in different situations.
Evaluation through the use of functionally-grounded, application-grounded and
human-grounded analysis shows that the explanations provided are simple and
consistent, and satisfy the six predetermined hypotheses testing for
correctness, effectiveness, easy understanding, detail sufficiency and
trustworthiness.
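For intuition, here is a minimal sketch of such a pipeline in Python: an XGBoost classifier on a HELOC-style table, with SHAP values standing in for the global and local feature-based explanations and nearest neighbours for the local instance-based ones. This is not the authors' 360-degree framework; the file and column names are assumptions.

```python
# Minimal sketch (not the authors' implementation): XGBoost scorer plus
# global, local feature-based and local instance-based explanations.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("heloc_preprocessed.csv")          # hypothetical dataset
X, y = df.drop(columns=["default"]), df["default"]  # hypothetical label column
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = xgb.XGBClassifier(n_estimators=400, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)

# Global explanation: mean absolute SHAP value per feature across the test set.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
global_importance = np.abs(shap_values).mean(axis=0)

# Local feature-based explanation: per-feature contributions for one applicant.
applicant = X_te.iloc[[0]]
local_contribs = explainer.shap_values(applicant)[0]

# Local instance-based explanation: the most similar past applications.
nn = NearestNeighbors(n_neighbors=3).fit(X_tr)
_, idx = nn.kneighbors(applicant)
similar_cases = X_tr.iloc[idx[0]]
```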
Related papers
- A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning [73.77088902676306]
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
arXiv Detail & Related papers (2023-11-14T07:13:10Z)
- Empowering Many, Biasing a Few: Generalist Credit Scoring through Large Language Models [53.620827459684094]
Large Language Models (LLMs) have great potential for credit scoring tasks, with strong generalization ability across multiple tasks.
We propose the first open-source comprehensive framework for exploring LLMs for credit scoring.
We then propose the first Credit and Risk Assessment Large Language Model (CALM) by instruction tuning, tailored to the nuanced demands of various financial risk assessment tasks.
arXiv Detail & Related papers (2023-10-01T03:50:34Z)
- Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis [50.926791529605396]
We introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms.
Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards.
arXiv Detail & Related papers (2023-06-29T09:27:27Z)
- Inclusive FinTech Lending via Contrastive Learning and Domain Adaptation [9.75150920742607]
FinTech lending has played a significant role in facilitating financial inclusion.
However, there are concerns about potentially biased algorithmic decision-making during loan screening.
We propose a new Transformer-based sequential loan screening model with self-supervised contrastive learning and domain adaptation.
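As a rough illustration of the self-supervised contrastive component, the sketch below shows a standard NT-Xent-style loss in PyTorch; the paper's Transformer encoder and domain-adaptation parts are omitted, and the setup is generic rather than the authors' own.

```python
# Minimal sketch of an NT-Xent contrastive loss over two augmented views
# of the same loan-application sequences (generic, not the paper's code).
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) embeddings of two views; positives are the
    matching rows across the two views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim)
    sim = z @ z.T / temperature                          # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                # exclude self-pairs
    # Row i < n matches row i + n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```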
arXiv Detail & Related papers (2023-05-10T01:11:35Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
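For intuition only, the sketch below shows a plain greedy counterfactual search in feature space; CEILS itself generates counterfactuals as causal interventions in a latent space, which this simplified version does not model.

```python
# Minimal sketch: greedy search for a counterfactual that flips a binary
# classifier's decision. Assumes sklearn-style predict/predict_proba and
# classes labelled 0/1; feasibility constraints are deliberately ignored.
import numpy as np

def find_counterfactual(model, x, step=0.1, max_iters=200):
    x_cf = x.copy()
    original = model.predict(x.reshape(1, -1))[0]
    for _ in range(max_iters):
        if model.predict(x_cf.reshape(1, -1))[0] != original:
            return x_cf                      # desired outcome reached
        # Take the single-feature nudge that most increases the target class.
        best, best_score = None, -np.inf
        for j in range(len(x_cf)):
            for delta in (step, -step):
                cand = x_cf.copy()
                cand[j] += delta
                score = model.predict_proba(cand.reshape(1, -1))[0, 1 - original]
                if score > best_score:
                    best, best_score = cand, score
        x_cf = best
    return None                              # no counterfactual found
```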
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Enabling Machine Learning Algorithms for Credit Scoring -- Explainable Artificial Intelligence (XAI) methods for clear understanding complex predictive models [2.1723750239223034]
This paper compares various predictive models (logistic regression, logistic regression with weight-of-evidence transformations, and modern artificial intelligence algorithms) and shows that advanced tree-based models give the best results in predicting client default.
We also show how to boost advanced models using techniques that make them interpretable and more accessible to credit risk practitioners, as sketched below.
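As a concrete illustration of the weight-of-evidence transformation mentioned above, here is a minimal sketch; the feature and target column names are illustrative, and smoothing for empty bins is omitted.

```python
# Minimal sketch of the weight-of-evidence (WoE) transformation for one
# binned feature; in practice add a small epsilon to avoid log(0).
import numpy as np
import pandas as pd

def weight_of_evidence(df, feature, target):
    """WoE per bin: log(share of non-defaults / share of defaults)."""
    grouped = df.groupby(feature)[target].agg(["count", "sum"])
    bad = grouped["sum"]                     # defaults per bin
    good = grouped["count"] - bad            # non-defaults per bin
    woe = np.log((good / good.sum()) / (bad / bad.sum()))
    return woe  # apply with df[feature].map(woe)
```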
arXiv Detail & Related papers (2021-04-14T09:44:04Z)
- Explaining Credit Risk Scoring through Feature Contribution Alignment with Expert Risk Analysts [1.7778609937758323]
We focus on credit scoring for companies and benchmark different machine learning models.
The aim is to build a model to predict whether a company will experience financial problems in a given time horizon.
We shed light on this by providing an expert-aligned feature relevance score that highlights the disagreement between a credit risk expert's assessment and a model's feature attribution explanation.
arXiv Detail & Related papers (2021-03-15T12:59:15Z)
- Explainable AI in Credit Risk Management [0.0]
We apply two advanced explainability techniques, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), to machine learning (ML)-based credit scoring models.
Specifically, we use LIME to explain instances locally and SHAP to get both local and global explanations.
We discuss the results in detail and present multiple comparison scenarios, using the various kernels available for SHAP and explaining the graphs generated from SHAP values.
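A minimal sketch of this LIME-for-local / SHAP-for-local-and-global recipe, assuming a fitted scikit-learn-style classifier `clf` and train/test frames `X_train`/`X_test`; these names are placeholders, not objects from the paper.

```python
# Minimal sketch: LIME for a single applicant, KernelSHAP for local and
# global views. `clf`, `X_train`, `X_test` are assumed to exist.
import shap
from lime.lime_tabular import LimeTabularExplainer

# Local explanation with LIME for one loan application.
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=list(X_train.columns),
    class_names=["repaid", "default"], mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test.values[0], clf.predict_proba, num_features=10)

# Local and global explanations with KernelSHAP on a background sample.
shap_explainer = shap.KernelExplainer(clf.predict_proba, shap.sample(X_train, 100))
shap_values = shap_explainer.shap_values(X_test.iloc[:200])
shap.summary_plot(shap_values, X_test.iloc[:200])   # global view
```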
arXiv Detail & Related papers (2021-03-01T12:23:20Z)
- Explanations of Machine Learning predictions: a mandatory step for its application to Operational Processes [61.20223338508952]
Credit Risk Modelling plays a paramount role in banking.
Recent machine and deep learning techniques have been applied to the task.
We suggest using the LIME technique to tackle the explainability problem in this field.
arXiv Detail & Related papers (2020-12-30T10:27:59Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Transparency, Auditability and eXplainability of Machine Learning Models in Credit Scoring [4.370097023410272]
This paper works out different dimensions that have to be considered for making credit scoring models understandable.
We present an overview of techniques, demonstrate how they can be applied in credit scoring and how results compare to the interpretability of score cards.
arXiv Detail & Related papers (2020-09-28T15:00:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.