Explainable AI in Credit Risk Management
- URL: http://arxiv.org/abs/2103.00949v1
- Date: Mon, 1 Mar 2021 12:23:20 GMT
- Title: Explainable AI in Credit Risk Management
- Authors: Branka Hadji Misheva, Joerg Osterrieder, Ali Hirsa, Onkar Kulkarni,
Stephen Fung Lin
- Abstract summary: We apply two advanced explainability techniques, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), to machine learning (ML)-based credit scoring models.
Specifically, we use LIME to explain instances locally and SHAP to obtain both local and global explanations.
We discuss the results in detail and present multiple comparison scenarios across the various kernels available for SHAP, using the explanation graphs generated from SHAP values.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) has created the single biggest technology
revolution the world has ever seen. For the finance sector, it provides great
opportunities to enhance customer experience, democratize financial services,
ensure consumer protection and significantly improve risk management. While it
is easier than ever to run state-of-the-art machine learning models, designing
and implementing systems that support real-world finance applications has been
challenging, in large part because such systems lack the transparency and
explainability that are important factors in establishing reliable technology,
and research on this topic with a specific focus on applications in credit risk
management is still at an early stage. In this paper, we implement two advanced
post-hoc model-agnostic explainability techniques called Local Interpretable
Model-agnostic
Explanations (LIME) and SHapley Additive exPlanations (SHAP) to machine
learning (ML)-based credit scoring models applied to the open-access data set
offered by the US-based P2P Lending Platform, Lending Club. Specifically, we
use LIME to explain instances locally and SHAP to get both local and global
explanations. We discuss the results in detail and present multiple comparison
scenarios across the various kernels available for SHAP, using the explanation
graphs generated from SHAP values. We also discuss the practical challenges of
implementing these state-of-the-art eXplainable AI (XAI) methods and document
them for future reference. We have made an effort to document every technical
aspect of this research, while at the same time providing a general summary of
the conclusions.
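For readers who want to see the pipeline concretely, below is a minimal sketch of how LIME and SHAP are typically wired into an ML-based credit scoring model. It is not the authors' exact code: synthetic data stands in for the Lending Club loan features, the gradient-boosted scorer and class names are illustrative assumptions, and the TreeExplainer versus KernelExplainer contrast is one plausible reading of the "various kernels" the abstract mentions.

```python
# Minimal LIME + SHAP sketch for a credit scoring classifier.
# Assumptions (not from the paper): synthetic data instead of Lending Club,
# a gradient-boosted scorer, and illustrative class names.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a preprocessed loan-level feature matrix.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# --- LIME: local explanation for one loan application ---
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["fully_paid", "charged_off"],  # assumed labels
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(lime_exp.as_list())  # top local feature contributions

# --- SHAP: local and global explanations ---
tree_explainer = shap.TreeExplainer(model)        # fast, tree-specific
shap_values = tree_explainer.shap_values(X_test)  # one row per loan

# Model-agnostic alternative; contrasting it with TreeExplainer is one way
# to set up the kernel comparison scenarios the abstract refers to.
kernel_explainer = shap.KernelExplainer(
    lambda x: model.predict_proba(x)[:, 1], shap.sample(X_train, 100)
)
kernel_values = kernel_explainer.shap_values(X_test[:5])  # slow; few rows

# Global view: feature importance and direction across the test set.
shap.summary_plot(shap_values, X_test, feature_names=feature_names)
```

In practice one would substitute the preprocessed Lending Club feature matrix and tune the model; the explainer calls stay the same.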
Related papers
- Usable XAI: 10 Strategies Towards Exploiting Explainability in the LLM Era [77.174117675196]
XAI is being extended towards Large Language Models (LLMs).
This paper analyzes how XAI can benefit LLMs and AI systems.
We introduce 10 strategies, presenting the key techniques for each and discussing their associated challenges.
arXiv Detail & Related papers (2024-03-13T20:25:27Z)
- Multimodal Large Language Models to Support Real-World Fact-Checking [80.41047725487645]
Multimodal large language models (MLLMs) carry the potential to support humans in processing vast amounts of information.
While MLLMs are already being used as a fact-checking tool, their abilities and limitations in this regard are understudied.
We propose a framework for systematically assessing the capacity of current multimodal models to facilitate real-world fact-checking.
arXiv Detail & Related papers (2024-03-06T11:32:41Z)
- A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods [0.0]
Machine learning and deep learning have become increasingly prevalent in financial prediction and forecasting tasks.
These models often lack transparency and interpretability, making them challenging to use in sensitive domains like finance.
This paper explores good practices for deploying explainability in AI-based systems for finance.
arXiv Detail & Related papers (2023-11-13T17:56:45Z)
- AI for IT Operations (AIOps) on Cloud Platforms: Reviews, Opportunities and Challenges [60.56413461109281]
Artificial Intelligence for IT operations (AIOps) aims to combine the power of AI with the big data generated by IT Operations processes.
We discuss in depth the key types of data emitted by IT Operations activities, the scale and challenges in analyzing them, and where they can be helpful.
We categorize the key AIOps tasks as incident detection, failure prediction, root cause analysis, and automated actions.
arXiv Detail & Related papers (2023-04-10T15:38:12Z)
- A Time Series Approach to Explainability for Neural Nets with Applications to Risk-Management and Fraud Detection [0.0]
Trust in technology is enabled by understanding the rationale behind the predictions made.
For cross-sectional data, classical XAI approaches can lead to valuable insights about the models' inner workings.
We propose a novel XAI technique for deep learning methods which preserves and exploits the natural time ordering of the data.
arXiv Detail & Related papers (2022-12-06T12:04:01Z)
- Analyzing Machine Learning Models for Credit Scoring with Explainable AI and Optimizing Investment Decisions [0.0]
This paper examines two distinct yet related questions concerning explainable AI (XAI) practices.
The study compares various machine learning models, including single classifiers (logistic regression, decision trees, LDA, QDA), homogeneous ensembles (AdaBoost, Random Forest), and sequential neural networks.
Two advanced post-hoc explainability techniques, LIME and SHAP, are used to assess the ML-based credit scoring models; a minimal sketch of such a model benchmark follows this entry.
arXiv Detail & Related papers (2022-09-19T21:44:42Z)
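The sketch below mirrors the model list of the entry above; the synthetic data, hyperparameters, and the AUC metric are illustrative assumptions (the entry does not specify the evaluation metric, and the sequential neural network is omitted to keep the sketch dependency-free).

```python
# Hedged sketch of the kind of classifier benchmark described above.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a credit scoring data set.
X, y = make_classification(n_samples=2000, n_features=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "lda": LinearDiscriminantAnalysis(),
    "qda": QuadraticDiscriminantAnalysis(),
    "adaboost": AdaBoostClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

Any of the fitted models can then be passed to the LIME and SHAP calls from the sketch earlier in this page.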
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations; a generic sketch of the basic counterfactual idea follows this entry.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
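To ground the idea, here is a generic input-space sketch of counterfactual search. It is explicitly not CEILS, which the entry describes as addressing feasibility by generating counterfactuals as interventions in a latent space; the data set, model, step size, and greedy search here are all illustrative assumptions.

```python
# Generic input-space counterfactual search -- NOT the CEILS method itself.
# Data, model, and the greedy search below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def greedy_counterfactual(x, model, step=0.25, max_iter=200):
    """Nudge one feature per iteration toward flipping the predicted class."""
    x_cf = x.copy()
    target = 1 - model.predict(x.reshape(1, -1))[0]
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf  # prediction flipped: counterfactual found
        best, best_p = x_cf, -np.inf
        for j in range(len(x_cf)):          # try each single-feature nudge
            for delta in (step, -step):
                cand = x_cf.copy()
                cand[j] += delta
                p = model.predict_proba(cand.reshape(1, -1))[0, target]
                if p > best_p:
                    best, best_p = cand, p
        x_cf = best
    return x_cf  # may not have flipped within max_iter

x0 = X[0]
x_cf = greedy_counterfactual(x0, model)
print("features to change:", np.flatnonzero(~np.isclose(x0, x_cf)))
```

Unlike this naive search, feasibility-aware methods such as CEILS constrain which feature changes count as actionable.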
- Explaining Credit Risk Scoring through Feature Contribution Alignment with Expert Risk Analysts [1.7778609937758323]
We focus on company credit scoring and benchmark different machine learning models.
The aim is to build a model to predict whether a company will experience financial problems in a given time horizon.
We shed light on this by providing an expert-aligned feature relevance score that highlights the disagreement between a credit risk expert and a model's feature attribution explanation.
arXiv Detail & Related papers (2021-03-15T12:59:15Z)
- Explanations of Machine Learning predictions: a mandatory step for its application to Operational Processes [61.20223338508952]
Credit Risk Modelling plays a paramount role in operational processes.
Recent machine and deep learning techniques have been applied to the task.
We suggest using the LIME technique to tackle the explainability problem in this field.
arXiv Detail & Related papers (2020-12-30T10:27:59Z)
- Explainable AI for Interpretable Credit Scoring [0.8379286663107844]
Credit scoring helps financial experts make better decisions regarding whether or not to accept a loan application.
Regulations have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent.
We present a credit scoring model that is both accurate and interpretable.
arXiv Detail & Related papers (2020-12-03T18:44:03Z)
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale Machine Learning aims to learn patterns from big data efficiently, with comparable performance.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.