Explaining Credit Risk Scoring through Feature Contribution Alignment
with Expert Risk Analysts
- URL: http://arxiv.org/abs/2103.08359v1
- Date: Mon, 15 Mar 2021 12:59:15 GMT
- Title: Explaining Credit Risk Scoring through Feature Contribution Alignment
with Expert Risk Analysts
- Authors: Ayoub El Qadi, Natalia Diaz-Rodriguez, Maria Trocan and Thomas
Frossard
- Abstract summary: We focus on company credit scoring and benchmark different machine learning models.
The aim is to build a model that predicts whether a company will experience financial problems in a given time horizon.
We shed light on model behaviour by providing an expert-aligned feature relevance score that highlights the disagreement between a credit risk expert and a model's feature attribution explanation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Credit assessment activities are essential for financial institutions and
allow the global economy to grow. Building robust, solid and accurate models
that estimate the probability of default of a company is mandatory for credit
insurance companies, especially when it comes to bridging the trade finance gap.
Automating the risk assessment process allows credit risk experts to reduce
their workload and focus on the critical and complex cases, as well as to
improve the loan approval process by reducing the time needed to process an
application. Recent developments in Artificial Intelligence offer powerful
new opportunities. However, most AI techniques are labelled as black-box
models due to their lack of explainability. For both users and regulators,
being able to understand the model logic is a must in order to deploy such
technologies at scale and to grant accurate and ethical decision making. In
this study, we focus on company credit scoring and benchmark different
machine learning models. The aim is to build a model that predicts whether a
company will experience financial problems in a given time horizon. We address
the black-box problem using eXplainable Artificial Intelligence (XAI)
techniques, in particular post-hoc explanations using SHapley Additive
exPlanations (SHAP). We shed light on model behaviour by providing an
expert-aligned feature relevance score that highlights the disagreement
between a credit risk expert and a model's feature attribution explanation,
in order to better quantify the convergence towards better human-aligned
decision making.
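The expert-aligned attribution idea in the abstract can be sketched in miniature. The snippet below computes exact Shapley values for a toy scoring function by enumerating feature coalitions, then measures agreement with an analyst's feature ranking via Spearman rank correlation as a simple stand-in for the paper's expert-aligned relevance score. The model, background sample and expert scores are all illustrative assumptions; a real pipeline would apply the shap library to a trained classifier.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, background, n_features):
    """Exact Shapley values for instance x against a background sample.
    v(S) is the expected model output when the features in S are fixed to
    x's values and the remaining features are drawn from the background."""
    def v(S):
        total = 0.0
        for b in background:
            z = [x[i] if i in S else b[i] for i in range(n_features)]
            total += f(z)
        return total / len(background)

    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n_features - k - 1) / factorial(n_features)
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

def spearman(a, b):
    """Spearman rank correlation (assumes no ties) between two score lists."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical "credit score" model over debt ratio, payment delays, company age
model = lambda z: 0.6 * z[0] + 0.3 * z[1] - 0.1 * z[2]
background = [[0.2, 0.0, 10.0], [0.5, 2.0, 3.0], [0.3, 1.0, 7.0]]
x = [0.9, 4.0, 1.0]

phi = shapley_values(model, x, background, 3)
# Efficiency property: contributions sum to f(x) - E[f(background)]
expert = [3.0, 2.0, 1.0]  # analyst's (hypothetical) importance scores
alignment = spearman([abs(p) for p in phi], expert)
```

For this linear toy model each Shapley value reduces to the coefficient times the feature's deviation from the background mean, so the efficiency property can be checked by hand; a negative `alignment` flags a ranking disagreement between model and expert, which is the kind of divergence the paper's score is designed to surface.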
Related papers
- Explainable Automated Machine Learning for Credit Decisions: Enhancing
Human Artificial Intelligence Collaboration in Financial Engineering [0.0]
This paper explores the integration of Explainable Automated Machine Learning (AutoML) in the realm of financial engineering.
The focus is on how AutoML can streamline the development of robust machine learning models for credit scoring.
The findings underscore the potential of explainable AutoML in improving the transparency and accountability of AI-driven financial decisions.
arXiv Detail & Related papers (2024-02-06T08:47:16Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Empowering Many, Biasing a Few: Generalist Credit Scoring through Large Language Models [53.620827459684094]
Large Language Models (LLMs) have great potential for credit scoring tasks, with strong generalization ability across multiple tasks.
We propose the first open-source comprehensive framework for exploring LLMs for credit scoring.
We then propose the first Credit and Risk Assessment Large Language Model (CALM) by instruction tuning, tailored to the nuanced demands of various financial risk assessment tasks.
arXiv Detail & Related papers (2023-10-01T03:50:34Z)
- Designing an attack-defense game: how to increase robustness of financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
arXiv Detail & Related papers (2023-08-22T12:53:09Z)
- Enabling Machine Learning Algorithms for Credit Scoring -- Explainable Artificial Intelligence (XAI) methods for clear understanding complex predictive models [2.1723750239223034]
This paper compares various predictive models (logistic regression, logistic regression with weight-of-evidence transformations, and modern artificial intelligence algorithms) and shows that advanced tree-based models give the best results in predicting client default.
We also show how to enhance advanced models with techniques that make them interpretable and more accessible to credit risk practitioners.
arXiv Detail & Related papers (2021-04-14T09:44:04Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them because their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial amount of methods for providing interpretable explanations to machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Explainable AI in Credit Risk Management [0.0]
We implement two advanced explainability techniques called Local Interpretable Model Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) to machine learning (ML)-based credit scoring models.
Specifically, we use LIME to explain instances locally and SHAP to get both local and global explanations.
We discuss the results in detail and present multiple comparison scenarios by using various kernels available for explaining graphs generated using SHAP values.
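The local-surrogate idea described above can be sketched without the lime package itself. The snippet below fits a LIME-style locally weighted linear surrogate around a single instance: it perturbs the instance, weights each sample by proximity, and solves the weighted least-squares normal equations. The scoring function, instance and kernel width are all illustrative assumptions, not the paper's actual setup.

```python
import random
from math import exp

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def lime_explain(f, x, n_samples=500, width=1.0, seed=0):
    """LIME-style local surrogate: perturb x with Gaussian noise, weight
    samples by an RBF proximity kernel, and fit a weighted linear model
    (one intercept plus one local effect per feature)."""
    rng = random.Random(seed)
    d = len(x)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0, 1) for xi in x]
        dist2 = sum((zi - xi) ** 2 for zi, xi in zip(z, x))
        X.append([1.0] + z)            # leading 1 carries the intercept
        y.append(f(z))
        w.append(exp(-dist2 / width ** 2))
    p = d + 1
    # Weighted least-squares normal equations: (X'WX) beta = X'Wy
    A = [[sum(w[k] * X[k][i] * X[k][j] for k in range(n_samples))
          for j in range(p)] for i in range(p)]
    b = [sum(w[k] * X[k][i] * y[k] for k in range(n_samples)) for i in range(p)]
    beta = solve(A, b)
    return beta[0], beta[1:]           # intercept, per-feature local effects

# Hypothetical black-box scoring function and applicant
score = lambda z: 0.6 * z[0] + 0.3 * z[1] - 0.1 * z[2]
intercept, effects = lime_explain(score, [0.9, 4.0, 1.0])
```

Because the toy black box is itself linear, the surrogate recovers its coefficients exactly, which makes the sketch easy to verify; for a genuinely non-linear model the recovered effects would instead describe the model's behaviour only in the neighbourhood of the explained instance.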
arXiv Detail & Related papers (2021-03-01T12:23:20Z)
- Explanations of Machine Learning predictions: a mandatory step for its application to Operational Processes [61.20223338508952]
Credit Risk Modelling plays a paramount role.
Recent machine and deep learning techniques have been applied to the task.
We suggest to use LIME technique to tackle the explainability problem in this field.
arXiv Detail & Related papers (2020-12-30T10:27:59Z)
- Explainable AI for Interpretable Credit Scoring [0.8379286663107844]
Credit scoring helps financial experts make better decisions regarding whether or not to accept a loan application.
Regulations have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent.
We present a credit scoring model that is both accurate and interpretable.
arXiv Detail & Related papers (2020-12-03T18:44:03Z)
- Adversarial Attacks on Machine Learning Systems for High-Frequency Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
arXiv Detail & Related papers (2020-02-21T22:04:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.