Look Who's Talking: Interpretable Machine Learning for Assessing Italian
SMEs Credit Default
- URL: http://arxiv.org/abs/2108.13914v2
- Date: Wed, 1 Sep 2021 07:29:21 GMT
- Title: Look Who's Talking: Interpretable Machine Learning for Assessing Italian
SMEs Credit Default
- Authors: Lisa Crosato, Caterina Liberati and Marco Repetto
- Abstract summary: This paper relies on a model-agnostic approach to firms' default prediction.
Two Machine Learning algorithms (eXtreme Gradient Boosting and FeedForward Neural Network) are compared with three standard discriminant models.
Results show that, in our analysis of the Italian Small and Medium Enterprises manufacturing industry, the eXtreme Gradient Boosting algorithm achieves the highest overall classification power.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Academic research and the financial industry have recently paid
great attention to Machine Learning algorithms due to their power to solve
complex learning tasks. In the field of firms' default prediction, however,
the lack of interpretability has prevented the extensive adoption of
black-box models. To overcome this drawback while keeping the high
performance of black-boxes, this paper relies on a model-agnostic approach.
Accumulated Local Effects and Shapley values are used to shape the
predictors' impact on the likelihood of default and to rank them according
to their contribution to the model outcome. Prediction is achieved by two
Machine Learning algorithms (eXtreme Gradient Boosting and FeedForward
Neural Network), compared with three standard discriminant models. Results
show that, in our analysis of the Italian Small and Medium Enterprises
manufacturing industry, the eXtreme Gradient Boosting algorithm achieves
the highest overall classification power without giving up a rich
interpretation framework.
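To make the pipeline above concrete, the sketch below fits an eXtreme Gradient Boosting default classifier, ranks predictors by mean absolute Shapley value, and traces a first-order Accumulated Local Effects curve for one predictor. It is a minimal illustration under assumed inputs: the synthetic data, the feature names (leverage, liquidity, profitability, size), and all hyperparameters are placeholders, not the paper's actual dataset or settings.

```python
# Hedged sketch of the model-agnostic interpretation pipeline described
# above: XGBoost for default prediction, Shapley values for predictor
# ranking, and a hand-rolled first-order Accumulated Local Effects curve.
# Data, feature names, and hyperparameters are illustrative placeholders.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 2_000
X = rng.normal(size=(n, 4))  # stand-ins for balance-sheet ratios
feature_names = ["leverage", "liquidity", "profitability", "size"]
# Synthetic default flag: higher leverage / lower liquidity raise risk.
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)
y = (logit > 0.8).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X, y)

# Global ranking: mean absolute Shapley value per predictor.
shap_values = shap.TreeExplainer(model).shap_values(X)
for j in np.argsort(-np.abs(shap_values).mean(axis=0)):
    print(f"{feature_names[j]:>13}: {np.abs(shap_values[:, j]).mean():.3f}")

def ale_1d(predict, X, j, n_bins=10):
    """First-order ALE of feature j: accumulate mean local prediction
    differences across quantile bins, then center the curve."""
    edges = np.unique(np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)))
    idx = np.digitize(X[:, j], edges[1:-1])  # bin index per observation
    local = np.zeros(len(edges) - 1)
    counts = np.zeros(len(edges) - 1)
    for k in range(len(edges) - 1):
        rows = X[idx == k]
        if len(rows) == 0:
            continue
        lo, hi = rows.copy(), rows.copy()
        lo[:, j], hi[:, j] = edges[k], edges[k + 1]
        local[k] = (predict(hi) - predict(lo)).mean()
        counts[k] = len(rows)
    ale = np.cumsum(local)
    ale -= np.average(ale, weights=np.maximum(counts, 1))  # center at zero
    return edges[1:], ale

# ALE of "leverage" on the predicted probability of default.
grid, ale = ale_1d(lambda Z: model.predict_proba(Z)[:, 1], X, j=0)
print(np.round(ale, 3))
```

Unlike partial dependence, ALE accumulates prediction differences measured locally within each bin, so the curve stays meaningful even when predictors are correlated, which is common among balance-sheet ratios.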
Related papers
- Cliqueformer: Model-Based Optimization with Structured Transformers [102.55764949282906]
We develop a model that learns the structure of an MBO task and empirically leads to improved designs.
We evaluate Cliqueformer on various tasks, ranging from high-dimensional black-box functions to real-world tasks of chemical and genetic design.
arXiv Detail & Related papers (2024-10-17T00:35:47Z)
- Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all models available.
These examples demonstrate ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
arXiv Detail & Related papers (2023-07-12T01:11:52Z)
- Designing Inherently Interpretable Machine Learning Models [0.0]
Inherently interpretable machine learning (IML) models should be adopted because of their transparency and explainability.
Black-box models with model-agnostic explainability can be more difficult to defend under regulatory scrutiny.
arXiv Detail & Related papers (2021-11-02T17:06:02Z)
- Beyond Average Performance -- exploring regions of deviating performance for black box classification models [0.0]
We describe two approaches that can be used to provide interpretable descriptions of the expected performance of any black box classification model.
These approaches are of high practical relevance as they provide means to uncover and describe in an interpretable way situations where the models are expected to have a performance that deviates significantly from their average behaviour.
arXiv Detail & Related papers (2021-09-16T20:46:52Z)
- Hessian-based toolbox for reliable and interpretable machine learning in physics [58.720142291102135]
We present a toolbox for interpretability and reliability, agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an agnostic score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
arXiv Detail & Related papers (2021-08-04T16:32:59Z)
- Enabling Machine Learning Algorithms for Credit Scoring -- Explainable Artificial Intelligence (XAI) methods for clear understanding complex predictive models [2.1723750239223034]
This paper compares various predictive models (logistic regression, logistic regression with weight of evidence transformations, and modern artificial intelligence algorithms) and shows that advanced tree-based models give the best results in predicting client default.
We also show how to boost advanced models using techniques that make them interpretable and more accessible to credit risk practitioners (a generic weight-of-evidence sketch appears after this list).
arXiv Detail & Related papers (2021-04-14T09:44:04Z)
- Transparency, Auditability and eXplainability of Machine Learning Models in Credit Scoring [4.370097023410272]
This paper works out different dimensions that have to be considered for making credit scoring models understandable.
We present an overview of techniques, demonstrate how they can be applied in credit scoring and how results compare to the interpretability of score cards.
arXiv Detail & Related papers (2020-09-28T15:00:13Z)
- Machine Learning approach for Credit Scoring [0.0]
We build a stack of machine learning models aimed at composing a state-of-the-art credit rating and default prediction system.
Our approach is an excursion through the most recent ML / AI concepts.
arXiv Detail & Related papers (2020-07-20T21:29:06Z)
- Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
arXiv Detail & Related papers (2020-05-06T01:51:30Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
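As promised in the credit-scoring entry above, here is a minimal weight-of-evidence (WoE) sketch. It is a generic illustration of the transformation that entry names, not that paper's implementation; the quantile binning, the smoothing constant, and the toy data are assumptions.

```python
# Generic weight-of-evidence (WoE) transformation: bin a predictor, then
# replace each bin with log(%good / %bad) among its observations. A sketch
# under assumed binning and smoothing, not any paper's exact implementation.
import numpy as np
import pandas as pd

def woe_transform(x: pd.Series, y: pd.Series, n_bins: int = 5) -> pd.Series:
    """Map a numeric predictor to its weight of evidence per quantile bin.
    y is a 0/1 default flag (1 = default, i.e. 'bad')."""
    bins = pd.qcut(x, q=n_bins, duplicates="drop")
    good = y.eq(0).groupby(bins, observed=True).sum()
    bad = y.eq(1).groupby(bins, observed=True).sum()
    # Laplace smoothing so empty cells cannot produce infinite WoE.
    dist_good = (good + 0.5) / (good.sum() + 0.5 * len(good))
    dist_bad = (bad + 0.5) / (bad.sum() + 0.5 * len(bad))
    woe = np.log(dist_good / dist_bad)
    return bins.map(woe).astype(float)

# Toy usage with a synthetic predictor and default flag.
rng = np.random.default_rng(1)
x = pd.Series(rng.normal(size=1_000), name="leverage")
y = ((x + rng.normal(scale=1.0, size=1_000)) > 1).astype(int)
print(woe_transform(x, y).head())
```

Scorecard developers typically feed WoE-encoded predictors into a logistic regression, so every input is already on a log-odds scale and each coefficient has a direct odds interpretation.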
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.