Look Who's Talking: Interpretable Machine Learning for Assessing Italian
SMEs Credit Default
- URL: http://arxiv.org/abs/2108.13914v2
- Date: Wed, 1 Sep 2021 07:29:21 GMT
- Title: Look Who's Talking: Interpretable Machine Learning for Assessing Italian
SMEs Credit Default
- Authors: Lisa Crosato, Caterina Liberati and Marco Repetto
- Abstract summary: This paper relies on a model-agnostic approach to model firms' default prediction.
Two Machine Learning algorithms (eXtreme Gradient Boosting and FeedForward Neural Network) are compared to three standard discriminant models.
Results show that our analysis of the Italian Small and Medium Enterprises manufacturing industry benefits from the overall highest classification power by the eXtreme Gradient Boosting algorithm.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Academic research and the financial industry have recently paid great
attention to Machine Learning algorithms due to their power to solve complex
learning tasks. In the field of firms' default prediction, however, the lack of
interpretability has prevented the extensive adoption of black-box models. To
overcome this drawback while maintaining the high performance of black-boxes,
this paper relies on a model-agnostic approach. Accumulated Local
Effects and Shapley values are used to shape the predictors' impact on the
likelihood of default and rank them according to their contribution to the
model outcome. Prediction is achieved by two Machine Learning algorithms
(eXtreme Gradient Boosting and FeedForward Neural Network) compared with three
standard discriminant models. Results show that, in our analysis of the Italian
Small and Medium Enterprise manufacturing industry, the eXtreme Gradient
Boosting algorithm attains the highest overall classification power without
giving up a rich interpretation framework.
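To make the pipeline concrete, here is a minimal sketch (not the authors' code; the synthetic balance-sheet predictors are illustrative assumptions) of fitting an XGBoost default classifier and probing it with Shapley values and a hand-rolled first-order Accumulated Local Effects curve:

```python
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(1.5, 0.5, n),  # hypothetical liquidity ratio
    rng.normal(0.4, 0.2, n),  # hypothetical leverage
])
# Toy rule: default is likelier for low liquidity and high leverage.
y = rng.binomial(1, 1 / (1 + np.exp(2.0 * X[:, 0] - 3.0 * X[:, 1])))

model = XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)

# Shapley values rank each predictor's contribution per firm.
shap_values = shap.TreeExplainer(model).shap_values(X)
print("mean |SHAP| per predictor:", np.abs(shap_values).mean(axis=0))

def ale_curve(model, X, j, n_bins=20):
    """First-order ALE of feature j on the predicted default probability."""
    edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
    effects = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (X[:, j] >= lo) & (X[:, j] <= hi)
        if not mask.any():
            effects.append(0.0)
            continue
        X_lo, X_hi = X[mask].copy(), X[mask].copy()
        X_lo[:, j], X_hi[:, j] = lo, hi
        gap = model.predict_proba(X_hi)[:, 1] - model.predict_proba(X_lo)[:, 1]
        effects.append(gap.mean())
    ale = np.cumsum(effects)
    return edges[1:], ale - ale.mean()  # centered, by convention

grid, ale = ale_curve(model, X, j=1)
print("ALE of leverage at bin upper edges:", np.round(ale, 3))
```

Packaged implementations of ALE exist (e.g., the PyALE package); the hand-rolled version above just makes the bin-and-difference logic explicit.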
Related papers
- Leveraging Convolutional Neural Network-Transformer Synergy for Predictive Modeling in Risk-Based Applications [5.914777314371152]
This paper proposes a deep learning model based on the combination of convolutional neural networks (CNN) and Transformer for credit user default prediction.
The results show that the CNN+Transformer model outperforms traditional machine learning models, such as random forests and XGBoost.
This study provides a new idea for credit default prediction and provides strong support for risk assessment and intelligent decision-making in the financial field.
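For illustration only (an assumed form, not the cited paper's exact architecture), such a hybrid can be sketched as a 1-D convolution extracting local patterns from the feature vector, followed by a Transformer encoder modeling global interactions:

```python
import torch
import torch.nn as nn

class CNNTransformer(nn.Module):
    def __init__(self, n_features, d_model=32):
        super().__init__()
        # Treat the feature vector as a length-n_features "sequence".
        self.conv = nn.Conv1d(1, d_model, kernel_size=3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                    # x: (batch, n_features)
        z = self.conv(x.unsqueeze(1))        # (batch, d_model, n_features)
        z = self.encoder(z.transpose(1, 2))  # (batch, n_features, d_model)
        return self.head(z.mean(dim=1))      # pooled logit per borrower

logits = CNNTransformer(n_features=16)(torch.randn(8, 16))
print(logits.shape)  # torch.Size([8, 1])
```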
arXiv Detail & Related papers (2024-12-24T07:07:14Z) - RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models [60.596005921295806]
Agglomerative models have emerged as a powerful approach to training vision foundation models.
We identify critical challenges including resolution mode shifts, teacher imbalance, idiosyncratic teacher artifacts, and an excessive number of output tokens.
We propose several novel solutions: multi-resolution training, mosaic augmentation, and improved balancing of teacher loss functions.
arXiv Detail & Related papers (2024-12-10T17:06:41Z) - Language Model Meets Prototypes: Towards Interpretable Text Classification Models through Prototypical Networks [1.1711824752079485]
This dissertation focuses on developing intrinsically interpretable models that use LMs as encoders.
I developed a novel white-box multi-head graph attention-based prototype network.
I am extending the attention-based prototype network with contrastive learning, redesigning it as an interpretable graph neural network.
arXiv Detail & Related papers (2024-12-04T22:59:35Z) - Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find that deployed machine learning is prone to systemic failure, meaning that some users are misclassified by every model available.
These examples demonstrate that ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
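The notion of systemic failure is easy to operationalize; a toy sketch (illustrative, not the paper's code) flags users misclassified by every deployed model:

```python
import numpy as np

y = np.array([1, 0, 1, 1, 0])               # true labels for 5 users
preds = np.array([[1, 0, 0, 1, 1],          # model A's predictions
                  [1, 1, 0, 1, 1],          # model B's predictions
                  [1, 0, 0, 0, 1]])         # model C's predictions
systemic = (preds != y).all(axis=0)         # wrong under *all* models
print("systemically failed users:", np.where(systemic)[0])  # -> [2, 4]
```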
arXiv Detail & Related papers (2023-07-12T01:11:52Z) - Hessian-based toolbox for reliable and interpretable machine learning in
physics [58.720142291102135]
We present a toolbox for interpretability and reliability that is agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an agnostic score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
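The influence ingredient follows the classic Hessian-based influence-function recipe, influence ≈ -∇f(x_test)ᵀ H⁻¹ ∇L_i; below is a hedged numpy sketch on a toy logistic model (my illustration, not the toolbox's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ w_true)))

sigmoid = lambda z: 1 / (1 + np.exp(-z))
lam = 1e-2  # L2 regularization keeps the Hessian invertible

# Fit by Newton's method (a few steps suffice for this toy problem).
w = np.zeros(3)
for _ in range(20):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) + lam * w
    H = X.T @ (X * (p * (1 - p))[:, None]) + lam * np.eye(3)
    w -= np.linalg.solve(H, grad)

# Recompute the Hessian at the converged weights.
p = sigmoid(X @ w)
H = X.T @ (X * (p * (1 - p))[:, None]) + lam * np.eye(3)

x_test = np.array([0.5, 0.5, 0.5])
p_test = sigmoid(x_test @ w)
grad_test = p_test * (1 - p_test) * x_test       # d prediction / d w
grad_train = (p - y)[:, None] * X                # per-sample loss gradients
influence = -grad_train @ np.linalg.solve(H, grad_test)
print("most influential training points:", np.argsort(np.abs(influence))[-3:])
```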
arXiv Detail & Related papers (2021-08-04T16:32:59Z) - Enabling Machine Learning Algorithms for Credit Scoring -- Explainable
Artificial Intelligence (XAI) methods for clear understanding complex
predictive models [2.1723750239223034]
This paper compares several predictive models (logistic regression, logistic regression with weight-of-evidence transformations, and modern artificial intelligence algorithms) and shows that advanced tree-based models give the best results in predicting client default.
We also show how to boost the advanced models using techniques that make them interpretable and more accessible to credit risk practitioners.
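For readers unfamiliar with it, the weight-of-evidence transformation encodes each bin of a predictor by the log ratio of its shares of good and bad clients; a minimal sketch (an assumed implementation, not the paper's code):

```python
import numpy as np
import pandas as pd

def woe_encode(x, y, n_bins=5, eps=0.5):
    """Map each quantile bin of x to its weight of evidence.
    y: 1 = default ('bad'), 0 = non-default ('good'); eps smooths empty bins."""
    df = pd.DataFrame({"bin": pd.qcut(x, q=n_bins, duplicates="drop"), "bad": y})
    grp = df.groupby("bin", observed=True)["bad"]
    bad = grp.sum() + eps
    good = grp.count() - grp.sum() + eps
    woe = np.log((good / good.sum()) / (bad / bad.sum()))
    return df["bin"].map(woe).astype(float), woe

rng = np.random.default_rng(1)
leverage = rng.normal(0.4, 0.2, 1000)  # hypothetical predictor
default = rng.binomial(1, 1 / (1 + np.exp(2 - 4 * leverage)))
encoded, table = woe_encode(leverage, default)
print(table)  # WoE per leverage bin; monotone if risk is monotone
```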
arXiv Detail & Related papers (2021-04-14T09:44:04Z) - Transparency, Auditability and eXplainability of Machine Learning Models
in Credit Scoring [4.370097023410272]
This paper works out the different dimensions that must be considered to make credit scoring models understandable.
We present an overview of techniques, demonstrate how they can be applied in credit scoring, and show how the results compare to the interpretability of scorecards.
arXiv Detail & Related papers (2020-09-28T15:00:13Z) - Machine Learning approach for Credit Scoring [0.0]
We build a stack of machine learning models aimed at composing a state-of-the-art credit rating and default prediction system.
Our approach is an excursion through the most recent ML / AI concepts.
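Such a stack can be sketched with scikit-learn's StackingClassifier (an assumed composition, not the paper's exact system):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=LogisticRegression(),  # meta-learner on base outputs
)
stack.fit(X, y)
print("default probabilities:", stack.predict_proba(X[:3])[:, 1])
```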
arXiv Detail & Related papers (2020-07-20T21:29:06Z) - Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
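The additive structure is what buys interpretability: the item score decomposes into per-feature shape functions that can be inspected one at a time. A minimal sketch of the general form (my assumption, not the paper's model):

```python
import torch
import torch.nn as nn

class RankingGAM(nn.Module):
    def __init__(self, n_features, hidden=16):
        super().__init__()
        # One small shape function per feature; the score is their sum.
        self.shape_fns = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )

    def forward(self, x):  # x: (batch, n_features) -> additive score
        return sum(f(x[:, j:j + 1]) for j, f in enumerate(self.shape_fns))

model = RankingGAM(n_features=5)
docs = torch.randn(10, 5)               # 10 candidate documents
scores = model(docs).squeeze(-1)
print(scores.argsort(descending=True))  # ranking induced by the GAM
```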
arXiv Detail & Related papers (2020-05-06T01:51:30Z) - Plausible Counterfactuals: Auditing Deep Learning Classifiers with
Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z) - Rethinking Generalization of Neural Models: A Named Entity Recognition
Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)