A comparative study of forecasting Corporate Credit Ratings using Neural
Networks, Support Vector Machines, and Decision Trees
- URL: http://arxiv.org/abs/2007.06617v1
- Date: Mon, 13 Jul 2020 18:47:20 GMT
- Title: A comparative study of forecasting Corporate Credit Ratings using Neural
Networks, Support Vector Machines, and Decision Trees
- Authors: Parisa Golbayani, Ionuț Florescu, Rupak Chatterjee
- Abstract summary: Credit ratings are one of the primary indicators of the riskiness and reliability of corporations in meeting their financial obligations.
Successful machine learning methods can provide rapid analysis of credit scores while updating older ones on a daily time scale.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Credit ratings are one of the primary indicators of the riskiness
and reliability of corporations in meeting their financial obligations. Rating
agencies tend to take extended periods of time to provide new ratings and
update older ones. Therefore, credit scoring assessments using artificial
intelligence have gained a lot of interest in recent years. Successful machine
learning methods can provide rapid analysis of credit scores while updating
older ones on a daily time scale. Related studies have shown that neural
networks and support vector machines outperform other techniques by providing
better prediction accuracy. The purpose of this paper is twofold. First, we
provide a survey and a comparative analysis of results from the literature
applying machine learning techniques to predict credit ratings. Second, we
ourselves apply four machine learning techniques deemed useful in previous
studies (Bagged Decision Trees, Random Forest, Support Vector Machine and
Multilayer Perceptron) to the same datasets. We evaluate the results using
10-fold cross validation. For the datasets chosen, the results show superior
performance for decision tree based models. In addition to the conventional
accuracy measure for classifiers, we introduce a notch-based measure of
accuracy called "Notch Distance" to analyze the performance of the above
classifiers in the specific context of credit rating. This measure tells us
how far the predictions are from the true ratings. We further compare the
ratings of three major rating agencies, Standard & Poor's, Moody's and Fitch,
and show that the differences among their ratings are comparable to the
difference between the decision tree predictions and the actual ratings on the
test dataset.
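The "Notch Distance" measure described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' exact implementation: ratings are mapped to positions on an ordinal scale (an S&P-style scale is assumed here), and the distance is the mean absolute difference, in notches, between predicted and true ratings.

```python
# Illustrative sketch of a "Notch Distance" metric for credit-rating
# classifiers. The rating scale and the averaging choice are assumptions;
# the paper's exact definition may differ.

# Ordinal positions on an S&P-style rating scale (a common convention).
RATING_SCALE = [
    "AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
    "BBB+", "BBB", "BBB-", "BB+", "BB", "BB-",
    "B+", "B", "B-", "CCC+", "CCC", "CCC-", "CC", "C", "D",
]
NOTCH = {rating: i for i, rating in enumerate(RATING_SCALE)}

def notch_distance(y_true, y_pred):
    """Mean absolute distance, in notches, between predicted and true ratings."""
    distances = [abs(NOTCH[t] - NOTCH[p]) for t, p in zip(y_true, y_pred)]
    return sum(distances) / len(distances)
```

Unlike plain accuracy, which treats every misclassification equally, this measure penalizes a prediction one notch away (e.g. AA vs. AA-) far less than one several notches away, which matters in credit rating where adjacent grades are economically similar.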
Related papers
- The Certainty Ratio $C_\rho$: a novel metric for assessing the reliability of classifier predictions [0.0]
This paper introduces the Certainty Ratio ($C_\rho$), a novel metric designed to quantify the contribution of confident (certain) versus uncertain predictions to any classification performance measure.
Experimental results across 26 datasets and multiple classifiers, including Decision Trees, Naive Bayes, 3-Nearest Neighbors, and Random Forests, demonstrate that $C_\rho$ reveals critical insights that conventional metrics often overlook.
arXiv Detail & Related papers (2024-11-04T10:50:03Z) - Fighting Sampling Bias: A Framework for Training and Evaluating Credit Scoring Models [2.918530881730374]
This paper addresses the adverse effect of sampling bias on model training and evaluation.
We propose bias-aware self-learning and a reject inference framework for scorecard evaluation.
Our results suggest a profit improvement of about eight percent when using Bayesian evaluation to decide on acceptance rates.
arXiv Detail & Related papers (2024-07-17T20:59:54Z) - Preserving Knowledge Invariance: Rethinking Robustness Evaluation of
Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
By further elaborating the robustness metric, a model is judged to be robust if its performance is consistently accurate on the overall cliques.
arXiv Detail & Related papers (2023-05-23T12:05:09Z) - Incorporating Experts' Judgment into Machine Learning Models [2.5363839239628843]
In some cases, domain experts might have a judgment about the expected outcome that might conflict with the prediction of machine learning models.
We present a novel framework that aims at leveraging experts' judgment to mitigate the conflict.
arXiv Detail & Related papers (2023-04-24T07:32:49Z) - ASPEST: Bridging the Gap Between Active Learning and Selective
Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z) - Evaluating Machine Unlearning via Epistemic Uncertainty [78.27542864367821]
This work presents an evaluation of Machine Unlearning algorithms based on uncertainty.
To the best of our knowledge, this is the first definition of a general evaluation of machine unlearning.
arXiv Detail & Related papers (2022-08-23T09:37:31Z) - A Closer Look at Debiased Temporal Sentence Grounding in Videos:
Dataset, Metric, and Approach [53.727460222955266]
Temporal Sentence Grounding in Videos (TSGV) aims to ground a natural language sentence in an untrimmed video.
Recent studies have found that current benchmark datasets may have obvious moment annotation biases.
We introduce a new evaluation metric "dR@n,IoU@m" that discounts the basic recall scores to alleviate the inflating evaluation caused by biased datasets.
arXiv Detail & Related papers (2022-03-10T08:58:18Z) - Hierarchical Bi-Directional Self-Attention Networks for Paper Review
Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: sentence encoder (level one), intra-review encoder (level two) and inter-review encoder (level three)
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
arXiv Detail & Related papers (2020-11-02T08:07:50Z) - A Novel Classification Approach for Credit Scoring based on Gaussian
Mixture Models [0.0]
This paper introduces a new method for credit scoring based on Gaussian Mixture Models.
Our algorithm classifies consumers into groups which are labeled as positive or negative.
We apply our model with real world databases from Australia, Japan, and Germany.
arXiv Detail & Related papers (2020-10-26T07:34:27Z) - Evaluation Toolkit For Robustness Testing Of Automatic Essay Scoring
Systems [64.4896118325552]
We evaluate the current state-of-the-art AES models using a model adversarial evaluation scheme and associated metrics.
We find that AES models are highly overstable: even heavy modifications (as much as 25%) with content unrelated to the topic of the questions do not decrease the score produced by the models.
arXiv Detail & Related papers (2020-07-14T03:49:43Z) - Application of Deep Neural Networks to assess corporate Credit Rating [4.14084373472438]
We analyze the performance of four neural network architectures in predicting corporate credit rating as issued by Standard and Poor's.
The goal of the analysis is to improve application of machine learning algorithms to credit assessment.
arXiv Detail & Related papers (2020-03-04T21:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.