Explainable Enterprise Credit Rating via Deep Feature Crossing Network
- URL: http://arxiv.org/abs/2105.13843v1
- Date: Sat, 22 May 2021 02:41:50 GMT
- Title: Explainable Enterprise Credit Rating via Deep Feature Crossing Network
- Authors: Weiyu Guo, Zhijiang Yang, Shu Wu, Fu Chen
- Abstract summary: We propose a novel network to explicitly model the enterprise credit rating problem using deep neural networks (DNNs) and attention mechanisms.
The proposed model realizes explainable enterprise credit ratings. Experimental results obtained on real-world enterprise datasets verify that the proposed approach achieves higher performance than conventional methods.
- Score: 8.867320666267956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Owing to their powerful ability to learn high-rank and non-linear
features, deep neural networks (DNNs) are being applied to data mining and
machine learning in various fields, and exhibit higher discrimination
performance than conventional methods. However, DNN-based applications remain
rare in enterprise credit rating tasks because most DNNs employ the
"end-to-end" learning paradigm, which outputs high-rank representations of
objects and predictive results without any explanation. Users in the financial
industry therefore cannot understand how these high-rank representations are
generated, what they mean, or how they relate to the raw inputs. As a result,
users cannot determine whether the predictions provided by DNNs are reliable,
and do not trust the predictions produced by such "black box" models.
Therefore, in this paper, we propose a novel network that explicitly models the
enterprise credit rating problem using DNNs and attention mechanisms. The
proposed model realizes explainable enterprise credit ratings. Experimental
results obtained on real-world enterprise datasets verify that the proposed
approach achieves higher performance than conventional methods, and provides
insights into individual rating results and the reliability of model training.
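The abstract gives only the high-level design (explicit feature crossing plus attention), so the following is a minimal PyTorch sketch of that general idea rather than the authors' architecture: pairwise crosses of embedded enterprise features are scored by an attention head, and the attention weights double as per-cross explanations. All class names, dimensions, and the pooling scheme are assumptions.

```python
import torch
import torch.nn as nn

class AttentiveFeatureCrossing(nn.Module):
    """Pairwise crosses of field embeddings, scored by attention so that each
    cross's weight can be read off as an explanation (hypothetical layout)."""

    def __init__(self, embed_dim: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 1),
        )

    def forward(self, x):                          # x: (batch, fields, embed_dim)
        i, j = torch.triu_indices(x.size(1), x.size(1), offset=1)
        crosses = x[:, i] * x[:, j]                # (batch, pairs, embed_dim)
        scores = torch.softmax(self.attn(crosses).squeeze(-1), dim=1)
        pooled = (scores.unsqueeze(-1) * crosses).sum(dim=1)
        return pooled, scores                      # scores: which crosses mattered

class CreditRater(nn.Module):
    def __init__(self, embed_dim=8, num_ratings=5):
        super().__init__()
        self.cross = AttentiveFeatureCrossing(embed_dim)
        self.head = nn.Linear(embed_dim, num_ratings)

    def forward(self, x):
        pooled, attn = self.cross(x)
        return self.head(pooled), attn

model = CreditRater()
fields = torch.randn(4, 16, 8)                     # 4 enterprises, 16 embedded fields
logits, attn = model(fields)
print(logits.shape, attn.shape)                    # [4, 5] logits, [4, 120] cross weights
```

Because the attention scores are computed per explicit feature cross, each rating can be traced back to the named raw-input pairs that dominated it, which is the explainability property the abstract claims.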
Related papers
- Bayesian Entropy Neural Networks for Physics-Aware Prediction [14.705526856205454]
We introduce BENN, a framework designed to impose constraints on Bayesian Neural Network (BNN) predictions.
BENN is capable of constraining not only the predicted values but also their derivatives and variances, ensuring a more robust and reliable model output.
Results highlight significant improvements over traditional BNNs and showcase competitive performance relative to contemporary constrained deep learning methods.
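The summary says BENN constrains predictions, derivatives, and variances; the sketch below shows one hypothetical way to attach such penalties to an MC-dropout network. The specific constraints, architecture, and weights are illustrative assumptions, not BENN's Bayesian-entropy formulation.

```python
import torch
import torch.nn as nn

# Hypothetical constraint penalties on an MC-dropout network: keep the mean
# prediction non-negative and monotone in the first input, and cap predictive
# variance. BENN's actual Bayesian-entropy machinery differs.
net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Dropout(0.1), nn.Linear(64, 1))

def constrained_loss(x, y, n_samples=8, var_budget=0.25):
    x = x.requires_grad_(True)
    preds = torch.stack([net(x) for _ in range(n_samples)])  # dropout stays on
    mean, var = preds.mean(0), preds.var(0)
    fit = ((mean - y) ** 2).mean()                           # data fit
    grad = torch.autograd.grad(mean.sum(), x, create_graph=True)[0]
    mono = torch.relu(-grad[:, 0]).mean()                    # d(mean)/dx0 >= 0
    nonneg = torch.relu(-mean).mean()                        # mean >= 0
    varpen = torch.relu(var - var_budget).mean()             # var <= budget
    return fit + mono + nonneg + varpen

x, y = torch.rand(32, 3), torch.rand(32, 1)
loss = constrained_loss(x, y)
loss.backward()
print(float(loss))
```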
arXiv Detail & Related papers (2024-07-01T07:00:44Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
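As a rough illustration of distributing uncertainty among semantic concepts, the sketch below predicts a trajectory as a sum of concept-specific terms, each with its own variance head, so uncertainty can be attributed per concept. The concept names and heads are invented for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical concept-level uncertainty: the trajectory is a sum of terms from
# named semantic concepts, each with its own variance head, so total
# uncertainty can be attributed per concept.
CONCEPTS = ["lane_keep", "turn", "yield"]

class ConceptTrajectoryNet(nn.Module):
    def __init__(self, in_dim=32, horizon=12):
        super().__init__()
        self.mu = nn.ModuleList([nn.Linear(in_dim, horizon * 2) for _ in CONCEPTS])
        self.logvar = nn.ModuleList([nn.Linear(in_dim, horizon * 2) for _ in CONCEPTS])

    def forward(self, h):
        mus = torch.stack([m(h) for m in self.mu])         # (concepts, batch, 24)
        vars_ = torch.stack([lv(h).exp() for lv in self.logvar])
        return mus.sum(0), vars_                           # mean traj + per-concept var

net = ConceptTrajectoryNet()
mean, per_concept_var = net(torch.randn(4, 32))
for name, v in zip(CONCEPTS, per_concept_var):
    print(name, float(v.mean()))                           # which concept is uncertain
```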
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper, we propose an uncertainty quantification approach that models the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct batch-ensemble stochastic neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset.
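The batch-ensemble trick referred to here shares one weight matrix across all members and gives each member cheap rank-1 scaling factors; a generic sketch of that layer (not the paper's exact BE-SNN architecture) follows, with member disagreement used as an OOD signal.

```python
import torch
import torch.nn as nn

class BatchEnsembleLinear(nn.Module):
    """Batch-ensemble linear layer: one shared weight matrix W plus per-member
    rank-1 factors r_k, s_k, so member k effectively uses W * (r_k s_k^T)."""

    def __init__(self, in_dim, out_dim, n_members):
        super().__init__()
        self.shared = nn.Linear(in_dim, out_dim, bias=False)
        self.r = nn.Parameter(torch.ones(n_members, in_dim))   # input scalers
        self.s = nn.Parameter(torch.ones(n_members, out_dim))  # output scalers
        self.n_members = n_members

    def forward(self, x):              # x: (batch, in_dim), replicated per member
        k = self.n_members
        x = x.repeat(k, 1)                                     # (k*batch, in_dim)
        r = self.r.repeat_interleave(x.size(0) // k, dim=0)
        s = self.s.repeat_interleave(x.size(0) // k, dim=0)
        return self.shared(x * r) * s                          # (k*batch, out_dim)

layer = BatchEnsembleLinear(8, 4, n_members=5)
out = layer(torch.randn(16, 8))                    # 5 ensemble views of 16 inputs
members = out.view(5, 16, 4)
print(members.var(dim=0).mean())                   # disagreement ~ OOD signal
```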
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
- AED: An black-box NLP classifier model attacker [8.15167980163668]
Deep Neural Networks (DNNs) have been successful in solving real-world tasks in domains such as connected and automated vehicles, disease, and job hiring.
There is a growing concern regarding the potential bias and robustness of these DNN models.
We propose a word-level NLP classifier attack model called "AED," which stands for Attention mechanism enabled post-model Explanation.
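The summary describes a word-level attack guided by an explanation signal. A generic importance-ranked substitution loop in that spirit is sketched below; the leave-one-out scorer, synonym table, and toy victim classifier are stand-ins, not AED's actual components.

```python
import torch

# Importance-ranked word substitution in the spirit of explanation-guided
# attacks. Everything here is a stand-in for illustration only.
SYNONYMS = {"good": ["fine", "decent"], "movie": ["film", "picture"]}

def word_importance(classify, words, label):
    base = float(classify(words)[label])
    drops = [base - float(classify(words[:k] + ["[UNK]"] + words[k + 1:])[label])
             for k in range(len(words))]
    return sorted(range(len(words)), key=lambda k: -drops[k])  # most important first

def attack(classify, words, label):
    for k in word_importance(classify, words, label):
        for sub in SYNONYMS.get(words[k], []):
            cand = words[:k] + [sub] + words[k + 1:]
            if int(classify(cand).argmax()) != label:          # label flipped
                return cand
    return None

def toy_classifier(words):                         # stand-in victim model
    score = sum(w == "good" for w in words)
    return torch.tensor([float(score), 1.0 - score]).softmax(0)

print(attack(toy_classifier, ["a", "good", "movie"], label=0))
# -> ['a', 'fine', 'movie'], which flips the toy classifier's label
```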
arXiv Detail & Related papers (2021-12-22T04:25:23Z)
- Mixture of Linear Models Co-supervised by Deep Neural Networks [14.831346286039151]
We propose an approach to fill the gap between relatively simple explainable models and deep neural network (DNN) models.
Our main idea is a mixture of discriminative models that is trained with the guidance from a DNN.
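One plausible reading of this co-supervision idea: a DNN teacher supplies soft targets, and a partition of the input space carries one interpretable linear model per region fit to those targets. The sketch below uses plain k-means for the partition, whereas the paper learns the gating jointly with the DNN; all components are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# DNN teacher provides soft targets; local linear "experts" imitate it.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

teacher = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)
soft = teacher.predict(X)                          # DNN guidance, not raw labels

parts = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
experts = [LinearRegression().fit(X[parts.labels_ == k], soft[parts.labels_ == k])
           for k in range(4)]

def explainable_predict(x):
    ks = parts.predict(x)                          # route each point to a local model
    return np.array([experts[k].predict(xi[None])[0] for k, xi in zip(ks, x)])

print(np.mean((explainable_predict(X) - y) ** 2))  # student fit vs. true labels
```

Each prediction is then a single linear model's output, so its coefficients can be read directly as the explanation for that region of the input space.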
arXiv Detail & Related papers (2021-08-05T02:08:35Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills knowledge from real-valued networks to binary networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
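Distilling on the final prediction distribution can be written as a KL loss between the real-valued teacher's softmax and the binary student's. The sketch below uses a simple straight-through sign() binarization and is only a schematic of the idea, not S2-BNN's actual training recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryLinear(nn.Linear):
    """Linear layer with sign()-binarized weights; the (w - w.detach()) term
    is a straight-through estimator so gradients pass as identity."""
    def forward(self, x):
        w = self.weight
        wb = torch.sign(w) + (w - w.detach())
        return F.linear(x, wb, self.bias)

teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
student = nn.Sequential(BinaryLinear(32, 64), nn.ReLU(), BinaryLinear(64, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(128, 32)
with torch.no_grad():
    target = F.softmax(teacher(x), dim=1)          # teacher's prediction dist.
log_q = F.log_softmax(student(x), dim=1)
loss = F.kl_div(log_q, target, reduction="batchmean")
loss.backward()
opt.step()
print(float(loss))
```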
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- A Simple Framework to Quantify Different Types of Uncertainty in Deep Neural Networks for Image Classification [0.0]
Quantifying uncertainty in a model's predictions is important as it enables the safety of an AI system to be increased.
This is crucial for applications where the cost of an error is high, such as in autonomous vehicle control, medical image analysis, financial estimations or legal fields.
We propose a complete framework to capture and quantify three known types of uncertainty in Deep Neural Networks for the task of image classification.
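A common way to quantify two of these uncertainty types is the MC-dropout decomposition: total predictive entropy splits into expected per-sample entropy (aleatoric) plus the remainder, the mutual information (epistemic). The sketch below shows that standard decomposition; the paper's third, distributional term is omitted, and the network is a stand-in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 5))

def mc_uncertainty(x, n_samples=50):
    net.train()                                    # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([F.softmax(net(x), 1) for _ in range(n_samples)])
    mean = probs.mean(0)                                        # (batch, classes)
    total = -(mean * mean.clamp_min(1e-9).log()).sum(1)         # predictive entropy
    aleatoric = -(probs * probs.clamp_min(1e-9).log()).sum(2).mean(0)
    epistemic = total - aleatoric                               # mutual information
    return total, aleatoric, epistemic

t, a, e = mc_uncertainty(torch.randn(8, 10))
print(t.mean(), a.mean(), e.mean())
```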
arXiv Detail & Related papers (2020-11-17T15:36:42Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
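GAN-based counterfactual auditing can be sketched as latent-space optimization: search for a code whose generated sample stays close to the original (hence plausible) while flipping the audited classifier. The toy generator, classifier, and loss weights below are assumptions, not the paper's multi-objective setup.

```python
import torch
import torch.nn as nn

# Latent-space counterfactual search: optimize z so G(z) stays near the audited
# sample but flips the audited classifier. G and clf are untrained stand-ins.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 28 * 28), nn.Tanh())
clf = nn.Sequential(nn.Linear(28 * 28, 32), nn.ReLU(), nn.Linear(32, 2))
for p in list(G.parameters()) + list(clf.parameters()):
    p.requires_grad_(False)                        # only the latent code is free

x0 = G(torch.randn(1, 16))                         # sample being audited
z = torch.randn(1, 16, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    x = G(z)
    flip = nn.functional.cross_entropy(clf(x), torch.tensor([1]))  # target class 1
    near = (x - x0).pow(2).mean()                  # plausibility: stay close to x0
    (flip + 0.5 * near).backward()
    opt.step()

print(clf(G(z)).softmax(dim=1))                    # counterfactual class scores
```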
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
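A minimal way to see the diversity pressure: fit every ensemble head to the labels while penalizing pairwise agreement between their predictive distributions. The cosine-similarity penalty below is a simple stand-in for the paper's adversarial information-bottleneck objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Four heads fit the labels; pairwise agreement between their predictive
# distributions is penalized to induce diversity.
heads = nn.ModuleList([nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
                       for _ in range(4)])
opt = torch.optim.Adam(heads.parameters(), lr=1e-3)

x, y = torch.randn(64, 20), torch.randint(0, 3, (64,))
probs = [F.softmax(h(x), dim=1) for h in heads]
fit = sum(F.cross_entropy(h(x), y) for h in heads)
agree = sum(F.cosine_similarity(p, q, dim=1).mean()
            for i, p in enumerate(probs) for q in probs[i + 1:])
loss = fit + 0.1 * agree
loss.backward()
opt.step()
print(float(fit), float(agree))
```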
arXiv Detail & Related papers (2020-03-10T03:10:41Z)