An Interpretable Loan Credit Evaluation Method Based on Rule
Representation Learner
- URL: http://arxiv.org/abs/2304.00731v1
- Date: Mon, 3 Apr 2023 05:55:04 GMT
- Title: An Interpretable Loan Credit Evaluation Method Based on Rule
Representation Learner
- Authors: Zihao Chen, Xiaomeng Wang, Yuanjiang Huang, Tao Jia
- Abstract summary: We design an intrinsically interpretable model based on RRL (Rule Representation Learner) for the Lending Club dataset.
During training, we adopt tricks from previous research to effectively train the binary weights.
Our model is used to test the correctness of the explanations generated by the post-hoc method.
- Score: 8.08640000394814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The interpretability of models has become one of the obstacles to their wide
application in high-stakes fields. The usual way to obtain interpretability is to build a
black-box model first and then explain it with post-hoc methods. However, the explanations
provided by post-hoc methods are not always reliable. Instead, we design an intrinsically
interpretable model based on RRL (Rule Representation Learner) for the Lending Club dataset.
Specifically, the features are divided into three categories according to their own
characteristics, and a sub-network is built for each category; each sub-network is similar to
a neural network with a single hidden layer but can be equivalently converted into a set of
rules. During training, we adopt tricks from previous research to effectively train the binary
weights. Finally, our model is compared with tree-based models. The results show that our
model clearly outperforms the interpretable decision tree and comes close to the black-box
models in performance, which is of practical significance to both financial institutions and
borrowers. More importantly, our model is used to test the correctness of the explanations
generated by a post-hoc method, and the results show that the post-hoc method is not always
reliable.
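To make the architecture concrete, below is a minimal sketch (not the authors' code) of what one such rule-style sub-network might look like in PyTorch. It assumes the input features have already been binned into 0/1 conditions, and it uses a straight-through estimator as a stand-in for the binary-weight training tricks the abstract alludes to (the RRL line of work uses gradient grafting); class names and sizes are illustrative.

```python
import torch
import torch.nn as nn


class BinarySTE(torch.autograd.Function):
    """Hard 0/1 threshold in the forward pass, identity gradient in the backward pass."""

    @staticmethod
    def forward(ctx, w):
        return (w > 0.5).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pass gradients to the real-valued weights unchanged.
        return grad_output


class ConjunctionLayer(nn.Module):
    """Each output node approximates an AND over the binary input conditions it selects."""

    def __init__(self, n_in: int, n_rules: int):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(n_rules, n_in))

    def forward(self, x):
        # x: (batch, n_in) with entries in {0, 1} (pre-binned feature conditions)
        w = BinarySTE.apply(self.weight)  # (n_rules, n_in), entries in {0, 1}
        # 1 - w * (1 - x) is 0 only when a selected condition (w = 1) is violated (x = 0),
        # so the product over inputs equals the AND of the selected conditions.
        return torch.prod(1.0 - w.unsqueeze(0) * (1.0 - x.unsqueeze(1)), dim=-1)


class RuleSubNetwork(nn.Module):
    """One of the three sub-networks: a single rule (conjunction) layer feeding a linear head."""

    def __init__(self, n_in: int, n_rules: int = 32):
        super().__init__()
        self.rules = ConjunctionLayer(n_in, n_rules)
        self.head = nn.Linear(n_rules, 1)

    def forward(self, x):
        return self.head(self.rules(x))
```

After training, each rule can be read off directly: rule i is the AND of the conditions j for which the binarized weight[i, j] equals 1, so the sub-network reduces to a weighted rule set. This equivalence is what makes such a model intrinsically interpretable rather than dependent on post-hoc explanation.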
Related papers
- Inherently Interpretable Tree Ensemble Learning [7.868733904112288]
We show that when shallow decision trees are used as base learners, the ensemble learning algorithms can become inherently interpretable.
An interpretation algorithm is developed that converts the tree ensemble into the functional ANOVA representation with inherent interpretability.
Experiments on simulations and real-world datasets show that our proposed methods offer a better trade-off between model interpretation and predictive performance.
arXiv Detail & Related papers (2024-10-24T18:58:41Z) - RewardBench: Evaluating Reward Models for Language Modeling [100.28366840977966]
We present RewardBench, a benchmark dataset and code-base for evaluation of reward models.
The dataset is a collection of prompt-chosen-rejected trios spanning chat, reasoning, and safety.
On the RewardBench leaderboard, we evaluate reward models trained with a variety of methods.
arXiv Detail & Related papers (2024-03-20T17:49:54Z) - CLIMAX: An exploration of Classifier-Based Contrastive Explanations [5.381004207943597]
We propose a novel post-hoc model XAI technique that provides contrastive explanations justifying the classification of a black box.
Our method, which we refer to as CLIMAX, is based on local classifiers.
We show that we achieve better consistency as compared to baselines such as LIME, BayLIME, and SLIME.
arXiv Detail & Related papers (2023-07-02T22:52:58Z) - Preserving Knowledge Invariance: Rethinking Robustness Evaluation of
Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
By further elaborating the robustness metric, a model is judged to be robust if its performance is consistently accurate over whole cliques.
arXiv Detail & Related papers (2023-05-23T12:05:09Z) - Robust Outlier Rejection for 3D Registration with Variational Bayes [70.98659381852787]
We develop a novel variational non-local network-based outlier rejection framework for robust alignment.
We propose a voting-based inlier searching strategy to cluster the high-quality hypothetical inliers for transformation estimation.
arXiv Detail & Related papers (2023-04-04T03:48:56Z) - Using Decision Tree as Local Interpretable Model in Autoencoder-based
LIME [0.76146285961466]
We present a modified version of an autoencoder-based approach for local interpretability called ALIME.
This work proposes a new approach, which uses a decision tree instead of the linear model, as the interpretable model.
Compared to ALIME, the experiments show significant results on stability and local fidelity and improved results on interpretability.
arXiv Detail & Related papers (2022-04-07T09:39:02Z) - Consistent Explanations by Contrastive Learning [15.80891456718324]
Post-hoc explanation techniques, such as Grad-CAM, enable humans to inspect the spatial regions responsible for a particular network decision.
We introduce a novel training method to train the model to produce more consistent explanations.
We show that our method, Contrastive Grad-CAM Consistency (CGC), results in Grad-CAM interpretation heatmaps that are consistent with human annotations.
arXiv Detail & Related papers (2021-10-01T16:49:16Z) - Thought Flow Nets: From Single Predictions to Trains of Model Thought [39.619001911390804]
When humans solve complex problems, they rarely come up with a decision right away.
Instead, they start with an intuitive decision, reflect upon it, spot mistakes, resolve contradictions, and jump between different hypotheses.
arXiv Detail & Related papers (2021-07-26T13:56:37Z) - Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
arXiv Detail & Related papers (2021-01-21T01:46:36Z) - Why do you think that? Exploring Faithful Sentence-Level Rationales
Without Supervision [60.62434362997016]
We propose a differentiable training-framework to create models which output faithful rationales on a sentence level.
Our model solves the task based on each rationale individually and learns to assign high scores to those which solved the task best.
arXiv Detail & Related papers (2020-10-07T12:54:28Z) - Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
arXiv Detail & Related papers (2020-05-06T01:51:30Z)