A Holistic Approach to Interpretability in Financial Lending: Models,
Visualizations, and Summary-Explanations
- URL: http://arxiv.org/abs/2106.02605v1
- Date: Fri, 4 Jun 2021 17:05:25 GMT
- Title: A Holistic Approach to Interpretability in Financial Lending: Models,
Visualizations, and Summary-Explanations
- Authors: Chaofan Chen, Kangcheng Lin, Cynthia Rudin, Yaron Shaposhnik, Sijia
Wang, Tong Wang
- Abstract summary: In a future world without such secrecy, what decision support tools would one want to use for justified lending decisions?
We propose a framework for such decisions, including a globally interpretable machine learning model, an interactive visualization of it, and several types of summaries and explanations for any given decision.
Our framework earned the FICO recognition award for the Explainable Machine Learning Challenge.
- Score: 25.05825112699133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lending decisions are usually made with proprietary models that provide
minimally acceptable explanations to users. In a future world without such
secrecy, what decision support tools would one want to use for justified
lending decisions? This question is timely, since the economy has dramatically
shifted due to a pandemic, and a massive number of new loans will be necessary
in the short term. We propose a framework for such decisions, including a
globally interpretable machine learning model, an interactive visualization of
it, and several types of summaries and explanations for any given decision. The
machine learning model is a two-layer additive risk model, which resembles a
two-layer neural network, but is decomposable into subscales. In this model,
each node in the first (hidden) layer represents a meaningful subscale model,
and all of the nonlinearities are transparent. Our online visualization tool
allows exploration of this model, showing precisely how it came to its
conclusion. We provide three types of explanations that are simpler than, but
consistent with, the global model: case-based reasoning explanations that use
neighboring past cases, a set of features that were the most important for the
model's prediction, and summary-explanations that provide a customized sparse
explanation for any particular lending decision made by the model. Our
framework earned the FICO recognition award for the Explainable Machine
Learning Challenge, which was the first public challenge in the domain of
explainable machine learning.
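The two-layer additive risk model described above is concrete enough to sketch in code. The snippet below is a minimal illustration under stated assumptions, not the authors' released implementation: each hidden node is a subscale that is itself a small additive model over a group of related features, the only nonlinearity is an explicit sigmoid, and the output layer additively combines the subscale risks. The feature names (e.g., NumDelq30Days), subscale groupings, and all coefficients are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch (an assumption, not the authors' released code) of a
# two-layer additive risk model: each hidden node is a "subscale" model,
# every nonlinearity is an explicit sigmoid, and the output layer
# additively combines the subscale risks into an overall risk score.

class TwoLayerAdditiveRiskModel:
    def __init__(self, first_layer_coefs, second_layer_coefs):
        # first_layer_coefs: {subscale_name: {feature_name: weight}}
        # second_layer_coefs: {subscale_name: weight}
        self.w1 = first_layer_coefs
        self.w2 = second_layer_coefs

    @staticmethod
    def _sigmoid(z):
        # transparent nonlinearity mapping a score to a (0, 1) risk
        return 1.0 / (1.0 + np.exp(-z))

    def subscale_risks(self, x):
        # each hidden node is a meaningful, decomposable subscale model
        return {
            name: self._sigmoid(sum(w * x[f] for f, w in coefs.items()))
            for name, coefs in self.w1.items()
        }

    def predict_risk(self, x):
        # the second layer additively combines the subscale risks
        risks = self.subscale_risks(x)
        total = sum(self.w2[name] * r for name, r in risks.items())
        return self._sigmoid(total), risks


# Hypothetical feature names, subscale groupings, and coefficients
# (illustrative assumptions, not the FICO challenge model).
model = TwoLayerAdditiveRiskModel(
    first_layer_coefs={
        "delinquency":      {"NumDelq30Days": 0.6, "NumDelq90Days": 1.2},
        "trade_history":    {"MonthsSinceOldestTrade": -0.01,
                             "NumSatisfactoryTrades": -0.05},
        "credit_inquiries": {"NumInquiriesLast6M": 0.4},
    },
    second_layer_coefs={"delinquency": 1.5,
                        "trade_history": 1.0,
                        "credit_inquiries": 0.8},
)

applicant = {"NumDelq30Days": 2, "NumDelq90Days": 1,
             "MonthsSinceOldestTrade": 180, "NumSatisfactoryTrades": 20,
             "NumInquiriesLast6M": 3}

risk, subscales = model.predict_risk(applicant)

# A sparse, decision-specific summary in the spirit of the paper's
# summary-explanations: report the subscales contributing most to the risk.
top = sorted(subscales.items(), key=lambda kv: kv[1], reverse=True)
print(f"overall risk: {risk:.3f}")
print("largest subscale risks:", top[:2])
```

Because every subscale and weight is visible, the same structure plausibly supports the explanation types listed in the abstract: case-based explanations amount to retrieving past applicants with similar subscale profiles, and feature-importance explanations can read the weighted contributions off the model directly.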
Related papers
- Models That Are Interpretable But Not Transparent [19.6420087904074]
FaithfulDefense creates explanations for logical models that are completely faithful, yet reveal as little as possible about the decision boundary.
arXiv Detail & Related papers (2025-02-26T19:05:49Z) - Diffexplainer: Towards Cross-modal Global Explanations with Diffusion Models [51.21351775178525]
DiffExplainer is a novel framework that, leveraging language-vision models, enables multimodal global explainability.
It employs diffusion models conditioned on optimized text prompts, synthesizing images that maximize class outputs.
The analysis of generated visual descriptions allows for automatic identification of biases and spurious features.
arXiv Detail & Related papers (2024-04-03T10:11:22Z) - Learning with Explanation Constraints [91.23736536228485]
We provide a learning-theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z) - VCNet: A self-explaining model for realistic counterfactual generation [52.77024349608834]
Counterfactual explanation is a class of methods for producing local explanations of machine learning decisions.
We present VCNet (Variational Counter Net), a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able to generate both predictions and counterfactual explanations without having to solve another minimisation problem.
arXiv Detail & Related papers (2022-12-21T08:45:32Z) - Motif-guided Time Series Counterfactual Explanations [1.1510009152620664]
We propose a novel model that generates intuitive post-hoc counterfactual explanations.
We validated our model using five real-world time-series datasets from the UCR repository.
arXiv Detail & Related papers (2022-11-08T17:56:50Z) - Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern
Classification [0.0]
We propose an LTCN-based model for interpretable pattern classification of structured data.
Our method brings its own mechanism for providing explanations by quantifying the relevance of each feature in the decision process.
Our interpretable model obtains competitive performance when compared to state-of-the-art white-box and black-box models.
arXiv Detail & Related papers (2021-07-07T18:14:50Z) - Dissecting Generation Modes for Abstractive Summarization Models via
Ablation and Attribution [34.2658286826597]
We propose a two-step method to interpret summarization model decisions.
We first analyze the model's behavior by ablating the full model to categorize each decoder decision into one of several generation modes.
After isolating decisions that do depend on the input, we explore interpreting these decisions using several different attribution methods.
arXiv Detail & Related papers (2021-06-03T00:54:16Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable
Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal
Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z) - Accurate and Intuitive Contextual Explanations using Linear Model Trees [0.0]
Local post hoc model explanations have gained massive adoption.
Current state-of-the-art methods use rudimentary techniques to generate synthetic data around the point to be explained.
We use a Generative Adversarial Network for synthetic data generation and train a piecewise linear model in the form of Linear Model Trees.
arXiv Detail & Related papers (2020-09-11T10:13:12Z) - Explainable Matrix -- Visualization for Global and Local
Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix's applicability is confirmed via different examples, showing how it can be used in practice to promote RF model interpretability.
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.