Accurate and Intuitive Contextual Explanations using Linear Model Trees
- URL: http://arxiv.org/abs/2009.05322v1
- Date: Fri, 11 Sep 2020 10:13:12 GMT
- Title: Accurate and Intuitive Contextual Explanations using Linear Model Trees
- Authors: Aditya Lahiri, Narayanan Unny Edakunni
- Abstract summary: Local post hoc model explanations have gained massive adoption.
Current state-of-the-art methods use rudimentary techniques to generate synthetic data around the point to be explained.
We use a Generative Adversarial Network for synthetic data generation and train a piecewise linear model in the form of Linear Model Trees.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the ever-increasing use of complex machine learning models in critical
applications within the finance domain, explaining the decisions of the model
has become a necessity. With applications spanning from credit scoring to
credit marketing, the impact of these models is undeniable. Among the multiple
ways in which one can explain the decisions of these complicated models, local
post hoc model-agnostic explanations have gained massive adoption. These
methods allow one to explain each prediction independently of the modelling
technique used during training. As explanations, they either give individual
feature attributions or provide sufficient rules that represent conditions for
a prediction to be made. The current state-of-the-art methods use rudimentary
techniques to generate synthetic data around the point to be explained. This is
followed by fitting simple linear models as surrogates to obtain a local
interpretation of the prediction. In this paper, we seek to significantly
improve on both the method used to generate the explanations and the nature of
the explanations produced. We use a Generative Adversarial Network for
synthetic data generation and train a piecewise linear model in the form of
Linear Model Trees to be used as the surrogate model. In addition to individual
feature attributions, we also provide an accompanying context to our
explanations by leveraging the structure and properties of our surrogate model.
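As a rough illustration of the pipeline the abstract describes (synthetic neighborhood generation, a piecewise-linear surrogate, and per-leaf attributions plus context), here is a minimal Python sketch. It is not the authors' implementation: `sample_neighborhood` is a hypothetical stand-in for the GAN generator, and a shallow decision tree with per-leaf linear regressions is used as a simplified substitute for a Linear Model Tree.

```python
# Minimal sketch, assuming `black_box` is any trained model exposing .predict
# and `sample_neighborhood(x0, n)` produces synthetic points around x0
# (standing in for the GAN-based generator described in the paper).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

def explain_locally(black_box, sample_neighborhood, x0,
                    n_samples=2000, max_leaf_nodes=8):
    # 1. Generate synthetic data around the instance to be explained.
    X_syn = sample_neighborhood(x0, n_samples)   # shape (n_samples, n_features)
    y_syn = black_box.predict(X_syn)             # black-box outputs to imitate

    # 2. Approximate a Linear Model Tree: a shallow tree partitions the
    #    neighborhood, and a linear model is fit inside each leaf.
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes,
                                 min_samples_leaf=50).fit(X_syn, y_syn)
    leaf_ids = tree.apply(X_syn)
    leaf_models = {
        leaf: LinearRegression().fit(X_syn[leaf_ids == leaf],
                                     y_syn[leaf_ids == leaf])
        for leaf in np.unique(leaf_ids)
    }

    # 3. The leaf containing x0 supplies the local feature attributions (its
    #    coefficients); the splits on the path to that leaf give the context.
    x0 = np.asarray(x0).reshape(1, -1)
    leaf_of_x0 = tree.apply(x0)[0]
    attributions = leaf_models[leaf_of_x0].coef_
    context = tree.decision_path(x0)             # split conditions along the path
    return attributions, context
```

In this sketch, reading the coefficients of the leaf that contains the instance plays the role of the individual feature attributions, while the split conditions on the path to that leaf correspond to the accompanying context the paper derives from the surrogate's structure.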
Related papers
- Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales [3.242050660144211]
Saliency post-hoc explainability methods are important tools for understanding increasingly complex NLP models.
We present a methodology for incorporating rationales, which are text annotations explaining human decisions, into text classification models.
arXiv Detail & Related papers (2024-04-03T22:39:33Z) - Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z) - Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z) - Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern
Classification [0.0]
We propose an LTCN-based model for interpretable pattern classification of structured data.
Our method brings its own mechanism for providing explanations by quantifying the relevance of each feature in the decision process.
Our interpretable model obtains competitive performance when compared to state-of-the-art white-box and black-box models.
arXiv Detail & Related papers (2021-07-07T18:14:50Z) - Model-agnostic multi-objective approach for the evolutionary discovery
of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable
Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
arXiv Detail & Related papers (2021-01-21T01:46:36Z) - Explaining predictive models with mixed features using Shapley values
and conditional inference trees [1.8065361710947976]
Shapley values stand out as a sound method to explain predictions from any type of machine learning model.
We propose a method to explain mixed dependent features by modeling the dependence structure of the features using conditional inference trees.
arXiv Detail & Related papers (2020-07-02T11:25:45Z) - Explainable Matrix -- Visualization for Global and Local
Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix's applicability is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models.
arXiv Detail & Related papers (2020-05-08T21:03:48Z)