Exploring Local Explanations of Nonlinear Models Using Animated Linear
Projections
- URL: http://arxiv.org/abs/2205.05359v3
- Date: Fri, 19 Jan 2024 01:30:56 GMT
- Title: Exploring Local Explanations of Nonlinear Models Using Animated Linear
Projections
- Authors: Nicholas Spyrison, Dianne Cook, Przemyslaw Biecek
- Abstract summary: We show how eXplainable AI (XAI) methods can shed light on how a model uses predictors to arrive at a prediction.
To understand how the interaction between predictors affects the variable importance estimate, we can convert local variable attributions (LVAs) into linear projections.
The approach is illustrated with examples from categorical (penguin species, chocolate types) and quantitative (soccer/football salaries, house prices) response models.
- Score: 5.524804393257921
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The increased predictive power of machine learning models comes at the cost
of increased complexity and loss of interpretability, particularly in
comparison to parametric statistical models. This trade-off has led to the
emergence of eXplainable AI (XAI) which provides methods, such as local
explanations (LEs) and local variable attributions (LVAs), to shed light on how
a model uses predictors to arrive at a prediction. These provide a point
estimate of the linear variable importance in the vicinity of a single
observation. However, LVAs tend not to effectively handle association between
predictors. To understand how the interaction between predictors affects the
variable importance estimate, we can convert LVAs into linear projections and
use the radial tour. This is also useful for learning how a model has made a
mistake, or the effect of outliers, or the clustering of observations. The
approach is illustrated with examples from categorical (penguin species,
chocolate types) and quantitative (soccer/football salaries, house prices)
response models. The methods are implemented in the R package cheem, available
on CRAN.
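The core step the abstract describes, turning a local variable attribution into a linear projection that a radial tour can then vary, can be sketched generically. The following is a minimal illustration in Python/NumPy, not the cheem package's actual API; the names (attribution_to_basis, radial_tour_frames), the hypothetical attribution vector, and the random data are assumptions made purely for illustration.

```python
import numpy as np

def attribution_to_basis(attribution):
    """Normalize a local variable attribution (e.g., SHAP values for one
    observation) into a unit-length 1D projection basis."""
    v = np.asarray(attribution, dtype=float)
    norm = np.linalg.norm(v)
    if norm == 0:
        raise ValueError("attribution vector has zero length")
    return v / norm

def radial_tour_frames(basis, var_idx, n_frames=10):
    """Generate frames that shrink one variable's contribution from its
    current value to zero, renormalizing each frame -- the kind of
    interpolation a radial tour animates."""
    frames = []
    for w in np.linspace(1.0, 0.0, n_frames):
        frame = basis.copy()
        frame[var_idx] *= w
        n = np.linalg.norm(frame)
        frames.append(frame / n if n > 0 else frame)
    return frames

# Illustrative use with made-up values (not from the paper's case studies):
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                 # predictors for 100 observations
lva = np.array([0.6, -0.2, 0.1, 0.05])        # hypothetical local attribution
basis = attribution_to_basis(lva)
projected = X @ basis                         # 1D projection of all observations
frames = radial_tour_frames(basis, var_idx=0) # vary variable 1's contribution
```

The sketch only conveys the geometric conversion; the animated tour display and the case studies in the paper are provided by the cheem package on CRAN.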
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - Estimating Causal Effects from Learned Causal Networks [56.14597641617531]
We propose an alternative paradigm for answering causal-effect queries over discrete observable variables.
We learn the causal Bayesian network and its confounding latent variables directly from the observational data.
We show that this model completion learning approach can be more effective than estimand approaches.
arXiv Detail & Related papers (2024-08-26T08:39:09Z) - Theoretical Evaluation of Asymmetric Shapley Values for Root-Cause
Analysis [0.0]
Asymmetric Shapley Values (ASV) is a variant of the popular SHAP additive local explanation method.
We show how local contributions correspond to global contributions of variance reduction.
We identify generalized additive models (GAM) as a restricted class for which ASV exhibits desirable properties.
arXiv Detail & Related papers (2023-10-15T21:40:16Z) - Generalizing Backpropagation for Gradient-Based Interpretability [103.2998254573497]
We show that the gradient of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics.
arXiv Detail & Related papers (2023-07-06T15:19:53Z) - Lazy Estimation of Variable Importance for Large Neural Networks [22.95405462638975]
We propose a fast and flexible method for approximating the reduced model with important inferential guarantees.
We demonstrate our method is fast and accurate under several data-generating regimes, and we demonstrate its real-world applicability on a seasonal climate forecasting example.
arXiv Detail & Related papers (2022-07-19T06:28:17Z) - On the Strong Correlation Between Model Invariance and Generalization [54.812786542023325]
Generalization captures a model's ability to classify unseen data.
Invariance measures consistency of model predictions on transformations of the data.
From a dataset-centric view, we find that a given model's accuracy and invariance are linearly correlated across different test sets.
arXiv Detail & Related papers (2022-07-14T17:08:25Z) - Benign-Overfitting in Conditional Average Treatment Effect Prediction
with Linear Regression [14.493176427999028]
We study the benign overfitting theory in the prediction of the conditional average treatment effect (CATE) with linear regression models.
We show that the T-learner fails to achieve consistency except under random assignment, while the IPW-learner's risk converges to zero if the propensity score is known.
arXiv Detail & Related papers (2022-02-10T18:51:52Z) - Instance-Based Neural Dependency Parsing [56.63500180843504]
We develop neural models that possess an interpretable inference process for dependency parsing.
Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set.
arXiv Detail & Related papers (2021-09-28T05:30:52Z) - Gaussian Function On Response Surface Estimation [12.35564140065216]
We propose a new framework for interpreting (features and samples) black-box machine learning models via a metamodeling technique.
The metamodel can be estimated from data generated via a trained complex model by running the computer experiment on samples of data in the region of interest.
arXiv Detail & Related papers (2021-01-04T04:47:00Z) - A Locally Adaptive Interpretable Regression [7.4267694612331905]
Linear regression is one of the most interpretable prediction models.
In this work, we introduce a locally adaptive interpretable regression (LoAIR).
Our model achieves comparable or better predictive performance than the other state-of-the-art baselines.
arXiv Detail & Related papers (2020-05-07T09:26:14Z) - Value-driven Hindsight Modelling [68.658900923595]
Value estimation is a critical component of the reinforcement learning (RL) paradigm.
Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function.
We develop an approach for representation learning in RL that sits in between these two extremes.
This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function.
arXiv Detail & Related papers (2020-02-19T18:10:20Z)