Shapley explainability on the data manifold
- URL: http://arxiv.org/abs/2006.01272v4
- Date: Mon, 20 Dec 2021 17:43:52 GMT
- Title: Shapley explainability on the data manifold
- Authors: Christopher Frye, Damien de Mijolla, Tom Begley, Laurence Cowton,
Megan Stanley, Ilya Feige
- Abstract summary: General implementations of Shapley explainability make an untenable assumption: that the model's features are uncorrelated.
One solution, based on generative modelling, provides flexible access to data imputations.
The other directly learns the Shapley value-function, providing performance and stability at the cost of flexibility.
- Score: 10.439136407307048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability in AI is crucial for model development, compliance with
regulation, and providing operational nuance to predictions. The Shapley
framework for explainability attributes a model's predictions to its input
features in a mathematically principled and model-agnostic way. However,
general implementations of Shapley explainability make an untenable assumption:
that the model's features are uncorrelated. In this work, we demonstrate
unambiguous drawbacks of this assumption and develop two solutions to Shapley
explainability that respect the data manifold. One solution, based on
generative modelling, provides flexible access to data imputations; the other
directly learns the Shapley value-function, providing performance and stability
at the cost of flexibility. While "off-manifold" Shapley values can (i) give
rise to incorrect explanations, (ii) hide implicit model dependence on
sensitive attributes, and (iii) lead to unintelligible explanations in
higher-dimensional data, on-manifold explainability overcomes these problems.
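To make the contrast concrete, the sketch below implements exact Shapley attribution with the two value functions the paper distinguishes: an off-manifold (marginal) one that imputes absent features independently of those present, and an on-manifold one that imputes them conditionally on the observed values. The exact-enumeration loop and the `conditional_sampler` interface are illustrative assumptions standing in for the paper's generative-modelling and learned value-function solutions, not its implementation.

```python
import itertools
import math
import numpy as np

def shapley_values(x, value_fn, n_features):
    """Exact Shapley attribution by enumerating all feature coalitions
    (tractable only for small n_features)."""
    phi = np.zeros(n_features)
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = (math.factorial(len(S))
                     * math.factorial(n_features - len(S) - 1)
                     / math.factorial(n_features))
                phi[i] += w * (value_fn(x, set(S) | {i}) - value_fn(x, set(S)))
    return phi

def off_manifold_value(x, S, model, X_background):
    """Marginal value function: features outside S are drawn from the data
    independently of x_S -- the no-correlation assumption the paper criticises."""
    samples = X_background.copy()
    for j in S:
        samples[:, j] = x[j]  # clamp the features we condition on
    return model(samples).mean()

def on_manifold_value(x, S, model, conditional_sampler, n_samples=256):
    """On-manifold value function: features outside S are drawn from
    p(x_rest | x_S). `conditional_sampler` is a hypothetical stand-in for
    a generative model of the data manifold."""
    samples = conditional_sampler(x, S, n_samples)
    return model(samples).mean()
```

Usage would look like `phi = shapley_values(x, lambda x_, S: on_manifold_value(x_, S, model, sampler), d)`. Under either value function the attributions sum to the prediction minus the mean prediction; the two differ only in how missing features are imputed, which is exactly where the correlation assumption enters.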
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
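As a reference point, the classical influence-function estimate (which this line of work adapts to the diffusion setting) scores a training example by a Hessian-weighted inner product of gradients. The sketch below is the generic form only; the paper's diffusion-specific approximations are not reproduced here.

```python
import numpy as np

def influence_score(grad_test, grad_train, hessian, damping=1e-3):
    """Generic influence-function estimate of how upweighting one training
    example changes a test loss: I = -g_test^T (H + damping*I)^{-1} g_train.
    The damping term is a common stabiliser for ill-conditioned Hessians."""
    H = hessian + damping * np.eye(len(hessian))
    return -grad_test @ np.linalg.solve(H, grad_train)
```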
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - Shapley Marginal Surplus for Strong Models [0.9831489366502301]
We show that while Shapley values can be accurate explainers of model predictions, machine learning models themselves are often poor explainers of the true data-generating process (DGP).
We introduce a novel variable importance algorithm, Shapley Marginal Surplus for Strong Models, that samples the space of possible models to produce an inferential measure of feature importance.
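The summary above does not specify the sampling scheme; as a hedged illustration of "sampling the space of possible models", one could refit on bootstrap resamples and aggregate per-feature importances across the fits. This is illustrative only, not the paper's algorithm.

```python
import numpy as np
from sklearn.base import clone
from sklearn.inspection import permutation_importance
from sklearn.utils import resample

def model_space_importance(base_model, X, y, n_models=20, seed=0):
    """Illustrative sketch: refit `base_model` on bootstrap resamples
    ('sampling the model space') and average permutation importances,
    yielding an importance estimate with a model-sampling spread."""
    rng = np.random.default_rng(seed)
    imps = []
    for _ in range(n_models):
        Xb, yb = resample(X, y, random_state=int(rng.integers(2**31 - 1)))
        m = clone(base_model).fit(Xb, yb)
        r = permutation_importance(m, X, y, n_repeats=5, random_state=0)
        imps.append(r.importances_mean)
    imps = np.array(imps)
    return imps.mean(axis=0), imps.std(axis=0)
```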
arXiv Detail & Related papers (2024-08-16T17:06:07Z) - Explaining the Model and Feature Dependencies by Decomposition of the
Shapley Value [3.0655581300025996]
Shapley values have become one of the go-to methods to explain complex models to end-users.
One downside is that they always require model outputs to be evaluated when some features are missing.
This, however, introduces a non-trivial choice: should we condition on the unknown features or not?
We propose a new algorithmic approach to combine both explanations, removing the burden of choice and enhancing the explanatory power of Shapley values.
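Reusing `shapley_values`, `off_manifold_value`, and `on_manifold_value` from the sketch under the abstract, the choice being removed can be made explicit: compute attributions under both imputation schemes and report their difference as a feature-dependence term. This split illustrates the decomposition idea rather than the paper's exact algorithm.

```python
def decomposed_shapley(x, model, X_background, conditional_sampler, n_features):
    """Illustrative decomposition: conditional (on-manifold) attributions
    split into a marginal, model-only part plus a remainder driven purely
    by dependence between the features."""
    phi_marginal = shapley_values(
        x, lambda x_, S: off_manifold_value(x_, S, model, X_background), n_features)
    phi_conditional = shapley_values(
        x, lambda x_, S: on_manifold_value(x_, S, model, conditional_sampler), n_features)
    phi_dependence = phi_conditional - phi_marginal
    return phi_marginal, phi_dependence
```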
arXiv Detail & Related papers (2023-06-19T12:20:23Z) - Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach across a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z) - Explainability in Process Outcome Prediction: Guidelines to Obtain
Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines, named X-MOP, that enables selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable
Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
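The diversity-enforcing loss is not specified in this summary; a common choice, assumed below, penalises pairwise cosine similarity among the learned latent perturbations so the counterfactual directions do not collapse onto one another.

```python
import numpy as np

def diversity_penalty(perturbations):
    """Hypothetical diversity-enforcing term over k latent perturbations
    (rows, k >= 2): mean absolute off-diagonal cosine similarity, so that
    each counterfactual perturbs the latent space in a distinct direction."""
    Z = np.asarray(perturbations, dtype=float)
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = Z @ Z.T
    k = len(Z)
    return np.abs(sim - np.eye(k)).sum() / (k * (k - 1))
```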
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - Explaining predictive models using Shapley values and non-parametric
vine copulas [2.6774008509840996]
We propose two new approaches for modelling the dependence between the features.
The performance of the proposed methods is evaluated on simulated data sets and a real data set.
Experiments demonstrate that the vine copula approaches give more accurate approximations to the true Shapley values than their competitors.
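Vine copulas factorise the feature dependence into pair copulas; as a simplified stand-in (a single Gaussian copula rather than a vine), the sketch below shows the mechanics of sampling the missing features conditionally on the observed ones, which is what a conditional Shapley value function needs.

```python
import numpy as np
from scipy import stats

def copula_conditional_sampler(X, x, S, n_samples=256):
    """Simplified stand-in for a vine-copula dependence model: fit a single
    Gaussian copula to the data X and sample the features outside S
    conditionally on the observed values x_S."""
    n, d = X.shape
    # map each feature to normal scores via its empirical CDF
    Z = stats.norm.ppf(stats.rankdata(X, axis=0) / (n + 1))
    R = np.corrcoef(Z, rowvar=False)
    known = sorted(S)
    unknown = [j for j in range(d) if j not in S]
    u = np.array([(np.searchsorted(np.sort(X[:, j]), x[j]) + 1) / (n + 1)
                  for j in known])
    z_known = stats.norm.ppf(np.clip(u, 1 / (n + 1), n / (n + 1)))
    # conditional Gaussian: distribution of unknown scores given known scores
    R_uk = R[np.ix_(unknown, known)]
    A = R_uk @ np.linalg.inv(R[np.ix_(known, known)])
    mu = A @ z_known
    cov = R[np.ix_(unknown, unknown)] - A @ R_uk.T
    z_samples = np.random.multivariate_normal(
        mu, cov + 1e-9 * np.eye(len(unknown)), size=n_samples)
    # map back through each feature's empirical quantile function
    out = np.tile(x, (n_samples, 1)).astype(float)
    for k, j in enumerate(unknown):
        out[:, j] = np.quantile(X[:, j], stats.norm.cdf(z_samples[:, k]))
    return out
```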
arXiv Detail & Related papers (2021-02-12T09:43:28Z) - Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual
Predictions of Complex Models [6.423239719448169]
Shapley values are designed to attribute the difference between a model's prediction and an average baseline to the different features used as input to the model.
We show how these 'causal' Shapley values can be derived for general causal graphs without sacrificing any of their desirable properties.
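In value-function terms, the causal variant replaces conditioning-by-observation with conditioning-by-intervention: coalition features are set by do(x_S), and the rest are sampled from the interventional distribution implied by a causal graph. The `scm` object below is an assumed structural-causal-model interface, not an API from the paper.

```python
def causal_value(x, S, model, scm, n_samples=256):
    """Causal value-function sketch: v(S) = E[f(X) | do(X_S = x_S)].
    `scm` is a hypothetical structural causal model exposing interventional
    sampling: upstream features vary freely, downstream features respond
    to the intervention."""
    samples = scm.sample_interventional({j: x[j] for j in S}, n_samples)
    return model(samples).mean()
```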
arXiv Detail & Related papers (2020-11-03T11:11:36Z) - Human-interpretable model explainability on high-dimensional data [8.574682463936007]
We introduce a framework for human-interpretable explainability on high-dimensional data, consisting of two modules.
First, we apply a semantically meaningful latent representation, both to reduce the raw dimensionality of the data, and to ensure its human interpretability.
Second, we adapt the Shapley paradigm for model-agnostic explainability to operate on these latent features. This leads to interpretable model explanations that are both theoretically controlled and computationally tractable.
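The two modules compose with the Shapley machinery sketched under the abstract (reusing `shapley_values`): encode the input, attribute over latent coordinates, and evaluate the model through the decoder. The zero baseline for masked latents and the `encoder`/`decoder` pair are illustrative assumptions.

```python
import numpy as np

def latent_shapley(x, model, encoder, decoder, n_latent):
    """Sketch of the two-module idea: explain over a low-dimensional,
    semantically meaningful latent space rather than raw features.
    `encoder`/`decoder` are a hypothetical pretrained representation."""
    z = encoder(x)
    baseline = np.zeros(n_latent)  # assumed value for absent latents
    def value_fn(z_, S):
        keep = np.isin(np.arange(n_latent), list(S))
        z_masked = np.where(keep, z_, baseline)
        return float(model(decoder(z_masked)))
    return shapley_values(z, value_fn, n_latent)
```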
arXiv Detail & Related papers (2020-10-14T20:06:28Z) - The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal
Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
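The contrast is easy to state operationally: a minimal sufficient subset is a smallest set of features that, held fixed, preserves the prediction, whereas Shapley spreads credit across all features. The brute-force search and the baseline-imputation notion of "sufficient" below are illustrative definitions, not the paper's.

```python
import itertools
import numpy as np

def minimal_sufficient_subsets(x, model, baseline):
    """Brute-force sketch: find the smallest feature subsets S such that
    fixing x_S and imputing the rest from `baseline` preserves the
    predicted class. Exponential in the number of features."""
    d = len(x)
    target = np.argmax(model(x))
    for size in range(d + 1):
        hits = []
        for S in itertools.combinations(range(d), size):
            probe = baseline.copy()
            probe[list(S)] = x[list(S)]
            if np.argmax(model(probe)) == target:
                hits.append(S)
        if hits:
            return hits  # all minimal sufficient subsets of the smallest size
    return []
```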
arXiv Detail & Related papers (2020-09-23T09:45:23Z) - Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.