Counterfactual Shapley Additive Explanations
- URL: http://arxiv.org/abs/2110.14270v1
- Date: Wed, 27 Oct 2021 08:44:53 GMT
- Title: Counterfactual Shapley Additive Explanations
- Authors: Emanuele Albini, Jason Long, Danial Dervovic, Daniele Magazzeni
- Abstract summary: We propose a variant of SHAP, CoSHAP, that uses counterfactual generation techniques to produce a background dataset.
We motivate the need within the actionable recourse setting for careful consideration of background datasets when using Shapley values for feature attributions.
- Score: 6.916452769334367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature attributions are a common paradigm for model explanations due to
their simplicity in assigning a single numeric score for each input feature to
a model. In the actionable recourse setting, wherein the goal of the
explanations is to improve outcomes for model consumers, it is often unclear
how feature attributions should be correctly used. With this work, we aim to
strengthen and clarify the link between actionable recourse and feature
attributions. Concretely, we propose a variant of SHAP, CoSHAP, that uses
counterfactual generation techniques to produce a background dataset for use
within the marginal (a.k.a. interventional) Shapley value framework. We
motivate the need within the actionable recourse setting for careful
consideration of background datasets when using Shapley values for feature
attributions, alongside the requirement for monotonicity, with numerous
synthetic examples. Moreover, we demonstrate the efficacy of CoSHAP by
proposing and justifying a quantitative score for feature attributions,
counterfactual-ability, showing that as measured by this metric, CoSHAP is
superior to existing methods when evaluated on public datasets using monotone
tree ensembles.
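
As a rough illustration of the CoSHAP idea described above, the sketch below (not the authors' implementation) computes marginal (interventional) Shapley values against a counterfactual background dataset using the shap library. The nearest-opposite-class selection is only a hypothetical stand-in for a proper counterfactual generation method, and the toy data and model are assumptions.

```python
# Minimal sketch of the CoSHAP idea: use counterfactual examples as the
# background dataset for marginal (interventional) Shapley values.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

x = X[:1]  # single instance to explain

# Hypothetical stand-in for a counterfactual generator: take the nearest
# training points that the model assigns to the opposite class.
pred = model.predict(x)[0]
opposite = X[model.predict(X) != pred]
order = np.argsort(np.linalg.norm(opposite - x, axis=1))
cf_background = opposite[order[:20]]

# Marginal (interventional) Shapley values computed against the
# counterfactual background rather than the training distribution.
explainer = shap.TreeExplainer(
    model, data=cf_background, feature_perturbation="interventional"
)
phi = explainer.shap_values(x)
print(phi)
```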
Related papers
- Variational Shapley Network: A Probabilistic Approach to Self-Explaining Shapley values with Uncertainty Quantification [2.6699011287124366]
Shapley values have emerged as a foundational tool in machine learning (ML) for elucidating model decision-making processes.
We introduce a novel, self-explaining method that simplifies the computation of Shapley values significantly, requiring only a single forward pass.
arXiv Detail & Related papers (2024-02-06T18:09:05Z)
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- Increasing Performance And Sample Efficiency With Model-agnostic Interactive Feature Attributions [3.0655581300025996]
We provide model-agnostic implementations for two popular explanation methods (Occlusion and Shapley values) to enforce entirely different attributions in the complex model.
We show how our proposed approach can significantly improve the model's performance only by augmenting its training dataset based on corrected explanations.
arXiv Detail & Related papers (2023-06-28T15:23:28Z)
- Efficient Shapley Values Estimation by Amortization for Text Classification [66.7725354593271]
We develop an amortized model that directly predicts each input feature's Shapley Value without additional model evaluations.
Experimental results on two text classification datasets demonstrate that our amortized model estimates Shapley Values accurately with up to 60 times speedup.
arXiv Detail & Related papers (2023-05-31T16:19:13Z)
- Variable Importance Matching for Causal Inference [73.25504313552516]
We describe a general framework called Model-to-Match that achieves these goals.
Model-to-Match uses variable importance measurements to construct a distance metric.
We operationalize the Model-to-Match framework with LASSO.
arXiv Detail & Related papers (2023-02-23T00:43:03Z)
- Exact Shapley Values for Local and Model-True Explanations of Decision Tree Ensembles [0.0]
We consider the application of Shapley values for explaining decision tree ensembles.
We present a novel approach to Shapley value-based feature attribution that can be applied to random forests and boosted decision trees.
arXiv Detail & Related papers (2021-12-16T20:16:02Z)
- CARE: Coherent Actionable Recourse based on Sound Counterfactual Explanations [0.0]
This paper introduces CARE, a modular explanation framework that addresses the model- and user-level desiderata.
As a model-agnostic approach, CARE generates multiple, diverse explanations for any black-box model.
arXiv Detail & Related papers (2021-08-18T15:26:59Z)
- Joint Shapley values: a measure of joint feature importance [6.169364905804678]
We introduce joint Shapley values, which directly extend the Shapley axioms.
Joint Shapley values measure a set of features' average effect on a model's prediction.
Results for games show that joint Shapley values present different insights from existing interaction indices.
arXiv Detail & Related papers (2021-07-23T17:22:37Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Slice Sampling for General Completely Random Measures [74.24975039689893]
We present a novel Markov chain Monte Carlo algorithm for posterior inference that adaptively sets the truncation level using auxiliary slice variables.
The efficacy of the proposed algorithm is evaluated on several popular nonparametric models.
arXiv Detail & Related papers (2020-06-24T17:53:53Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.