Refutation of Shapley Values for XAI -- Additional Evidence
- URL: http://arxiv.org/abs/2310.00416v1
- Date: Sat, 30 Sep 2023 15:44:06 GMT
- Title: Refutation of Shapley Values for XAI -- Additional Evidence
- Authors: Xuanxiang Huang, Joao Marques-Silva
- Abstract summary: Recent work demonstrated the inadequacy of Shapley values for explainable artificial intelligence (XAI).
This paper demonstrates the inadequacy of Shapley values not only for families of classifiers where features are not Boolean, but also for families of classifiers for which multiple classes can be picked.
- Score: 4.483306836710804
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work demonstrated the inadequacy of Shapley values for explainable
artificial intelligence (XAI). Although to disprove a theory a single
counterexample suffices, a possible criticism of earlier work is that the focus
was solely on Boolean classifiers. To address this possible criticism, this
paper demonstrates the inadequacy of Shapley values not only for families of
classifiers where features are not Boolean, but also for families of
classifiers for which multiple classes can be picked. Furthermore, the paper
shows that the features changed in any minimal $l_0$-distance adversarial
example do not include irrelevant features, thus offering further arguments
regarding the inadequacy of Shapley values for XAI.
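As a concrete illustration of the kind of comparison underlying these claims, the following is a minimal sketch that contrasts exact Shapley values (computed with a uniform, feature-independent value function) with feature relevance defined via abductive explanations (AXps). The ternary classifier, the chosen instance, and the use of the averaged class index as the value function are hypothetical simplifications for illustration only; they are not the constructions studied in the paper.

```python
# Minimal sketch (hypothetical example, not taken from the paper): exact
# Shapley values under a uniform, feature-independent distribution, compared
# with feature relevance defined via abductive explanations (AXps).
from itertools import combinations, product
from math import factorial

DOMAIN = (0, 1, 2)   # each feature ranges over {0, 1, 2} (non-Boolean)
N = 3                # number of features


def clf(x):
    """Toy multi-class classifier; feature 2 influences the function,
    but not the explanation of the instance used below."""
    if x[0] == 2 and x[1] == 2:
        return 2
    if x[0] >= 1 or x[2] == 2:
        return 1
    return 0


def value(x, S):
    """v(S): average class index over all completions of the features
    outside S, with features in S fixed to their values in x.  Averaging
    the class index is a simplification made only for illustration."""
    free = [i for i in range(N) if i not in S]
    total = count = 0
    for vals in product(DOMAIN, repeat=len(free)):
        z = list(x)
        for i, v in zip(free, vals):
            z[i] = v
        total += clf(tuple(z))
        count += 1
    return total / count


def shapley(x):
    """Exact Shapley value of every feature for instance x."""
    phi = [0.0] * N
    for i in range(N):
        others = [j for j in range(N) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                w = factorial(k) * factorial(N - k - 1) / factorial(N)
                phi[i] += w * (value(x, set(S) | {i}) - value(x, set(S)))
    return phi


def is_sufficient(x, S):
    """True iff fixing the features in S to x's values forces clf(x)
    for every completion of the remaining features (a weak AXp)."""
    free = [i for i in range(N) if i not in S]
    for vals in product(DOMAIN, repeat=len(free)):
        z = list(x)
        for i, v in zip(free, vals):
            z[i] = v
        if clf(tuple(z)) != clf(x):
            return False
    return True


def relevant_features(x):
    """Features occurring in some subset-minimal abductive explanation."""
    axps = []
    for k in range(N + 1):
        for S in map(set, combinations(range(N), k)):
            if is_sufficient(x, S) and not any(a < S for a in axps):
                axps.append(S)
    return set().union(*axps) if axps else set()


if __name__ == "__main__":
    x = (2, 2, 1)
    print("prediction:", clf(x))
    print("Shapley values:", [round(p, 3) for p in shapley(x)])
    print("relevant features (via AXps):", relevant_features(x))
```

On the instance (2, 2, 1), this toy classifier has {0, 1} as its only minimal AXp, yet feature 2 receives a non-zero Shapley value; a mismatch of this kind between Shapley scores and feature (ir)relevance is the type of inadequacy the paper and its predecessors document.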
Related papers
- On Correcting SHAP Scores [3.3766484312332303]
The paper makes the case that the failings of SHAP scores result from the characteristic functions used in earlier works.
The paper proposes modifications to the tool SHAP so that it instead uses one of its novel characteristic functions.
arXiv Detail & Related papers (2024-04-30T10:39:20Z)
- A Refutation of Shapley Values for Explainability [4.483306836710804]
Recent work demonstrated the existence of Boolean functions for which Shapley values provide misleading information.
This paper proves that, for any number of features, there exist Boolean functions that exhibit one or more inadequacy-revealing issues.
arXiv Detail & Related papers (2023-09-06T14:34:18Z)
- Efficient Shapley Values Estimation by Amortization for Text Classification [66.7725354593271]
We develop an amortized model that directly predicts each input feature's Shapley Value without additional model evaluations.
Experimental results on two text classification datasets demonstrate that our amortized model estimates Shapley Values accurately with up to 60 times speedup.
arXiv Detail & Related papers (2023-05-31T16:19:13Z)
- The Inadequacy of Shapley Values for Explainability [0.685316573653194]
The paper argues that the use of Shapley values in explainable AI (XAI) will necessarily yield provably misleading information about the relative importance of features for predictions.
arXiv Detail & Related papers (2023-02-16T09:19:14Z)
- On Computing Probabilistic Abductive Explanations [30.325691263226968]
The most widely studied explainable AI (XAI) approaches are unsound.
PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size.
This paper investigates practical approaches for computing relevant sets for a number of widely used classifiers.
arXiv Detail & Related papers (2022-12-12T15:47:10Z)
- Smoothed Embeddings for Certified Few-Shot Learning [63.68667303948808]
We extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings.
Our results are confirmed by experiments on different datasets.
arXiv Detail & Related papers (2022-02-02T18:19:04Z)
- Hessian Eigenspectra of More Realistic Nonlinear Models [73.31363313577941]
We make a precise characterization of the Hessian eigenspectra for a broad family of nonlinear models.
Our analysis takes a step forward to identify the origin of many striking features observed in more complex machine learning models.
arXiv Detail & Related papers (2021-03-02T06:59:52Z)
- VAE Approximation Error: ELBO and Conditional Independence [78.72292013299868]
This paper analyzes VAE approximation errors caused by the combination of the ELBO objective with the choice of the encoder probability family.
We show that the ELBO subset cannot be enlarged, and the respective error cannot be decreased, by only considering deeper encoder networks.
arXiv Detail & Related papers (2021-02-18T12:54:42Z)
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
- Predictive and Causal Implications of using Shapley Value for Model Interpretation [6.744385328015561]
We established the relationship between Shapley value and conditional independence, a key concept in both predictive and causal modeling.
Our results indicate that eliminating a variable with a high Shapley value from a model does not necessarily impair predictive performance.
More importantly, the Shapley value of a variable does not reflect its causal relationship with the target of interest.
arXiv Detail & Related papers (2020-08-12T01:08:08Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations by robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)