Shapley Explanation Networks
- URL: http://arxiv.org/abs/2104.02297v1
- Date: Tue, 6 Apr 2021 05:42:12 GMT
- Title: Shapley Explanation Networks
- Authors: Rui Wang, Xiaoqian Wang, David I. Inouye
- Abstract summary: We propose to incorporate Shapley values themselves as latent representations in deep models.
We operationalize the Shapley transform as a neural network module and construct both shallow and deep networks, called ShapNets.
Our Shallow ShapNets compute the exact Shapley values and our Deep ShapNets maintain the missingness and accuracy properties of Shapley values.
- Score: 19.89293579058277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shapley values have become one of the most popular feature attribution
explanation methods. However, most prior work has focused on post-hoc Shapley
explanations, which can be computationally demanding due to their exponential
time complexity and preclude model regularization based on Shapley explanations
during training. Thus, we propose to incorporate Shapley values themselves as
latent representations in deep models thereby making Shapley explanations
first-class citizens in the modeling paradigm. This intrinsic explanation
approach enables layer-wise explanations, explanation regularization of the
model during training, and fast explanation computation at test time. We define
the Shapley transform that transforms the input into a Shapley representation
given a specific function. We operationalize the Shapley transform as a neural
network module and construct both shallow and deep networks, called ShapNets,
by composing Shapley modules. We prove that our Shallow ShapNets compute the
exact Shapley values and our Deep ShapNets maintain the missingness and
accuracy properties of Shapley values. We demonstrate on synthetic and
real-world datasets that our ShapNets enable layer-wise Shapley explanations,
novel Shapley regularizations during training, and fast computation while
maintaining reasonable performance. Code is available at
https://github.com/inouye-lab/ShapleyExplanationNetworks.
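As background for the exact-computation claim, here is a minimal brute-force reference for Shapley values. It enumerates all 2^d coalitions, so it is feasible only for a handful of features; this is illustrative background, not the paper's network-based Shallow ShapNet construction (the function name `exact_shapley` and the zero baseline are assumptions, not from the released code):

```python
import itertools
import math

def exact_shapley(f, x, baseline):
    """Exact Shapley values by enumerating all coalitions of the d features.
    f maps a length-d input to a scalar; `baseline` supplies values for
    features treated as missing."""
    d = len(x)
    phi = [0.0] * d
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(d):
            for S in itertools.combinations(others, r):
                # coalition weight |S|! (d - |S| - 1)! / d!
                w = (math.factorial(len(S)) * math.factorial(d - len(S) - 1)
                     / math.factorial(d))
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(d)]
                without_i = [x[j] if j in S else baseline[j] for j in range(d)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# toy check: f(x) = x0*x1 + x2 at [1, 2, 3], zero baseline
f = lambda v: v[0] * v[1] + v[2]
print(exact_shapley(f, [1, 2, 3], [0, 0, 0]))  # values sum to f(x) - f(baseline) = 5
```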
Related papers
- Shapley Pruning for Neural Network Compression [63.60286036508473]
This work presents Shapley value approximations and performs a comparative cost-benefit analysis for neural network compression.
The proposed normative ranking and its approximations yield practical results, achieving state-of-the-art network compression.
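A sketch of how a Shapley-based unit ranking for pruning might look in its simplest form, assuming permutation sampling as the approximation; this is a generic sketch, not the paper's specific approximations, and `eval_score` is a hypothetical callback:

```python
import random

def shapley_unit_ranking(eval_score, n_units, n_permutations=100, seed=0):
    """Permutation-sampling approximation of per-unit Shapley values for
    pruning. eval_score(active) is assumed to return, e.g., validation
    accuracy when only the units in `active` are kept."""
    rng = random.Random(seed)
    phi = [0.0] * n_units
    order = list(range(n_units))
    for _ in range(n_permutations):
        rng.shuffle(order)
        active, prev = set(), eval_score(set())
        for u in order:
            active.add(u)
            score = eval_score(active)
            phi[u] += (score - prev) / n_permutations  # marginal contribution
            prev = score
    # candidates for pruning: units with the smallest estimated contribution
    return sorted(range(n_units), key=lambda u: phi[u])
```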
arXiv Detail & Related papers (2024-07-19T11:42:54Z)
- Fast Shapley Value Estimation: A Unified Approach [71.92014859992263]
We propose a straightforward and efficient Shapley estimator, SimSHAP, by eliminating redundant techniques.
In our analysis of existing approaches, we observe that estimators can be unified as a linear transformation of randomly summed values from feature subsets.
Our experiments validate the effectiveness of our SimSHAP, which significantly accelerates the computation of accurate Shapley values.
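The unifying view can be illustrated by the sampling stage that subset-based estimators share: evaluate the model on randomly masked inputs, after which any such estimate is a linear map of the recorded values. A hedged sketch of that shared stage only (not SimSHAP's actual estimator; `sampled_subset_values` is an illustrative name):

```python
import numpy as np

def sampled_subset_values(f, x, baseline, n_subsets=1024, seed=0):
    """Sampling stage shared by subset-based Shapley estimators: record the
    model value on randomly masked inputs. Different estimators then differ
    only in the linear transformation applied to these values."""
    rng = np.random.default_rng(seed)
    d = len(x)
    masks = rng.integers(0, 2, size=(n_subsets, d)).astype(bool)
    values = np.array([f(np.where(m, x, baseline)) for m in masks])
    return masks, values  # an estimate is phi = A @ values for an estimator-specific A

# toy usage: additive model, all-ones input, zero baseline
masks, values = sampled_subset_values(lambda v: float(v.sum()),
                                      np.ones(4), np.zeros(4))
```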
arXiv Detail & Related papers (2023-11-02T06:09:24Z)
- Efficient Shapley Values Estimation by Amortization for Text Classification [66.7725354593271]
We develop an amortized model that directly predicts each input feature's Shapley Value without additional model evaluations.
Experimental results on two text classification datasets demonstrate that our amortized model estimates Shapley Values accurately with up to 60 times speedup.
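A minimal sketch of the amortization idea, assuming PyTorch and assuming the explainer regresses onto Shapley targets precomputed by a slower reference method; the class and argument names are hypothetical, not the paper's API:

```python
import torch
import torch.nn as nn

class AmortizedExplainer(nn.Module):
    """Small head mapping token embeddings to per-token Shapley estimates
    in a single forward pass (illustrative architecture)."""
    def __init__(self, emb_dim: int, hidden: int = 128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, token_embs: torch.Tensor) -> torch.Tensor:
        # token_embs: (batch, seq_len, emb_dim) -> (batch, seq_len) attributions
        return self.head(token_embs).squeeze(-1)

def train_step(explainer, optimizer, token_embs, shapley_targets):
    # regress onto Shapley values precomputed by a slow exact/sampled method
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(explainer(token_embs), shapley_targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```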
arXiv Detail & Related papers (2023-05-31T16:19:13Z)
- From Shapley Values to Generalized Additive Models and back [16.665883787432858]
We introduce $n$-Shapley Values, a natural extension of Shapley Values that explain individual predictions with interaction terms up to order $n$.
From the Shapley-GAM, the functional decomposition from which these values arise, we can compute Shapley Values of arbitrary order, which gives precise insights into the limitations of these explanations.
At the technical end, we show that there is a one-to-one correspondence between different ways to choose the value function and different functional decompositions of the original function.
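For reference, the standard Shapley value that $n$-Shapley Values generalize (the case $n = 1$), stated in its usual form:

```latex
% Shapley value of feature i under value function v, with player set N;
% n-Shapley Values extend this with interaction terms up to order n.
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
  \Bigl( v\bigl(S \cup \{i\}\bigr) - v(S) \Bigr)
```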
arXiv Detail & Related papers (2022-09-08T19:37:06Z)
- PDD-SHAP: Fast Approximations for Shapley Values using Functional Decomposition [2.0559497209595823]
We propose PDD-SHAP, an algorithm that uses an ANOVA-based functional decomposition model to approximate the black-box model being explained.
This allows us to calculate Shapley values orders of magnitude faster than existing methods for large datasets, significantly reducing the amortized cost of computing Shapley values.
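The decomposition in question is, in its standard form, the functional ANOVA expansion; once the black box is approximated this way, each low-order component involves only a few features, which is what makes attribution cheap. A sketch of the standard form (with $f_0$ a constant and higher orders truncated):

```latex
% Functional ANOVA decomposition of the surrogate model: a constant term,
% main effects, pairwise interactions, and (truncated) higher orders.
\hat{f}(x) = f_0 + \sum_{i} f_i(x_i) + \sum_{i < j} f_{ij}(x_i, x_j) + \dots
```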
arXiv Detail & Related papers (2022-08-26T11:49:54Z)
- Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value [86.69600830581912]
We develop a novel visual explanation method called Shap-CAM based on class activation mapping.
We demonstrate that Shap-CAM achieves better visual performance and fairness in interpreting the decision-making process.
arXiv Detail & Related papers (2022-08-07T00:59:23Z)
- FastSHAP: Real-Time Shapley Value Estimation [25.536804325758805]
FastSHAP is a method for estimating Shapley values in a single forward pass using a learned explainer model.
It amortizes the cost of explaining many inputs via a learning approach inspired by the Shapley value's weighted least squares characterization.
It generates high-quality explanations with orders of magnitude speedup.
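The weighted least squares characterization referred to above is usually stated as follows: the Shapley values are the unique minimizer of a kernel-weighted regression over coalitions, subject to the efficiency constraint.

```latex
% Shapley values phi solve a weighted least-squares problem over coalitions S:
\min_{\phi} \sum_{\emptyset \neq S \subsetneq N}
  \frac{|N| - 1}{\binom{|N|}{|S|}\, |S|\, (|N| - |S|)}
  \Bigl( v(S) - v(\emptyset) - \sum_{i \in S} \phi_i \Bigr)^{2}
\quad \text{s.t.} \quad \sum_{i \in N} \phi_i = v(N) - v(\emptyset)
```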
arXiv Detail & Related papers (2021-07-15T16:34:45Z)
- groupShapley: Efficient prediction explanation with Shapley values for feature groups [2.320417845168326]
Shapley values have established themselves as one of the most appropriate and theoretically sound frameworks for explaining predictions from machine learning models.
Their main drawback is that the computational complexity grows exponentially with the number of input features.
The present paper introduces groupShapley, a conceptually simple approach for dealing with this bottleneck.
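An illustrative sketch of the group-level idea (not the paper's estimator): treat each feature group as a single player, so exact enumeration runs over 2^g group coalitions rather than 2^d feature coalitions.

```python
import itertools
import math

def group_shapley(f, x, baseline, groups):
    """Group-level Shapley computation. `groups` is a list of lists of
    feature indices; features in inactive groups take baseline values."""
    g = len(groups)

    def masked(active_groups):
        keep = {j for gi in active_groups for j in groups[gi]}
        return [x[j] if j in keep else baseline[j] for j in range(len(x))]

    phi = [0.0] * g
    for i in range(g):
        others = [k for k in range(g) if k != i]
        for r in range(g):
            for S in itertools.combinations(others, r):
                w = (math.factorial(len(S)) * math.factorial(g - len(S) - 1)
                     / math.factorial(g))
                phi[i] += w * (f(masked(set(S) | {i})) - f(masked(set(S))))
    return phi

# toy usage: two groups over four features; values sum to f(x) - f(baseline)
f = lambda v: v[0] * v[1] + v[2] + v[3]
print(group_shapley(f, [1, 2, 3, 4], [0, 0, 0, 0], [[0, 1], [2, 3]]))  # [2.0, 7.0]
```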
arXiv Detail & Related papers (2021-06-23T08:16:14Z)
- Fast Hierarchical Games for Image Explanations [78.16853337149871]
We present a model-agnostic explanation method for image classification based on a hierarchical extension of Shapley coefficients.
Unlike other Shapley-based explanation methods, h-Shap is scalable and can be computed without the need for approximation.
We compare our hierarchical approach with popular Shapley-based and non-Shapley-based methods on a synthetic dataset, a medical imaging scenario, and a general computer vision problem.
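A hedged sketch of the hierarchical intuition (an illustration in the spirit of h-Shap, not the authors' implementation): mask an image region with a baseline and split it into quadrants only while masking changes the score, so whole irrelevant subtrees of the hierarchy are never evaluated.

```python
import numpy as np

def important_regions(f, image, baseline, region, min_size=8, tol=1e-6):
    """Recursive quadrant search over a 2-D numpy array `image`.
    `region` is (top, left, height, width); f maps an image to a scalar score."""
    t, l, h, w = region
    masked = image.copy()
    masked[t:t + h, l:l + w] = baseline
    if abs(f(image) - f(masked)) <= tol:
        return []                          # no contribution: prune subtree
    if h <= min_size or w <= min_size:
        return [region]                    # relevant leaf region
    h1, w1 = h // 2, w // 2
    quadrants = [(t, l, h1, w1), (t, l + w1, h1, w - w1),
                 (t + h1, l, h - h1, w1), (t + h1, l + w1, h - h1, w - w1)]
    hits = []
    for q in quadrants:
        hits += important_regions(f, image, baseline, q, min_size, tol)
    return hits

# toy usage: only the region containing the bright patch survives the search
img = np.zeros((32, 32)); img[4:8, 4:8] = 1.0
print(important_regions(lambda im: float(im.sum()), img, 0.0, (0, 0, 32, 32)))
```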
arXiv Detail & Related papers (2021-04-13T13:11:02Z)
- PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations [75.42959184226702]
We present a new mid-level patch-based surface representation for object-agnostic training.
We show several applications of our new representation, including shape and partial point cloud completion.
arXiv Detail & Related papers (2020-08-04T15:34:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.