Accelerating Shapley Explanation via Contributive Cooperator Selection
- URL: http://arxiv.org/abs/2206.08529v1
- Date: Fri, 17 Jun 2022 03:24:45 GMT
- Title: Accelerating Shapley Explanation via Contributive Cooperator Selection
- Authors: Guanchu Wang and Yu-Neng Chuang and Mengnan Du and Fan Yang and Quan
Zhou and Pushkar Tripathi and Xuanting Cai and Xia Hu
- Abstract summary: We propose a novel method SHEAR to significantly accelerate the Shapley explanation for DNN models.
The selection of the feature coalitions follows our proposed Shapley chain rule to minimize the absolute error from the ground-truth Shapley values.
SHEAR consistently outperforms state-of-the-art baseline methods across different evaluation metrics.
- Score: 42.11059072201565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Even though the Shapley value provides an effective explanation for a DNN
model's prediction, its computation relies on enumerating all possible input
feature coalitions, which leads to exponentially growing complexity. To
address this problem, we propose SHEAR, a novel method that significantly
accelerates Shapley explanation for DNN models by involving only a few
coalitions of input features in the computation. The feature coalitions are
selected according to our proposed Shapley chain rule so as to minimize the
absolute error from the ground-truth Shapley values, making the computation
both efficient and accurate. To demonstrate its effectiveness, we
comprehensively evaluate SHEAR on multiple metrics, including the absolute
error from the ground-truth Shapley values, the faithfulness of the
explanations, and running speed. The experimental results indicate that SHEAR
consistently outperforms state-of-the-art baseline methods across these
metrics, demonstrating its potential in real-world applications where
computational resources are limited.
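The exponential cost the abstract refers to is easy to see in a brute-force implementation: each feature's Shapley value is a weighted sum of its marginal contribution over every coalition of the remaining features. The sketch below is a generic illustration of that exact enumeration, not the SHEAR algorithm; `value_fn` is a hypothetical stand-in for the model's value on a feature coalition.

```python
from itertools import combinations
from math import factorial

def exact_shapley(value_fn, n_features):
    """Exact Shapley values by enumerating all coalitions.

    value_fn maps a frozenset of feature indices to a scalar model value.
    Each feature requires 2^(n_features - 1) coalition evaluations.
    """
    n = n_features
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(n):
            for coalition in combinations(others, size):
                s = frozenset(coalition)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return phi
```

For n features this touches 2^(n-1) coalitions per feature, which is precisely the cost that coalition-selection methods such as SHEAR aim to avoid by evaluating only a few contributive coalitions.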
Related papers
- Energy-based Model for Accurate Shapley Value Estimation in Interpretable Deep Learning Predictive Modeling [7.378438977893025]
EmSHAP is an energy-based model for Shapley value estimation.
It estimates the expectation of the Shapley contribution function under an arbitrary subset of features.
arXiv Detail & Related papers (2024-04-01T12:19:33Z)
- Variational Shapley Network: A Probabilistic Approach to Self-Explaining
Shapley values with Uncertainty Quantification [2.6699011287124366]
Shapley values have emerged as a foundational tool in machine learning (ML) for elucidating model decision-making processes.
We introduce a novel, self-explaining method that simplifies the computation of Shapley values significantly, requiring only a single forward pass.
arXiv Detail & Related papers (2024-02-06T18:09:05Z)
- Efficient Data Shapley for Weighted Nearest Neighbor Algorithms [47.62605581521535]
WKNN-Shapley enables efficient computation of Data Shapley for weighted $K$-nearest-neighbor (WKNN) algorithms.
We show WKNN-Shapley's computational efficiency and its superior performance in discerning data quality compared to its unweighted counterpart.
arXiv Detail & Related papers (2024-01-20T03:34:18Z)
- Fast Shapley Value Estimation: A Unified Approach [71.92014859992263]
We propose a straightforward and efficient Shapley estimator, SimSHAP, by eliminating redundant techniques.
In our analysis of existing approaches, we observe that estimators can be unified as a linear transformation of randomly summed values from feature subsets.
Our experiments validate the effectiveness of our SimSHAP, which significantly accelerates the computation of accurate Shapley values.
arXiv Detail & Related papers (2023-11-02T06:09:24Z)
- DU-Shapley: A Shapley Value Proxy for Efficient Dataset Valuation [23.646508094051768]
We consider the dataset valuation problem, that is, the problem of quantifying the incremental gain that aggregating an individual dataset with others brings to a pre-defined utility.
The Shapley value is a natural tool to perform dataset valuation due to its formal axiomatic justification.
We propose a novel approximation, referred to as discrete uniform Shapley, which is expressed as an expectation under a discrete uniform distribution.
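The discrete-uniform idea admits a simple Monte Carlo reading: drawing a coalition size uniformly at random and then a uniform subset of that size reproduces the Shapley weighting exactly, so averaging marginal contributions yields an unbiased estimate. The sketch below is a hypothetical illustration of that sampling scheme, not the authors' DU-Shapley estimator; `value_fn` is an assumed coalition-value function.

```python
import random

def sampled_shapley(value_fn, n_features, i, n_samples=2000, seed=0):
    """Unbiased Monte Carlo Shapley estimate for feature i.

    Drawing k uniformly from {0, ..., n-1} and then a uniform size-k
    subset of the other features gives each coalition S the probability
    |S|! (n - |S| - 1)! / n!, i.e. exactly the Shapley weight.
    """
    rng = random.Random(seed)
    others = [p for p in range(n_features) if p != i]
    total = 0.0
    for _ in range(n_samples):
        k = rng.randint(0, len(others))       # uniform coalition size
        s = frozenset(rng.sample(others, k))  # uniform size-k coalition
        total += value_fn(s | {i}) - value_fn(s)
    return total / n_samples
```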
arXiv Detail & Related papers (2023-06-03T10:22:50Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- A $k$-additive Choquet integral-based approach to approximate the SHAP
values for local interpretability in machine learning [8.637110868126546]
This paper aims to provide interpretability for machine learning models based on Shapley values.
A SHAP-based method called Kernel SHAP adopts an efficient strategy that approximates such values with less computational effort.
The obtained results attest that our proposal requires fewer evaluations of coalitions of attributes to approximate the SHAP values.
arXiv Detail & Related papers (2022-11-03T22:34:50Z)
- Fast Hierarchical Games for Image Explanations [78.16853337149871]
We present a model-agnostic explanation method for image classification based on a hierarchical extension of Shapley coefficients.
Unlike other Shapley-based explanation methods, h-Shap is scalable and can be computed without the need for approximation.
We compare our hierarchical approach with popular Shapley-based and non-Shapley-based methods on a synthetic dataset, a medical imaging scenario, and a general computer vision problem.
arXiv Detail & Related papers (2021-04-13T13:11:02Z)
- Efficient semidefinite-programming-based inference for binary and
multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.