Fast Hierarchical Games for Image Explanations
- URL: http://arxiv.org/abs/2104.06164v1
- Date: Tue, 13 Apr 2021 13:11:02 GMT
- Title: Fast Hierarchical Games for Image Explanations
- Authors: Jacopo Teneggi, Alexandre Luster, Jeremias Sulam
- Abstract summary: We present a model-agnostic explanation method for image classification based on a hierarchical extension of Shapley coefficients.
Unlike other Shapley-based explanation methods, h-Shap is scalable and can be computed without the need for approximation.
We compare our hierarchical approach with popular Shapley-based and non-Shapley-based methods on a synthetic dataset, a medical imaging scenario, and a general computer vision problem.
- Score: 78.16853337149871
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As modern complex neural networks keep breaking records and solving harder
problems, their predictions also become less and less intelligible. The current
lack of interpretability often undermines the deployment of accurate machine
learning tools in sensitive settings. In this work, we present a model-agnostic
explanation method for image classification based on a hierarchical extension
of Shapley coefficients, Hierarchical Shap (h-Shap), that resolves some of
the limitations of current approaches. Unlike other Shapley-based explanation
methods, h-Shap is scalable and can be computed without the need for
approximation. Under certain distributional assumptions, such as those common
in multiple instance learning, h-Shap retrieves the exact Shapley coefficients
with an exponential improvement in computational complexity. We compare our
hierarchical approach with popular Shapley-based and non-Shapley-based methods
on a synthetic dataset, a medical imaging scenario, and a general computer
vision problem, showing that h-Shap outperforms the state of the art in both
accuracy and runtime. Code and experiments are made publicly available.
Related papers
- Fast Shapley Value Estimation: A Unified Approach [71.92014859992263]
We propose a straightforward and efficient Shapley estimator, SimSHAP, by eliminating redundant techniques.
In our analysis of existing approaches, we observe that estimators can be unified as a linear transformation of randomly summed values from feature subsets.
Our experiments validate the effectiveness of our SimSHAP, which significantly accelerates the computation of accurate Shapley values.
arXiv Detail & Related papers (2023-11-02T06:09:24Z)
- Randomized Polar Codes for Anytime Distributed Machine Learning [66.46612460837147]
We present a novel distributed computing framework that is robust to slow compute nodes, and is capable of both approximate and exact computation of linear operations.
We propose a sequential decoding algorithm designed to handle real-valued data while maintaining low computational complexity for recovery.
We demonstrate the potential applications of this framework in various contexts, such as large-scale matrix multiplication and black-box optimization.
arXiv Detail & Related papers (2023-09-01T18:02:04Z)
- Shapley Computations Using Surrogate Model-Based Trees [4.2575268077562685]
This paper proposes the use of a surrogate model-based tree to compute Shapley and SHAP values based on conditional expectation.
Simulation studies show that the proposed algorithm improves accuracy and unifies global Shapley and SHAP interpretation, while the thresholding method provides a way to trade off running time against accuracy.
arXiv Detail & Related papers (2022-07-11T22:20:51Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep-unrolling- and deep-equilibrium-based algorithms are developed, yielding highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- RKHS-SHAP: Shapley Values for Kernel Methods [17.52161019964009]
We propose an attribution method for kernel machines that can efficiently compute both interventional and observational Shapley values.
We show theoretically that our method is robust with respect to local perturbations - a key yet often overlooked desideratum for interpretability.
arXiv Detail & Related papers (2021-10-18T10:35:36Z)
- groupShapley: Efficient prediction explanation with Shapley values for feature groups [2.320417845168326]
Shapley values have established themselves as one of the most appropriate and theoretically sound frameworks for explaining predictions from machine learning models.
The main drawback of Shapley values is that their computational complexity grows exponentially in the number of input features.
The present paper introduces groupShapley: a conceptually simple approach for dealing with the aforementioned bottlenecks.
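The core saving of a grouping approach like this can be sketched directly: treat each feature group as a single player, so an exact computation enumerates 2^g coalitions for g groups instead of 2^p for p individual features. The sketch below is an illustrative reconstruction under that assumption, not the paper's implementation; the masking-by-baseline scheme and the function names are hypothetical.

```python
from itertools import combinations
from math import factorial
import numpy as np

def group_shapley(f, x, baseline, groups):
    """Exact Shapley values with feature *groups* as players.

    With g groups this enumerates 2^g coalitions instead of the 2^p
    required for p individual features, which is the key saving of
    group-level attribution.
    """
    g = len(groups)
    phi = np.zeros(g)
    for i in range(g):
        others = [j for j in range(g) if j != i]
        for r in range(g):
            for S in combinations(others, r):
                # Standard Shapley weight |S|! (g - |S| - 1)! / g!
                w = factorial(len(S)) * factorial(g - len(S) - 1) / factorial(g)
                phi[i] += w * (coalition_value(f, x, baseline, groups, S + (i,))
                               - coalition_value(f, x, baseline, groups, S))
    return phi

def coalition_value(f, x, baseline, groups, coalition):
    """Evaluate f with features outside the coalition set to the baseline."""
    z = baseline.copy()
    for j in coalition:
        idx = groups[j]
        z[idx] = x[idx]
    return f(z)
```

For a linear model the group attributions decompose cleanly and sum to the difference between the prediction at `x` and at the baseline, which makes the sketch easy to sanity-check.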
arXiv Detail & Related papers (2021-06-23T08:16:14Z)
- Shapley Explanation Networks [19.89293579058277]
We propose to incorporate Shapley values themselves as latent representations in deep models.
We operationalize the Shapley transform as a neural network module and construct both shallow and deep networks, called ShapNets.
Our Shallow ShapNets compute the exact Shapley values and our Deep ShapNets maintain the missingness and accuracy properties of Shapley values.
arXiv Detail & Related papers (2021-04-06T05:42:12Z)
- Generalize a Small Pre-trained Model to Arbitrarily Large TSP Instances [55.64521598173897]
This paper trains a small-scale model that can be used repeatedly to build heat maps for the traveling salesman problem (TSP).
Heat maps are fed into a reinforcement learning approach (Monte Carlo tree search) to guide the search of high-quality solutions.
Experimental results show that this new approach clearly outperforms existing machine-learning-based TSP algorithms.
arXiv Detail & Related papers (2020-12-19T11:06:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.