CAMs as Shapley Value-based Explainers
- URL: http://arxiv.org/abs/2501.06261v1
- Date: Thu, 09 Jan 2025 13:14:32 GMT
- Title: CAMs as Shapley Value-based Explainers
- Authors: Huaiguang Cai
- Abstract summary: Class Activation Mapping (CAM) methods are widely used to visualize neural network decisions.
We introduce the Content Reserved Game-theoretic (CRG) Explainer.
Within this framework, we develop ShapleyCAM, a new method that leverages gradients and the Hessian matrix.
- Abstract: Class Activation Mapping (CAM) methods are widely used to visualize neural network decisions, yet their underlying mechanisms remain incompletely understood. To enhance the understanding of CAM methods and improve their explainability, we introduce the Content Reserved Game-theoretic (CRG) Explainer. This theoretical framework clarifies the theoretical foundations of GradCAM and HiResCAM by modeling the neural network prediction process as a cooperative game. Within this framework, we develop ShapleyCAM, a new method that leverages gradients and the Hessian matrix to provide more precise and theoretically grounded visual explanations. Due to the computational infeasibility of exact Shapley value calculation, ShapleyCAM employs a second-order Taylor expansion of the cooperative game's utility function to derive a closed-form expression. Additionally, we propose the Residual Softmax Target-Class (ReST) utility function to address the limitations of pre-softmax and post-softmax scores. Extensive experiments across 12 popular networks on the ImageNet validation set demonstrate the effectiveness of ShapleyCAM and its variants. Our findings not only advance CAM explainability but also bridge the gap between heuristic-driven CAM methods and compute-intensive Shapley value-based methods. The code is available at https://github.com/caihuaiguang/pytorch-shapley-cam.
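The closed-form, second-order idea in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's exact formula: the names `taylor2_utility`, `shapleycam_like_weights`, and `cam` are assumptions, and the weight rule (gradient plus a Hessian-vector correction on pooled activations) is only a plausible reading of "leverages gradients and the Hessian matrix". A second-order Taylor expansion is exact for quadratic utilities, which is what makes a closed-form expression possible.

```python
import numpy as np

def taylor2_utility(u0, g, H, a):
    """Second-order Taylor expansion of a utility function u around 0,
    evaluated at pooled activations a: u(a) ~ u(0) + g.a + 0.5 a^T H a."""
    return u0 + g @ a + 0.5 * a @ H @ a

def shapleycam_like_weights(g, H, a):
    """Hypothetical ShapleyCAM-style channel weights: first-order (gradient)
    plus a second-order (Hessian-vector) correction."""
    return g + 0.5 * (H @ a)

def cam(weights, acts):
    """Weighted sum of activation maps over channels followed by ReLU,
    as in Grad-CAM-style saliency maps. acts: (C, H, W); weights: (C,)."""
    m = np.tensordot(weights, acts, axes=([0], [0]))
    return np.maximum(m, 0.0)

rng = np.random.default_rng(0)
C, Hh, Ww = 4, 3, 3
acts = rng.standard_normal((C, Hh, Ww))
a = acts.reshape(C, -1).mean(axis=1)                   # pooled channel activations
g = rng.standard_normal(C)                             # gradient of utility w.r.t. a
H = rng.standard_normal((C, C)); H = 0.5 * (H + H.T)   # symmetric Hessian

w = shapleycam_like_weights(g, H, a)
saliency = cam(w, acts)
```

In practice the Hessian-vector product would be obtained via automatic differentiation (e.g. a double backward pass in PyTorch) rather than an explicit Hessian, since forming H for real feature dimensions is infeasible.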
Related papers
- Shapley Pruning for Neural Network Compression [63.60286036508473]
This work presents Shapley value approximations and performs a comparative analysis in terms of cost-benefit utility for neural network compression.
The proposed normative ranking and its approximations show practical results, obtaining state-of-the-art network compression.
arXiv Detail & Related papers (2024-07-19T11:42:54Z)
- DecomCAM: Advancing Beyond Saliency Maps through Decomposition and Integration [25.299607743268993]
Class Activation Map (CAM) methods highlight regions revealing the model's decision-making basis but lack clear saliency maps and detailed interpretability.
We propose DecomCAM, a novel decomposition-and-integration method that distills shared patterns from channel activation maps.
Experiments reveal that DecomCAM not only excels in localization accuracy but also strikes an optimal balance between interpretability and computational efficiency.
arXiv Detail & Related papers (2024-05-29T08:40:11Z)
- Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning [86.15009879251386]
We propose a novel architecture and method for explainable classification with Concept Bottleneck Models (CBM).
Unlike standard classifiers, CBMs require an additional set of concepts to leverage.
We show a significant increase in accuracy using sparse hidden layers in CLIP-based bottleneck models.
arXiv Detail & Related papers (2024-04-04T09:43:43Z)
- BroadCAM: Outcome-agnostic Class Activation Mapping for Small-scale Weakly Supervised Applications [69.22739434619531]
We propose an outcome-agnostic CAM approach, called BroadCAM, for small-scale weakly supervised applications.
By evaluating BroadCAM on VOC2012 and BCSS-WSSS for WSSS and OpenImages30k for WSOL, BroadCAM demonstrates superior performance.
arXiv Detail & Related papers (2023-09-07T06:45:43Z)
- Empowering CAM-Based Methods with Capability to Generate Fine-Grained and High-Faithfulness Explanations [1.757194730633422]
We propose FG-CAM, which extends CAM-based methods to enable generating fine-grained and high-faithfulness explanations.
Our method not only remedies this shortcoming of CAM-based methods without changing their characteristics, but also generates fine-grained explanations with higher faithfulness than LRP and its variants.
arXiv Detail & Related papers (2023-03-16T09:29:05Z)
- Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value [86.69600830581912]
We develop a novel visual explanation method called Shap-CAM based on class activation mapping.
We demonstrate that Shap-CAM achieves better visual performance and fairness for interpreting the decision making process.
arXiv Detail & Related papers (2022-08-07T00:59:23Z)
- Class Re-Activation Maps for Weakly-Supervised Semantic Segmentation [88.55040177178442]
Class activation maps (CAM) are arguably the most standard step for generating pseudo masks in semantic segmentation.
Yet, the crux of the unsatisfactory pseudo masks is the binary cross-entropy loss (BCE) widely used in CAM.
We introduce an embarrassingly simple yet surprisingly effective method: reactivating the CAM converged with BCE by using softmax cross-entropy loss (SCE).
The evaluation on both PASCAL VOC and MSCOCO shows that ReCAM not only generates high-quality masks, but also supports plug-and-play in any CAM variant with little overhead.
arXiv Detail & Related papers (2022-03-02T09:14:58Z)
- F-CAM: Full Resolution CAM via Guided Parametric Upscaling [20.609010268320013]
Class Activation Mapping (CAM) methods have recently gained much attention for weakly-supervised object localization (WSOL) tasks.
CAM methods are typically integrated within off-the-shelf CNN backbones, such as ResNet50.
We introduce a generic method for parametric upscaling of CAMs that allows constructing accurate full resolution CAMs.
arXiv Detail & Related papers (2021-09-15T04:45:20Z)
- Use HiResCAM instead of Grad-CAM for faithful explanations of convolutional neural networks [89.56292219019163]
Explanation methods facilitate the development of models that learn meaningful concepts and avoid exploiting spurious correlations.
We illustrate a previously unrecognized limitation of the popular neural network explanation method Grad-CAM.
We propose HiResCAM, a class-specific explanation method that is guaranteed to highlight only the locations the model used to make each prediction.
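The Grad-CAM limitation and the HiResCAM fix can be illustrated on random tensors. This is a minimal sketch, not the papers' implementations: Grad-CAM averages the gradients over space to get one weight per channel, which can smear attribution away from the locations actually used; HiResCAM instead multiplies gradients and activations elementwise before summing over channels. When the gradients happen to be spatially constant, the two coincide.

```python
import numpy as np

def grad_cam(acts, grads):
    """Grad-CAM: spatially average the gradients to get per-channel weights,
    then take a ReLU of the weighted sum of activation maps over channels."""
    weights = grads.mean(axis=(1, 2))                              # (C,)
    return np.maximum(np.tensordot(weights, acts, axes=([0], [0])), 0.0)

def hires_cam(acts, grads):
    """HiResCAM: elementwise product of gradients and activations, summed
    over channels -- no spatial averaging, so attribution stays local."""
    return np.maximum((grads * acts).sum(axis=0), 0.0)

rng = np.random.default_rng(1)
acts = rng.standard_normal((8, 7, 7))    # (channels, H, W) feature maps
grads = rng.standard_normal((8, 7, 7))   # gradients of the class score w.r.t. acts

g_map = grad_cam(acts, grads)
h_map = hires_cam(acts, grads)
```

The design difference is visible in the reduction order: Grad-CAM reduces gradients to a scalar per channel first, HiResCAM keeps the full spatial gradient field until the final channel sum.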
arXiv Detail & Related papers (2020-11-17T19:26:14Z)
- Axiom-based Grad-CAM: Towards Accurate Visualization and Explanation of CNNs [29.731732363623713]
Class Activation Mapping (CAM) methods have been proposed to discover the connection between CNN's decision and image regions.
In this paper, we introduce two axioms -- Conservation and Sensitivity -- to the visualization paradigm of the CAM methods.
A dedicated Axiom-based Grad-CAM (XGrad-CAM) is proposed to satisfy these axioms as much as possible.
arXiv Detail & Related papers (2020-08-05T18:42:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.