Integrated Grad-CAM: Sensitivity-Aware Visual Explanation of Deep
Convolutional Networks via Integrated Gradient-Based Scoring
- URL: http://arxiv.org/abs/2102.07805v1
- Date: Mon, 15 Feb 2021 19:21:46 GMT
- Title: Integrated Grad-CAM: Sensitivity-Aware Visual Explanation of Deep
Convolutional Networks via Integrated Gradient-Based Scoring
- Authors: Sam Sattarzadeh, Mahesh Sudhakar, Konstantinos N. Plataniotis,
Jongseong Jang, Yeonjeong Jeong, Hyunwoo Kim
- Abstract summary: Grad-CAM is a popular solution that provides such a visualization by combining the activation maps obtained from the model.
We tackle this problem by computing the path integral of the gradient-based terms in Grad-CAM.
We conduct a thorough analysis to demonstrate the improvement achieved by our method in measuring the importance of the extracted representations for the CNN's predictions.
- Score: 26.434705114982584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visualizing the features captured by Convolutional Neural Networks (CNNs) is
one of the conventional approaches to interpret the predictions made by these
models in numerous image recognition applications. Grad-CAM is a popular
solution that provides such a visualization by combining the activation maps
obtained from the model. However, the averaged gradient-based terms used in
this method underestimate the contribution of the representations discovered by
the model to its predictions. We address this issue by computing the path
integral of the gradient-based terms in Grad-CAM. We conduct a thorough
analysis to demonstrate the improvement our method achieves in measuring the
importance of the extracted representations for the CNN's predictions, which
supports its application to object localization and model interpretation.
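Since the abstract's key idea is a drop-in change to Grad-CAM's gradient weighting, a brief sketch may help. The following PyTorch code is a minimal, hypothetical illustration, assuming a single-image batch, a hooked convolutional layer, and a black-image baseline; the function name and the exact way the path-sampled gradients are turned into channel weights are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def integrated_grad_cam(model, conv_layer, image, target_class,
                        steps=32, baseline=None):
    """Hypothetical sketch: accumulate gradients of the class score w.r.t. a
    convolutional layer's activations along a straight path from a baseline
    image to the input, then use them to weight the activation maps."""
    model.eval()
    if baseline is None:
        baseline = torch.zeros_like(image)  # assumed black-image baseline

    activations = []
    handle = conv_layer.register_forward_hook(
        lambda _module, _inp, out: activations.append(out))

    grad_sum = None
    for alpha in torch.linspace(0.0, 1.0, steps):
        activations.clear()
        interpolated = baseline + alpha * (image - baseline)
        score = model(interpolated)[0, target_class]
        grad = torch.autograd.grad(score, activations[0])[0]
        grad_sum = grad if grad_sum is None else grad_sum + grad
    handle.remove()

    # Path-averaged gradients replace Grad-CAM's single-pass gradients as
    # per-channel weights over the input image's activation maps.
    weights = (grad_sum / steps).mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return cam / (cam.max() + 1e-8)
```

Conceptually, sampling only the endpoint alpha = 1 recovers ordinary Grad-CAM, so the path integral can be read as a refinement of its averaged-gradient term.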
Related papers
- Generalizing Backpropagation for Gradient-Based Interpretability [103.2998254573497]
We show that the gradient of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics.
arXiv Detail & Related papers (2023-07-06T15:19:53Z)
- Interpretation of Neural Networks is Susceptible to Universal Adversarial Perturbations [9.054540533394926]
We show the existence of a Universal Perturbation for Interpretation (UPI) for standard image datasets.
We propose a gradient-based optimization method as well as a principal component analysis (PCA)-based approach to compute a UPI which can effectively alter a neural network's gradient-based interpretation on different samples.
arXiv Detail & Related papers (2022-11-30T15:55:40Z)
- Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art.
arXiv Detail & Related papers (2022-09-21T02:33:07Z)
- Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value [86.69600830581912]
We develop a novel visual explanation method called Shap-CAM based on class activation mapping.
We demonstrate that Shap-CAM achieves better visual performance and fairness for interpreting the decision making process.
arXiv Detail & Related papers (2022-08-07T00:59:23Z)
- Geometrically Guided Integrated Gradients [0.3867363075280543]
We introduce an interpretability method called "geometrically-guided integrated gradients."
Our method explores the model's dynamic behavior from multiple scaled versions of the input and captures the best possible attribution for each input.
We also propose a "model perturbation" sanity check to complement the traditionally used "model randomization" test.
arXiv Detail & Related papers (2022-06-13T05:05:43Z)
- Generalizing Adversarial Explanations with Grad-CAM [7.165984630575092]
We present a novel method that extends Grad-CAM from example-based explanations to explanations of global model behaviour.
For our experiment, we study adversarial attacks on deep models such as VGG16, ResNet50, and ResNet101, and wide models such as InceptionNetv3 and XceptionNet.
The proposed method can be used to understand adversarial attacks and explain the behaviour of black box CNN models for image analysis.
arXiv Detail & Related papers (2022-04-11T22:09:21Z)
- Enhancing Deep Neural Network Saliency Visualizations with Gradual Extrapolation [0.0]
We propose an enhancement technique for Class Activation Mapping methods such as Grad-CAM or Excitation Backpropagation.
Our idea, called Gradual Extrapolation, can supplement any method that generates a heatmap picture by sharpening the output.
arXiv Detail & Related papers (2021-04-11T07:39:35Z)
- Use HiResCAM instead of Grad-CAM for faithful explanations of convolutional neural networks [89.56292219019163]
Explanation methods facilitate the development of models that learn meaningful concepts and avoid exploiting spurious correlations.
We illustrate a previously unrecognized limitation of the popular neural network explanation method Grad-CAM.
We propose HiResCAM, a class-specific explanation method that is guaranteed to highlight only the locations the model used to make each prediction; a brief comparison sketch with Grad-CAM follows this list.
arXiv Detail & Related papers (2020-11-17T19:26:14Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
- Towards Interpretable Semantic Segmentation via Gradient-weighted Class Activation Mapping [71.91734471596432]
We propose SEG-GRAD-CAM, a gradient-based method for interpreting semantic segmentation.
Our method is an extension of the widely-used Grad-CAM method, applied locally to produce heatmaps showing the relevance of individual pixels for semantic segmentation.
arXiv Detail & Related papers (2020-02-26T12:32:40Z)
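As referenced in the HiResCAM entry above, the practical difference from Grad-CAM is where gradient information is aggregated. The sketch below contrasts the two weightings, assuming `activations` and `gradients` are the (N, C, H, W) feature maps and their gradients captured from a convolutional layer via hooks; applying a final ReLU to the HiResCAM map is a common visualization choice here, not necessarily part of that paper's definition.

```python
import torch
import torch.nn.functional as F

def grad_cam_map(activations: torch.Tensor, gradients: torch.Tensor) -> torch.Tensor:
    """Grad-CAM: spatially average each channel's gradient, then use the
    averages as channel weights for the activation maps."""
    weights = gradients.mean(dim=(2, 3), keepdim=True)  # (N, C, 1, 1)
    return F.relu((weights * activations).sum(dim=1, keepdim=True))

def hires_cam_map(activations: torch.Tensor, gradients: torch.Tensor) -> torch.Tensor:
    """HiResCAM-style map: multiply gradients and activations element-wise
    before summing over channels, so per-location gradient information is
    never averaged away."""
    return F.relu((gradients * activations).sum(dim=1, keepdim=True))
```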
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.