Revisiting The Evaluation of Class Activation Mapping for
Explainability: A Novel Metric and Experimental Analysis
- URL: http://arxiv.org/abs/2104.10252v1
- Date: Tue, 20 Apr 2021 21:34:24 GMT
- Title: Revisiting The Evaluation of Class Activation Mapping for
Explainability: A Novel Metric and Experimental Analysis
- Authors: Samuele Poppi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara
- Abstract summary: Class Activation Mapping (CAM) approaches provide an effective visualization by taking weighted averages of the activation maps.
We propose a novel set of metrics to quantify explanation maps; these metrics prove more effective and simplify comparisons between approaches.
- Score: 54.94682858474711
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the demand for deep learning solutions increases, the need for
explainability becomes ever more fundamental. In this setting, particular
attention has been given to visualization techniques, which try to attribute
the right relevance to each input pixel with respect to the output of the
network. In this paper, we focus on Class Activation Mapping (CAM) approaches,
which provide an effective visualization by taking weighted averages of the
activation maps. To enhance the evaluation and the reproducibility of such
approaches, we propose a novel set of metrics to quantify explanation maps,
which proves more effective and simplifies comparisons between approaches. To
evaluate the appropriateness of the proposal, we compare different CAM-based
visualization methods on the entire ImageNet validation set, fostering proper
comparisons and reproducibility.
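For intuition, the CAM family computes a saliency map as a weighted average of the last convolutional activation maps. The minimal sketch below uses Grad-CAM-style gradient weights as one concrete instance of that family; the backbone, layer name, and hook setup are illustrative assumptions, not the paper's own code.

```python
# Minimal Grad-CAM-style sketch of the CAM family: a saliency map is a
# weighted average of the last convolutional activation maps, followed
# by ReLU and upsampling. Backbone, layer choice, and hooks are
# illustrative assumptions, not the paper's own code.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights="IMAGENET1K_V1").eval()

acts, grads = {}, {}
model.layer4.register_forward_hook(
    lambda m, i, o: acts.update(maps=o))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: grads.update(maps=go[0]))

def grad_cam(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """image: (1, 3, H, W) normalized tensor; returns a (1, 1, H, W) map."""
    model.zero_grad()
    scores = model(image)
    scores[0, class_idx].backward()
    A = acts["maps"]                      # (1, K, h, w) activation maps
    w = grads["maps"].mean(dim=(2, 3))    # (1, K): one weight per map
    cam = F.relu((w[..., None, None] * A).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return cam / (cam.max() + 1e-8)       # normalize to [0, 1]
```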
Related papers
- Decom-CAM: Tell Me What You See, In Details! Feature-Level Interpretation via Decomposition Class Activation Map [23.71680014689873]
Class Activation Map (CAM) is widely used to interpret deep model predictions by highlighting object location.
This paper proposes a new two-stage interpretability method called the Decomposition Class Activation Map (Decom-CAM).
Our experiments demonstrate that the proposed Decom-CAM outperforms current state-of-the-art methods significantly.
arXiv Detail & Related papers (2023-05-27T14:33:01Z)
- Better Understanding Differences in Attribution Methods via Systematic Evaluations [57.35035463793008]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We use these evaluation schemes to study strengths and shortcomings of some widely used attribution methods over a wide range of models.
arXiv Detail & Related papers (2023-03-21T14:24:58Z)
- Towards Better Understanding Attribution Methods [77.1487219861185]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We also propose a post-processing smoothing step that significantly improves the performance of some attribution methods.
arXiv Detail & Related papers (2022-05-20T20:50:17Z)
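The smoothing step mentioned in the entry above is not detailed in this summary; as a hedged illustration, one common post-processing choice is a Gaussian blur over the attribution map:

```python
# Hypothetical post-processing smoothing of an attribution map. The
# paper's exact smoothing is not given in this summary; a Gaussian blur
# is one standard choice, shown purely as an illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_attribution(attr_map: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """attr_map: (H, W) attribution scores; returns a smoothed map in [0, 1]."""
    smoothed = gaussian_filter(attr_map, sigma=sigma)
    lo, hi = smoothed.min(), smoothed.max()
    return (smoothed - lo) / (hi - lo + 1e-8)
```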
- Impact of a DCT-driven Loss in Attention-based Knowledge-Distillation for Scene Recognition [64.29650787243443]
We propose and analyse the use of a 2D frequency transform of the activation maps before transferring them.
This strategy enhances knowledge transferability in tasks such as scene recognition.
We publicly release the training and evaluation framework used along this paper at http://www.vpu.eps.uam.es/publications/DCTBasedKDForSceneRecognition.
arXiv Detail & Related papers (2022-05-04T11:05:18Z)
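The 2D frequency transform in the entry above suggests comparing teacher and student activation maps in DCT space. A minimal sketch under that assumption follows; the exact loss lives in the authors' released framework.

```python
# Sketch of a DCT-driven distillation term: compare teacher and student
# activation maps in the 2D frequency domain rather than pixel space.
# The loss form and normalization are assumptions; the authors' released
# framework contains the actual implementation.
import numpy as np
from scipy.fft import dctn

def dct_kd_loss(student_maps: np.ndarray, teacher_maps: np.ndarray) -> float:
    """Inputs: (K, h, w) activation maps. L2 loss between 2D DCT-II coefficients."""
    s = dctn(student_maps, axes=(-2, -1), norm="ortho")
    t = dctn(teacher_maps, axes=(-2, -1), norm="ortho")
    return float(np.mean((s - t) ** 2))
```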
- Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z)
- ADVISE: ADaptive Feature Relevance and VISual Explanations for Convolutional Neural Networks [0.745554610293091]
We introduce ADVISE, a new explainability method that quantifies and leverages the relevance of each unit of the feature map to provide better visual explanations.
We extensively evaluate our idea in the image classification task using AlexNet, VGG16, ResNet50, and Xception pretrained on ImageNet.
Our experiments further show that ADVISE fulfils the sensitivity and implementation independence axioms while passing the sanity checks.
arXiv Detail & Related papers (2022-03-02T18:16:57Z)
- Self-supervised Co-training for Video Representation Learning [103.69904379356413]
We investigate the benefit of adding semantic-class positives to instance-based Info Noise Contrastive Estimation (InfoNCE) training.
We propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss.
We evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval.
arXiv Detail & Related papers (2020-10-19T17:59:01Z)
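For reference, the plain instance-based InfoNCE loss the entry above builds on can be sketched as follows; the semantic-class positives added by the co-training scheme are omitted, so this is only the baseline formulation.

```python
# Plain instance-based InfoNCE: each query is pulled toward its positive
# key and pushed away from every other key in the batch. The paper's
# semantic-class positives would enlarge the positive set; that extension
# is omitted here, so this is only the baseline loss.
import torch
import torch.nn.functional as F

def info_nce(query: torch.Tensor, keys: torch.Tensor, tau: float = 0.07):
    """query, keys: (N, D) embeddings; keys[i] is the positive for query[i]."""
    q = F.normalize(query, dim=1)
    k = F.normalize(keys, dim=1)
    logits = q @ k.t() / tau                           # (N, N) similarities
    labels = torch.arange(q.size(0), device=q.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)
```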
- Assessing the Reliability of Visual Explanations of Deep Models with Adversarial Perturbations [15.067369314723958]
We propose an objective measure to evaluate the reliability of explanations of deep models.
Our approach is based on changes in the network's outcome resulting from the perturbation of input images in an adversarial way.
We also propose a straightforward application of our approach to clean relevance maps, creating more interpretable maps without any loss in essential explanation.
arXiv Detail & Related papers (2020-04-22T19:57:34Z)
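The reliability measure in the entry above perturbs inputs adversarially and tracks the change in the network's outcome. Below is a sketch using FGSM as one concrete perturbation; the paper's exact attack and reliability score are assumptions here.

```python
# Sketch: how much does the predicted class score change under a small
# adversarial perturbation? FGSM is used as one concrete attack; the
# paper's actual perturbation scheme and reliability score may differ.
import torch

def score_drop_under_fgsm(model, image, class_idx, eps=2 / 255):
    """image: (1, 3, H, W) tensor in [0, 1]; returns the class-score drop."""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, class_idx]
    score.backward()
    adv = (image + eps * image.grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        adv_score = model(adv)[0, class_idx]
    return (score - adv_score).item()
```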
- Uncertainty based Class Activation Maps for Visual Question Answering [30.859101872119517]
We propose a method that obtains gradient-based certainty estimates that also provide visual attention maps.
We incorporate modern probabilistic deep learning methods that we further improve by using the gradients for these estimates.
The proposed technique can be thought of as a recipe for obtaining improved certainty estimates and explanations for deep learning models.
arXiv Detail & Related papers (2020-01-23T19:54:19Z)