Abs-CAM: A Gradient Optimization Interpretable Approach for Explanation
of Convolutional Neural Networks
- URL: http://arxiv.org/abs/2207.03648v1
- Date: Fri, 8 Jul 2022 02:06:46 GMT
- Title: Abs-CAM: A Gradient Optimization Interpretable Approach for Explanation
of Convolutional Neural Networks
- Authors: Chunyan Zeng, Kang Yan, Zhifeng Wang, Yan Yu, Shiyan Xia, Nan Zhao
- Abstract summary: Class activation mapping-based methods have been widely used to interpret the internal decisions of models in computer vision tasks.
We propose an Absolute value Class Activation Mapping-based (Abs-CAM) method, which optimizes the gradients derived from backpropagation.
The framework of Abs-CAM is divided into two phases: generating the initial saliency map and generating the final saliency map.
- Score: 7.71412567705588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The black-box nature of Deep Neural Networks (DNNs) severely hinders their
performance improvement and application in specific scenarios. In recent years,
class activation mapping-based methods have been widely used to interpret the
internal decisions of models in computer vision tasks. However, when these
methods use backpropagation to obtain gradients, they introduce noise into the
saliency map and may even highlight features that are irrelevant to the decision.
In this paper, we propose an Absolute value Class Activation Mapping-based
(Abs-CAM) method, which optimizes the gradients derived from backpropagation by
turning all of them into positive gradients, enhancing the visual features of
the output neurons' activation and improving the localization ability of the
saliency map. The framework of Abs-CAM is divided into two phases: generating
the initial saliency map and generating the final saliency map. The first phase
improves the localization ability of the saliency map by optimizing the
gradients, and the second phase linearly combines the initial saliency map with
the original image to enhance the semantic information of the saliency map. We
conduct qualitative and quantitative evaluations of the proposed method,
including the Deletion, Insertion, and Pointing Game metrics. The experimental
results show that Abs-CAM clearly reduces the noise in the saliency map, better
locates the features related to the decision, and outperforms previous methods
on recognition and localization tasks.
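The two-phase procedure described in the abstract can be sketched on top of a standard Grad-CAM pipeline. The snippet below is a minimal illustration based only on this abstract, not the authors' reference implementation: the target layer (`layer4` of a torchvision ResNet-50), the map normalization, and the phase-2 mixing weight `alpha` are illustrative assumptions.

```python
# A minimal Abs-CAM-style sketch in PyTorch, written from the abstract above.
# Target layer, normalization, and the phase-2 mixing weight `alpha` are
# illustrative assumptions, not the authors' reference implementation.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
feats, grads = {}, {}

def _hook(module, inputs, output):
    feats["a"] = output                                       # target-layer activations
    output.register_hook(lambda grad: grads.update(g=grad))   # their gradients

model.layer4.register_forward_hook(_hook)                     # assumed target layer

def abs_cam(x, class_idx=None, alpha=0.5):
    """x: (1, 3, H, W) normalized image tensor; returns (initial_map, final_map)."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # Phase 1: weight each feature map by its mean *absolute* gradient, so
    # negative gradients also contribute positive evidence, then ReLU and
    # upsample to the input resolution.
    a, g = feats["a"], grads["g"]                        # both (1, C, h, w)
    weights = g.abs().mean(dim=(2, 3), keepdim=True)     # (1, C, 1, 1)
    initial = F.relu((weights * a).sum(dim=1, keepdim=True))
    initial = F.interpolate(initial, size=x.shape[2:],
                            mode="bilinear", align_corners=False)
    initial = (initial - initial.min()) / (initial.max() - initial.min() + 1e-8)

    # Phase 2: linearly combine the initial saliency map with the original
    # image to reinforce the semantic content it highlights.
    final = alpha * initial + (1.0 - alpha) * x          # broadcasts over channels
    return initial.detach(), final.detach()
```

Relative to a vanilla Grad-CAM pass, the only phase-1 change in this sketch is `g.abs()` in place of the raw signed gradients, which corresponds to the abstract's "turns all of them into positive gradients".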
Related papers
- Guided AbsoluteGrad: Magnitude of Gradients Matters to Explanation's Localization and Saliency [10.786952260623002]
We propose a gradient-based XAI method called Guided AbsoluteGrad for saliency map explanations.
We introduce a novel evaluation metric named ReCover And Predict (RCAP), which considers the localization and visual noise level objectives.
We evaluate Guided AbsoluteGrad with seven gradient-based XAI methods using the RCAP metric and other SOTA metrics in three case studies.
arXiv Detail & Related papers (2024-04-23T23:26:02Z) - Hi-Map: Hierarchical Factorized Radiance Field for High-Fidelity
Monocular Dense Mapping [51.739466714312805]
We introduce Hi-Map, a novel monocular dense mapping approach based on Neural Radiance Field (NeRF).
Hi-Map is exceptional in its capacity to achieve efficient and high-fidelity mapping using only posed RGB inputs.
arXiv Detail & Related papers (2024-01-06T12:32:25Z) - Neural Gradient Learning and Optimization for Oriented Point Normal
Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability of local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z) - Rethinking Class Activation Maps for Segmentation: Revealing Semantic
Information in Shallow Layers by Reducing Noise [2.462953128215088]
A major limitation to the performance of the class activation maps is the small spatial resolution of the feature maps in the last layer of the convolutional neural network.
We propose a simple gradient-based denoising method to filter the noise by truncating the positive gradient.
Our proposed scheme can be easily deployed in other CAM-related methods, facilitating these methods to obtain higher-quality class activation maps.
arXiv Detail & Related papers (2023-08-04T03:04:09Z) - Decom--CAM: Tell Me What You See, In Details! Feature-Level Interpretation via Decomposition Class Activation Map [23.71680014689873]
Class Activation Map (CAM) is widely used to interpret deep model predictions by highlighting object location.
This paper proposes a new two-stage interpretability method called the Decomposition Class Activation Map (Decom-CAM).
Our experiments demonstrate that the proposed Decom-CAM outperforms current state-of-the-art methods significantly.
arXiv Detail & Related papers (2023-05-27T14:33:01Z) - Shap-CAM: Visual Explanations for Convolutional Neural Networks based on
Shapley Value [86.69600830581912]
We develop a novel visual explanation method called Shap-CAM based on class activation mapping.
We demonstrate that Shap-CAM achieves better visual performance and fairness for interpreting the decision making process.
arXiv Detail & Related papers (2022-08-07T00:59:23Z) - CAMERAS: Enhanced Resolution And Sanity preserving Class Activation
Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z) - Enhancing Deep Neural Network Saliency Visualizations with Gradual
Extrapolation [0.0]
We propose an enhancement technique for Class Activation Mapping methods such as Grad-CAM or Excitation Backpropagation.
Our idea, called Gradual Extrapolation, can supplement any method that generates a heatmap picture by sharpening the output.
arXiv Detail & Related papers (2021-04-11T07:39:35Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work is on the iterations of zeroth-order (ZO) optimization, which does not require first-order information.
We show that, with a graceful design of coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Understanding Integrated Gradients with SmoothTaylor for Deep Neural
Network Attribution [70.78655569298923]
Integrated Gradients, as an attribution method for deep neural network models, offers simple implementability.
However, it suffers from noisy explanations, which hampers interpretability.
The SmoothGrad technique is proposed to address this noisiness and smoothen the attribution maps of any gradient-based attribution method (a minimal sketch of the standard SmoothGrad recipe follows this list).
arXiv Detail & Related papers (2020-04-22T10:43:19Z)
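The SmoothGrad technique mentioned in the last entry is most commonly described as averaging input gradients over several Gaussian-perturbed copies of the image. The sketch below follows that standard recipe; `n_samples` and `noise_frac` are illustrative defaults, not values taken from the cited paper.

```python
# A standalone SmoothGrad sketch: average input gradients over Gaussian-
# perturbed copies of the image. `n_samples` and `noise_frac` are
# illustrative choices, not values from the cited paper.
import torch

def smoothgrad(model, x, class_idx, n_samples=25, noise_frac=0.15):
    """x: (1, C, H, W) input tensor; returns an (H, W) attribution map."""
    model.eval()
    sigma = noise_frac * (x.max() - x.min())     # noise scale relative to input range
    total = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy)[0, class_idx]
        grad, = torch.autograd.grad(score, noisy)
        total += grad
    avg = total / n_samples
    # Collapse channels by maximum absolute value, as is common for display.
    return avg.abs().amax(dim=1).squeeze(0)
```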