Enhancing Deep Neural Network Saliency Visualizations with Gradual Extrapolation
- URL: http://arxiv.org/abs/2104.04945v1
- Date: Sun, 11 Apr 2021 07:39:35 GMT
- Title: Enhancing Deep Neural Network Saliency Visualizations with Gradual Extrapolation
- Authors: Tomasz Szandala
- Abstract summary: We propose an enhancement technique for Class Activation Mapping methods such as Grad-CAM or Excitation Backpropagation.
Our idea, called Gradual Extrapolation, can supplement any method that generates a heatmap picture by sharpening the output.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an enhancement technique for Class Activation Mapping methods
such as Grad-CAM or Excitation Backpropagation, which present visual explanations
of decisions from CNN-based models. Our idea, called Gradual Extrapolation, can
supplement any method that generates a heatmap picture by sharpening the
output. Instead of producing a coarse localization map highlighting the
important predictive regions in the image, our method outputs the specific
shape that most contributes to the model output, thereby improving the accuracy
of the saliency maps. This effect is achieved by gradually propagating the coarse
map obtained in a deep layer through all preceding layers with respect to their
activations. In validation tests conducted on a selected set of images, the
proposed method significantly improved the localization of the neural
networks' attention. Furthermore, the proposed method is applicable to any deep
neural network model.
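The abstract describes the propagation mechanism only at a high level, so the following is a minimal PyTorch sketch of one way a coarse map could be gradually propagated through earlier-layer activations. The hooked VGG16 layers, the channel-mean weighting of activations, and the use of a deep-layer activation as a stand-in for a Grad-CAM heatmap are illustrative assumptions, not the authors' reference implementation.

    # Illustrative sketch only: hooked layers, channel-mean weighting, and the
    # stand-in coarse map are assumptions, not the authors' implementation.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    def channel_mean(activation: torch.Tensor) -> torch.Tensor:
        # Collapse a (1, C, H, W) activation into a normalized (H, W) map.
        amap = torch.relu(activation.squeeze(0).mean(dim=0))
        return amap / (amap.max() + 1e-8)

    def gradual_extrapolation(coarse_map: torch.Tensor,
                              activations: list) -> torch.Tensor:
        # `activations` holds intermediate feature maps ordered from deepest to
        # shallowest. At each step the map is upsampled to the earlier layer's
        # resolution and re-weighted by that layer's activation pattern.
        heat = coarse_map.clone()
        for act in activations:
            heat = F.interpolate(heat[None, None], size=act.shape[-2:],
                                 mode="bilinear", align_corners=False)[0, 0]
            heat = heat * channel_mean(act)      # keep regions active earlier on
            heat = heat / (heat.max() + 1e-8)    # renormalize to [0, 1]
        return heat

    # Usage: capture activations of a pretrained VGG16 with forward hooks, take a
    # coarse deep-layer map (stand-in for a Grad-CAM heatmap), and sharpen it.
    model = models.vgg16(weights="IMAGENET1K_V1").eval()
    captured = {}
    for name, layer in {"early": model.features[8], "mid": model.features[15],
                        "late": model.features[22]}.items():
        layer.register_forward_hook(
            lambda m, i, o, n=name: captured.update({n: o.detach()}))

    image = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed image
    _ = model(image)
    coarse = channel_mean(captured["late"])      # stand-in for a CAM-style heatmap
    sharp = gradual_extrapolation(coarse, [captured["mid"], captured["early"]])
    print(sharp.shape)                           # resolution of the earliest layer

In this reading, each upsampling step keeps only the regions that were already active in the earlier, higher-resolution layer, which is what sharpens the coarse localization map toward an object-shaped region.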
Related papers
- Gradient-Free Supervised Learning using Spike-Timing-Dependent Plasticity for Image Recognition [3.087000217989688]
An approach to supervised learning in spiking neural networks is presented using a gradient-free method combined with spike-timing-dependent plasticity for image recognition.
The proposed network architecture is scalable to multiple layers, enabling the development of more complex and deeper SNN models.
arXiv Detail & Related papers (2024-10-21T21:32:17Z)
- DELAD: Deep Landweber-guided deconvolution with Hessian and sparse prior [0.22940141855172028]
We present a model for non-blind image deconvolution that incorporates the classic iterative method into a deep learning application.
We build our network based on the iterative Landweber deconvolution algorithm, which is integrated with trainable convolutional layers to enhance the recovered image structures and details.
arXiv Detail & Related papers (2022-09-30T11:15:03Z)
- Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value [86.69600830581912]
We develop a novel visual explanation method called Shap-CAM based on class activation mapping.
We demonstrate that Shap-CAM achieves better visual performance and fairness for interpreting the decision making process.
arXiv Detail & Related papers (2022-08-07T00:59:23Z)
- Abs-CAM: A Gradient Optimization Interpretable Approach for Explanation of Convolutional Neural Networks [7.71412567705588]
Class activation mapping-based methods have been widely used to interpret the internal decisions of models in computer vision tasks.
We propose an Absolute value Class Activation Mapping-based (Abs-CAM) method, which optimizes the gradients derived from the backpropagation.
The framework of Abs-CAM is divided into two phases: generating initial saliency map and generating final saliency map.
arXiv Detail & Related papers (2022-07-08T02:06:46Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Poly-CAM: High resolution class activation map for convolutional neural networks [88.29660600055715]
Saliency maps derived from convolutional neural networks generally fail to accurately localize the image features justifying the network prediction.
This is because those maps are either low-resolution, as for CAM [Zhou et al., 2016], smooth, as for perturbation-based methods [Zeiler and Fergus, 2014], or correspond to a large number of widespread peaky spots.
In contrast, our work proposes to combine the information from earlier network layers with the one from later layers to produce a high resolution Class Activation Map.
arXiv Detail & Related papers (2022-04-28T09:06:19Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- Pan-sharpening via High-pass Modification Convolutional Neural Network [39.295436779920465]
We propose a novel pan-sharpening convolutional neural network based on a high-pass modification block.
The proposed block is designed to learn the high-pass information, thereby enhancing the spatial information in each band of the multi-spectral images.
Experiments demonstrate the superior performance of the proposed method compared to the state-of-the-art pan-sharpening methods.
arXiv Detail & Related papers (2021-05-24T23:39:04Z)
- Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation [22.688772441351308]
Methods based on class activation mapping and randomized input sampling have gained great popularity.
However, the attribution methods provide lower resolution and blurry explanation maps that limit their explanation power.
In this work, we collect visualization maps from multiple layers of the model based on an attribution-based input sampling technique.
We also propose a layer selection strategy that applies to the whole family of CNN-based models.
arXiv Detail & Related papers (2020-10-01T20:27:30Z)
- Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution [70.78655569298923]
Integrated Gradients, as an attribution method for deep neural network models, is simple to implement.
However, it suffers from noisy explanations, which hinders interpretability.
The SmoothGrad technique is proposed to solve the noisiness issue and smoothen the attribution maps of any gradient-based attribution method (a minimal sketch of the standard SmoothGrad procedure follows this list).
arXiv Detail & Related papers (2020-04-22T10:43:19Z)
- Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification [58.20132466198622]
We propose Attentive CutMix, a naturally enhanced augmentation strategy based on CutMix.
In each training iteration, we choose the most descriptive regions based on the intermediate attention maps from a feature extractor.
Our proposed method is simple yet effective, easy to implement and can boost the baseline significantly.
arXiv Detail & Related papers (2020-03-29T15:01:05Z)
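The SmoothTaylor entry above references the SmoothGrad technique without spelling out its mechanism; the following is a minimal sketch of the standard SmoothGrad procedure (Smilkov et al., 2017), which averages input gradients over several noise-perturbed copies of the input. The PyTorch model, noise level, and sample count below are illustrative assumptions, not details from that paper.

    # Illustrative sketch of the standard SmoothGrad procedure: average plain
    # input gradients over noise-perturbed copies of the input. The model,
    # noise level, and sample count are arbitrary choices for demonstration.
    import torch
    from torchvision import models

    def smoothgrad(model, image, target_class, n_samples=25, noise_std=0.15):
        # Returns an (H, W) saliency map averaged over noisy copies of `image`.
        grads = torch.zeros_like(image)
        for _ in range(n_samples):
            noisy = (image + noise_std * torch.randn_like(image)).requires_grad_(True)
            model(noisy)[0, target_class].backward()
            grads += noisy.grad
        # Average, take per-pixel magnitude across channels, normalize to [0, 1].
        saliency = (grads / n_samples).abs().sum(dim=1).squeeze(0)
        return saliency / (saliency.max() + 1e-8)

    model = models.resnet18(weights="IMAGENET1K_V1").eval()
    image = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed image
    heatmap = smoothgrad(model, image, target_class=207)
    print(heatmap.shape)                  # torch.Size([224, 224])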
This list is automatically generated from the titles and abstracts of the papers on this site.