Smooth Deep Saliency
- URL: http://arxiv.org/abs/2404.02282v3
- Date: Thu, 8 Aug 2024 15:25:44 GMT
- Title: Smooth Deep Saliency
- Authors: Rudolf Herdt, Maximilian Schmidt, Daniel Otero Baguer, Peter Maaß
- Abstract summary: We investigate methods to reduce the noise in deep saliency maps coming from convolutional downsampling.
These methods make the investigated models more interpretable for gradient-based saliency maps computed in hidden layers.
- Score: 0.3397310088873502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we investigate methods to reduce the noise in deep saliency maps coming from convolutional downsampling. Those methods make the investigated models more interpretable for gradient-based saliency maps, computed in hidden layers. We evaluate the faithfulness of those methods using insertion and deletion metrics, finding that saliency maps computed in hidden layers perform better compared to both the input layer and GradCAM. We test our approach on different models trained for image classification on ImageNet1K, and models trained for tumor detection on Camelyon16 and in-house real-world digital pathology scans of stained tissue samples. Our results show that the checkerboard noise in the gradient gets reduced, resulting in smoother and therefore easier to interpret saliency maps.
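The checkerboard noise the abstract refers to arises because strided convolutions cover input positions unevenly: during backpropagation, each input position receives one gradient term per output window that covers it. A minimal 1-D sketch of this effect (illustrative only, not the paper's code; the `coverage` helper is hypothetical):

```python
# Illustrative sketch (not from the paper): count how many stride-2,
# size-3 convolution windows cover each input position in 1-D. The
# gradient w.r.t. the input sums one weight per covering window, so
# uneven coverage produces checkerboard-like noise in saliency maps.
def coverage(n_in, k=3, stride=2):
    cov = [0] * n_in
    for start in range(0, n_in - k + 1, stride):
        for i in range(start, start + k):
            cov[i] += 1
    return cov

print(coverage(11))  # [1, 1, 2, 1, 2, 1, 2, 1, 2, 1, 1]
```

Interior positions alternate between one and two covering windows, which is exactly the period-2 pattern that appears as checkerboard artifacts in gradient saliency maps; the methods investigated in the paper aim to suppress this.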
Related papers
- Deep Nets with Subsampling Layers Unwittingly Discard Useful Activations at Test-Time [46.795812678240445]
Subsampling layers play a crucial role in deep nets by discarding a portion of an activation map to reduce its spatial dimensions.
We propose a search and aggregate method to find useful activation maps to be used at test time.
Our method consistently improves model test-time performance, complementing existing test-time augmentation techniques.
arXiv Detail & Related papers (2024-10-01T21:24:43Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- Towards Unpaired Depth Enhancement and Super-Resolution in the Wild [121.96527719530305]
State-of-the-art data-driven methods of depth map super-resolution rely on registered pairs of low- and high-resolution depth maps of the same scenes.
We consider an approach to depth map enhancement based on learning from unpaired data.
arXiv Detail & Related papers (2021-05-25T16:19:16Z)
- A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z)
- Evaluating Input Perturbation Methods for Interpreting CNNs and Saliency Map Comparison [9.023847175654602]
In this paper we show that arguably neutral baseline images still impact the generated saliency maps and their evaluation with input perturbations.
We experimentally reveal inconsistencies among a selection of input perturbation methods and find that they lack robustness for generating saliency maps and for evaluating saliency maps as saliency metrics.
arXiv Detail & Related papers (2021-01-26T18:11:06Z)
- Multiscale Score Matching for Out-of-Distribution Detection [19.61640396236456]
We present a new methodology for detecting out-of-distribution (OOD) images by utilizing norms of the score estimates at multiple noise scales.
Our methodology is completely unsupervised and follows a straightforward training scheme.
arXiv Detail & Related papers (2020-10-25T15:10:31Z)
- Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution [70.78655569298923]
Integrated Gradients is a simple-to-implement attribution method for deep neural network models.
However, it suffers from noisy explanations, which hampers interpretability.
The SmoothGrad technique is proposed to address this noise and smooth the attribution maps of any gradient-based attribution method.
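The noise-averaging idea behind SmoothGrad can be sketched in a few lines (a hedged illustration, not code from any of the listed papers; `grad_f` here is a toy analytic gradient standing in for a network's gradient):

```python
import random

# Hedged sketch of the SmoothGrad idea: average the gradient over
# several noisy copies of the input, which suppresses high-frequency
# noise in the resulting attribution map.
def smoothgrad(grad_f, x, n_samples=50, sigma=0.1, seed=0):
    rng = random.Random(seed)
    total = [0.0] * len(x)
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        g = grad_f(noisy)
        total = [t + gi for t, gi in zip(total, g)]
    return [t / n_samples for t in total]

# Toy example: f(x) = sum(x_i ** 2), so grad f(x) = 2 * x.
grad_f = lambda x: [2.0 * xi for xi in x]
sal = smoothgrad(grad_f, [1.0, -2.0, 3.0])
```

With zero-mean Gaussian noise, the averaged saliency converges to the local expected gradient; for this toy `f` it stays close to `[2.0, -4.0, 6.0]`.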
arXiv Detail & Related papers (2020-04-22T10:43:19Z)
- There and Back Again: Revisiting Backpropagation Saliency Methods [87.40330595283969]
Saliency methods seek to explain the predictions of a model by producing an importance map for each input sample.
A popular class of such methods is based on backpropagating a signal and analyzing the resulting gradient.
We propose a single framework under which several such methods can be unified.
arXiv Detail & Related papers (2020-04-06T17:58:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.