Understanding Integrated Gradients with SmoothTaylor for Deep Neural
Network Attribution
- URL: http://arxiv.org/abs/2004.10484v2
- Date: Thu, 2 Sep 2021 17:57:56 GMT
- Title: Understanding Integrated Gradients with SmoothTaylor for Deep Neural
Network Attribution
- Authors: Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek,
Alexander Binder
- Abstract summary: Integrated Gradients as an attribution method for deep neural network models offers simple implementability.
It suffers from noisiness of explanations which affects the ease of interpretability.
The SmoothGrad technique is proposed to solve the noisiness issue and smoothen the attribution maps of any gradient-based attribution method.
- Score: 70.78655569298923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Integrated Gradients as an attribution method for deep neural network models
offers simple implementability. However, it suffers from noisiness of
explanations which affects the ease of interpretability. The SmoothGrad
technique is proposed to solve the noisiness issue and smoothen the attribution
maps of any gradient-based attribution method. In this paper, we present
SmoothTaylor as a novel theoretical concept bridging Integrated Gradients and
SmoothGrad from the perspective of Taylor's theorem. We apply the methods to the
image classification problem, using the ILSVRC2012 ImageNet object recognition
dataset, and a couple of pretrained image models to generate attribution maps.
These attribution maps are empirically evaluated using quantitative measures
for sensitivity and noise level. We further propose adaptive noising to
optimize for the noise scale hyperparameter value. From our experiments, we
find that the SmoothTaylor approach together with adaptive noising is able to
generate better-quality saliency maps with less noise and higher sensitivity
to the relevant points in the input space compared to Integrated Gradients.
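For readers who want the gist in code, the following framework-agnostic sketch contrasts plain Integrated Gradients with a SmoothTaylor-style smoothed attribution. It assumes a user-supplied `grad_fn(x)` that returns the gradient of the target class score with respect to the input `x`; the function and parameter names are illustrative only, and the paper's adaptive-noising scheme is not reproduced here.

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    # Riemann approximation of Integrated Gradients along the straight-line
    # path from `baseline` to the input `x`.
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

def smooth_taylor_attribution(x, grad_fn, noise_scale=0.15, n_samples=25, seed=0):
    # First-order Taylor attributions averaged over Gaussian-perturbed root
    # points, in the spirit of SmoothTaylor; a sketch only, not the paper's
    # exact estimator.
    rng = np.random.default_rng(seed)
    sigma = noise_scale * (x.max() - x.min())  # noise relative to input range
    total = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        z = x + rng.normal(0.0, sigma, size=x.shape)  # noisy root point
        total += (x - z) * grad_fn(z)                 # first-order Taylor term
    return total / n_samples
```

In the paper, the noise scale is not fixed but tuned via adaptive noising to trade off noise level against sensitivity; the constant `noise_scale` above merely stands in for that hyperparameter.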
Related papers
- On the Trade-Off between Stability and Fidelity of Gaussian-Smoothed Saliency Maps [9.054540533394926]
We study the role of the randomized smoothing used in the well-known SmoothGrad algorithm in the stability of gradient-based saliency maps with respect to the randomness of training samples.
Our theoretical results suggest that Gaussian smoothing boosts the stability of gradient-based maps with respect to the randomness of training settings.
arXiv Detail & Related papers (2024-11-06T13:26:57Z)
- Rethinking the Principle of Gradient Smooth Methods in Model Explanation [2.6819730646697972]
Gradient smoothing is an efficient approach to reducing noise in gradient-based model explanation methods.
Based on insights into this principle, we propose an adaptive gradient smoothing method, AdaptGrad.
arXiv Detail & Related papers (2024-10-10T08:24:27Z)
- A Learning Paradigm for Interpretable Gradients [9.074325843851726]
We present a novel training approach to improve the quality of gradients for interpretability.
We find that the resulting gradient is qualitatively less noisy and quantitatively improves the interpretability properties of different networks.
arXiv Detail & Related papers (2024-04-23T13:32:29Z)
- Neural Gradient Learning and Optimization for Oriented Point Normal Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability for local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation-based image saliency aims to explain model predictions by estimating the model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- DWDN: Deep Wiener Deconvolution Network for Non-Blind Image Deblurring [66.91879314310842]
We propose an explicit deconvolution process in a feature space by integrating a classical Wiener deconvolution framework with learned deep features.
A multi-scale cascaded feature refinement module then predicts the deblurred image from the deconvolved deep features.
We show that the proposed deep Wiener deconvolution network facilitates deblurred results with visibly fewer artifacts and quantitatively outperforms state-of-the-art non-blind image deblurring methods by a wide margin.
arXiv Detail & Related papers (2021-03-18T00:38:11Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Investigating Saturation Effects in Integrated Gradients [5.366801257602863]
We propose a variant of Integrated Gradients which primarily captures gradients in unsaturated regions (a rough code sketch of this idea appears after this list).
We find that this attribution technique shows higher model faithfulness and lower sensitivity to noise compared with standard Integrated Gradients.
arXiv Detail & Related papers (2020-10-23T22:48:02Z)
- Physics-based Shading Reconstruction for Intrinsic Image Decomposition [20.44458250060927]
We propose albedo and shading gradient descriptors which are derived from physics-based models.
An initial sparse shading map is calculated directly from the corresponding RGB image gradients in a learning-free unsupervised manner.
An optimization method is proposed to reconstruct the full dense shading map.
We are the first to directly address the texture and intensity ambiguity problems of shading estimation.
arXiv Detail & Related papers (2020-09-03T09:30:17Z)
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper, we analyze a variant of the Optimistic Adagrad algorithm for nonconcave min-max problems.
Our experiments show that the advantage of adaptive gradient algorithms over non-adaptive ones in GAN training can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
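As referenced in the "Investigating Saturation Effects in Integrated Gradients" entry above, the sketch below restricts the Integrated Gradients path integral to its early, presumably unsaturated portion. The cutoff `alpha_max` and the function name are hypothetical, and this is only one plausible reading of the idea, not the authors' exact method; `grad_fn` is the same assumed gradient callable as in the earlier sketch.

```python
import numpy as np

def left_integrated_gradients(x, baseline, grad_fn, steps=50, alpha_max=0.3):
    # Integrated Gradients accumulated only over the first part of the path
    # (alpha in [0, alpha_max]), where the model output is assumed not yet
    # saturated; `alpha_max` is an illustrative hyperparameter.
    alphas = np.linspace(0.0, alpha_max, steps)
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    # Scale by alpha_max so the mean approximates the truncated path integral.
    return alpha_max * (x - baseline) * grads.mean(axis=0)
```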
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.