TSG: Target-Selective Gradient Backprop for Probing CNN Visual Saliency
- URL: http://arxiv.org/abs/2110.05182v1
- Date: Mon, 11 Oct 2021 12:00:20 GMT
- Title: TSG: Target-Selective Gradient Backprop for Probing CNN Visual Saliency
- Authors: Lin Cheng, Pengfei Fang, Yanjie Liang, Liao Zhang, Chunhua Shen, Hanzi Wang
- Abstract summary: We study the visual saliency, a.k.a. visual explanation, to interpret convolutional neural networks.
Inspired by those observations, we propose a novel visual saliency framework, termed Target-Selective Gradient (TSG) backprop.
The proposed TSG consists of two components, namely, TSG-Conv and TSG-FC, which rectify the gradients for convolutional layers and fully-connected layers, respectively.
- Score: 72.9106103283475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The explanation for deep neural networks has drawn extensive attention in the
deep learning community over the past few years. In this work, we study the
visual saliency, a.k.a. visual explanation, to interpret convolutional neural
networks. Compared to iteration-based saliency methods, single-backward-pass
saliency methods benefit from faster speed and are widely used in downstream
visual tasks. Thus, our work focuses on single-backward-pass approaches.
However, existing methods in this category struggle to produce fine-grained
saliency maps that concentrate on specific target classes. In other words,
producing faithful saliency maps that satisfy both target-selectiveness and
fine-grainedness with a single backward pass remains a challenging problem in
the field. To mitigate this problem, we revisit the
gradient flow inside the network, and find that the entangled semantics and
original weights may disturb the propagation of target-relevant saliency.
Inspired by those observations, we propose a novel visual saliency framework,
termed Target-Selective Gradient (TSG) backprop, which leverages rectification
operations to effectively emphasize target classes and further efficiently
propagate the saliency to the input space, thereby generating target-selective
and fine-grained saliency maps. The proposed TSG consists of two components,
namely, TSG-Conv and TSG-FC, which rectify the gradients for convolutional
layers and fully-connected layers, respectively. Thorough qualitative and
quantitative experiments on ImageNet and Pascal VOC show that the proposed
framework achieves more accurate and reliable results than other competitive
methods.
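The exact TSG-Conv and TSG-FC rectification rules are defined in the paper itself; purely as a hedged sketch of the single-backward-pass setting they operate in, the snippet below computes a gradient saliency map for a chosen target class and applies a naive clamp-based rectification at convolutional and fully-connected layers through backward hooks. The toy network, the hook placement and the clamp rule are illustrative assumptions, not the authors' method.

```python
# A minimal sketch (not the authors' TSG rules): single-backward-pass saliency
# for a chosen target class, with a naive gradient rectification applied at
# conv and FC layers via backward hooks.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Stand-in classifier; the paper evaluates on standard ImageNet backbones."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def rectify_grad(module, grad_input, grad_output):
    # Naive stand-in for TSG-style rectification: keep only gradient
    # components that positively support the selected target class.
    return tuple(g.clamp(min=0) if g is not None else g for g in grad_input)

model = TinyCNN().eval()
hooks = [m.register_full_backward_hook(rectify_grad)
         for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]

x = torch.randn(1, 3, 64, 64, requires_grad=True)   # stand-in input image
target_class = 3                                     # class to explain

model(x)[0, target_class].backward()                 # single backward pass

# Fine-grained, input-resolution saliency: channel-wise max of |d score / d x|.
saliency = x.grad.abs().amax(dim=1)                  # shape (1, 64, 64)
for h in hooks:
    h.remove()
```

Swapping the naive clamp for the paper's TSG-Conv and TSG-FC rules at the corresponding layer types is what would turn this scaffold into the proposed method.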
Related papers
- Visual Prompt Tuning in Null Space for Continual Learning [51.96411454304625]
Existing prompt-tuning methods have demonstrated impressive performance in continual learning (CL).
This paper aims to learn each task by tuning the prompts in the direction orthogonal to the subspace spanned by previous tasks' features.
In practice, an effective null-space-based approximation solution has been proposed to implement the prompt gradient projection.
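As a hedged sketch of the null-space projection mechanism mentioned in this entry (not the paper's exact prompt-tuning procedure), the snippet below builds a projector from the SVD of previous tasks' features and applies it to a prompt gradient before the update; the dimensions, rank threshold and toy loss are illustrative assumptions.

```python
# A minimal sketch of null-space gradient projection; shapes, threshold and the
# toy loss are assumptions, not the paper's exact procedure.
import torch

def null_space_projector(prev_features, eps=1e-3):
    """Projector onto the (approximate) null space of the row span of
    prev_features, i.e. directions orthogonal to previous tasks' features."""
    _, s, vh = torch.linalg.svd(prev_features, full_matrices=True)
    rank = int((s > eps * s.max()).sum())
    null_basis = vh[rank:].T                       # (d, d - rank)
    return null_basis @ null_basis.T               # (d, d) projection matrix

d = 64                                             # prompt / feature dimension
# Stand-in features of previous tasks, deliberately low-rank so a null space exists.
prev_features = torch.randn(500, 20) @ torch.randn(20, d)
P = null_space_projector(prev_features)

prompt = torch.randn(8, d, requires_grad=True)     # stand-in learnable prompts
loss = (prompt.sum(dim=0) ** 2).sum()              # stand-in current-task loss
loss.backward()

with torch.no_grad():
    projected_grad = prompt.grad @ P               # drop components that would
    prompt -= 0.1 * projected_grad                 # interfere with old tasks
```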
- Rethinking Class Activation Maps for Segmentation: Revealing Semantic Information in Shallow Layers by Reducing Noise [2.462953128215088]
A major limitation on the performance of class activation maps is the small spatial resolution of the feature maps in the last layer of the convolutional neural network.
We propose a simple gradient-based denoising method to filter the noise by truncating the positive gradient.
Our proposed scheme can be easily deployed in other CAM-related methods, facilitating these methods to obtain higher-quality class activation maps.
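Below is a hedged, Grad-CAM-style sketch of denoising a shallow-layer map by truncating the gradient before channel pooling. The backbone, the chosen layer and the exact truncation rule (here, keeping only the positive part of the gradient) are illustrative assumptions rather than the paper's precise scheme.

```python
# Grad-CAM-style map from a shallow stage, with a gradient-truncation step.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # untrained stand-in backbone
acts = {}

def keep(module, inputs, output):
    output.retain_grad()                       # keep the shallow-layer gradient
    acts["a"] = output

model.layer2.register_forward_hook(keep)       # a shallow-ish stage

x = torch.randn(1, 3, 224, 224)                # stand-in input image
target_class = 7
model(x)[0, target_class].backward()           # single backward pass

a = acts["a"]                                  # (1, 128, 28, 28) feature maps
g = a.grad.clamp(min=0)                        # truncate the gradient (one reading)
weights = g.mean(dim=(2, 3), keepdim=True)     # CAM-style channel weights
cam = F.relu((weights * a).sum(dim=1, keepdim=True)).detach()
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalise to [0, 1]
```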
- A Unified Algebraic Perspective on Lipschitz Neural Networks [88.14073994459586]
This paper introduces a novel perspective unifying various types of 1-Lipschitz neural networks.
We show that many existing techniques can be derived and generalized via finding analytical solutions of a common semidefinite programming (SDP) condition.
Our approach, called SDP-based Lipschitz Layers (SLL), allows us to design non-trivial yet efficient generalizations of convex potential layers.
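The SLL layer itself is parameterized by solutions of the paper's SDP condition, which this summary does not spell out. As grounding context, the sketch below implements the convex potential layer that SLL generalizes, z = x - (2 / ||W||_2^2) * W * relu(W^T x + b), which is 1-Lipschitz for any weight matrix; the dimensions and the numerical check are illustrative.

```python
# Convex potential layer (the baseline SLL generalizes), with a Lipschitz check.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexPotentialLayer(nn.Module):
    """Residual layer z = x - (2 / ||W||_2^2) * W relu(W^T x + b).
    It is a gradient step on a convex potential, hence 1-Lipschitz."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(0.1 * torch.randn(dim, dim))
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        sigma = torch.linalg.matrix_norm(self.W, ord=2)   # spectral norm ||W||_2
        return x - (2.0 / sigma**2) * F.relu(x @ self.W + self.b) @ self.W.T

layer = ConvexPotentialLayer(16)
x1, x2 = torch.randn(4, 16), torch.randn(4, 16)
# Output distances never exceed input distances (up to numerical slack).
assert ((layer(x1) - layer(x2)).norm(dim=1)
        <= (x1 - x2).norm(dim=1) + 1e-5).all()
```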
- Learning with Local Gradients at the Edge [14.94491070863641]
We present a novel backpropagation-free optimization algorithm dubbed Target Projection Stochastic Gradient Descent (tpSGD).
tpSGD generalizes direct random target projection to work with arbitrary loss functions.
We evaluate the performance of tpSGD in training deep neural networks and extend the approach to multi-layer RNNs.
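The exact tpSGD update is defined in the paper; the numpy sketch below only illustrates the underlying idea it builds on: each layer is trained against a fixed random projection of the target, so no end-to-end backward pass is needed. The layer sizes, the local squared-error loss and the learning rate are illustrative assumptions, not the paper's recipe.

```python
# Layer-local training driven by fixed random projections of the target
# (no end-to-end backpropagation); an illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 128, 10]                      # input, two hidden layers, output
W = [0.1 * rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
# Fixed random matrices projecting the one-hot target to each hidden layer.
B = [0.1 * rng.standard_normal((sizes[-1], n)) for n in sizes[1:-1]]

def relu(z):
    return np.maximum(z, 0.0)

def local_update(x, y_onehot, lr=0.01):
    h = x
    for k in range(len(W)):
        z = h @ W[k]
        if k < len(W) - 1:                       # hidden layer
            out = relu(z)
            target = y_onehot @ B[k]             # layer-local target signal
            delta = (out - target) * (z > 0)     # grad of local squared error
        else:                                    # output layer: ordinary error
            out = z
            delta = out - y_onehot
        W[k] -= lr * np.outer(h, delta)          # purely local weight update
        h = out                                  # forward pass continues

x = rng.standard_normal(784)
y = np.eye(10)[3]
local_update(x, y)
```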
- SESS: Saliency Enhancing with Scaling and Sliding [42.188013259368766]
High-quality saliency maps are essential in several machine learning application areas including explainable AI and weakly supervised object detection and segmentation.
We propose a novel saliency enhancing approach called SESS (Saliency Enhancing with Scaling and Sliding).
It is a method- and model-agnostic extension to existing saliency map generation methods.
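A hedged sketch of the scale-and-slide idea follows: an existing saliency method is run on resized sliding-window crops and the per-crop maps are fused back into one full-resolution map. The window sizes, stride, plain-gradient base method and mean fusion are assumptions, not SESS's exact procedure.

```python
# Fuse saliency computed on scaled, sliding crops into one full-size map.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # untrained stand-in classifier
for p in model.parameters():
    p.requires_grad_(False)                    # only the input needs gradients

def base_saliency(crop, target):
    """Any existing saliency method could be plugged in; plain input gradients here."""
    crop = crop.clone().requires_grad_(True)
    model(crop)[0, target].backward()
    return crop.grad.abs().amax(dim=1, keepdim=True)          # (1, 1, h, w)

def scale_and_slide(image, target, windows=(128, 192), stride=64, model_size=224):
    _, _, H, W = image.shape
    fused = torch.zeros(1, 1, H, W)
    counts = torch.zeros(1, 1, H, W)
    for window in windows:                                     # multiple scales
        for top in range(0, H - window + 1, stride):           # sliding windows
            for left in range(0, W - window + 1, stride):
                crop = image[:, :, top:top + window, left:left + window]
                crop = F.interpolate(crop, size=model_size, mode="bilinear",
                                     align_corners=False)
                sal = F.interpolate(base_saliency(crop, target), size=window,
                                    mode="bilinear", align_corners=False)
                fused[:, :, top:top + window, left:left + window] += sal
                counts[:, :, top:top + window, left:left + window] += 1
    return fused / counts.clamp(min=1)                         # mean fusion

image = torch.randn(1, 3, 256, 256)            # stand-in input image
saliency = scale_and_slide(image, target=7)
```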
- Activated Gradients for Deep Neural Networks [9.476778519758426]
Deep neural networks often suffer from poor performance or even training failure due to ill-conditioning.
In this paper, a novel method that applies a gradient activation function (GAF) to the gradients is proposed to handle this problem.
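As a hedged illustration of the mechanism, the snippet below applies an element-wise "gradient activation function" to every parameter gradient between the backward pass and the optimizer step. The specific scaled-tanh choice is an assumption; the paper defines and analyses its own family of GAFs.

```python
# Transform raw gradients with a bounded, monotone, sign-preserving function
# before the optimizer consumes them.
import torch
import torch.nn as nn

def apply_gaf(model, alpha=2.0, beta=1.0):
    """Replace every parameter gradient g with alpha * tanh(beta * g)."""
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad = alpha * torch.tanh(beta * p.grad)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 20), torch.randn(32, 1)

for _ in range(10):
    opt.zero_grad()
    nn.functional.mse_loss(model(x), y).backward()
    apply_gaf(model)     # act on the gradients before the update
    opt.step()
```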
- Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent [95.94432031144716]
We propose a unified non-convex optimization framework for the analysis of neural network training.
We show that many existing guarantees for networks trained by gradient descent can be unified through this framework.
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
- Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization [16.85167651136133]
We take a broader view of training sparse networks and consider the role of regularization, optimization and architecture choices on sparse models.
We show that gradient flow in sparse networks can be improved by reconsidering aspects of the architecture design and the training regime.
- Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification [58.20132466198622]
We propose Attentive CutMix, a naturally enhanced augmentation strategy based on CutMix.
In each training iteration, we choose the most descriptive regions based on the intermediate attention maps from a feature extractor.
Our proposed method is simple yet effective, easy to implement and can boost the baseline significantly.
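A hedged sketch of the attentive cut-and-paste step is given below: the top-N grid cells of a feature extractor's channel-summed activation map for one image are pasted onto another, and the labels are mixed by the pasted fraction. The extractor, grid size, N and label-mixing rule are assumptions based on the summary above.

```python
# Paste the most "attended" patches of one image onto another and mix labels.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights=None)
features = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()  # conv trunk

def attentive_cutmix(src, dst, y_src, y_dst, grid=7, top_n=6):
    with torch.no_grad():
        fmap = features(src.unsqueeze(0))                  # (1, C, h, w)
        attn = fmap.sum(dim=1, keepdim=True)               # channel-summed "attention"
        attn = F.interpolate(attn, size=grid, mode="bilinear",
                             align_corners=False)[0, 0]    # (grid, grid)
    idx = attn.flatten().topk(top_n).indices
    mixed = dst.clone()
    ph, pw = src.shape[1] // grid, src.shape[2] // grid     # patch size in pixels
    for i in idx.tolist():
        r, c = divmod(i, grid)
        mixed[:, r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = \
            src[:, r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
    lam = top_n / (grid * grid)                             # fraction taken from src
    return mixed, lam * y_src + (1 - lam) * y_dst           # soft label mix

src, dst = torch.randn(3, 224, 224), torch.randn(3, 224, 224)
y_src, y_dst = torch.eye(10)[1], torch.eye(10)[4]
mixed_image, mixed_label = attentive_cutmix(src, dst, y_src, y_dst)
```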