Neural Gradient Regularizer
- URL: http://arxiv.org/abs/2308.16612v2
- Date: Wed, 13 Sep 2023 09:11:26 GMT
- Title: Neural Gradient Regularizer
- Authors: Shuang Xu, Yifan Wang, Zixiang Zhao, Jiangjun Peng, Xiangyong Cao,
Deyu Meng, Yulun Zhang, Radu Timofte, Luc Van Gool
- Abstract summary: We propose a neural gradient regularizer (NGR) that expresses the gradient map as the output of a neural network.
NGR is applicable to various image types and different image processing tasks, functioning in a zero-shot learning fashion.
- Score: 150.85797800807524
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Owing to its significant success, the prior imposed on gradient maps has
consistently been a subject of great interest in the field of image processing.
Total variation (TV), one of the most representative regularizers, is known for
its ability to capture the intrinsic sparsity prior underlying gradient maps.
Nonetheless, TV and its variants often underestimate gradient maps, weakening
edges and details whose gradients should not be zero in the original image
(i.e., image structures that are not describable by sparse priors on gradient
maps). Recently, total deep variation (TDV) has been introduced,
assuming the sparsity of feature maps, which provides a flexible regularization
learned from large-scale datasets for a specific task. However, TDV requires
retraining the network whenever the image type or task changes, limiting its versatility. To
alleviate this issue, in this paper, we propose a neural gradient regularizer
(NGR) that expresses the gradient map as the output of a neural network. Unlike
existing methods, NGR does not rely on any subjective sparsity or other prior
assumptions on image gradient maps, thereby avoiding the underestimation of
gradient maps. NGR is applicable to various image types and different image
processing tasks, functioning in a zero-shot learning fashion, making it a
versatile and plug-and-play regularizer. Extensive experimental results
demonstrate the superior performance of NGR over state-of-the-art counterparts
for a range of different tasks, further validating its effectiveness and
versatility.
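As a point of reference, the following is a minimal, hypothetical sketch (PyTorch-style, not the authors' released code) contrasting a plain anisotropic TV penalty with an NGR-style scheme in which a small untrained network outputs the gradient maps; the names GradientNet and denoise_ngr, the coupling weight lam, and the denoising data-fidelity term are all illustrative assumptions.
```python
# Hypothetical sketch (assumptions: single-channel image, denoising task,
# PyTorch): contrasts anisotropic TV with an NGR-style scheme where the
# gradient maps are produced by a small untrained network. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def image_gradients(x):
    """Forward-difference gradient maps along height and width (zero-padded)."""
    dh = F.pad(x[..., 1:, :] - x[..., :-1, :], (0, 0, 0, 1))
    dw = F.pad(x[..., :, 1:] - x[..., :, :-1], (0, 1, 0, 0))
    return dh, dw

def tv_penalty(x):
    """Anisotropic total variation: L1 norm of the gradient maps."""
    dh, dw = image_gradients(x)
    return dh.abs().sum() + dw.abs().sum()

class GradientNet(nn.Module):
    """Tiny CNN whose two output channels are read as the (dh, dw) gradient maps."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2, 3, padding=1),
        )

    def forward(self, x):
        g = self.net(x)
        return g[:, :1], g[:, 1:]

def denoise_ngr(y, steps=500, lam=0.1, lr=1e-3):
    """Zero-shot denoising sketch: jointly fit the clean image x and the
    gradient-generating network so that the gradients of x match the network
    output, instead of being shrunk toward zero as TV would do."""
    x = y.detach().clone().requires_grad_(True)
    gnet = GradientNet()
    opt = torch.optim.Adam([x, *gnet.parameters()], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        dh, dw = image_gradients(x)
        gh, gw = gnet(x)
        fidelity = 0.5 * (x - y).pow(2).sum()
        coupling = (dh - gh).pow(2).sum() + (dw - gw).pow(2).sum()
        (fidelity + lam * coupling).backward()
        opt.step()
    return x.detach()
```
Swapping the coupling term for lam * tv_penalty(x) recovers an ordinary TV-regularized denoiser for comparison: TV shrinks all gradients toward zero, whereas the network-generated gradient maps are free to keep non-zero values at edges.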
Related papers
- VDIP-TGV: Blind Image Deconvolution via Variational Deep Image Prior
Empowered by Total Generalized Variation [21.291149526862416]
Deep image prior (DIP) proposes using a deep network as a regularizer for a single image rather than as a supervised model.
In this paper, we combine total generalized variation (TGV) regularization with VDIP to overcome these shortcomings.
The proposed VDIP-TGV effectively recovers image edges and details by supplementing extra gradient information through TGV.
arXiv Detail & Related papers (2023-10-30T12:03:18Z)
- Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet.
arXiv Detail & Related papers (2022-10-07T03:52:27Z)
- Deep Generalized Unfolding Networks for Image Restoration [16.943609020362395]
We propose a Deep Generalized Unfolding Network (DGUNet) for image restoration.
We integrate a gradient estimation strategy into the gradient descent step of the Proximal Gradient Descent (PGD) algorithm.
Our method achieves state-of-the-art performance while remaining interpretable and generalizable.
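For context, one iteration of the classical proximal gradient descent (PGD) scheme that such unfolding networks build on can be sketched as follows; the soft-thresholding prox, the linear operator A, and the step size eta are illustrative assumptions, not the learned modules of DGUNet.
```python
# Hypothetical sketch of one PGD iteration for
# min_x 0.5 * ||A x - y||^2 + lam * ||x||_1; unrolled networks such as DGUNet
# replace the hand-crafted gradient and proximal steps with learned modules.
import torch

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1."""
    return torch.sign(z) * torch.clamp(z.abs() - tau, min=0.0)

def pgd_step(x, y, A, eta=0.1, lam=0.01):
    """One PGD iteration: gradient step on the data term, then the prox of the prior."""
    grad = A.T @ (A @ x - y)  # gradient of 0.5 * ||A x - y||^2
    return soft_threshold(x - eta * grad, eta * lam)
```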
arXiv Detail & Related papers (2022-04-28T08:39:39Z)
- Low-rank Meets Sparseness: An Integrated Spatial-Spectral Total
Variation Approach to Hyperspectral Denoising [11.79762223888294]
We propose a novel TV regularization to simultaneously characterize the sparsity and low-rank priors of the gradient map (LRSTV).
The new regularization not only imposes sparsity on the gradient map itself, but also penalizes the rank on the gradient map after Fourier transform.
It naturally encodes the sparsity and low-rank priors of the gradient map, and thus is expected to reflect the inherent structure of the original image more faithfully.
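Read literally, that penalty can be sketched as below (a hypothetical illustration in PyTorch; the actual LRSTV formulation for hyperspectral cubes differs in detail, and the weights alpha and beta are assumed).
```python
# Hypothetical LRSTV-style penalty: L1 sparsity on the gradient maps plus a
# nuclear norm (convex surrogate of rank) on their 2-D Fourier transforms.
import torch

def lrstv_penalty(x, alpha=1.0, beta=0.1):
    """x: (H, W) image tensor; returns sparsity + low-rank penalty terms."""
    dh = x[1:, :] - x[:-1, :]    # vertical forward differences
    dw = x[:, 1:] - x[:, :-1]    # horizontal forward differences
    sparsity = dh.abs().sum() + dw.abs().sum()
    low_rank = sum(torch.linalg.svdvals(torch.fft.fft2(g).abs()).sum()
                   for g in (dh, dw))
    return alpha * sparsity + beta * low_rank
```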
arXiv Detail & Related papers (2022-04-27T12:31:55Z)
- TSG: Target-Selective Gradient Backprop for Probing CNN Visual Saliency [72.9106103283475]
We study visual saliency, a.k.a. visual explanation, to interpret convolutional neural networks.
Inspired by those observations, we propose a novel visual saliency framework, termed Target-Selective Gradient (TSG) backprop.
The proposed TSG consists of two components, namely, TSG-Conv and TSG-FC, which rectify the gradients for convolutional layers and fully-connected layers, respectively.
arXiv Detail & Related papers (2021-10-11T12:00:20Z)
- Layerwise Optimization by Gradient Decomposition for Continual Learning [78.58714373218118]
Deep neural networks achieve state-of-the-art and sometimes super-human performance across various domains.
When learning tasks sequentially, the networks easily forget the knowledge of previous tasks, a phenomenon known as "catastrophic forgetting".
arXiv Detail & Related papers (2021-05-17T01:15:57Z)
- Input Bias in Rectified Gradients and Modified Saliency Maps [0.0]
Saliency maps provide an intuitive way to identify input features with substantial influences on classifications or latent concepts.
Several modifications to conventional saliency maps, such as Rectified Gradients, have been introduced to allegedly denoise and improve interpretability.
We demonstrate that dark areas of an input image are not highlighted by a saliency map using Rectified Gradients, even if they are relevant to the class or concept.
arXiv Detail & Related papers (2020-11-10T09:45:13Z)
- Boosting Gradient for White-Box Adversarial Attacks [60.422511092730026]
We propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient-based white-box attack algorithms.
Our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a subset of them to update the misleading gradients.
arXiv Detail & Related papers (2020-10-21T02:13:26Z)
- Understanding Integrated Gradients with SmoothTaylor for Deep Neural
Network Attribution [70.78655569298923]
Integrated Gradients, as an attribution method for deep neural network models, is simple to implement.
However, it suffers from noisy explanations, which hampers interpretability.
The SmoothGrad technique is proposed to address the noisiness and smooth the attribution maps of any gradient-based attribution method.
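For reference, SmoothGrad itself reduces to a few lines; the sketch below assumes a PyTorch classifier model, a single input x, and a target class index, and is not tied to the SmoothTaylor method proposed in that paper.
```python
# Minimal SmoothGrad sketch: average input gradients over Gaussian-perturbed
# copies of the input to denoise a gradient-based saliency map.
import torch

def smoothgrad(model, x, target, n_samples=25, sigma=0.15):
    """x: (1, C, H, W) input; returns the averaged input-gradient map."""
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        model(noisy)[0, target].backward()
        grads += noisy.grad
    return grads / n_samples
```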
arXiv Detail & Related papers (2020-04-22T10:43:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.