Learned Gradient of a Regularizer for Plug-and-Play Gradient Descent
- URL: http://arxiv.org/abs/2204.13940v1
- Date: Fri, 29 Apr 2022 08:33:33 GMT
- Title: Learned Gradient of a Regularizer for Plug-and-Play Gradient Descent
- Authors: Rita Fermanian and Mikael Le Pendu and Christine Guillemot
- Abstract summary: The Plug-and-Play framework allows integrating advanced image denoising priors into optimization algorithms.
The Plug-and-Play ADMM and the Regularization by Denoising (RED) algorithms are two examples of methods that made a breakthrough in image restoration.
We show that it is possible to train a denoiser along with a network that corresponds to the gradient of its regularizer.
- Score: 37.41458921829744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Plug-and-Play (PnP) framework allows integrating advanced image denoising
priors into optimization algorithms, to efficiently solve a variety of image
restoration tasks. The Plug-and-Play alternating direction method of
multipliers (ADMM) and the Regularization by Denoising (RED) algorithms are two
examples of such methods that made a breakthrough in image restoration.
However, while the former method only applies to proximal algorithms, it has
recently been shown that there exists no regularization that explains the RED
algorithm when the denoisers lack Jacobian symmetry, which happens to be the
case for most practical denoisers. To the best of our knowledge, there exists no
method for training a network that directly represents the gradient of a
regularizer, which can be directly used in Plug-and-Play gradient-based
algorithms. We show that it is possible to train a denoiser along with a
network that corresponds to the gradient of its regularizer. We use this
gradient of the regularizer in gradient-based optimization methods and obtain
better results compared with other generic Plug-and-Play approaches. We also
show that the regularizer can be used as a pre-trained network for unrolled
gradient descent. Lastly, we show that the resulting denoiser allows for fast
convergence of the Plug-and-Play ADMM.
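As a concrete illustration, here is a minimal sketch of the gradient-based Plug-and-Play iteration the abstract describes, assuming a linear degradation operator `A` and a stand-in `grad_reg` for the trained gradient-of-regularizer network (both hypothetical; in the paper `grad_reg` is trained jointly with a denoiser):

```python
import numpy as np

def pnp_gradient_descent(y, A, grad_reg, lam=0.1, step=0.1, n_iter=300):
    """Plug-and-Play gradient descent for min_x 0.5*||Ax - y||^2 + lam*R(x),
    where grad_reg(x) returns the gradient of the (learned) regularizer R."""
    x = A.T @ y                              # crude back-projection initialization
    for _ in range(n_iter):
        grad_f = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        x = x - step * (grad_f + lam * grad_reg(x))
    return x

# Toy usage, with a quadratic smoothness prior standing in for the trained network:
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 128)) / 8.0         # hypothetical degradation operator
y = A @ rng.normal(size=128)
grad_reg = lambda x: 2 * x - np.roll(x, 1) - np.roll(x, -1)  # grad of 0.5*||Dx||^2
x_hat = pnp_gradient_descent(y, A, grad_reg)
```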
Related papers
- Plug-and-Play image restoration with Stochastic deNOising REgularization [8.678250057211368]
We propose a new framework called Stochastic deNOising REgularization (SNORE).
SNORE applies the denoiser only to images with noise of the adequate level.
It is based on an explicit regularization, which leads to a stochastic gradient descent scheme to solve inverse problems.
arXiv Detail & Related papers (2024-02-01T18:05:47Z)
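A minimal sketch of one such stochastic regularization step, assuming a linear operator `A` and a pretrained `denoiser` for noise level `sigma` (both hypothetical stand-ins); the regularizer gradient is evaluated at a noise-perturbed image, so the denoiser only ever sees noise of the level it was trained for:

```python
import numpy as np

def snore_step(x, y, A, denoiser, sigma, lam, step, rng):
    """One stochastic gradient step in the spirit of SNORE (illustrative).
    The denoiser is applied to x + noise at its training level sigma, and
    (x_noisy - D(x_noisy)) / sigma^2 serves as a stochastic regularizer gradient."""
    grad_f = A.T @ (A @ x - y)                        # data-fidelity gradient
    x_noisy = x + sigma * rng.standard_normal(x.shape)
    grad_r = (x_noisy - denoiser(x_noisy)) / sigma**2
    return x - step * (grad_f + lam * grad_r)
```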
- End-to-End Diffusion Latent Optimization Improves Classifier Guidance [81.27364542975235]
Direct Optimization of Diffusion Latents (DOODL) is a novel guidance method.
It enables plug-and-play guidance by optimizing diffusion latents.
It outperforms one-step classifier guidance on computational and human evaluation metrics.
arXiv Detail & Related papers (2023-03-23T22:43:52Z)
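A highly simplified sketch of guidance by latent optimization, assuming generic differentiable `generator` and `classifier` modules (hypothetical; DOODL itself backpropagates through an invertible diffusion sampler, which this toy loop does not reproduce):

```python
import torch

def optimize_latent(generator, classifier, target_class, z0, n_steps=50, lr=0.05):
    """Ascend the classifier log-probability of the target class with respect
    to the latent, differentiating through the whole generation process."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_steps):
        image = generator(z)                               # differentiable sampler
        log_p = torch.log_softmax(classifier(image), dim=-1)[:, target_class]
        loss = -log_p.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```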
- A Scalable Finite Difference Method for Deep Reinforcement Learning [0.0]
We investigate a problem with the use of distributed workers in some Deep Reinforcement Learning domains.
We produce a stable, low-bandwidth learning algorithm that achieves 100% usage of all connected CPUs under typical conditions.
arXiv Detail & Related papers (2022-10-14T03:33:53Z)
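A minimal sketch of the finite-difference gradient estimator at the heart of such methods (all names and values illustrative); because each worker only returns scalar episode returns, the scheme needs very little bandwidth:

```python
import numpy as np

def fd_gradient(evaluate, theta, n_directions=32, eps=0.02, rng=None):
    """Central finite-difference estimate of the gradient of a scalar return.
    evaluate(theta) runs one rollout and returns its total reward; in a
    distributed setting, each of the 2*n_directions evaluations can run on a
    separate worker."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(theta)
    for _ in range(n_directions):
        u = rng.standard_normal(theta.shape)
        u /= np.linalg.norm(u)
        slope = (evaluate(theta + eps * u) - evaluate(theta - eps * u)) / (2 * eps)
        grad += slope * u
    return grad / n_directions
```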
- Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet.
arXiv Detail & Related papers (2022-10-07T03:52:27Z)
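A minimal sketch of the basic weight-perturbed forward-gradient estimator on a toy regression problem (illustrative; the paper's contribution is to perturb activations instead and to attach local losses, which shrinks the estimator's variance):

```python
import torch
from torch.func import jvp

x = torch.randn(32, 10)                  # toy regression data
y = torch.randn(32, 1)
w = 0.1 * torch.randn(10, 1)

def loss_fn(w):
    return ((x @ w - y) ** 2).mean()

def forward_gradient(w):
    """One random tangent v, one forward-mode JVP: g_hat = (grad . v) v is an
    unbiased estimate of the true gradient, computed without backprop."""
    v = torch.randn_like(w)
    _, directional = jvp(loss_fn, (w,), (v,))
    return directional * v

for _ in range(200):
    w = w - 0.05 * forward_gradient(w)
```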
- Learning Sparsity-Promoting Regularizers using Bilevel Optimization [9.18465987536469]
We present a method for supervised learning of sparsity-promoting regularizers for denoising signals and images.
Experiments with structured 1D signals and natural images show that the proposed method can learn an operator that outperforms well-known regularizers.
arXiv Detail & Related papers (2022-07-18T20:50:02Z)
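A minimal sketch of the bilevel idea, assuming a synthesis-form lower-level problem and plain unrolling (hypothetical simplifications; the paper learns an analysis operator and computes the bilevel gradient more carefully):

```python
import torch

def soft_threshold(z, t):
    return torch.sign(z) * torch.clamp(z.abs() - t, min=0.0)

def ista_denoise(y, D, lam=0.1, n_iter=30):
    """Lower level: min_z 0.5*||D z - y||^2 + lam*||z||_1 via unrolled ISTA;
    the denoised signal is D z. D plays the role of the learned operator."""
    step = 1.0 / torch.linalg.matrix_norm(D.detach(), ord=2) ** 2
    z = torch.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z - step * D.T @ (D @ z - y), step * lam)
    return D @ z

# Upper level: fit D so that denoising noisy signals matches the clean ones.
D = torch.randn(64, 128, requires_grad=True)
opt = torch.optim.Adam([D], lr=1e-3)
for _ in range(100):
    x_clean = torch.randn(64)                      # toy training pair
    y_noisy = x_clean + 0.1 * torch.randn(64)
    loss = ((ista_denoise(y_noisy, D) - x_clean) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```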
- Gradient Step Denoiser for convergent Plug-and-Play [5.629161809575015]
Plug-and-Play methods can lead to tremendous visual performance on various imaging problems.
We propose a new type of Plug-and-Play method, based on half-quadratic splitting.
Experiments show that it is possible to learn such a deep denoiser without compromising performance.
arXiv Detail & Related papers (2021-10-07T07:11:48Z)
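A minimal sketch of the gradient-step parameterization: the denoiser is constrained to be an exact gradient step D(x) = x - grad g(x) of a scalar potential, so the regularizer it implicitly minimizes is known. The toy MLP potential below is a stand-in for the paper's DRUNet-based one:

```python
import torch

class GradientStepDenoiser(torch.nn.Module):
    """Denoiser of the form D(x) = x - grad g(x), where g is a scalar-output
    "potential" network; autograd supplies the exact gradient."""
    def __init__(self, dim):
        super().__init__()
        self.potential = torch.nn.Sequential(
            torch.nn.Linear(dim, 128),
            torch.nn.Softplus(),          # smooth activation keeps g differentiable
            torch.nn.Linear(128, 1),
        )

    def forward(self, x):
        x = x.clone().requires_grad_(True)
        g = self.potential(x).sum()
        (grad_g,) = torch.autograd.grad(g, x, create_graph=True)
        return x - grad_g

# Trained like any denoiser, e.g. MSE between D(x_noisy) and x_clean.
denoiser = GradientStepDenoiser(dim=64)
x_hat = denoiser(torch.randn(8, 64))
```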
- Preconditioned Plug-and-Play ADMM with Locally Adjustable Denoiser for Image Restoration [54.23646128082018]
We extend the concept of plug-and-play optimization to use denoisers that can be parameterized for non-constant noise variance.
We show that our pixel-wise adjustable denoiser, along with a suitable preconditioning strategy, can further improve the plug-and-play ADMM approach for several applications.
arXiv Detail & Related papers (2021-10-01T15:46:35Z)
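A minimal PnP-ADMM sketch where the denoiser accepts a per-pixel noise-level map, which is the "locally adjustable" ingredient (the operator, denoiser, and scaling choice are all illustrative stand-ins):

```python
import numpy as np

def pnp_admm(y, A, denoiser, sigma_map, rho=1.0, n_iter=50):
    """Plug-and-Play ADMM for 0.5*||A x - y||^2 + R(x): the z-update replaces
    the proximal operator of R with a denoiser whose strength can vary per
    pixel via sigma_map (here scaled by the penalty parameter rho)."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Aty = A.T @ A, A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(AtA + rho * np.eye(n), Aty + rho * (z - u))
        z = denoiser(x + u, sigma_map / np.sqrt(rho))   # pixel-wise noise levels
        u = u + x - z
    return x
```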
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
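A generic denoising-AMP sketch for y = Ax + w with an i.i.d.-style sensing matrix (illustrative; L-GM-AMP's specific contribution is to learn a universal Gaussian-mixture denoiser and its per-iteration parameters):

```python
import numpy as np

def denoising_amp(y, A, denoiser, n_iter=20, rng=None):
    """AMP with a plug-in denoiser. The Onsager term needs the denoiser's
    divergence, estimated here by Monte Carlo probing."""
    rng = rng or np.random.default_rng()
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        sigma = np.linalg.norm(z) / np.sqrt(m)     # effective noise estimate
        r = x + A.T @ z                            # pseudo-measurement of x
        x = denoiser(r, sigma)
        probe = rng.standard_normal(n)             # divergence via random probing
        eps = max(sigma, 1e-8) / 100
        div = probe @ (denoiser(r + eps * probe, sigma) - x) / (eps * n)
        z = y - A @ x + (n / m) * div * z          # residual with Onsager correction
    return x
```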
- Channel-Directed Gradients for Optimization of Convolutional Neural Networks [50.34913837546743]
We introduce optimization methods for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error.
We show that defining the gradients along the output channel direction leads to a performance boost, while other directions can be detrimental.
arXiv Detail & Related papers (2020-08-25T00:44:09Z)
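One simple reading of the idea, as a sketch: rescale each convolution weight gradient along the output-channel axis (the paper derives its channel directions from a particular inner product, so treat this as illustrative only):

```python
import torch

def channel_directed(grad):
    """Normalize a 4-D conv weight gradient per output channel, so each output
    channel contributes a unit-norm update direction (illustrative variant)."""
    flat = grad.flatten(1)                          # [out_ch, in_ch*kh*kw]
    norms = flat.norm(dim=1, keepdim=True).clamp_min(1e-12)
    return (flat / norms).view_as(grad)

# Hypothetical use after loss.backward(), before optimizer.step():
# for p in model.parameters():
#     if p.grad is not None and p.grad.dim() == 4:  # conv kernels only
#         p.grad.copy_(channel_directed(p.grad))
```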
This list is automatically generated from the titles and abstracts of the papers on this site.