Gradient Step Denoiser for convergent Plug-and-Play
- URL: http://arxiv.org/abs/2110.03220v1
- Date: Thu, 7 Oct 2021 07:11:48 GMT
- Title: Gradient Step Denoiser for convergent Plug-and-Play
- Authors: Samuel Hurault, Arthur Leclaire, Nicolas Papadakis
- Abstract summary: Plug-and-Play methods can lead to tremendous visual performance for various image problems.
We propose a new type of Plug-and-Play method, based on half-quadratic splitting.
Experiments show that it is possible to learn such a deep denoiser while not compromising the performance.
- Score: 5.629161809575015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Plug-and-Play methods constitute a class of iterative algorithms for imaging
problems where regularization is performed by an off-the-shelf denoiser.
Although Plug-and-Play methods can lead to tremendous visual performance for
various image problems, the few existing convergence guarantees are based on
unrealistic (or suboptimal) hypotheses on the denoiser, or limited to strongly
convex data terms. In this work, we propose a new type of Plug-and-Play
methods, based on half-quadratic splitting, for which the denoiser is realized
as a gradient descent step on a functional parameterized by a deep neural
network. Exploiting convergence results for proximal gradient descent
algorithms in the non-convex setting, we show that the proposed Plug-and-Play
algorithm is a convergent iterative scheme that targets stationary points of an
explicit global functional. Besides, experiments show that it is possible to
learn such a deep denoiser while not compromising the performance in comparison
to other state-of-the-art deep denoisers used in Plug-and-Play schemes. We
apply our proximal gradient algorithm to various ill-posed inverse problems,
e.g. deblurring, super-resolution and inpainting. For all these applications,
numerical results empirically confirm the convergence results. Experiments also
show that this new algorithm reaches state-of-the-art performance, both
quantitatively and qualitatively.
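As a concrete illustration of the abstract, the PyTorch sketch below defines the denoiser as an explicit gradient step D(x) = x - ∇g_θ(x) on a scalar potential g_θ parameterized by a neural network, and plugs it into a half-quadratic-splitting loop that alternates this denoising step with a proximal step on the data-fidelity term. This is a minimal sketch under stated assumptions, not the authors' implementation: the callables `potential` and `prox_data`, the initialization, and the iteration count are hypothetical placeholders.

```python
# Minimal sketch (not the authors' released code) of a gradient step denoiser
# plugged into a Plug-and-Play half-quadratic splitting loop.
import torch

def gradient_step_denoiser(x, potential):
    """D(x) = x - grad g_theta(x), where g_theta is a scalar potential
    computed by a neural network (hypothetical `potential` callable)."""
    x = x.detach().requires_grad_(True)
    g = potential(x)                              # scalar value g_theta(x)
    grad_g, = torch.autograd.grad(g.sum(), x)     # nabla g_theta(x) via autograd
    return (x - grad_g).detach()

def pnp_hqs(x0, potential, prox_data, n_iter=100):
    """Plug-and-Play HQS sketch: alternate a gradient step on the learned
    prior with a proximal step on the data-fidelity term, i.e.
    x_{k+1} = prox_{tau f}(x_k - grad g_theta(x_k))."""
    x = x0
    for _ in range(n_iter):
        z = gradient_step_denoiser(x, potential)  # denoising = gradient step on g_theta
        x = prox_data(z)                          # proximal step on f (assumed given,
                                                  # e.g. in closed form)
    return x
```

Read this way, each iteration is a proximal gradient step on an explicit functional combining the data-fidelity term and g_θ, which is the structure the abstract's convergence argument (convergence to stationary points of an explicit global functional) relies on.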
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z) - Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}(\ln(T) / T^{1 - \frac{1}{\alpha}})$.
arXiv Detail & Related papers (2024-03-11T09:10:37Z) - Plug-and-Play image restoration with Stochastic deNOising REgularization [8.678250057211368]
We propose a new framework called Stochastic deNOising REgularization (SNORE).
SNORE applies the denoiser only to images with noise of the appropriate level.
It is based on an explicit regularization, which leads to a stochastic gradient descent scheme to solve inverse problems.
arXiv Detail & Related papers (2024-02-01T18:05:47Z) - Ordering for Non-Replacement SGD [7.11967773739707]
We seek to find an ordering that can improve the convergence rates for the non-replacement form of the algorithm.
We develop optimal orderings for constant and decreasing step sizes for strongly convex and convex functions.
In addition, we are able to combine the ordering with mini-batch training and further apply it to more complex neural networks.
arXiv Detail & Related papers (2023-06-28T00:46:58Z) - A relaxed proximal gradient descent algorithm for convergent
plug-and-play with proximal denoiser [6.2484576862659065]
This paper presents a new convergent Plug-and-Play (PnP) algorithm based on a relaxed proximal gradient descent.
The algorithm converges for a wider range of regularization parameters, thus allowing more accurate restoration of an image.
arXiv Detail & Related papers (2023-01-31T16:11:47Z) - Learned Gradient of a Regularizer for Plug-and-Play Gradient Descent [37.41458921829744]
The Plug-and-Play framework allows integrating advanced image denoising priors into optimization algorithms.
Plug-and-Play ADMM and Regularization by Denoising (RED) algorithms are two examples of methods that made a breakthrough in image restoration.
We show that it is possible to train a denoiser along with a network that corresponds to the gradient of its regularizer.
arXiv Detail & Related papers (2022-04-29T08:33:33Z) - Proximal denoiser for convergent plug-and-play optimization with
nonconvex regularization [7.0226402509856225]
Plug-and-Play (PnP) methods solve ill-posed inverse problems through iterative proximal algorithms by replacing a proximal operator with a denoising operation learned via a neural network.
We show that this denoiser actually corresponds to the gradient of a function.
arXiv Detail & Related papers (2022-01-31T14:05:20Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient
Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of iteration complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Exploiting Higher Order Smoothness in Derivative-free Optimization and
Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad (OAdagrad) algorithm for nonconvex-nonconcave min-max problems.
Our experiments show that the advantage of adaptive gradient algorithms over non-adaptive ones in GAN training can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)