Proximal denoiser for convergent plug-and-play optimization with
nonconvex regularization
- URL: http://arxiv.org/abs/2201.13256v1
- Date: Mon, 31 Jan 2022 14:05:20 GMT
- Title: Proximal denoiser for convergent plug-and-play optimization with
nonconvex regularization
- Authors: Samuel Hurault, Arthur Leclaire, Nicolas Papadakis
- Abstract summary: Plug-and-Play (PnP) methods solve ill-posed inverse problems through iterative proximal algorithms by replacing a proximal operator by a denoising operation performed by a neural network.
We show that the gradient-step denoiser actually corresponds to the proximal operator of another scalar function.
- Score: 7.0226402509856225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Plug-and-Play (PnP) methods solve ill-posed inverse problems through
iterative proximal algorithms by replacing a proximal operator by a denoising
operation. When applied with deep neural network denoisers, these methods have
shown state-of-the-art visual performance for image restoration problems.
However, their theoretical convergence analysis is still incomplete. Most of
the existing convergence results consider nonexpansive denoisers, which is
unrealistic, or limit their analysis to strongly convex data-fidelity terms
in the inverse problem to solve. Recently, it was proposed to train the
denoiser as a gradient descent step on a functional parameterized by a deep
neural network. Using such a denoiser guarantees the convergence of the PnP
version of the Half-Quadratic-Splitting (PnP-HQS) iterative algorithm. In this
paper, we show that this gradient denoiser can actually correspond to the
proximal operator of another scalar function. Given this new result, we exploit
the convergence theory of proximal algorithms in the nonconvex setting to
obtain convergence results for PnP-PGD (Proximal Gradient Descent) and PnP-ADMM
(Alternating Direction Method of Multipliers). When built on top of a smooth
gradient denoiser, we show that PnP-PGD and PnP-ADMM are convergent and target
stationary points of an explicit functional. These convergence results are
confirmed with numerical experiments on deblurring, super-resolution and
inpainting.
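To make the iteration concrete, here is a minimal PnP-PGD sketch in NumPy for an inpainting-style data-fidelity term. The `box_denoiser` is only a crude stand-in for the learned gradient-step denoiser studied in the paper, and all names and parameters are illustrative.

```python
import numpy as np

def pnp_pgd(y, mask, denoiser, tau=1.0, n_iter=100):
    """Plug-and-Play Proximal Gradient Descent (sketch).

    Data-fidelity f(x) = 0.5 * ||mask * x - y||^2 (inpainting-style),
    so grad f(x) = mask * (mask * x - y). The proximal step on the implicit
    regularizer is replaced by a denoiser, which in the paper is a learned
    gradient-step denoiser D = Id - grad g.
    """
    x = y.copy()
    for _ in range(n_iter):
        grad_f = mask * (mask * x - y)    # gradient of the data-fidelity term
        x = denoiser(x - tau * grad_f)    # denoiser plays the role of a prox
    return x

# Toy usage with a crude smoothing denoiser standing in for the learned one.
def box_denoiser(x, width=5):
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
mask = (rng.random(256) > 0.3).astype(float)       # about 30% missing samples
y = mask * clean + 0.05 * mask * rng.standard_normal(256)
restored = pnp_pgd(y, mask, box_denoiser, tau=1.0, n_iter=50)
```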
Related papers
- Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}(\ln(T) / T^{1 - \frac{1}{\alpha}})$.
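The rate above concerns an over-the-air federated variant of AdaGrad; for reference, the per-coordinate AdaGrad update it builds on can be sketched as follows (plain, non-federated form; names are illustrative).

```python
import numpy as np

def adagrad_step(theta, grad, accum, lr=0.1, eps=1e-8):
    """One AdaGrad update: accumulate squared gradients per coordinate and
    scale the step by the inverse square root of that accumulator."""
    accum = accum + grad ** 2
    theta = theta - lr * grad / (np.sqrt(accum) + eps)
    return theta, accum
```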
arXiv Detail & Related papers (2024-03-11T09:10:37Z) - Convergent plug-and-play with proximal denoiser and unconstrained
regularization parameter [12.006511319607473]
In this work, we present new proofs of convergence for Plug-and-Play (PnP) algorithms with a proximal denoiser.
First, we provide a novel convergence proof for PnP-DRS (Douglas-Rachford Splitting) that does not impose any restriction on the regularization parameter.
Second, we examine a relaxed version of the PnP-PGD (Proximal Gradient Descent) algorithm that enhances the accuracy of image restoration.
arXiv Detail & Related papers (2023-11-02T13:18:39Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
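A rough way to see the stabilization mechanism: if one base optimizer step is viewed as an operator T, averaging the current iterate with T's output is the classical Krasnosel'skii-Mann iteration, which converges when T is nonexpansive. The sketch below is illustrative and not the paper's exact scheme.

```python
import numpy as np

def interpolated_training(theta, base_step, lam=0.5, n_iter=100):
    """theta_{t+1} = (1 - lam) * theta_t + lam * T(theta_t), with T one step
    of the base optimizer; for nonexpansive T this averaged
    (Krasnoselskii-Mann) iteration damps oscillations and converges to a
    fixed point of T."""
    for _ in range(n_iter):
        theta = (1 - lam) * theta + lam * base_step(theta)
    return theta

# Toy usage: T is a gradient step (step size 0.8) on f(theta) = 0.5 * ||theta||^2.
theta = interpolated_training(np.array([5.0, -3.0]), lambda t: t - 0.8 * t)
```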
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow for inexact computations of the Hessian, gradient, and function values.
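For orientation, one ARC iteration approximately minimizes a cubically regularized second-order model; the sketch below forms that model explicitly (the point of the paper being that the gradient, Hessian and function values entering it may all be inexact). The inner solver and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def arc_step(grad, hess, sigma):
    """Approximately minimize the cubic model
    m(s) = g^T s + 0.5 * s^T H s + (sigma / 3) * ||s||^3."""
    def model(s):
        return grad @ s + 0.5 * s @ hess @ s + (sigma / 3.0) * np.linalg.norm(s) ** 3
    s0 = -grad / (np.linalg.norm(hess) + sigma)   # crude starting point
    return minimize(model, s0).x

# Toy usage on a 2D quadratic with identity Hessian.
s = arc_step(np.array([1.0, -2.0]), np.eye(2), sigma=1.0)
```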
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Convergent Bregman Plug-and-Play Image Restoration for Poisson Inverse
Problems [8.673558396669806]
Plug-and-Play (PnP) methods are efficient iterative algorithms for solving ill-posed image inverse problems.
We propose two PnP algorithms based on the Bregman Score Denoiser for Poisson inverse problems.
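For context, Poisson inverse problems replace the quadratic data-fidelity with the negative Poisson log-likelihood; a plain (Euclidean) evaluation of that term and its gradient is sketched below. The paper's Bregman geometry and score-based denoiser are not reproduced here, and all names are illustrative.

```python
import numpy as np

def poisson_data_fidelity(x, y, A, eps=1e-8):
    """f(x) = sum_i [(Ax)_i - y_i * log((Ax)_i)], the negative Poisson
    log-likelihood (up to constants), with gradient A^T (1 - y / (Ax))."""
    ax = A @ x
    f = np.sum(ax - y * np.log(ax + eps))
    grad = A.T @ (1.0 - y / (ax + eps))
    return f, grad
```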
arXiv Detail & Related papers (2023-06-06T07:36:47Z) - A relaxed proximal gradient descent algorithm for convergent
plug-and-play with proximal denoiser [6.2484576862659065]
This paper presents a new convergent relaxed Plug-and-Play Proximal Gradient Descent (PnP-PGD) algorithm.
The algorithm converges for a wider range of regularization parameters, thus allowing more accurate restoration of an image.
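The relaxation can be pictured as averaging the PnP-PGD update with the current iterate; the exact relaxation scheme and its step-size conditions are given in the paper, so the snippet below is only a generic sketch with illustrative names.

```python
def relaxed_pnp_pgd(x, grad_f, denoiser, tau, alpha=0.7, n_iter=100):
    """x_{k+1} = (1 - alpha) * x_k + alpha * D(x_k - tau * grad_f(x_k)):
    alpha < 1 relaxes the plain PnP-PGD update (alpha = 1)."""
    for _ in range(n_iter):
        x = (1 - alpha) * x + alpha * denoiser(x - tau * grad_f(x))
    return x
```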
arXiv Detail & Related papers (2023-01-31T16:11:47Z) - Gradient Step Denoiser for convergent Plug-and-Play [5.629161809575015]
Plug-and-Play methods can lead to tremendous visual performance for various image restoration problems.
We propose a new type of Plug-and-Play method, based on Half-Quadratic Splitting.
Experiments show that it is possible to learn such a deep denoiser while not compromising the performance.
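The gradient-step construction can be written directly with automatic differentiation: the denoiser is the identity minus the gradient of a scalar potential parameterized by a network. A minimal PyTorch sketch (inference-time only; the network N is left abstract and the paper's training procedure is not reproduced):

```python
import torch

class GradientStepDenoiser(torch.nn.Module):
    """D(x) = x - grad g(x), with potential g(x) = 0.5 * ||x - N(x)||^2,
    where N is any smooth image-to-image network (left abstract here)."""
    def __init__(self, network):
        super().__init__()
        self.net = network

    def potential(self, x):
        return 0.5 * ((x - self.net(x)) ** 2).sum()

    def forward(self, x):
        x = x.detach().requires_grad_(True)   # inference-time sketch
        grad, = torch.autograd.grad(self.potential(x), x)
        return x - grad

# Toy usage with a tiny convolutional network standing in for the learned one.
denoiser = GradientStepDenoiser(torch.nn.Conv2d(1, 1, 3, padding=1))
out = denoiser(torch.randn(1, 1, 32, 32))
```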
arXiv Detail & Related papers (2021-10-07T07:11:48Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient
Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - Plug-and-play ISTA converges with kernel denoisers [21.361571421723262]
Plug-and-play (PnP) is a recent paradigm for image regularization.
A fundamental question in this regard is the theoretical convergence of PnP iterations with kernel denoisers.
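Kernel denoisers (e.g. NLM-type filters) act as a fixed linear smoothing operator W, which makes the PnP-ISTA iteration easy to write down; the sketch below is purely illustrative and does not reproduce the paper's exact construction or assumptions.

```python
import numpy as np

def pnp_ista(y, A, W, gamma, n_iter=200):
    """PnP-ISTA: x_{k+1} = W (x_k - gamma * A^T (A x_k - y)), where the
    linear kernel denoiser W replaces the usual proximal/thresholding step."""
    x = A.T @ y
    for _ in range(n_iter):
        x = W @ (x - gamma * A.T @ (A @ x - y))
    return x
```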
arXiv Detail & Related papers (2020-04-07T06:25:34Z) - SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for
Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
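For intuition, Fourier-feature methods approximate a shift-invariant kernel with an explicit finite-dimensional feature map, which turns GP regression into linear regression in feature space; the paper uses deterministic quadrature nodes and also handles derivative observations, whereas the sketch below shows only the plain random-feature variant for the RBF kernel, with illustrative names.

```python
import numpy as np

def rbf_fourier_features(X, n_features=200, lengthscale=1.0, seed=0):
    """Random Fourier features phi(x) with E[phi(x)^T phi(x')] = k_RBF(x, x');
    quadrature Fourier features replace the random frequencies with
    deterministic quadrature nodes and weights."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) / lengthscale
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).standard_normal((5, 3))
Phi = rbf_fourier_features(X)
K_approx = Phi @ Phi.T    # approximates exp(-||x - x'||^2 / (2 * lengthscale^2))
```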
arXiv Detail & Related papers (2020-03-05T14:33:20Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad algorithm for nonconvex-nonconcave minmax problems.
Our experiments show that the advantage of adaptive gradient algorithms over non-adaptive ones for GAN training can be observed empirically.
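The "optimistic" correction can be illustrated on a bilinear toy minmax problem, where plain simultaneous gradient descent-ascent cycles but the optimistic update converges; the paper's object of study is an adaptive (Adagrad-style) variant, so this is only an illustrative sketch.

```python
def optimistic_gda(x, y, lr=0.1, n_iter=500):
    """Optimistic GDA on min_x max_y x*y: each update reuses the previous
    gradient as a prediction, i.e. theta_{t+1} = theta_t -/+ lr*(2*g_t - g_{t-1})."""
    gx_prev, gy_prev = 0.0, 0.0
    for _ in range(n_iter):
        gx, gy = y, x                      # grad_x(x*y) = y, grad_y(x*y) = x
        x = x - lr * (2 * gx - gx_prev)    # descent on x
        y = y + lr * (2 * gy - gy_prev)    # ascent on y
        gx_prev, gy_prev = gx, gy
    return x, y

print(optimistic_gda(1.0, 1.0))            # converges toward the saddle (0, 0)
```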
arXiv Detail & Related papers (2019-12-26T22:10:10Z)